As a kid in Catholic school, I learned that we each had a guardian angel who looked out for us every minute of every day. I remember worrying about doing bad things while he watched, although I also remember making room for my guardian angel in bed. (Maybe I figured I could score my way to heaven.)
We all know that video cameras peer at us all the time like guardian angels, although few of us realize how many thousands of cameras follow us through a typical day. Some eyes are governmental; many are private. All ostensibly have been installed for our own protection, with the best of intentions, for the common good. And all violate our private space by recording our every move without our knowledge and certainly without our informed consent.
It’s bad enough when the cables channel our souls to minimum-wage rent-a-cops who aren’t paying much attention to their screens, but surveillance is gradually being outsourced to artificially intelligent robots who never blink, who never forget, and who make assumptions about us based on prejudice, which is just another term for pattern recognition.
An anonymous group of artists and technologists called the Institute of Applied Autonomy runs a web site that helps users reduce their chances of being watched. The iSee Project pinpoints public space cameras discovered by volunteers and then plots walking routes that minimize exposure. In the iSee map at left (click for larger size), the streets of downtown Manhattan are decorated with hundreds of camera points. I count 30 cameras outlining the New York Federal Reserve Bank at 33 Liberty Street. (Remember the 1995 Bruce Willis caper flick Die Hard: With a Vengeance?)
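Presumably, plotting a route of minimal exposure amounts to treating the street grid as a weighted graph and penalizing blocks that cameras cover. Here is a toy sketch of that idea using Dijkstra's algorithm; the intersections, camera counts, and penalty weight are all invented for illustration, not taken from iSee:

```python
import heapq

# Toy street grid: node -> {neighbor: (block_length, cameras_on_block)}.
# All values are made up for illustration.
STREETS = {
    "A": {"B": (1, 0), "C": (1, 3)},
    "B": {"A": (1, 0), "D": (1, 1)},
    "C": {"A": (1, 3), "D": (1, 0)},
    "D": {"B": (1, 1), "C": (1, 0)},
}

def least_watched_path(graph, start, goal, camera_penalty=10):
    """Dijkstra over edge cost = block length + penalty per camera."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, (length, cams) in graph[node].items():
            nd = d + length + camera_penalty * cams
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk the predecessor chain back from the goal.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path))

# The heavily watched block A-C (3 cameras) is avoided in favor of A-B-D.
route = least_watched_path(STREETS, "A", "D")
```

Raising `camera_penalty` trades a longer walk for less time on camera, which is exactly the dial a tool like iSee lets paranoid pedestrians turn.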
A short piece in Technology Review describes how Tad Hirsch, a research assistant at the MIT Media Lab, has adapted the web app for a PDA, so you can tap out the path of least surveillance as you stroll down Broadway. I think I’ll just wear a ski mask whenever I go outside from now on.
(Thanks to TheRoBlog.)
The new Sony Qualia 006 is a 70-inch rear-projection TV powered by a new liquid crystal on silicon (LCOS) chip. At 71.75″ wide, the TV is more than five inches wider than the chic Mini Cooper compact car made famous on the big screen by the incomparable Charlize Theron in last year’s The Italian Job.
The Mini Cooper is arguably more of an entertainment brand than the Sony TV is. After starring in the Theron heist movie, it has been featured in a video game as well as an amusement park ride. Remarkably, some 36,000 Minis were sold in the U.S. last year without a single television commercial.
The future of the moving image depends on video production getting a lot easier. In the last five years, digital video capture and editing have made TV shows simpler and cheaper to produce, but you still need too many people to make a minute of television: camera operator, sound tech, hair and makeup, video editor, and broadcast/DVD engineer. New de-skilled video authoring tools like Serious Magic’s groundbreaking Visual Communicator make desktop video more like desktop publishing: “Think it. Mouse it. Print it.” But we need a big leap forward before TV is easy enough.
A pioneering research effort at UC Berkeley’s Garage Cinema Research applies artificial intelligence techniques to understand the “semantic content and syntactic structure” of video. GCR has posted some samples of their intelligent video productions on their web site. See the thumbnails for some interesting experiments:
Much of the Berkeley language is pretty dense and academic, but the take-home paragraph is on target:
Our research is about making video a data type that humans and computers can create, access, process, reuse, and share according to descriptions of its semantic content and the principles of its syntactic construction. Our research aims to make this process as effortless as possible.
Notice that the next generation video producers will be computers as well as humans.
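To make the Berkeley idea concrete: imagine every clip annotated so that software can retrieve shots by what they contain rather than by timecode. A minimal sketch of what such a semantic description might look like; the schema and field names are my invention, not GCR's actual format:

```python
from dataclasses import dataclass, field

# Hypothetical semantic annotation for one video clip.
@dataclass
class Shot:
    start: float       # seconds into the clip
    end: float
    subjects: list     # who or what appears on screen
    action: str        # what happens in the shot

@dataclass
class Clip:
    source: str
    shots: list = field(default_factory=list)

    def find(self, subject):
        """Retrieve footage by content rather than timecode."""
        return [s for s in self.shots if subject in s.subjects]

interview = Clip("interview.mov", [
    Shot(0.0, 4.5, ["anchor"], "introduces story"),
    Shot(4.5, 12.0, ["city street", "crowd"], "b-roll"),
    Shot(12.0, 30.0, ["anchor", "guest"], "conversation"),
])

# An automated editor could now pull every shot containing the guest.
guest_shots = interview.find("guest")
```

Once descriptions like these exist, assembling a rough cut becomes a query over metadata, which is how a computer gets to be a video producer too.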
Our visual environment keeps getting more beautiful. As digital imaging technologies mature, old-time sign painters have been replaced by grand-format digital systems that can blow up any Photoshop image to the side of a ten-story building. And as this picture of Times Square from Wired New York reveals, a lot of the best-looking signs have the best-looking people, often with the fewest pieces of clothing.
According to InfoTrends/CAP Ventures, the wide format outdoor graphics market is one of the fastest-growing areas of digital printing. By 2008, giant ink jets are expected to produce more than 1 billion square feet of outdoor graphics in North America.
A couple of weeks ago, I wrote about augmented reality on a personal digital assistant with a link to a video from a Finnish company that created an AR visualization of a new building. This morning at a Vienna University of Technology conference in Austria, a team of developers will demonstrate The Invisible Train, “a mobile, collaborative multi-user Augmented Reality (AR) game, in which players control virtual trains on a real wooden miniature railroad track. These virtual trains are only visible to players through their PDA’s video see-through display as they don’t exist in the physical world. This type of user interface is commonly called the magic lens metaphor.”
The computers in our pockets are getting very interesting applications: from calendar, contacts, and to-do lists to mobile telephony, digital imaging, email, Web access, WiFi links, MP3 playback, GPS, multimedia, pocket video and now the “magic lens.” A few more doodads and we’ll finally have reinvented Spock’s tricorder.