The future of the moving image depends on video production getting a lot easier. In the last five years, digital video capture and editing have made TV shows simpler and cheaper to produce, but you still need too many people to make a minute of television: camera operator, sound tech, hair and makeup, video editor and broadcast/DVD engineer. New de-skilled video authoring tools like Serious Magic’s groundbreaking Visual Communicator make desktop video more like desktop publishing: “Think it. Mouse it. Print it.” But we need a big leap forward before TV is easy enough.
A pioneering research effort at UC Berkeley’s Garage Cinema Research applies artificial intelligence techniques to understand the “semantic content and syntactic structure” of video. GCR has posted some samples of its intelligent video productions on its web site. See the thumbnails for some interesting experiments:
Much of the Berkeley language is pretty dense and academic, but the take-home paragraph is on target:
Our research is about making video a data type that humans and computers can create, access, process, reuse, and share according to descriptions of its semantic content and the principles of its syntactic construction. Our research aims to make this process as effortless as possible.
Notice that the next generation of video producers will be computers as well as humans.
Booths at the U.S. Maritime Security Expo showcase machine guns, chase boats and bullet-proof barriers. But year to year, smarter and more virtual products take over more of the exhibit floor. This week, Singapore-based Stratech Systems Limited (slow, Flash-bound site) demonstrated an array of computer vision systems that track security problems visually. For the maritime crowd, the Vessel Image Processing System tracks ships in the harbor, overlays their positions on maps of restricted areas and feeds early-warning threat algorithms.
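Stratech doesn’t publish its algorithms, but the core geofencing step — deciding whether a tracked vessel’s position falls inside a restricted zone — is a classic point-in-polygon test. A minimal sketch (coordinates and zone are invented for illustration):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: does point (x, y) fall inside the polygon?
    polygon is a list of (x, y) vertices in order."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside  # odd number of crossings = inside
    return inside

# Hypothetical restricted zone around a pier, in map coordinates
restricted = [(0, 0), (10, 0), (10, 5), (0, 5)]

print(point_in_polygon(3, 2, restricted))   # True: vessel in the zone
print(point_in_polygon(12, 2, restricted))  # False: safely outside
```

A real system would run this test against each vessel track every few seconds and hand any hits to the threat-warning logic.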
A more down-to-earth app is the Intelligent Vehicle Access Control System. Identifying cars by license-plate scans, cameras in the road bed digitize the undercarriages of cars entering and leaving restricted parking areas. Smart software then compares each scan against a stored picture of that car’s clean undercarriage, and the system beeps when it finds bomb-like objects (or any other anomalies) affixed to the bottom of the car. (I boosted the red pixels in the demo image to show the detail.)
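The matching step is proprietary, but the basic idea — diff the fresh scan against the stored clean image and beep when a large region has changed — can be sketched in a few lines. The thresholds and names here are invented for illustration:

```python
import numpy as np

def find_anomalies(reference, scan, threshold=40, min_pixels=50):
    """Compare a fresh undercarriage scan against the stored reference
    image of a clean undercarriage (both grayscale arrays, 0-255).
    Returns True when enough pixels differ strongly -- i.e. something
    new appears to be affixed to the bottom of the car."""
    diff = np.abs(scan.astype(int) - reference.astype(int))
    changed = diff > threshold          # pixels that differ markedly
    return int(changed.sum()) >= min_pixels

# Toy example: a flat "clean" reference and a scan with a bright object
reference = np.full((100, 100), 80, dtype=np.uint8)
scan = reference.copy()
scan[40:50, 40:50] = 200                # a 10x10 anomaly bolted on

print(find_anomalies(reference, scan))       # True: beep
print(find_anomalies(reference, reference))  # False: clean
```

A production system would also have to align the two images and ignore mud, rust and lighting changes, which is where the hard computer-vision work actually lives.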
Every day there are more security cameras pointed at us from the roofs of buildings, the grilles of cop cars and the tops of traffic light poles. Now subterranean cameras will be peeking up our undercarriages as we drive by. There aren’t enough people to watch all those TV screens, but in the future scary smart robots like Stratech’s will be watching out for us, with or without our knowledge.