Automatic TV: Intelligent Video Production at Berkeley’s GCR
The future of the moving image depends on video production getting a lot easier. In the last five years, digital video capture and editing have made TV shows simpler and cheaper to produce, but you still need too many people to make a minute of television: camera operator, sound tech, hair and makeup, video editor, and broadcast/DVD engineer. New de-skilled video authoring tools like Serious Magic’s groundbreaking Visual Communicator make desktop video more like desktop publishing: “Think it. Mouse it. Print it.” But we need a big leap forward before TV is easy enough.
A pioneering research effort at UC Berkeley’s Garage Cinema Research applies artificial intelligence techniques to understand the “semantic content and syntactic structure” of video. GCR has posted some samples of its intelligent video productions on its web site. See the thumbnails for some interesting experiments:
Much of the Berkeley language is pretty dense and academic, but the take-home paragraph is on target:
Our research is about making video a data type that humans and computers can create, access, process, reuse, and share according to descriptions of its semantic content and the principles of its syntactic construction. Our research aims to make this process as effortless as possible.
Notice that the next generation of video producers will be computers as well as humans.