Sunday, December 19, 2010

What’s Just Around the Bend? Soon, a Camera May Show You

NOVELTIES
ANYONE who has witnessed the megapixel one-upmanship in camera ads might think that computer chips run the show in digital photography.
Everett Lawson/M.I.T.
At M.I.T., Ramesh Raskar says that by using a sophisticated processing system, a camera will be able to “look around objects and see what’s beyond.”
Linda A. Cicero/Stanford News Service
Marc Levoy of Stanford is aiming to bring computational photography to conventional cameras and camera phones. His “Frankencamera” is at right.
That’s not true. In most cameras, lenses still form the basic image. Computers have only a toehold, controlling megapixel detectors and features like the shutter. But in research labs, the new discipline of computational photography is gaining ground, taking over jobs that were once the province of lenses.
In the future, the technology of computational photography may guide rescue robots, or endoscopes that need to peer around artery blockages. In camera phones, the technology can already merge two exposures of the same image. One day, it could even change the focus of a picture you’ve already taken.
At the Massachusetts Institute of Technology, one experimental camera has no lens at all: it uses reflected light, computer processing and other tools to let it see around corners.
Ramesh Raskar, leader of the Camera Culture research group at M.I.T., aims his camera and an ultrafast laser attachment at a door half-open into a model room containing simple objects. The laser — the equivalent of a flash — fires pulses shorter than a trillionth of a second. Light bounces off the door, scatters into the room, hits the objects within and then bounces back to the detector. Dr. Raskar traces those bouncing echoes of light photon by photon, based on when and where they land.
From the reflected light, as well as the room’s geometry and mathematical modeling, he deduces the structure of the hidden objects. “If you modify your camera and add sophisticated processing,” he said, “the camera can look around objects and see what’s beyond.”
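The core physical principle at work is simple time-of-flight: light travels at a known speed, so the moment a reflected pulse arrives back at the detector reveals the total length of the path it traveled. The sketch below illustrates only that basic relationship; the function name and the 10-nanosecond figure are illustrative assumptions, not M.I.T.'s actual reconstruction code, which must untangle many overlapping bounce paths.

```python
# Time-of-flight in one line of physics: path length = speed of light x travel time.
# Dr. Raskar's camera times photon arrivals at sub-trillionth-of-a-second
# resolution; multiple such measurements constrain where hidden surfaces can be.
C = 299_792_458.0  # speed of light in a vacuum, meters per second

def round_trip_distance(arrival_time_s: float) -> float:
    """Total path length traveled by a laser pulse that returns after arrival_time_s."""
    return C * arrival_time_s

# A pulse that bounces off the door, scatters to a hidden object, and returns
# after 10 nanoseconds has traveled roughly 3 meters in total.
path = round_trip_distance(10e-9)  # ≈ 3.0 meters
```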
Steven Seitz, a professor in the department of computer science and engineering at the University of Washington in Seattle, says Dr. Raskar’s technology will have to surmount tough obstacles to go beyond the laboratory. “He’s demonstrated that it can work, but the big questions are when and how it can be deployed,” Dr. Seitz said. “You will need powerful lasers and there will be safety issues. But the work is exciting as a prototype.”
Shree K. Nayar, chairman of the computer science department at Columbia University, does research that includes computational photography. “The data megapixel sensors gather is just an intermediate step on the way to a picture,” he said. “We are interested in how you design a camera that goes hand in hand with computation to create a new kind of picture.”
Many images produced by computational photography are seen mainly in research — for example, in shots where the focus has been changed after the fact. But inexpensive applications for ordinary camera phones are also starting to appear, said Marc Levoy, a professor of computer science and electrical engineering at Stanford.
“A year ago this wasn’t happening,” he said. “But the industry is beginning to think that if the megapixel war is over, computational photography may be the next battleground.”
For example, consumers can buy apps for high dynamic range, or HDR, a common technique in computational photography, said Frédo Durand, an associate professor at M.I.T. who collaborates with Dr. Levoy. True HDR (99 cents) and Pro HDR ($1.99), both sold at iTunes, can combine photos shot at different exposures — one in deep shadow, the other overexposed, merging them for a dynamic range that normally can’t be attained in a single shot.
Professor Durand said he would like to write his own computational photography apps for conventional cameras. But he can’t, because the camera’s workings are typically closed to amateur photographers.
What feats could computation perform if consumer cameras were opened to programmers? To demonstrate, Dr. Levoy and his colleagues have created a gallery of programmable cameras.
Using spare parts, the team assembled a prototype for a portable camera, dubbed Frankencamera, now in its third version, that runs on Linux, the open operating system. Programmers can play with the chips inside the camera that record and process images.
There’s a cellphone Frankencamera, too. With the support of Nokia, the group has opened up the Nokia N900 smartphone, writing software to give programmers more control of its components. Details of the Frankencamera work, including the software for the Nokia, are available free at the Stanford group’s Web site.
Dr. Levoy and his group have also written applications showing the Frankencameras’ abilities. The Rephotography app, for instance, lets users take a photo in the exact spot where an earlier one was shot. “The camera guides you step by step, so that you mathematically find the exact same viewpoint,” said Professor Durand, who with colleagues created the original app.
SOON, many students may be learning about computational photography. Dr. Levoy has received a grant from the National Science Foundation for a course to introduce it to graduate students at American universities. He and his team are preparing materials; each packet will include lectures, one or two of the Frankencameras and a dozen or so of the adapted N900s.
Dr. Seitz in Seattle says he hopes the Frankencamera project will succeed.
“Once camera technology is opened up so that anyone can program,” Dr. Seitz said, “the promise of computational photography will start to pay off.”
E-mail: novelties@nytimes.com.