Computational Imaging: Rethinking How We Look at the World

September 2010
Topics: Image Processing
Computational imaging is based on the premise that digital imaging systems should not be designed to mimic the eye, but instead to measure the light reflected or radiated from a scene in the most efficient and effective manner.

Evolution, but Not Revolution

The emergence of the digital camera has enabled a number of new imaging capabilities and applications. Pictures can be manipulated in untold ways; they can be effortlessly stored, catalogued, and retrieved; and they can be shared with anyone around the world in an instant.

Despite this perceived revolution in technology, the reality is that cameras have fundamentally changed very little since the first camera obscura was demonstrated a thousand years ago. When you get right down to the basic design, cameras are still modeled on the human eye; they rely almost exclusively on optics to form an image.

The primary advance associated with the modern digital camera is not in the way it acquires images, but in the way that those images are recorded. Semiconductor photodetector arrays are basically an electronic analog to film. One could argue that these new cameras are not really digital imaging systems at all, but analog imaging systems with electronic recording. And as such, modern cameras have fallen short of fully realizing the potential made possible by the digital transition.

Too Many Pixels, Not Enough Information

Computational imaging has emerged in recent years to exploit the unrealized potential of digital imaging technology. It embraces the notion that imaging systems should be designed and optimized as systems rather than as collections of individual components. Shouldn't the optics in an imaging system be designed with some consideration of how the image can and will be processed once it's captured? As soon as we start to look at imaging system design this way, an entirely new design space opens up.

For instance, as cameras boast the ability to capture more and more megapixels (and the question seems to be not if, but when, gigapixel cameras will become a reality), what should we do with all that data? While the amount of data increases linearly with the number of pixels in an image, the amount of useful information contained in that image typically does not, leading to an ever-widening discrepancy between data and information. The explosion of image data has also placed an increasing, almost crippling, demand on the underlying infrastructure for processing, transmission, storage, and retrieval. So shouldn't we be designing imaging systems with the goal of providing more information rather than more data?


The Eyes Don't Have It

A second notion embraced by computational imaging is that it should not be necessary to rely so heavily on optics to form a perfect image up front. After all, image refinement can now be largely accomplished through digital processing, as long as the requisite information is contained in the light collected by the optics. The now-common use of post-processing to correct image artifacts (such as red eye) is one example of this capability. Furthermore, some applications do not require an isomorphic image (one that matches what your eye sees) at all, but rather some other information that must be extracted from a high-resolution image by an algorithm or an analyst, a task that is often computationally and labor intensive. Why, for example, create a high-resolution image of an entire urban area or rural countryside when you are only interested in vehicles traveling along a road?

The supporting idea behind computational imaging, then, is that it may be possible to design digital imaging systems that more efficiently collect and transform the information encoded in light, depending on the type of information you are trying to collect. Through the strategic use of unconventional optical elements, combined with custom digital filters to interpret the resulting data (which may no longer resemble an image at all), digital imaging systems are achieving remarkable new capabilities.

More Bang for Your Pixel

Normally, when a camera takes a picture of a multi-dimensional scene, the light from that scene is projected and mapped into a two-dimensional data set (three-dimensional if you include color). The 2D data represents the intensity of the light at each pixel on the digital image sensor. This dimensional mismatch between the scene and the image inherently results in a loss of information.
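To make the dimensional mismatch concrete, here is a minimal sketch in Python with NumPy. The toy 4D array and its dimensions are illustrative assumptions, not measurements from any real system:

```python
import numpy as np

# Toy light field: L[y, x, v, u] is the intensity of the ray reaching
# sensor position (y, x) from aperture position (v, u). A real light
# field would be measured, not randomly generated.
rng = np.random.default_rng(0)
L = rng.random((64, 64, 5, 5))

# A conventional pixel sums every ray arriving at (y, x) regardless of
# direction, so the angular (v, u) dimensions -- and the information
# they carry -- are integrated away.
conventional_image = L.sum(axis=(2, 3))

print(L.shape, "->", conventional_image.shape)  # (64, 64, 5, 5) -> (64, 64)
```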

MITRE is currently investigating a technique, advanced by researchers at Stanford University and others, referred to as "light-field imaging." Light-field imaging is fundamentally different from conventional imaging in that, along with measurements of light intensity at each point on the image sensor, it also provides intensity values at each point in the aperture of the lens. These added dimensions of the light field can be exploited to acquire additional information during image acquisition, such as object distance, wavelength spectrum, and polarization of the collected light.
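As one illustration of exploiting those added dimensions, the sketch below implements the shift-and-add refocusing commonly used with light fields: each sub-aperture view is shifted in proportion to its offset within the aperture and the views are averaged, bringing a chosen depth plane into focus after capture. The function name, the focus parameter alpha, and the integer-pixel shifts are simplifying assumptions, not details from the article or the Stanford work; it continues with the toy light field L from the previous sketch.

```python
import numpy as np

def refocus(L, alpha):
    """Shift-and-add refocusing of a 4D light field L[y, x, v, u].

    Each sub-aperture view is shifted in proportion to its offset
    (v, u) from the aperture center, then all views are averaged.
    Objects whose parallax matches the shift come into focus, so
    sweeping alpha refocuses the image after capture.
    """
    ny, nx, nv, nu = L.shape
    out = np.zeros((ny, nx))
    for v in range(nv):
        for u in range(nu):
            dy = int(round(alpha * (v - nv // 2)))
            dx = int(round(alpha * (u - nu // 2)))
            # Integer-pixel shift with wraparound keeps the sketch short;
            # real implementations interpolate fractional shifts.
            out += np.roll(L[:, :, v, u], (dy, dx), axis=(0, 1))
    return out / (nv * nu)

# Sweep the virtual focal plane through the scene:
# focal_stack = [refocus(L, a) for a in (-2, -1, 0, 1, 2)]
```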

The unusual architecture of a light-field imaging system results in a raw image that only vaguely resembles the scene to the human eye. But since the collection optics have been designed cooperatively with the post-detection algorithms, image restoration can be achieved in processing.
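A minimal sketch of that cooperative design, under illustrative assumptions: a known coded blur stands in for the unconventional optics, and a Wiener filter serves as the matched restoration step. Neither the kernel nor the noise level is MITRE's actual design; they simply show how a raw measurement that does not resemble the scene can be restored when the encoding is known to the decoder.

```python
import numpy as np

def wiener_restore(raw, kernel, nsr=1e-2):
    """Restore an image blurred by a *known* kernel via Wiener filtering.

    Because the encoding (the kernel) was designed together with this
    decoding step, the restoration filter is matched to the optics.
    nsr is an assumed noise-to-signal ratio.
    """
    H = np.fft.fft2(kernel, s=raw.shape)             # optical transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)          # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(raw) * W))

# "Optics": encode the scene with a known 5x5 box blur (circular
# convolution via FFT keeps the example short).
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
raw = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                           np.fft.fft2(kernel, s=scene.shape)))

restored = wiener_restore(raw, kernel)               # raw data -> usable image
```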

Cooperatively designing the collection and processing components of an imaging system in this way is part of how computational imaging strives to squeeze the most information out of every pixel, while still keeping gigapixels' worth of information easy to manage.

—by Gary Euliss
