Gigapixel Camera Captures Unprecedented Image Detail

A new camera design consisting of a central lens surrounded by an array of microcameras heralds a new era of photography, enabling pictures of unprecedented detail.

By synchronizing 98 tiny cameras in a single device, electrical engineers from Duke University and the University of Arizona have developed a prototype camera that can create images with unprecedented detail.

In fact, the camera can capture details that the photographer's own eye cannot detect at the moment the picture is taken.

As a comparison, most consumer cameras are capable of taking photographs with sizes ranging from eight to 40 megapixels. Pixels are individual “dots” of data; the higher the number of pixels, the better the resolution of the image. The new camera has the potential to capture up to 50 gigapixels of data, which is about 50 billion pixels, or 50,000 times more than a one-megapixel camera.
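Those figures are easy to sanity-check with a quick calculation (a sketch; the 50-gigapixel and 8–40 megapixel numbers come from the article):

```python
# Compare the prototype's resolution to typical consumer cameras.
consumer_min_px = 8e6    # 8 megapixels
consumer_max_px = 40e6   # 40 megapixels
gigapixel_px = 50e9      # 50 gigapixels = 50 billion pixels

print(gigapixel_px / 1e6)              # megapixel equivalent: 50,000 MP
print(gigapixel_px / consumer_min_px)  # 6,250x an 8 MP camera
print(gigapixel_px / consumer_max_px)  # 1,250x a 40 MP camera
print(gigapixel_px / 1e6)              # 50,000x a 1 MP camera
```

So the quoted "50,000 times" factor holds against a one-megapixel baseline; against today's 8 to 40 megapixel consumer cameras the gain is roughly a thousand-fold or more.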

The gigapixel camera prototype. (Photo by Duke University Imaging and Spectroscopy Program)

“A 50-gigapixel image is about 10,000 times bigger than an average desktop display. If you wanted to capture all that there is to see, you’d need an array of 100 by 100 high-definition monitors to show it all,” said Michael Gehm, an assistant professor of electrical and computer engineering at the UA, who led the team that developed the software that combines the input from the new camera’s individual microcameras.

At the heart of the current camera prototype is a large ball-shaped lens surrounded by 98 much smaller microcameras that together form a 1-gigapixel image.

The researchers believe that within five years, as the electronic components of the cameras become miniaturized and made more efficient, the next generation of gigapixel cameras should be available to the general public.

Details of the new camera were published online in the journal Nature. The team’s research was supported by the Defense Advanced Research Projects Agency.

The camera was developed by a team led by David Brady, Michael J. Fitzpatrick Professor of Electrical Engineering at Duke’s Pratt School of Engineering and principal investigator of the project, along with scientists from the UA, the University of California, San Diego and Distant Focus Corp.

A gigapixel image reveals its detail only upon zooming in. (Photo by Duke University Imaging and Spectroscopy Program)

“Each one of the microcameras captures information from a specific area of the field of view,” Brady said. “A computer processor essentially stitches all this information into a single highly detailed image. In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later.”
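The stitching idea Brady describes can be sketched in a few lines. This is a minimal illustration only, assuming each microcamera's image has already been registered to a known position in the full field of view (the names `stitch_tiles`, `tiles`, and `positions` are hypothetical; the real system's processing pipeline is far more involved):

```python
import numpy as np

def stitch_tiles(tiles, positions, full_shape):
    """Paste pre-registered grayscale tiles into one composite image.

    tiles:      list of 2-D numpy arrays (one per microcamera)
    positions:  (row, col) offset of each tile's top-left corner
    full_shape: (height, width) of the composite image
    Overlapping regions are averaged so seams blend rather than clip.
    """
    acc = np.zeros(full_shape)  # summed pixel values
    cnt = np.zeros(full_shape)  # how many tiles cover each pixel
    for tile, (r, c) in zip(tiles, positions):
        h, w = tile.shape
        acc[r:r + h, c:c + w] += tile
        cnt[r:r + h, c:c + w] += 1
    cnt[cnt == 0] = 1           # avoid divide-by-zero in uncovered gaps
    return acc / cnt
```

For example, two 2×2 tiles placed one column apart produce a 2×3 mosaic whose middle column, covered by both tiles, is the average of the two. The deliberate overlap between neighboring fields of view is what lets the processor blend tiles without gaps.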

“The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop these kinds of cameras,” Brady said. “The primary barrier to ubiquitous high-pixel imaging has been the integrated circuits, not the optics.”

“Traditionally, one way of making better optics has been to add more glass elements, which increases complexity,” Gehm said. “This isn’t a problem just for imaging experts. Supercomputers face the same problem, with their ever more complicated processors, but at some point this kind of complexity just saturates and becomes cost-prohibitive.”

“Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements,” said Gehm, whose team in the UA’s department of electrical and computer engineering includes Dathon Golish and Esteban Vera.

"We have to rethink everything about making images," said Michael Gehm, assistant professor at the UA College of Engineering. Image credit: University of Arizona

“A shared objective lens gathers light and routes it to the microcameras that surround it, just like a networked computer hands out pieces to individual workstations. Each gets a different view and works on its little piece of the problem. We arrange for some overlap, so we don’t miss anything.”

The prototype camera itself is two and a half feet square and 20 inches deep. Only about three percent of the camera is made up of optical elements; the rest is the electronics and processors needed to assemble all the information gathered.

Obviously, the researchers said, this is the area where additional work to miniaturize the electronics and increase their processing ability will make the camera more practical for everyday photographers.

“The camera is so large now because of the electronic control boards and the need to add components to keep it from overheating,” Brady said. “As more efficient and compact electronics are developed, the age of hand-held gigapixel photography should follow.”

“The standard camera design was created in the 17th century,” said Gehm, who holds a joint appointment with the UA’s College of Optical Sciences. “Isaac Newton would understand how a traditional camera works today, which is a bit crazy when you think about it. Back then, it was designed so it had to be human eye/brain-friendly, but it doesn’t have to be that anymore.”

“At the heart of our program here at the University of Arizona is the idea of using computers as inspiration for optics. We ask, ‘How can we take optical systems and completely rethink them?'”

Instead of making increasingly complex optics, the approach of Gehm’s group is to come up with a massively parallel array of simple commodity components.

As one can imagine, making pictures with a gigapixel camera is completely different from making them with traditional cameras.

“We want to be able to record images at 10 frames per second, which is near video rate,” Gehm explained. “The 50-gigapixel camera would generate a half a terabyte of data every second. You’d fill a terabyte hard drive in two seconds, you’d fill a data center in about a day, and you’d fill all of the data centers on the planet in about a year to a year and a half.”
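Gehm's headline figures are straightforward to reproduce, assuming roughly one byte of recorded data per pixel (an assumption for illustration; the article does not state the per-pixel data size):

```python
pixels = 50e9        # pixels in one 50-gigapixel frame
fps = 10             # near-video frame rate quoted by Gehm
bytes_per_pixel = 1  # assumption: ~1 byte of data per pixel

rate = pixels * fps * bytes_per_pixel  # bytes per second
print(rate / 1e12)   # terabytes per second: 0.5
print(1e12 / rate)   # seconds to fill a 1 TB drive: 2.0
```

Under that assumption the camera would indeed produce half a terabyte per second and fill a one-terabyte drive in two seconds, matching the figures in the quote.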

Currently, no displays exist that are capable of showing pictures of the size produced by the gigapixel camera. Disorientation, too, becomes an issue.

“People get lost when they zoom in and out,” Gehm said. “If you have looked through binoculars or a strong telephoto lens, you can zoom in and have no idea where you are. Those are some of the user interface issues we’re also thinking about: how you give someone cues so they don’t get lost in this data. How do you help someone find what they need in this image?”

“You have to rethink everything about how you make the image, how you let people access the image, and what assistance you provide to the user to be able to get the information out of such an image.”

– By Richard Merritt and Daniel Stolte

*Source: The University of Arizona
