
New Technology: A Camera Without A Lens – Metamaterial Aperture

Leigh Diprose January 19, 2013 Beyond the Lens

John Hunt, left, and Tom Driscoll. Source: Duke University

The metamaterial aperture, constructed by graduate student John Hunt and colleagues at Duke University, is essentially a flexible laminate made from different metals and plastics. Rows of small squares are etched into the copper; each square is designed to capture a different frequency of light. This unique design has allowed Hunt to capture an image without the need for a lens.

The image is captured in a two-step process. First, microwave wavelengths refract and reflect through the different materials; then, as Hunt describes, "some very elegant math" from the field of computational imaging turns the reflected light into a two-dimensional picture.

Although the new system doesn't work with visible light, Hunt has successfully captured images using microwave wavelengths. The images were captured from reflective objects stationed within a controlled environment.

Metamaterial Aperture – Source: Sciencemag

Hunt describes the details of the technology and how it could be used in everyday life in the following transcript from an interview with Kerry Klein for the Science Magazine podcast.

Host – Kerry Klein

In a digital camera, more megapixels means better resolution—the ability to zoom in on an
image and gain more information. But in many images, more pixels also means more
redundant information and, thus, more required storage space. This week, John Hunt and
colleagues describe a high-efficiency imaging tool that aims to collect only the bare
essentials of an image. Where conventional cameras work pixel by pixel, the new tool
instead breaks down a scene by wavelengths of light. The catch? The detector is made
from so-called metamaterials–specialized substances which, so far, are only calibrated for
the microwave part of the spectrum. I spoke with Hunt about the mechanics and practical
uses of this technology.
Interviewee – John Hunt

Microwave wavelengths are the wavelengths that your cell phone uses to communicate,
for instance. We still think of them as a type of light. They’re not light that we can see,
though. And this imager that we’ve designed in this experiment uses no moving parts, no
lenses, and uses only a single detector. It’s equivalently a single pixel. This is made
possible by combining two developing technologies. First, we use metamaterials that allow us unique control over light waves, and we use another technique called
computational imaging, which generalizes how we think about and collect images.
Interviewer – Kerry Klein

Okay, so let’s start with the basics. What exactly is a metamaterial and what sorts of
metamaterials did you use here?


Interviewee – John Hunt

So you can think of a metamaterial as a type of composite, a composite material like, say,
fiberglass. In the case of fiberglass, you combine two different materials – a woven glass
thread cloth and a plastic resin – and by combining them and structuring them in a careful
way, you come up with a new material that has different mechanical properties, different
from and better than either of the two parts. So in metamaterials, we do the same sort of
thing. We make a composite of different structures – different metals, different plastics –
that gives us not mechanical properties but optical properties. We can control the way
that light refracts and reflects through this material in unique ways. So one of the
properties of metamaterials is that they tend to rapidly change how they respond to
different colors or different frequencies of light. In previous experiments, this has
sometimes been a limitation. For instance, in the cloak – which is a pretty famous
metamaterial device – it actually limits the operation range. It only works for one
frequency, one color of light. But in the current metamaterial imager, we’re actually
leveraging this behavior as a way to collect more information from the scene without
using moving parts or without multiple pixels.


Interviewer – Kerry Klein

So how do more conventional imaging technologies work, say, a hand-held point-and-shoot camera?


Interviewee – John Hunt

In a basic point-and-shoot camera, you have a lens that focuses light from different parts
of a scene to different pixels on the detector array, so that every point in the scene that
you want to image is mapped by the lens to a different pixel. So if you want to have a
million pixels in your final image, you have to have a million different detectors on your
detector array. If you want to image at wavelengths longer than optical wavelengths, such as the microwave wavelengths this imaging system is designed to work at, you can't use these millions and millions of pixels anymore, because the resulting array of pixel detectors becomes far too large and costly to use easily. So, instead, what
people typically do to image at microwave wavelengths is they take a single detector and
they move it from point-to-point across sort of a virtual detector array, so that eventually
they sample the light at every point that they would have put a pixel at if they could have
made an array of many, many pixels. The problem with that approach is that it’s very
slow to move this single detector, or array of detectors, across this plane. It requires
complicated gears that are expensive and take up a lot of space.
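The mechanically scanned approach Hunt describes can be sketched in a few lines. This is a toy illustration, not real hardware control: `read_detector` is a hypothetical stand-in for the physical measurement at each detector position.

```python
import numpy as np

def raster_scan(scene, read_detector):
    """Emulate a single detector stepped point-by-point across a
    virtual detector array: one measurement per position."""
    n_rows, n_cols = scene.shape
    image = np.zeros_like(scene, dtype=float)
    for i in range(n_rows):          # mechanically step the detector...
        for j in range(n_cols):      # ...across every point of the plane
            image[i, j] = read_detector(scene, i, j)
    return image

# Toy detector: simply reads the scene intensity at its current position.
scene = np.zeros((8, 8))
scene[2, 5] = 1.0                    # one reflective object
img = raster_scan(scene, lambda s, i, j: s[i, j])
```

The cost is what makes this slow: an N x N image requires N squared sequential detector positions, each reached by the mechanical gears mentioned above.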


Interviewer – Kerry Klein

And how does your metamaterial aperture work in contrast to these more conventional
technologies with their own sets of limitations?


Interviewee – John Hunt

The first difference is that there’s only a single detector, and we never move it. And the
way that we make an image with this system is we have a metamaterial screen, which is
covered in patches of metamaterial elements, which are each transparent to different
wavelengths of light. So this means that for every color or wavelength of light coming
from the scene, it’s sampled by different patches of the aperture before it gets to the
detector. If you want to collect a lot of data from a scene, you have to, in a sense,
multiplex the way that you’re sensing. So one way of doing that, the way that it’s done in
traditional cameras, is you have many different pixels. Pixels are spatially separated
from each other. Instead of doing this sort of spatial detector multiplexing, what our
system does is sort of frequency multiplexing so that each frequency or wavelength of
light that comes into that imaging system samples a different portion of the scene. And
then we use some very elegant math, developed in the field of computational imaging, to turn that data into a two-dimensional picture of all the scattering elements in
the scene.
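Under the model Hunt describes, each frequency's detector reading is a weighted sum of the whole scene, so imaging reduces to a linear inverse problem. A minimal sketch, assuming a random calibrated sensing matrix in place of the real aperture physics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the calibrated aperture: at each of M probe frequencies
# the single detector sees the N-point scene through a different pattern,
# so each reading is one weighted sum of the whole scene.
N, M = 64, 96                          # scene unknowns, probe frequencies
H = rng.standard_normal((M, N))        # frequency-dependent sensing matrix

scene = np.zeros(N)
scene[[5, 23, 40]] = [1.0, 0.5, 2.0]   # a few scatterers in an anechoic room

y = H @ scene                          # one detector reading per frequency

# The "very elegant math": with H known from calibration, invert the
# linear system -- here a plain least-squares solve on noiseless data.
recovered, *_ = np.linalg.lstsq(H, y, rcond=None)
```

In the real system the rows of H would come from calibrating the aperture's frequency-dependent focusing patterns, and when there are fewer measurements than unknowns, sparsity-exploiting solvers from computational imaging replace the plain least squares used here.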
Interviewer – Kerry Klein

Taking a step back, you describe this metamaterial screen as being made up of patches
that are receptive to different wavelengths of light.
Interviewee – John Hunt

That’s right.
Interviewer – Kerry Klein

How does this patchwork help process spatial information?
Interviewee – John Hunt

So the way, in general, that the metamaterial aperture encodes spatial information into
our individual measurements is by focusing light from different points in the scene onto
our single detector. And for every frequency that we tune our detector to, the
metamaterial aperture focuses a different set of points from our scene down onto that
detector. So we make a sequence of measurements for different frequencies, and we get
a sequence of different intensity measurements that correspond to the sum of the points in
the scene that are being focused onto the detector for each frequency.
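That measurement sequence can be sketched as a loop over probe frequencies; the 0/1 focusing patterns below are hypothetical stand-ins for the aperture's actual frequency response.

```python
import numpy as np

# patterns[f, p] = 1 if, at frequency f, the aperture focuses scene
# point p onto the single detector; 0 otherwise. Illustrative values only.
patterns = np.array([[1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 1, 0, 0]])
scene = np.array([0.25, 0.0, 0.5, 0.25])   # scatterer intensities

readings = []
for f in range(len(patterns)):             # tune through the frequencies...
    readings.append(patterns[f] @ scene)   # ...each reading sums the focused points

print([float(r) for r in readings])        # [0.75, 0.25, 0.25]
```

Each scalar in `readings` is one intensity measurement; stacking the patterns into a matrix and inverting recovers the scene, as described above.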
Interviewer – Kerry Klein

Ah. So you’re not measuring all wavelengths of light at the same time. Instead, you’re
tuning the system to only detect one wavelength of light at a time, and then you’re
running through these wavelengths sequentially to make a robust image.
Interviewee – John Hunt

That’s exactly right. The detector is tuned to one frequency after the other sequentially,
but because this is all electronic, it can be done very quickly. The metamaterials themselves are static; we don't change them. What we do change are the wavelengths of the light that we feed into our metamaterial aperture.
Interviewer – Kerry Klein

So what kinds of scenes have you been able to image so far?
Interviewee – John Hunt

So right now we’re imaging pretty controlled scenes. We’re imaging scenes that we’ve
artificially constructed in rooms that have no reflection. So we cover the walls and floor
and ceiling in a room with a non-reflective material. That way we can put things in the
room and we can only see those objects; we don’t see any of the walls or other objects.
This system works at microwave wavelengths, so you can’t see things like the sun,
necessarily. What you can see are any metallic or shiny objects that reflect microwaves.
In addition, right now the system only images in a single plane. It’s kind of like a radar
plot. We have one dimension of range and one dimension of angle in our resulting
images. We're working right now on extending the system to make full three-dimensional images, where we would have two dimensions of angle and one dimension of range. And we'd be able to locate all scattering objects and shapes in a three-dimensional scene.
Interviewer – Kerry Klein

So for right now, there’s no way to adjust the focus or depth of field of an image?
Interviewee – John Hunt

Well, in a sense we actually have a very large, perhaps infinite, depth of field. We can
see objects at almost any range between one and five meters, for instance; we don’t have
to focus at one depth. That’s one of the interesting advantages of this system.
Interviewer – Kerry Klein

Does that mean then that all objects in the field of view are equally in focus, and that
there’s no sense of depth at all?
Interviewee – John Hunt

Actually, one of the two dimensions in our image is a depth dimension.
So we can tell where things are in distance, and we can tell where things are left and
right. We can’t tell where things are up and down. And that’s because our aperture right
now is a one-dimensional aperture. It’s just a thin strip. In order to have information
about the up and down direction, we have to make that 1-D aperture a 2-D panel-type
aperture, which we’re working on right now.
Interviewer – Kerry Klein

And so how long does it take to actually collect the data and process an image?
Interviewee – John Hunt
The collection time is something like 50 milliseconds, and the processing time to
generate an image from that collected data is approximately 50 milliseconds. So the time to capture and generate one frame is a hundred milliseconds, and we can do that about 10 times a second.
Interviewer – Kerry Klein

So what are the practical uses for this technology? How do you envision it ultimately
being used?
Interviewee – John Hunt

So this kind of technology would be useful for any application where you’d like to have a
cheap, small, microwave or infrared imaging system. So for instance, if you wanted to
build an imager into the body of a car so you could do collision-avoidance imaging, or
for security imaging at a checkpoint, if you wanted to just have an imager built onto a
wall, or for instance if you wanted to have a cheap handheld device that could see
through walls to find wires and pipes. Current systems cost millions of dollars to image
at these frequencies, and this potentially could replace those systems with a very cheap,
very lightweight, portable system.
Interviewer – Kerry Klein

Do you see this kind of detector ever being viable at other wavelengths?
Interviewee – John Hunt

Well, some of the math and ideas would certainly apply, and already are being applied to other areas such as optics. But the current type of metamaterials, for instance, they don't
scale to optical frequencies. We could use these same ideas to make an optical imaging
system, but we would have to change some of the hardware.
Interviewer – Kerry Klein

Great. Okay, well John Hunt, thank you so much.
Interviewee – John Hunt

Thank you.
