I’m not sure what it was. Might have been the Predator movies, might have been something else.
Heat vision. The ability to see heat was something I wanted for a while.
Back in the old days, however, any sort of heat-vision camera cost more than I’d made in a decade.
You might think “superpower!”, but that’s not what I had in mind.
I had an idea, an image in my mind. Something that is trivial to photoshop, but is absurdly hard to actually see. Things and scenes lit up by the light emitted from a human. The glow of our heat lighting up the scene.
What would it take to make a picture like that? What makes a camera? And how can you make a heat-seeing camera yourself?
Centuries ago it was known that if you stand inside a dark box on a sunny day and poke a hole in one of the walls, you’ll get an image of the outside on the opposite wall. In the days of photography this concept became known as the pinhole camera: light traveling in straight lines through the small hole produces the image.
It’s a mechanism that is independent of the nature and wavelength of radiation it is being used with.
At the back of the box there was a glass plate in the earlier cameras, then film, then a digital sensor – something that can record the image. Of course, the pinhole was quickly replaced with more efficient optics, like lenses.
But for our goal the pinhole is important to remember, because most infrared light does not pass through glass.
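There is a catch, though: a pinhole has an optimum size for a given wavelength and box depth – too big and the image blurs geometrically, too small and diffraction blurs it instead. As a rough sketch (Rayleigh’s classic rule of thumb, with a made-up 10 cm box depth):

```python
import math

def optimal_pinhole_d(focal_m: float, wavelength_m: float) -> float:
    """Rayleigh's rule of thumb for the sharpest pinhole diameter:
    d = 1.9 * sqrt(f * lambda). Longer wavelengths want bigger holes."""
    return 1.9 * math.sqrt(focal_m * wavelength_m)

f = 0.1  # assumed box depth: 10 cm
print(f"visible (550 nm): {optimal_pinhole_d(f, 550e-9) * 1e3:.2f} mm")
print(f"MWIR   (3.4 um):  {optimal_pinhole_d(f, 3.4e-6) * 1e3:.2f} mm")
```

The infrared pinhole comes out more than twice as wide, which also means the infrared image is inherently softer.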
There are many bands of invisible light out there. The visible spectrum is, as the name suggests, what we see.
Above it, the shorter waves are the ultraviolet – light which is progressively more harmful, and also progressively dimmer due to our opaque atmosphere. Nice to try, not that interesting to linger there.
Below, the longer waves are the bands of infrared.
You might have seen the online guides on how to turn your digital camera into an “infrared camera”. Some even have the nerve to call it a “heat vision” camera.
The hack works because the light response of the silicon detectors used in most cameras extends into the near infrared (NIR) band, just below the visible red. To approximate what the eye would see, cameras are fitted with an IR-blocking filter (a “hot mirror”) – and the guides simply have you remove it.
Human eye sensitivity:
NIR is an interesting band – the grass and leaves are reflective in it, and you can get some surreal images of pitch black sky with brilliant-white trees.
Below the NIR there are:
SWIR – the short-wave infrared that is used in optic fiber communications.
MWIR – the middle/medium-wave IR, where things start to get interesting – objects begin to glow there at just over 100 °C.
And LWIR – the long/far-wave IR, where all the glowy human stuff happens.
Unfortunately, glass is opaque below NIR and can’t be used in heat vision cameras.
There are optical materials that pass and focus the lower bands, but they all cost money and are difficult to get.
This is where the pinhole camera comes into play – a pinhole forms an image at any wavelength you shine through it, eliminating the need for an exotic lens.
So, how do you make a digital camera?
We start with a light-proof box.
Inside the box is a coordinate table – an X-Y scanning rig to rasterize the image.
A regular digital camera has a matrix – a huge array of detectors all working in parallel. That can’t be made at home.
What can be made at home is a single light detector, based on a photodiode.
The signal from the photodiode gets amplified by a huge factor with a transimpedance amplifier, then fed through an ADC to get the digital data for the microcontroller to store on a microSD card.
This detector is mounted on the carriage of the X-Y rig, which scans it across the image – up and down, step left, up and down, step left – repeat a thousand times and you get a picture!…
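The scan loop itself is almost trivial. A minimal sketch, with hypothetical move_to() and read_adc() helpers standing in for the real stepper and ADC code:

```python
import numpy as np

WIDTH, HEIGHT = 100, 100  # steps per axis (assumed)

def move_to(x: int, y: int) -> None:
    """Hypothetical: step the X-Y rig's carriage to position (x, y)."""

def read_adc() -> float:
    """Hypothetical: one amplified, digitized reading from the photodiode."""
    return 0.0

def scan_image() -> np.ndarray:
    """Rasterize the image one pixel per mechanical position."""
    image = np.zeros((HEIGHT, WIDTH))
    for x in range(WIDTH):          # step left after each column
        for y in range(HEIGHT):     # sweep the carriage up and down
            move_to(x, y)
            image[y, x] = read_adc()
    return image

picture = scan_image()  # minutes of buzzing motors for one frame
```

With stubs in place of hardware the result is blank, of course – the point is only the structure: the whole “sensor” is one photodiode plus a lot of patience.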
The amount of sensitivity needed to pick up the faint light from the pinhole is massive. And we live in the age of AC electricity – an omnipresent hum at 50 Hz.
Fortunately, it’s easy to shield against with an age-old arrangement known as a Faraday cage. External electromagnetic fields cannot penetrate a conductive enclosure, so if you wrap your box in metal foil and use the foil as your ground reference, the noise vanishes.
And you start to see something.
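Shielding aside, mains hum can also be attacked in software: average each pixel over an integer number of 20 ms mains periods and the 50 Hz component integrates to zero. A simulated sketch (sample rate and amplitudes are made up):

```python
import numpy as np

FS = 10_000                    # assumed ADC sample rate, Hz
signal_dc = 0.2                # the faint "real" photocurrent level
t = np.arange(FS) / FS         # one second of samples
hum = 0.5 * np.sin(2 * np.pi * 50 * t)  # 50 Hz pickup, bigger than the signal
samples = signal_dc + hum

# Average over exactly 5 mains periods (5 * 20 ms = 100 ms = 1000 samples):
n = int(FS * 5 / 50)
pixel = samples[:n].mean()     # the hum cancels, the DC signal survives
print(pixel)                   # ~0.2
```

This is the same trick bench multimeters use – integrate over whole line cycles – and it costs nothing but scan time.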
More amplification, and new noise appears.
Now it’s the motors and internal electronics. The gains are enormous and everything that can be picked up gets picked up.
We need more foil.
Much more foil…
Actually, the foil above was an early failed experiment; the foil below is all it takes – just put the actual detector into its own Faraday cage!
With that done, the picture clears up.
Not much noise left.
A megapixel worth of vision, wrought by your own hands.
What would the sunset look like?
Not great, I guess. It’s still a black and white camera.
Let’s improve on that. To get colors we need more sensors.
But first, let’s fast forward a bit. This project spans years, and when I started there were no 3D printers. Eventually I made one, and all the clumsiness of the old rig got replaced with modern, 3D printed, well-fitted parts.
Sensor shielding got a solidity improvement as well.
Most importantly, I moved the ADC into the sensor box, and that allowed me to attach many boxes on the same digital bus.
Add some filters…
And you can start seeing the shades of the world.
There are many types of light bulbs. Let’s look at a room with infrared (red) and visible (blue) filters.
See how the lights have distinct colors? The bright red are incandescent, spewing out infrared waste. The blue are CCFLs, barely producing anything but visible.
Spot the LEDs.
Remember how I said grass is reflective in the NIR? Let’s combine NIR, green and blue.
But back to the true color.
The bands are there because the sensors are separated by some distance, so the images don’t overlap completely.
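Fixing those bands in post is a matter of shifting each channel by its sensor’s offset before stacking, then cropping to the region all three share. A sketch with made-up pixel offsets:

```python
import numpy as np

def compose_rgb(r, g, b, offsets):
    """Shift each channel by its sensor's horizontal offset (in pixels),
    crop to the overlapping region, and stack into an H x W x 3 image."""
    shifted = [np.roll(ch, -dx, axis=1) for ch, dx in zip((r, g, b), offsets)]
    crop = max(abs(dx) for dx in offsets)
    stacked = np.dstack(shifted)
    return stacked[:, crop:-crop] if crop else stacked

h, w = 50, 80
r, g, b = (np.random.rand(h, w) for _ in range(3))
rgb = compose_rgb(r, g, b, offsets=(0, 3, 6))  # hypothetical sensor spacing
print(rgb.shape)  # (50, 68, 3)
```

Integer-pixel shifts are enough here because the sensor boxes sit at fixed, known distances on the rig.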
And of course, no camera would be complete without a self-portrait.
It takes 5 minutes to complete a picture, so here I am, reading about color-balancing algorithms on a phone and trying not to move.
It’s surprisingly hard to get the colors right when all you have is the raw red, green and blue channels. I’ll never again curse “whoever programmed the Auto White Balance on that stupid camera”.
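The simplest of those algorithms – “gray world”, which assumes the scene averages out to neutral gray – is just a per-channel rescale. A minimal sketch:

```python
import numpy as np

def gray_world(rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so its mean matches the overall mean.
    Works until the scene is dominated by one color, then fails badly."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(rgb * gains, 0.0, 1.0)

# Simulated raw capture with a strong color cast:
raw = np.random.rand(50, 50, 3) * np.array([1.0, 0.7, 0.4])
balanced = gray_world(raw)
print(balanced.reshape(-1, 3).mean(axis=0))  # roughly equal channel means
```

Real auto-white-balance code is far messier than this – which is exactly the lesson of that self-portrait.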
But what about the unseen? The whole idea was to see what no one saw before.
That’s a whole other story.
The first part of it is easy — let’s look at the ultraviolet.
Thick, opaque air.
A sun that looks dim.
You don’t really see the ozone layer, just the air itself glowing – same blue sky, only thicker. Would have been nice to see the fog of the ozone, but it’s hidden behind the air.
Now, for the magic.
To see in MWIR, we need an InAs photodiode – not something that’s easy to come by. They are made by the Japanese company Hamamatsu, and they do not cost that much.
Or so I thought…
This was my first encounter with ITAR – the “how dare you want interesting stuff?” restrictions. Before then I never realized just how much the USA, uh…, loves the whole world.
Basically, the Hamamatsu folks wanted me to provide a ton of paperwork proving that I won’t be making nuclear bombs and guided missiles with their parts, because that’s what they are commonly used for. MWIR is where jet fighters glow brightly against a pitch-black sky.
Fortunately, we in Russia also know how to make heat-seeking missiles, and after some searching I found a local supplier that was more than happy to provide me with a couple of 3.4 μm InAs photodiodes.
That is where the trip into the never-before-seen begins.
And it begins with figuring out that the assumptions above about pinholes and billion-to-one amplifiers no longer apply.
First, the InAs photodiode drifts. A lot. That’s what a blank picture looks like.
Then, with a pinhole and maximum gain I can give it, that’s what a soldering iron looks like:
Okay. Problems exist to be solved.
Let’s dump the pinhole and order a ZnSe lens from China.
Turns out that since the project’s inception the situation changed — China was making plenty of CO2 lasers, which use ZnSe lenses. And these lenses focus the light I need perfectly fine. Oh, and they are very cheap.
Some 3D printing later, I got a proper lens assembly with focusing capability.
In the dark of the night you can see a lot with it now (regular photodiode).
Next, the drift.
It’s not that fast, and we can assume that it’s stable during one scan line.
So, let’s add a flap at the end…
And modify the processing software to even out the image based on the reference band.
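With the flap covering the start of every scan line, each line carries its own dark reference, and the correction becomes a per-line subtraction. A sketch, assuming the first few columns of each line are scanned while the detector still sees the flap:

```python
import numpy as np

DARK_COLS = 8  # columns scanned against the flap (assumed)

def correct_drift(scan: np.ndarray) -> np.ndarray:
    """Subtract each line's own dark-flap reading, cancelling the slow
    detector drift (assumed constant within one scan line)."""
    dark = scan[:, :DARK_COLS].mean(axis=1, keepdims=True)
    return scan[:, DARK_COLS:] - dark

# Simulated: a gradient scene plus a baseline that drifts line by line.
lines = 40
drift = np.linspace(0.0, 5.0, lines).reshape(-1, 1)
signal = np.tile(np.linspace(0.0, 1.0, 100), (lines, 1))
scan = np.hstack([np.zeros((lines, DARK_COLS)), signal]) + drift
corrected = correct_drift(scan)
print(np.abs(corrected - signal).max())  # ~0: the drift is gone
```

The drift that is slower than one scan line simply disappears; only drift within a line survives, and that is small.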
Let’s look at that soldering iron again.
The most amazing part is not that it glows, but that it glows brightly enough to illuminate the stand.
It’s not just the “temperature mapped to an image” of a regular heat-vision camera – we see the actual mid-wave light being emitted and reflected: a soldering iron turned into a lightbulb!
The lower limit for the glow is somewhere around 100 °C. Here are some hot resistors:
MWIR is neither quite heat nor quite light – it’s both. Look outside and you’ll see the world illuminated by the MWIR radiation from our sun.
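That ~100 °C threshold falls straight out of Planck’s law: at 3.4 μm, spectral radiance climbs very steeply with temperature. A quick comparison (the constants and formula are standard physics; the temperatures are my illustrative picks):

```python
import math

H = 6.626e-34  # Planck constant, J*s
C = 2.998e8    # speed of light, m/s
K = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance, W / (m^2 * sr * m)."""
    x = H * C / (wavelength_m * K * temp_k)
    return 2 * H * C**2 / wavelength_m**5 / (math.exp(x) - 1)

wl = 3.4e-6                # the InAs photodiode's band
body = planck(wl, 310)     # human skin, ~37 C
hot  = planck(wl, 373)     # the "glow threshold", ~100 C
tip  = planck(wl, 600)     # a hot soldering iron tip
print(hot / body)          # ~10x brighter than a human
print(tip / body)          # hundreds of times brighter
```

That order-of-magnitude jump between 37 °C and 100 °C is why hot resistors pop out while the photographer himself barely registers at this wavelength.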
Scenes like these have never been seen by anyone before – outside of, perhaps, some specialty labs that make sensors, and the people who built the cameras for climate-watching satellites like Terra and Aqua.
What other sensors are there? I mentioned SWIR, the band that is used for the optic fiber. There, the parts are easy to come by and are not restricted.
Unfortunately, the SWIR photodiode is also sensitive to the regular light.
Fortunately, silicon is opaque all the way down to the upper edge of SWIR, so a wafer of it filters out all the things we don’t want to see.
Unfortunately, the raw silicon wafers you can find on ebay are not really transparent. Looking at a lightbulb through one produces only a blur.
Fortunately, my father’s old solar panel fab is still in business and they have some scrap of wafer-grade silicon.
That worked great.
Let’s look around.
Eh, it’s kinda cute but not that special. Same deep-black sky the NIR is famous for. The snow is almost black – no sky light for it to reflect, I guess.
The big difference is that the vegetation is not as reflective, so you get the “blackness of space” sky over regular-ish landscapes.
It’s almost like being on the airless, derelict Earth – preserved under the void after whatever disaster befell it.
That is about all I got as far as sights not seen before go.
Unfortunately, no sensor that I can easily get or use can see deep enough into the LWIR to make the humans glow. You need a cryogenically cooled photodiode (HgCdTe under liquid nitrogen) to do that, and that’s not quite so easy to get.
So the story will continue.
You might wonder about a few bits of this build, a few words I said and a few unusual solutions. All of them hint at another use in mind. Another kind of radiation to see in…? Oh yes, there is one more.
But that would be a whole other story, for another time. Stay tuned!