Nocturnal predators have an ingrained superpower: even in pitch-black darkness, they can easily survey their surroundings, homing in on tasty prey hidden in a monochrome landscape.

Hunting for your next supper isn’t the only perk of seeing in the dark. Take driving down a rural dirt road on a moonless night. Trees and bushes lose their vibrancy and texture. Animals that skitter across the road become shadowy smears. Despite their sophistication during daylight, our eyes struggle to process depth, texture, and even objects in dim lighting.

It’s no surprise that machines have the same problem. Although they’re armed with a myriad of sensors, self-driving cars are still trying to live up to their name. They perform well in perfect weather and on roads with clearly marked traffic lanes. But ask the cars to drive in heavy rain or fog, through smoke from wildfires, or on roads without streetlights, and they struggle.

This month, a team from Purdue University tackled the low-visibility problem head-on. Combining thermal imaging, physics, and machine learning, their technology allowed a visual AI system to see in the dark as if it were daylight.

At the core of the system are an infrared camera and an AI trained on a custom database of images to extract detailed information from its surroundings—essentially, teaching itself to map the world using heat signals. Unlike previous systems, the technology, called heat-assisted detection and ranging (HADAR), overcame a notorious stumbling block: the “ghosting effect,” which usually produces smeared, ghost-like images that are hardly useful for navigation.

Giving machines night vision doesn’t just help with autonomous vehicles. A similar approach could also bolster efforts to track wildlife for preservation, or help with long-distance monitoring of body heat at busy ports as a public health measure.

“HADAR is a special technology that helps us see the invisible,” said study author Xueji Wang.

Heat Wave

We’ve taken plenty of inspiration from nature to train self-driving cars. Earlier generations adopted sonar and echolocation as sensors. Then came Lidar scanning, which sweeps laser beams in multiple directions, finding objects and calculating their distance based on how long the light takes to bounce back.

Although powerful, these detection methods come with a huge stumbling block: they’re hard to scale up. The technologies are “active,” meaning each AI agent—for example, an autonomous vehicle or a robot—will need to constantly scan and collect information about its surroundings. With multiple machines on the road or in a workspace, the signals can interfere with one another and become distorted. The overall level of emitted signals could also potentially damage human eyes.

Scientists have long looked for a passive alternative. Here’s where infrared signals come in. All materials—living bodies, cold cement, cardboard cutouts of people—emit a heat signature. These are readily captured by infrared cameras, whether out in the wild for monitoring wildlife or in science museums. You might have tried it before: step up, and the camera shows a two-dimensional blob of you, with different body parts emanating heat on a brightly colored scale.

Unfortunately, the resulting images look nothing like you. The edges of the body are smeared, and there’s little texture or sense of 3D space.

“Thermal pictures of a person’s face show only contours and some temperature contrast; there are no features, making it seem like you have seen a ghost,” said study author Dr. Fanglin Bao. “This loss of information, texture, and features is a roadblock for machine perception using heat radiation.”

This ghosting effect occurs even with the most sophisticated thermal cameras due to physics.

You see, from living bodies to cold cement, all materials send out heat signals. The surrounding environment pumps out heat radiation too. When trying to capture an image based on thermal signals alone, ambient heat noise blends with the signals emitted from the object, resulting in hazy images.
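In rough terms (using notation of our own rather than the paper’s), the light a thermal camera collects from an opaque surface at each wavelength can be sketched as

$$ S_\lambda = \varepsilon_\lambda\, B_\lambda(T) + (1-\varepsilon_\lambda)\, X_\lambda $$

where $B_\lambda(T)$ is the glow the object itself gives off at temperature $T$ (Planck’s blackbody radiation), $\varepsilon_\lambda$ is its emissivity, and $X_\lambda$ is ambient radiation from the surroundings reflecting off it. From a single measurement of $S_\lambda$ alone, the emitted and reflected parts cannot be told apart, and that blending is the physical root of the ghosting.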

“That’s what we really mean by ghosting—the lack of texture, lack of contrast, and lack of information inside an image,” said Dr. Zubin Jacob, who led the study.

Ghostbusters

HADAR went back to basics, analyzing thermal properties that essentially describe what makes something hot or cold, said Jacob.

Thermal images are made of useful data streams jumbled together. They don’t just capture the temperature of an object; they also contain information about its texture and depth.

As a first step, the team developed an algorithm called TeX, which disentangles all of the thermal data into useful bins: texture, temperature, and emissivity (how efficiently a surface gives off heat). The algorithm was then trained on a custom library that catalogs how different materials generate heat signals across the light spectrum.
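To make the idea concrete, here is a minimal sketch in Python of what such a disentangling step could look like. The wavelength bands, the tiny material library, the assumed ambient temperature, and the brute-force fit are illustrative stand-ins rather than the authors’ actual TeX algorithm; the point is only that measurements at several infrared wavelengths, combined with a catalog of how materials emit heat, let you solve for temperature and material at once.

```python
# Illustrative sketch of a TeX-style decomposition. The bands, material
# library, ambient temperature, and brute-force search are toy assumptions,
# not the published algorithm.
import numpy as np

# Physical constants for Planck's law (SI units).
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance at a given wavelength and temperature."""
    return (2 * H * C**2 / wavelength_m**5) / (
        np.exp(H * C / (wavelength_m * KB * temp_k)) - 1)

# Hypothetical long-wave infrared bands (8, 10, and 12 micrometers).
bands = np.array([8e-6, 10e-6, 12e-6])

# Toy material library: emissivity of each material in each band.
library = {
    "skin":      np.array([0.98, 0.97, 0.96]),
    "cardboard": np.array([0.90, 0.88, 0.85]),
    "steel":     np.array([0.20, 0.18, 0.15]),
}

# Radiation arriving from the surroundings, assumed known (290 K ambient).
ambient = planck(bands, 290.0)

def measured_radiance(emissivity, temp_k):
    """Forward model: light the object emits plus ambient light it reflects."""
    return emissivity * planck(bands, temp_k) + (1 - emissivity) * ambient

def decompose(signal):
    """Brute-force fit: which material and temperature best explain one pixel?"""
    best = None
    for name, e in library.items():
        for t in np.arange(270.0, 320.0, 0.5):
            err = np.sum((measured_radiance(e, t) - signal) ** 2)
            if best is None or err < best[0]:
                best = (err, name, float(t))
    return best[1], best[2]  # recovered material ("texture") and temperature

# Simulate a pixel of warm skin at 305 K, then recover material and temperature.
pixel = measured_radiance(library["skin"], 305.0)
print(decompose(pixel))  # ('skin', 305.0)
```

In this toy version the recovered material label stands in for texture; the real system works on whole images, a far larger material library, and ambient radiation that is not known in advance.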

The algorithms have our understanding of thermal physics built in, said Jacob. “We also used some advanced cameras to put all the hardware and software together and extract optimal information from the thermal radiation, even in pitch darkness,” he added.

Current thermal cameras can’t extract these signals optimally from thermal images alone. What was missing was a sort of “color” data. Much as our eyes are biologically wired to three primary colors—red, green, and blue—a thermal camera can “see” at multiple wavelengths beyond the reach of the human eye. These “colors” are critical for the algorithm to decipher information; missing wavelengths are akin to color blindness.

Using the model, the team was able to dampen ghosting effects and obtain clearer and more detailed images from thermal cameras.

The demonstration shows HADAR “is poised to revolutionize computer vision and imaging technology in low-visibility conditions,” said Drs. Manish Bhattarai and Sophia Thompson, from Los Alamos National Laboratory and the University of New Mexico, Albuquerque, respectively, who were not involved in the study.

Late-Night Drive With Einstein

In a proof of concept, the team pitted HADAR against another AI-driven computer vision model. The arena, set in Indiana, is straight out of The Fast and the Furious: late night, low light, outdoors, with a human being and a cardboard cutout of Einstein standing in front of a black car.

Compared to its rival, HADAR analyzed the scene in one swoop, discerning between glass, rubber, steel, fabric, and skin. The system readily distinguished human from cardboard. It could also gauge depth regardless of external light. “The accuracy to range an object in the daytime is the same…in pitch darkness, if you’re using our HADAR algorithm,” said Jacob.

HADAR isn’t without faults. The main trip-up is the price. According to New Scientist, the entire setup is not just bulky, but costs more than $1 million for its thermal camera and military-grade imager. (HADAR was developed with the help of DARPA, the Defense Advanced Research Projects Agency known for championing adventurous ventures.)

The system also needs to be calibrated on the fly, and can be influenced by a variety of environmental factors not yet built into the model. There’s also the issue of processing speed.

“The current sensor takes around one second to create one image, but for autonomous cars we need around 30 to 60 hertz frame rate, or frames per second,” said Bao.

For now, HADAR can’t yet work out of the box with off-the-shelf thermal cameras from Amazon. However, the team is eager to bring the technology to market in the next three years, finally bridging light and dark.

“Evolution has made human beings biased toward the daytime. Machine perception of the future will overcome this long-standing dichotomy between day and night,” said Jacob.

Image Credit: Jacob, Bao, et al/Purdue University
