Despite the algorithm having very little information to work from, especially compared to the heaps of data a LIDAR system in an autonomous car processes every second, it’s still able to create a 3D representation of the object hidden behind the obstruction. The reconstruction is surprisingly accurate, too, given that the human eye can’t make out anything at all in the same situation.
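To give a rough sense of the underlying idea, here is a toy backprojection sketch in Python. It is not the Stanford team’s actual reconstruction algorithm; it simply assumes the scanner records a photon arrival-time histogram at each point on a 2D scan grid (as single-photon LIDAR setups typically do) and maps each arrival-time bin back to the depth it implies. The grid sizes, time-bin width, and noise levels are illustrative assumptions.

```python
# A toy backprojection sketch, not the Stanford group's actual algorithm.
# Assumptions (all illustrative): the scanner records a photon arrival-time
# histogram at each point on a 2D scan grid, light travels at c with the
# scattering delay ignored, and the detector's time bins are 50 ps wide.
import numpy as np

C = 3e8             # speed of light in m/s (assumed; foam scattering ignored)
BIN_WIDTH = 50e-12  # assumed detector time resolution: 50 ps per bin

def backproject(histograms, depth_bins=64, max_depth=1.0):
    """Map each arrival-time bin to the round-trip depth it implies and
    accumulate the photon counts into a coarse 3D volume."""
    ny, nx, nt = histograms.shape
    volume = np.zeros((ny, nx, depth_bins))
    depths = np.linspace(0.0, max_depth, depth_bins)
    for t in range(nt):
        d = C * t * BIN_WIDTH / 2.0   # round-trip time -> one-way depth
        if d > max_depth:
            break                      # later bins fall outside the volume
        k = np.argmin(np.abs(depths - d))
        volume[:, :, k] += histograms[:, :, t]
    return volume

# Synthetic test: a small reflector 0.3 m away, buried in background noise.
rng = np.random.default_rng(0)
hist = rng.poisson(0.1, size=(32, 32, 256)).astype(float)  # noise photons
signal_bin = int(2 * 0.3 / C / BIN_WIDTH)                  # bin for 0.3 m
hist[8:24, 8:24, signal_bin] += 20.0                       # signal photons
vol = backproject(hist)
# The brightest depth slice should land near 0.3 m (index ~19 of 64 here).
print("brightest depth index:", vol.sum(axis=(0, 1)).argmax())
```

In practice the photons have bounced around inside the scattering material, so real methods have to model that diffusion rather than assume straight-line travel as this sketch does.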

A three-dimensional reconstruction of the reflective letter “S,” as seen through the 1-inch-thick foam.
Image: Stanford Computational Imaging Lab

Is the technology ready to be implemented in autonomous vehicles already roaming public roads? Not quite. In testing, the custom algorithm could crunch the data and generate a 3D representation of the hidden object in real time, but the scanning itself took anywhere from a minute to an hour, depending on how reflective the hidden object was. The setup the researchers tested also covered only a fraction of the field of view an autonomous car would need in order to safely navigate foggy conditions.

Improvements will be needed to turn this into a viable, real-time solution before an autonomous car could safely drive down the road on a foggy day, even at modest speeds. But it’s not like humans are getting any better at the task either. There are some more immediate applications for the technology, however, including detailed and accurate medical imaging that doesn’t require doctors to resort to invasive exploratory surgery. And in the future, space-faring probes could carry imaging devices that rely on this technology to see through clouds and other particulates in a distant planet’s atmosphere without having to actually land on the surface.