There are lots of easy-to-use apps that allow you to generate 3D models of physical objects using just your smartphone’s camera. But when it comes to calculating the exact size of an object, those same apps are actually pretty inaccurate. They don’t have to be, though, because researchers at Carnegie Mellon have found a way to boost measurement accuracy using additional data from a mobile device’s built-in motion sensors.
The same motion sensor that tells your smartphone when to automatically switch the display from portrait to landscape mode can be used to make remarkably accurate measurements of what its camera is seeing. That sensor, called an inertial measurement unit (IMU), doesn’t produce especially precise data on its own. But as a smartphone is waved around an object being turned into a 3D model, the IMU data is more than accurate enough to let the processing software precisely calculate distance, and with it the real-world size of the object being scanned.
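The basic idea can be sketched in a few lines of code. Camera-only 3D reconstruction recovers the shape of an object but not its absolute size, because the trajectory it estimates is in arbitrary units. Double-integrating the accelerometer readings gives the phone's motion in real metres, and comparing the two path lengths yields the missing scale factor. This is a minimal illustration of that principle, not the researchers' actual algorithm; the function name and the simplifying assumptions (gravity already removed, trajectories already time-aligned) are mine.

```python
import numpy as np

def estimate_metric_scale(accel, timestamps, visual_positions):
    """Estimate the metric scale of an unscaled visual trajectory.

    accel:            (N, 3) accelerometer samples in m/s^2, gravity removed
    timestamps:       (N,) sample times in seconds
    visual_positions: (N, 3) camera positions from vision, arbitrary units

    Assumes both trajectories are time-aligned; a real system would also
    have to handle sensor bias, noise, and drift.
    """
    dt = np.diff(timestamps)

    # Double-integrate acceleration (Euler integration) to get the phone's
    # displacement in real metres.
    velocity = np.concatenate(
        [[np.zeros(3)], np.cumsum(accel[:-1] * dt[:, None], axis=0)])
    positions = np.concatenate(
        [[np.zeros(3)], np.cumsum(velocity[:-1] * dt[:, None], axis=0)])

    # Compare total path length in metres (IMU) against the same path in
    # the vision system's arbitrary units; the ratio is the scale factor.
    imu_dist = np.linalg.norm(np.diff(positions, axis=0), axis=1).sum()
    vis_dist = np.linalg.norm(np.diff(visual_positions, axis=0), axis=1).sum()
    return imu_dist / vis_dist
```

Multiplying every coordinate in the visual reconstruction by the returned factor puts the whole 3D model into metres, which is what makes measurements like pupil distance possible.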
When used with an application designed to track human faces, the new approach was able to measure the distance between a subject’s pupils to within half a millimeter. So the next time you try on virtual glasses from a website using your phone’s front-facing camera, the app could also calculate whether or not they’d actually fit your face. But the technology has endless applications, from allowing 3D printers to act as perfect photocopiers for physical objects, to improving the vision systems of autonomous cars that need to know exactly how far away they are from everything around them.