What Is Temporal Noise Reduction?

One of the new iPad's video features—along with 1080p recording and video stabilization—is temporal noise reduction. Apple claims it will improve the quality of footage in low-light conditions. OK, but what the hell is it?

It's a clever technique...

There's no getting around this: temporal noise reduction is tough to explain. That's because it's a complex process used to improve image and video quality, and what follows is very much a simplified explanation of what happens.

...that greatly reduces the noise of video...

When you record footage in low-light conditions, the resulting images are often noisy—speckled with pixelation that looks like a staticky TV screen. Why? Because there's just not enough light hitting the sensor. In bright conditions, all the light provides a huge signal; noise—from electrical interference or imperfections in the detector—is still present, but it's drowned out. In low light, the signal is much smaller, which means the noise is painfully apparent.
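You can see why this happens with a toy sensor model. The numbers below are entirely made up for illustration: a fixed amount of sensor noise gets added to whatever light arrives, so the signal-to-noise ratio collapses when the scene is dim.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor model: a fixed amount of read noise (electrical
# interference, detector imperfections) added to whatever light arrives.
read_noise_std = 5.0  # arbitrary units

for label, signal_level in [("bright scene", 200.0), ("dim scene", 10.0)]:
    # Simulate 10,000 pixels all viewing the same brightness.
    pixels = signal_level + rng.normal(0.0, read_noise_std, size=10_000)
    snr = signal_level / read_noise_std
    print(f"{label}: signal {signal_level:.0f}, SNR {snr:.0f}")
# → bright scene: signal 200, SNR 40
# → dim scene: signal 10, SNR 2
```

Same noise in both cases—but at SNR 40 it's invisible, while at SNR 2 the speckle dominates the picture.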


...by comparing what pixels actually move...

So, onto temporal noise reduction itself. Basically, it exploits the fact that with video there are two pools of data to use: each separate image, and the knowledge of how the frames change with time. Using that information, it's possible to create an algorithm that can work out which pixels have changed between frames. But it's also possible to work out which pixels are expected to change between frames. For instance, if a car's moving from left to right across the frame, software can quickly work out that the pixels to its right should change dramatically.
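The first half of that—spotting which pixels actually changed—is just frame differencing. Here's a minimal sketch with made-up 8×8 frames in which a bright "car" block shifts one pixel to the right; only the car's leading and trailing edges register as changed:

```python
import numpy as np

# Two hypothetical 8x8 grayscale frames: a bright "car" block moves one
# pixel to the right between them; everything else is static background.
frame_a = np.zeros((8, 8))
frame_a[3:5, 1:3] = 200.0          # car in frame A
frame_b = np.zeros((8, 8))
frame_b[3:5, 2:4] = 200.0          # car shifted right in frame B

diff = np.abs(frame_b - frame_a)   # how much each pixel actually changed
moved = diff > 50.0                # threshold is an arbitrary choice here

# Only the car's trailing edge (now background) and leading edge (newly
# covered background) change dramatically; the overlap and the rest of
# the scene stay put.
print("changed pixels:", int(moved.sum()))
# → changed pixels: 4
```

A real implementation uses motion estimation to *predict* where those changes should land, rather than just detecting them after the fact—but the raw material is the same per-pixel difference.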

...and guessing what is noise and what is actual detail...

By comparing what is expected to change between frames with what actually does, it's possible to make a very good educated guess as to which pixels are noisy and which aren't. Then, the pixels that are deemed noisy can have a new value calculated for them based on their neighbors.
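The simplest version of that repair is temporal blending: where a pixel changed less than expected (probably noise, not motion), average it with the previous frame; where it changed a lot (probably real motion), leave it alone. This is a toy sketch with invented values, not Apple's actual pipeline—the scene is static and the true brightness is 100 everywhere:

```python
import numpy as np

rng = np.random.default_rng(2)

# A static scene captured in two consecutive noisy frames (hypothetical
# values): the true pixel brightness is 100 everywhere.
true_scene = np.full((8, 8), 100.0)
frame_a = true_scene + rng.normal(0.0, 10.0, (8, 8))
frame_b = true_scene + rng.normal(0.0, 10.0, (8, 8))

# Where the frames barely differ, the change is probably noise, so blend
# them; where they differ a lot, trust the newest frame (real motion).
diff = np.abs(frame_b - frame_a)
is_static = diff < 30.0
denoised = np.where(is_static, 0.5 * (frame_a + frame_b), frame_b)

# Averaging two frames of uncorrelated noise shrinks it by roughly sqrt(2).
print("noisy error:   ", float(np.abs(frame_b - true_scene).mean()))
print("denoised error:", float(np.abs(denoised - true_scene).mean()))
```

Averaging more frames cuts the noise further, which is why the technique pays off so dramatically in low light—at the cost of needing good motion prediction so that moving objects don't get smeared.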


...to make low-light video super-sharp.

So, the process manages to sneakily use data present in the video stream to attenuate the effects of noise and improve the image. It's something that's been used in 3D rendering for years, but it requires a fair amount of computational grunt. Clearly, the new iPad can handle that—and as a result, we'll be fortunate enough to have better low-light video.