The patent, called "Video transmission using content-based frame search," suggests that video frame data could be used to help reduce the amount of data being sent between devices during video calls. In fact, it describes how it would be possible to track image parameters, like facial features, within the video and store them in a searchable database, tagged to their corresponding frame.
Then, when bandwidth issues bite, a device like an iPhone or iPad could just send the details of how the frame looks to the server—using way less data than sending the frame itself—allowing it to search for a similar image from earlier in the conversation, or perhaps even an entirely different conversation. Alternatively, if no such image exists, the details could be used to morph a suitably similar image to fit.
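To make the idea concrete, here's a toy sketch of that lookup step—the feature names, threshold, and morph fallback are my own illustration, not anything from the patent: each stored frame is tagged with a small feature vector, and a dropped frame is replaced by whichever stored frame's vector sits closest to the descriptor the device sent.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_best_frame(database, descriptor, max_distance=0.5):
    """Search stored frames for the closest match to a descriptor.

    database: list of (frame_id, feature_vector) pairs.
    Returns the best frame_id, or None if nothing is close enough,
    signalling the caller should fall back to morphing instead.
    """
    best_id, best_d = None, float("inf")
    for frame_id, features in database:
        d = distance(features, descriptor)
        if d < best_d:
            best_id, best_d = frame_id, d
    return best_id if best_d <= max_distance else None

# Made-up descriptors: (eye openness, mouth openness, head tilt)
db = [("frame_12", [0.9, 0.1, 0.0]),
      ("frame_47", [0.2, 0.8, 0.1])]
print(find_best_frame(db, [0.88, 0.12, 0.02]))  # → frame_12
```

The point is that a three-number descriptor is vastly cheaper to transmit than a full video frame, which is where the bandwidth saving comes from.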
Either way, the result is a system that could fill in the blanks when frames drop out because of a crappy connection. In turn, that provides a seemingly continuous video in low-bandwidth conditions.
If you're wondering—rightly—what the system might do if the backgrounds differ even though the facial expressions match, then Apple has an answer for that too. It would simply send two separate images—a high-res image of the correctly chosen saved face, and a very low-res image from the real-time conversation—then stitch 'em together. It might not be perfect, but you'd barely notice.
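That stitching step boils down to compositing: upscale the low-res live background, then paste the high-res saved face over the right region. A toy sketch, with images as plain 2D pixel grids and coordinates I've invented for illustration:

```python
def upscale(image, factor):
    """Nearest-neighbour upscale of a 2D grid of pixel values."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in image for _ in range(factor)]

def paste(background, patch, top, left):
    """Overwrite a rectangle of background with the high-res patch."""
    for dy, patch_row in enumerate(patch):
        for dx, pixel in enumerate(patch_row):
            background[top + dy][left + dx] = pixel
    return background

low_res = [[0, 0], [0, 0]]   # tiny background from the live call
face    = [[9, 9], [9, 9]]   # high-res face crop from the database
result  = paste(upscale(low_res, 2), face, top=1, left=1)
# result is a 4x4 grid with the face pixels occupying the centre
```

A real implementation would blend edges rather than hard-paste, but the division of labour—cheap background, expensive-but-cached face—is the same.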
All told, it's a neat idea that could both keep conversations running when connections are awful and save on data transmission the rest of the time. As ever, though, this is just a patent, which means that even though someone at Apple has dreamt up the idea, there's no guarantee it'll ever appear in a consumer product. We can hope though, right? [Apple Insider]