Facebook Shows Technology for Improving 360 Video Experience
John Mannes, a machine learning writer for TechCrunch, recently wrote this piece on how Facebook is using machine learning to predict where viewers will look in 360 video.
Facebook is improving the 360 video experience by predicting where you will look
From the stage of F8, Joaquin Quinonero, Facebook’s Director of Applied Machine Learning, described a new technique the company is using to improve the viewing experience for 360 videos. The format is challenging to deliver because of its size, but Facebook is using machine learning to reduce the number of pixels that have to be rendered at any one time. By predicting where a viewer will look next, the system can give rendering priority to that region, which is particularly helpful for users with slower internet connections.
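The article doesn’t include code, but as a rough, purely illustrative sketch, a streaming client could rank the tiles of an equirectangular frame by distance from a predicted gaze point and assign the highest bitrates to the nearest tiles. The tile grid, bitrate tiers, and `prioritize_tiles` function below are all assumptions for the sake of the example, not Facebook’s implementation.

```python
import math

# Hypothetical illustration: given a predicted gaze point, assign higher
# streaming bitrates to the equirectangular tiles the viewer is most
# likely to look at next.

TILE_COLS, TILE_ROWS = 8, 4          # assumed 8x4 tile grid over the frame
BITRATES_KBPS = [4000, 1500, 500]    # assumed high/medium/low quality tiers

def tile_center(col, row):
    """Center of a tile in normalized [0, 1) frame coordinates."""
    return ((col + 0.5) / TILE_COLS, (row + 0.5) / TILE_ROWS)

def prioritize_tiles(predicted_gaze):
    """Rank tiles by distance from the predicted gaze point and map the
    nearest tiles to the highest bitrate tier."""
    gx, gy = predicted_gaze
    tiles = []
    for row in range(TILE_ROWS):
        for col in range(TILE_COLS):
            cx, cy = tile_center(col, row)
            # Wrap horizontally: a 360 frame is continuous at the seam.
            dx = min(abs(cx - gx), 1.0 - abs(cx - gx))
            dy = abs(cy - gy)
            tiles.append((math.hypot(dx, dy), col, row))
    tiles.sort()
    plan = {}
    for rank, (_, col, row) in enumerate(tiles):
        tier = min(rank * len(BITRATES_KBPS) // len(tiles),
                   len(BITRATES_KBPS) - 1)
        plan[(col, row)] = BITRATES_KBPS[tier]
    return plan

if __name__ == "__main__":
    # Suppose the model predicts the viewer will look slightly right of center.
    plan = prioritize_tiles(predicted_gaze=(0.6, 0.5))
    print(plan[(4, 2)], plan[(0, 0)])  # near tile gets more bits than far tile
```

In practice the predicted gaze point would come from a trained model rather than being hard-coded, and the client would refresh the plan as the prediction updates.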
The status quo for 360 videos is reactive rather than proactive rendering. Mike Coward, engineering director for Facebook’s VR video team, echoed the frustration of users to me when he described the unpleasantness of turning your head in VR only to see a blurry scene.

One partial fix is to optimize compression, and teams at the company are already using machine learning to select among the thousand-plus compression techniques available for individual snippets of video. The other way to reduce the streaming load is simply to render less. Rather than reducing quality across the board, Facebook’s approach improves resolution for exactly what you’re most likely to look at next.

Step one was to use the resources of the company to monitor where people actually do look when watching 360 videos. Facebook’s VR video team created heat maps highlighting the most popular spots that users looked at within videos; a sketch of that aggregation step follows below. From there, Facebook built a generative saliency model using a deep neural network, which makes it possible to run predictions on new videos that haven’t previously been watched or studied. Read more about this innovative technology in the original TechCrunch article.
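To make the heat-map step concrete, here is a minimal sketch assuming gaze logs arrive as normalized (x, y) frame coordinates. The grid size, Gaussian spread, and `build_heat_map` function are illustrative assumptions, not Facebook’s pipeline; a saliency model would then be trained to reproduce maps like these from the video frames alone.

```python
import numpy as np

# Hypothetical sketch: aggregate logged view directions from many playbacks
# into a per-frame heat map of where viewers actually looked.

GRID_W, GRID_H = 64, 32   # assumed resolution of the heat-map grid
SIGMA = 2.0               # assumed spread (in cells) of each gaze sample

def build_heat_map(gaze_samples):
    """Splat each (x, y) gaze sample, given in normalized [0, 1) frame
    coordinates, onto the grid as a Gaussian blob, then normalize."""
    ys, xs = np.mgrid[0:GRID_H, 0:GRID_W]
    heat = np.zeros((GRID_H, GRID_W))
    for gx, gy in gaze_samples:
        cx, cy = gx * GRID_W, gy * GRID_H
        # Wrap horizontal distance: equirectangular frames are continuous
        # at the left/right seam.
        dx = np.minimum(np.abs(xs - cx), GRID_W - np.abs(xs - cx))
        dy = ys - cy
        heat += np.exp(-(dx**2 + dy**2) / (2 * SIGMA**2))
    return heat / heat.max()

if __name__ == "__main__":
    # Simulated logs: most viewers fixate near the front-center of the scene.
    rng = np.random.default_rng(0)
    samples = np.clip(rng.normal([0.5, 0.5], 0.05, size=(200, 2)), 0, 0.99)
    heat = build_heat_map(samples)
    peak = np.unravel_index(heat.argmax(), heat.shape)
    print("hottest cell:", peak)  # should land near the grid center
```

Wrapping the horizontal distance matters because an equirectangular frame is continuous at its left and right edges, so a viewer looking at the seam should heat up cells on both sides of it.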