Facebook researchers appear to have figured out a way to use machine learning to give Oculus Quest apps roughly 67% more GPU power to work with, though Facebook stresses that this is “purely research.”
The Oculus Quest is a standalone headset: all of its computing hardware lives inside the device itself, which imposes power and size constraints. Facebook also wants to sell the hardware at a relatively affordable price. Because of that, the Quest uses a smartphone chip that is significantly less powerful than a gaming PC.
Creating next-generation AR and VR experiences will require finding new and more efficient ways of rendering high-quality, low-latency graphics.
The new technique works by rendering at a lower resolution than usual and then upscaling the center of the view with a machine learning “super-resolution” algorithm. These algorithms have become popular in recent years, with some websites even letting users upload any image from their PC or phone to be upscaled by AI.
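The core idea can be sketched in a few lines: render the frame at reduced resolution, then upscale only a central region. This is a minimal illustration, not the paper’s pipeline; the function names are hypothetical, and a simple nearest-neighbor repeat stands in for the learned super-resolution network so the sketch stays runnable.

```python
import numpy as np

def upscale_2x(region):
    # Stand-in for a learned super-resolution model: a real system would
    # run a neural network here; nearest-neighbor repeat keeps this runnable.
    return region.repeat(2, axis=0).repeat(2, axis=1)

def upscale_center(low_res_frame, center_frac=0.5):
    """Upscale only the central region of a reduced-resolution frame.

    low_res_frame: (H, W, 3) array rendered at lower-than-native resolution.
    Returns the central region at 2x resolution; a full pipeline would
    composite this over a cheaply upscaled periphery.
    """
    h, w, _ = low_res_frame.shape
    ch, cw = int(h * center_frac), int(w * center_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    center = low_res_frame[top:top + ch, left:left + cw]
    return upscale_2x(center)

frame = np.zeros((72, 128, 3), dtype=np.uint8)  # toy low-res frame
sharp_center = upscale_center(frame)
print(sharp_center.shape)  # (72, 128, 3): the central half, doubled per axis
```

Upscaling only the center exploits the fact that, in a headset, detail in the periphery is far less noticeable than detail where the user is looking.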
Given enough training data, super-resolution algorithms can produce significantly more detailed output than traditional upscaling. “Enhance and zoom” was once a meme used to mock those who falsely believed computers could do such a thing, but machine learning has made the idea a reality. In many cases there is no practical difference, although technically the algorithm is only “hallucinating” what it expects the missing detail to look like.
Behnam Bastani is Facebook’s Head of Graphics in the Core AR/VR Technologies department and the author of various papers. Between 2013 and 2017, Bastani worked at Google, where he developed “advanced display systems” and then led the development of Daydream’s rendering pipeline.
Interestingly, the paper is not primarily about the super-resolution algorithm, or even about using it to free up GPU resources. The researchers’ direct goal was to build a framework for running machine learning algorithms in real time within the existing rendering pipeline (with low latency), which they achieved. Super-resolution upscaling is the first major example of it in use.
Because that is the paper’s focus, it mentions visually pleasing and temporally coherent results in VR but offers little detail on how perceptible the upscaling is or on the exact size of the upscaled region.
The researchers claim the technique can save roughly 40% of GPU time when rendering at 70% of the usual resolution in each direction, and that developers could use those freed-up resources to generate better content.
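The headline numbers are easy to sanity-check. Assuming “70% resolution in each direction” means each axis is scaled by 0.7, the renderer draws about half the pixels, and a 40% GPU-time saving translates into roughly 67% more headroom for the remaining work:

```python
# Back-of-envelope check of the claimed figures (assumption: 70% of full
# resolution along each axis, and the 40% saving is of total GPU frame time).
scale = 0.7
pixel_fraction = scale ** 2          # fraction of pixels actually rendered
gpu_time_saved = 0.40                # claimed GPU-time saving
headroom = 1 / (1 - gpu_time_saved)  # throughput multiple for remaining work

print(round(pixel_fraction, 2))  # 0.49: roughly half the pixels
print(round(headroom - 1, 2))    # 0.67: the "67% more GPU power" figure
```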
Alternatively, for applications like a media viewer, the saved GPU power could simply be left unused to extend battery life, since the DSP (used for machine learning tasks like this one) in Snapdragon chips, and most others, is significantly more power-efficient than the GPU.
One limitation of this technique is that it could add latency, since the upscaling happens after the frame is finished. However, mobile GPUs differ from PC GPUs in that they render tile by tile, so the NPU tasks might run asynchronously.
Let’s see what the researchers manage to do next, and how much more the GPU can be made to deliver.