Graphics Pipelines for Next Generation Mixed Reality Systems
The Pipelines project is an EPSRC-funded investigation into new graphics techniques and hardware. Our goal is to re-purpose and re-architect the current graphics pipeline to better support the next generation of AR and VR systems. These systems will require far greater display resolutions and frame rates than traditional TVs and monitors, resulting in greatly increased computational cost and bandwidth requirements. By developing new end-to-end graphics systems, we plan to make rendering for these displays practical and power-efficient.
A major focus of the project thus far has been on improving efficiency by rendering or transmitting only the content in each frame that a user can perceive. More detail on our work in this direction is given on the page for our paper Beyond Blur.
In our paper Metameric Varifocal Holograms we explore how hologram optimisation can be improved by optimising holograms to match only the image content that the user can actually perceive. We do this using a metameric loss function, and by reconstructing varifocal holograms: 2D planar holograms that are correct at the user’s current focal depth.
Beyond Blur (arXiv)
To peripheral vision, a pair of physically different images can look the same. Such pairs are metamers relative to each other, just as physically different spectra of light are perceived as the same colour. We propose a real-time method to compute such ventral metamers for foveated rendering where, in particular for near-eye displays, the largest part of the framebuffer maps to the periphery. This improves quality over state-of-the-art foveation methods, which blur the periphery. Work in vision science has established that peripheral stimuli are ventral metamers if their statistics are similar; existing methods, however, require a costly optimisation process to find such metamers. To this end, we propose a novel type of statistics particularly well-suited for practical real-time rendering: smooth moments of steerable filter responses. These can be extracted from images in time constant in the number of pixels and in parallel over all pixels using a GPU. Further, we show that they can be compressed effectively and transmitted at low bandwidth. Finally, computing realisations of those statistics can again be performed in constant time and in parallel. This enables a new level of quality for foveated applications such as remote rendering, level-of-detail and Monte Carlo denoising. In a user study, we show that human task performance increases and foveation artifacts are less noticeable when using our method compared to common blurring.
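To give a concrete flavour of such statistics, the sketch below pools the first two moments (mean and variance) of a small oriented filter bank under a separable Gaussian low-pass, which costs a constant amount of work per pixel. This is a minimal illustration rather than the paper’s implementation: a derivative-of-Gaussian bank stands in for the steerable pyramid, the pooling width is fixed rather than growing with eccentricity, and all function names (gaussian_blur, oriented_responses, smooth_moments) are our own.

```python
# Minimal sketch (not the authors' implementation) of "smooth moments":
# pool the mean and variance of oriented filter responses with a
# Gaussian low-pass. Filter bank, pooling width and names are assumed.
import torch
import torch.nn.functional as F

def gaussian_kernel1d(sigma):
    radius = int(3 * sigma)
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    k = torch.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum(), radius

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: constant cost per pixel for a fixed sigma."""
    k, r = gaussian_kernel1d(sigma)
    c = img.shape[1]
    kx = k.view(1, 1, 1, -1).repeat(c, 1, 1, 1)
    ky = k.view(1, 1, -1, 1).repeat(c, 1, 1, 1)
    img = F.conv2d(F.pad(img, (r, r, 0, 0), mode='reflect'), kx, groups=c)
    return F.conv2d(F.pad(img, (0, 0, r, r), mode='reflect'), ky, groups=c)

def oriented_responses(img, n_orient=4):
    """Crude oriented band-pass bank (image derivatives steered to a few
    angles), standing in for the paper's steerable filter responses."""
    gx = torch.tensor([[[[-1.0, 0.0, 1.0]]]])
    gy = gx.transpose(2, 3)
    dx = F.conv2d(F.pad(img, (1, 1, 0, 0), mode='reflect'), gx)
    dy = F.conv2d(F.pad(img, (0, 0, 1, 1), mode='reflect'), gy)
    thetas = torch.linspace(0.0, torch.pi, n_orient + 1)[:-1]
    return torch.cat([dx * torch.cos(t) + dy * torch.sin(t) for t in thetas], dim=1)

def smooth_moments(img, pool_sigma=8.0):
    """First two pooled moments of each oriented response. In the paper
    the pooling region grows with eccentricity; here it is fixed."""
    r = oriented_responses(img)
    mean = gaussian_blur(r, pool_sigma)
    var = (gaussian_blur(r * r, pool_sigma) - mean ** 2).clamp_min(0.0)
    return mean, var

img = torch.rand(1, 1, 256, 256)        # dummy grayscale frame
mean, var = smooth_moments(img)
# The moments are low-pass, so they can be heavily downsampled for
# low-bandwidth transmission and upsampled again at the receiver.
mean_small = F.interpolate(mean, scale_factor=0.25, mode='bilinear', align_corners=False)
```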
Metameric Varifocal Holograms (arXiv)
Computer-Generated Holography (CGH) offers the potential for genuine, high-quality three-dimensional visuals. However, fulfilling this potential remains a practical challenge due to computational complexity and visual quality issues. We propose a new CGH method that exploits gaze-contingency and perceptual graphics to accelerate the development of practical holographic display systems. First, our method infers the user’s focal depth and generates images only at their focus plane, without using any moving parts. Second, the images displayed are metamers: in the user’s peripheral vision, they need only be statistically correct and blend seamlessly with the fovea. Unlike previous methods, ours prioritises and improves foveal visual quality without causing perceptually visible distortions in the periphery. To enable our method, we introduce a novel metameric loss function that robustly compares the statistics of two given images for a known gaze location. In parallel, we implement a model representing the relation between holograms and their image reconstructions. We couple our differentiable loss function and model to generate metameric varifocal holograms using a stochastic gradient descent solver. We evaluate our method with an actual proof-of-concept holographic display, and we show that it leads to practical, perceptually three-dimensional image reconstructions.
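To illustrate how these pieces fit together, here is a minimal hologram-optimisation loop in PyTorch. It is a sketch under stated assumptions, not the paper’s implementation: a textbook angular spectrum propagator stands in for the display model, a plain L2 loss stands in for the metameric loss, and the physical parameters (wavelength, pixel pitch, propagation distance) are illustrative. In the varifocal setting, the propagation distance would be set from the user’s inferred focal depth.

```python
# Minimal sketch of phase-only hologram optimisation by gradient descent.
# Assumptions: angular spectrum propagation as the display model, an L2
# loss in place of the paper's metameric loss, illustrative parameters.
import torch

wavelength = 515e-9     # metres (assumed green laser)
pitch = 8e-6            # SLM pixel pitch in metres (assumed)
distance = 0.15         # metres; would track the user's focal depth
H = W = 512

def angular_spectrum(field, z):
    """Propagate a complex field by z using the angular spectrum method.
    (The transfer function is recomputed each call for brevity;
    precompute it in practice.)"""
    fx = torch.fft.fftfreq(W, d=pitch)
    fy = torch.fft.fftfreq(H, d=pitch)
    FY, FX = torch.meshgrid(fy, fx, indexing='ij')
    kz = 2 * torch.pi * torch.sqrt(
        (1 / wavelength**2 - FX**2 - FY**2).clamp_min(0.0))
    return torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * kz * z))

target = torch.rand(H, W)                       # stand-in target image
phase = torch.zeros(H, W, requires_grad=True)   # SLM phase pattern
opt = torch.optim.Adam([phase], lr=0.1)

for step in range(200):
    opt.zero_grad()
    field = torch.polar(torch.ones_like(phase), phase)  # unit-amplitude hologram
    recon = angular_spectrum(field, distance).abs() ** 2
    recon = recon * (target.mean() / recon.mean())      # crude brightness match
    # The paper instead compares image statistics at a known gaze point
    # (the metameric loss), prioritising foveal fidelity.
    loss = torch.mean((recon - target) ** 2)
    loss.backward()
    opt.step()
```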
This project is funded by EPSRC/UKRI grant EP/T01346X.