Graphics Pipelines for Next Generation Mixed Reality Systems

The Pipelines project is an EPSRC-funded investigation into new graphics techniques and hardware. Our goal is to re-purpose and re-architect the current graphics pipeline to better support the next generation of AR and VR systems. These new systems will require far greater display resolutions and framerates than traditional TVs and monitors, resulting in greatly increased computational cost and bandwidth requirements. By developing new end-to-end graphics systems, we plan to make rendering for these displays practical and power-efficient.

A major focus of the project thus far has been on improving efficiency by rendering or transmitting only the content in each frame that a user can perceive. More detail on our work in this direction is given on the page for our paper Beyond Blur.

In our paper Metameric Varifocal Holograms we explore how hologram optimisation can be improved by optimising holograms only to match image content that the user can actually perceive. We do this using a metameric loss function, and by reconstructing varifocal holograms: 2D planar holograms that are correct at the user's current focal depth.

Publications

Beyond Blur: Ventral Metamers for Foveated Rendering, ACM Trans. Graph. (Proc. SIGGRAPH 2021)
[Project Page] | [Preprint]

To peripheral vision, a pair of physically different images can look the same. Such pairs are metamers relative to each other, just as physically-different spectra of light are perceived as the same color. We propose a real-time method to compute such ventral metamers for foveated rendering where, in particular for near-eye displays, the largest part of the framebuffer maps to the periphery. This improves in quality over state-of-the-art foveation methods which blur the periphery. Work in Vision Science has established how peripheral stimuli are ventral metamers if their statistics are similar. Existing methods, however, require a costly optimization process to find such metamers. To this end, we propose a novel type of statistics particularly well-suited for practical real-time rendering: smooth moments of steerable filter responses. These can be extracted from images in time constant in the number of pixels and in parallel over all pixels using a GPU. Further, we show that they can be compressed effectively and transmitted at low bandwidth. Finally, computing realizations of those statistics can again be performed in constant time and in parallel. This enables a new level of quality for foveated applications such as remote rendering, level-of-detail and Monte-Carlo denoising. In a user study, we finally show how human task performance increases and foveation artifacts are less suspicious when using our method compared to common blurring.
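
To make the idea of pooled statistics concrete, the sketch below shows one way local moments of oriented filter responses can be computed with separable Gaussian blurs, so that the cost per pixel stays constant. This is a minimal illustration assuming a PyTorch implementation; the derivative-of-Gaussian filters, the number of orientations and the pooling widths are placeholder choices, not the steerable filter bank or parameters used in the paper.

    # Minimal sketch: local first and second moments of oriented band-pass
    # filter responses, pooled with Gaussian blurs. Filter design and pooling
    # widths are illustrative placeholders, not the paper's exact choices.
    import math
    import torch
    import torch.nn.functional as F

    def gaussian_kernel(sigma, radius):
        # 1D Gaussian kernel, normalised to sum to one.
        x = torch.arange(-radius, radius + 1, dtype=torch.float32)
        g = torch.exp(-0.5 * (x / sigma) ** 2)
        return g / g.sum()

    def gaussian_blur(img, sigma):
        # Separable Gaussian blur of a (B, C, H, W) tensor.
        radius = max(1, int(3 * sigma))
        k = gaussian_kernel(sigma, radius).to(img.device)
        c = img.shape[1]
        kx = k.view(1, 1, 1, -1).repeat(c, 1, 1, 1)
        ky = k.view(1, 1, -1, 1).repeat(c, 1, 1, 1)
        img = F.conv2d(img, kx, padding=(0, radius), groups=c)
        img = F.conv2d(img, ky, padding=(radius, 0), groups=c)
        return img

    def oriented_response(img, theta, sigma=2.0):
        # Crude oriented band-pass filter: derivative of a Gaussian along
        # direction theta; a stand-in for a proper steerable filter bank.
        blurred = gaussian_blur(img, sigma)
        gy, gx = torch.gradient(blurred, dim=(2, 3))
        return math.cos(theta) * gx + math.sin(theta) * gy

    def local_moments(img, pooling_sigma=8.0, n_orientations=4):
        # Per-pixel mean and variance of each oriented response, pooled over
        # a Gaussian neighbourhood.
        moments = []
        for i in range(n_orientations):
            r = oriented_response(img, theta=math.pi * i / n_orientations)
            mean = gaussian_blur(r, pooling_sigma)
            var = gaussian_blur(r * r, pooling_sigma) - mean ** 2
            moments.append((mean, var))
        return moments

Because every step here is a separable convolution or a pointwise operation, the moments can be evaluated in parallel over all pixels on a GPU, which is the property the method relies on for real-time use.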

Metameric Varifocal Holograms (arXiv)

[Project Page] | [Paper] | [Video]

Computer-Generated Holography (CGH) offers the potential for genuine, high-quality three-dimensional visuals. However, fulfilling this potential remains a practical challenge due to computational complexity and visual quality issues. We propose a new CGH method that exploits gaze-contingency and perceptual graphics to accelerate the development of practical holographic display systems. First, our method infers the user's focal depth and generates images only at their focus plane without using any moving parts. Second, the images displayed are metamers; in the user's peripheral vision, they need only be statistically correct and blend with the fovea seamlessly. Unlike previous methods, our method prioritises and improves foveal visual quality without causing perceptually visible distortions at the periphery. To enable our method, we introduce a novel metameric loss function that robustly compares the statistics of two given images for a known gaze location. In parallel, we implement a model representing the relation between holograms and their image reconstructions. We couple our differentiable loss function and model to optimise metameric varifocal holograms using a stochastic gradient descent solver. We evaluate our method with an actual proof-of-concept holographic display, and we show that our CGH method leads to practical and perceptually three-dimensional image reconstructions.
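
As a rough illustration of this kind of optimisation loop, the sketch below runs stochastic gradient descent over a phase-only hologram whose reconstruction is simulated with a textbook angular-spectrum propagator. It is a minimal sketch under stated assumptions: perceptual_loss is a placeholder for the paper's metameric loss (a plain L2 stand-in here), and the wavelength, pixel pitch and propagation distance are illustrative values, not those of the prototype display.

    # Minimal sketch: phase-only hologram optimisation by gradient descent
    # against a simulated reconstruction. All parameters are illustrative.
    import math
    import torch

    def angular_spectrum(field, wavelength, pitch, distance):
        # Propagate a complex field by `distance` metres using the angular
        # spectrum method (evanescent components dropped).
        n, m = field.shape[-2:]
        fy = torch.fft.fftfreq(n, d=pitch, device=field.device)
        fx = torch.fft.fftfreq(m, d=pitch, device=field.device)
        fyy, fxx = torch.meshgrid(fy, fx, indexing="ij")
        arg = (1.0 / wavelength) ** 2 - fxx ** 2 - fyy ** 2
        propagating = (arg > 0).to(torch.float32)
        kz = 2.0 * math.pi * torch.sqrt(arg.clamp(min=0.0))
        transfer = torch.exp(1j * kz * distance) * propagating
        return torch.fft.ifft2(torch.fft.fft2(field) * transfer)

    def perceptual_loss(image, target):
        # Placeholder for the metameric loss described in the paper;
        # a plain L2 difference is used here instead.
        return torch.mean((image - target) ** 2)

    def optimise_hologram(target, wavelength=515e-9, pitch=8e-6,
                          distance=0.1, steps=200, lr=0.1):
        # Optimise a phase-only hologram so that the intensity of its
        # reconstruction at `distance` matches the target image.
        phase = torch.zeros_like(target, requires_grad=True)
        optimiser = torch.optim.Adam([phase], lr=lr)
        for _ in range(steps):
            optimiser.zero_grad()
            field = torch.exp(1j * phase)  # unit-amplitude SLM field
            recon = angular_spectrum(field, wavelength, pitch, distance)
            loss = perceptual_loss(recon.abs() ** 2, target)
            loss.backward()
            optimiser.step()
        return phase.detach()

For example, optimise_hologram(torch.rand(512, 512)) optimises a 512 x 512 phase pattern against a random target intensity. In a varifocal setting, the propagation distance would be set to the user's inferred focal depth before each optimisation, so the hologram is only required to be correct at the plane the user is currently focused on.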

People

Kaan Akşit, Rafael Kuffner Dos Anjos, Sebastian Friston, Tobias Ritschel, Anthony Steed, David Swapp, David R. Walton

Acknowledgements

This project is funded by the EPSRC/UKRI project EP/T01346X.