Graphics Pipelines for Next Generation Mixed Reality Systems

The Pipelines project is an EPSRC-funded investigation into new graphics techniques and hardware. Our goal is to re-purpose and re-architect the current graphics pipeline to better support the next generation of AR and VR systems. These new systems will require far greater display resolutions and framerates than traditional TVs and monitors, resulting in greatly increased computational cost and bandwidth requirements. By developing new end-to-end graphics systems, we plan to make rendering for these displays practical and power-efficient.

A major focus of the project thus far has been on improving efficiency by rendering or transmitting only the content in each frame that a user can perceive, displaying a metamer of the target content. More detail on our work in this direction is given on the page for our paper Beyond Blur.

In our paper Metameric Varifocal Holograms we explore how hologram optimisation can be improved by only optimising the holograms to match image content that the user can really perceive. We do this using a metameric loss function, and by reconstructing varifocal holograms: 2D planar holograms that are correct at the user's current focal depth.

In Metameric Light Fields we extend metamer generation from 2D images to 3D light fields. This needs special consideration to give temporally consistent, high-quality results without flicker or incorrect motion.

Publications

Beyond Blur: Ventral Metamers for Foveated Rendering, ACM Trans. Graph. (Proc. SIGGRAPH 2021)
[Project Page] | [Preprint] | [Supplemental Material] | [Unity Package] | [Executable Windows Demo] | [Python Example]

To peripheral vision, a pair of physically different images can look the same. Such pairs are metamers relative to each other, just as physically different spectra of light are perceived as the same color. We propose a real-time method to compute such ventral metamers for foveated rendering where, in particular for near-eye displays, the largest part of the framebuffer maps to the periphery. This improves in quality over state-of-the-art foveation methods which blur the periphery. Work in Vision Science has established how peripheral stimuli are ventral metamers if their statistics are similar. Existing methods, however, require a costly optimization process to find such metamers. To this end, we propose a novel type of statistics particularly well-suited for practical real-time rendering: smooth moments of steerable filter responses. These can be extracted from images in time constant in the number of pixels and in parallel over all pixels using a GPU. Further, we show that they can be compressed effectively and transmitted at low bandwidth. Finally, computing realizations of those statistics can again be performed in constant time and in parallel. This enables a new level of quality for foveated applications such as remote rendering, level-of-detail and Monte-Carlo denoising. In a user study, we finally show how human task performance increases and foveation artifacts are less suspicious when using our method compared to common blurring.
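To make the idea of these statistics concrete, here is a minimal Python sketch of pooling first and second moments of oriented filter responses with a Gaussian. It is an assumption-laden illustration only: simple directional derivatives stand in for a steerable pyramid, the pooling radius is fixed rather than growing with eccentricity, and a constant-time blur (e.g. iterated box filters) would be needed to reach the constant-time behaviour described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def oriented_responses(gray, n_orientations=4):
    """Crude oriented responses via directional derivatives (not a steerable pyramid)."""
    gy, gx = np.gradient(gray)
    return [np.cos(k * np.pi / n_orientations) * gx +
            np.sin(k * np.pi / n_orientations) * gy
            for k in range(n_orientations)]

def pooled_moments(gray, sigma=8.0, n_orientations=4):
    """First and second (central) moments of each response, pooled with a Gaussian."""
    moments = []
    for r in oriented_responses(gray, n_orientations):
        mean = gaussian_filter(r, sigma)                  # first moment
        var = gaussian_filter(r * r, sigma) - mean ** 2   # second central moment
        moments.append((mean, var))
    return moments

# Usage: stats = pooled_moments(image.astype(np.float32) / 255.0)
```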

Metameric Varifocal Holograms (Proc. IEEE VR 2022)

[Project Page] | [Paper] | [Video] | [Library (Hologram Optimisation)] | [Library (Metameric Loss)]

Computer-Generated Holography (CGH) offers the potential for genuine, high-quality three-dimensional visuals. However, fulfilling this potential remains a practical challenge due to computational complexity and visual quality issues. We propose a new CGH method that exploits gaze-contingency and perceptual graphics to accelerate the development of practical holographic display systems. First, our method infers the user's focal depth and generates images only at their focus plane without using any moving parts. Second, the images displayed are metamers; in the user's peripheral vision, they need only be statistically correct and blend with the fovea seamlessly. Unlike previous methods, our method prioritises and improves foveal visual quality without causing perceptually visible distortions at the periphery. To enable our method, we introduce a novel metameric loss function that robustly compares the statistics of two given images for a known gaze location. In parallel, we implement a model representing the relation between holograms and their image reconstructions. We couple our differentiable loss function and model to generate metameric varifocal holograms using a stochastic gradient descent solver. We evaluate our method with an actual proof-of-concept holographic display, and we show that our CGH method leads to practical and perceptually three-dimensional image reconstructions.
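As a rough illustration of coupling a differentiable image-formation model to a loss inside a stochastic gradient descent loop, the Python sketch below optimises a phase-only hologram under a single far-field Fourier propagation step, with a plain L2 loss standing in for the metameric loss. The propagation model, the L2 placeholder and the energy normalisation are assumptions made for illustration, not the paper's implementation.

```python
import torch

def reconstruct(phase):
    """Differentiable reconstruction of a phase-only hologram (far-field model)."""
    field = torch.complex(torch.cos(phase), torch.sin(phase))  # unit-amplitude SLM field
    far_field = torch.fft.fftshift(torch.fft.fft2(field))
    return far_field.abs() ** 2                                # intensity at the focus plane

def optimise_hologram(target, iterations=500, lr=0.1):
    """Gradient-descent hologram optimisation against a placeholder loss."""
    phase = torch.zeros_like(target, requires_grad=True)
    opt = torch.optim.Adam([phase], lr=lr)
    for _ in range(iterations):
        opt.zero_grad()
        recon = reconstruct(phase)
        recon = recon * (target.mean() / (recon.mean() + 1e-8))  # crude energy matching
        loss = torch.mean((recon - target) ** 2)  # placeholder for a metameric loss
        loss.backward()
        opt.step()
    return phase.detach()

# Usage: phase = optimise_hologram(torch.rand(512, 512))
```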

Metameric Light Fields (Poster, Proc. IEEE VR 2022)

[Project Page] | [Poster] | [Short Paper] | [Teaser Video]

Ventral metamers, pairs of images which may differ substantially in the periphery but are perceptually identical, offer exciting new possibilities in foveated rendering and image compression, as well as offering insights into the human visual system. However, existing literature has mainly focused on creating metamers of static images. In this work, we develop a method for creating sequences of metameric frames, videos or light fields, with enforced consistency along the temporal, or angular, dimension. This greatly expands the potential applications for these metamers, and expanding metamers along the third dimension offers further new potential for compression.
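One simple way to encourage such consistency, sketched below in Python under an assumed formulation (not necessarily the one used in this work), is to optimise neighbouring frames or views jointly and add a penalty on differences between adjacent metamers on top of a per-frame statistics-matching loss.

```python
import torch

def metamer_sequence_loss(frames, targets, stat_loss, weight=1.0):
    """frames, targets: lists of tensors for adjacent video frames or light-field views.

    stat_loss(frame, target) is any statistics-matching (metameric) loss.
    The second term discourages flicker along the temporal or angular dimension;
    in practice adjacent frames would first be aligned (e.g. by optical flow or
    disparity) before being compared.
    """
    loss = sum(stat_loss(f, t) for f, t in zip(frames, targets))
    for prev, cur in zip(frames[:-1], frames[1:]):
        loss = loss + weight * torch.mean((cur - prev) ** 2)
    return loss
```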

Metameric Inpainting for Image Warping (TVCG 2022)

[Paper] | [Video] | [Code] | [Webpage]

Image warping, a per-pixel deformation of one image into another, is an essential component in immersive visual experiences such as virtual reality or augmented reality. The primary issue with image warping is disocclusions, where occluded (and hence unknown) parts of the input image would be required to compose the output image. We introduce a new image warping method, Metameric image inpainting, an approach for hole-filling in real time with foundations in human visual perception. Our method estimates image feature statistics of disoccluded regions from their neighbours. These statistics are inpainted and used to synthesise visuals in real time that are less noticeable to study participants, particularly in peripheral vision. Our method offers speed improvements over the standard structured image inpainting methods while improving realism over colour-based inpainting such as push-pull. Hence, our work paves the way towards future applications such as depth image-based rendering, 6-DoF 360 rendering, and remote render-streaming.
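For context, here is a minimal Python sketch of the push-pull style of colour-based inpainting mentioned above as a baseline; it is not the metameric statistics inpainting itself, and it assumes power-of-two image dimensions for simplicity. The input names in the usage comment are hypothetical.

```python
import numpy as np

def push_pull(image, mask):
    """Fill pixels where mask == 0 by repeated down/up-sampling (assumes
    power-of-two dimensions). image: (H, W, C) float array; mask: (H, W),
    1 where pixels are valid."""
    if image.shape[0] <= 1 or image.shape[1] <= 1 or mask.all():
        return image
    w = mask[..., None].astype(image.dtype)
    # Push: weighted 2x2 downsample over valid pixels only.
    pw = w[0::2, 0::2] + w[1::2, 0::2] + w[0::2, 1::2] + w[1::2, 1::2]
    ps = ((image * w)[0::2, 0::2] + (image * w)[1::2, 0::2] +
          (image * w)[0::2, 1::2] + (image * w)[1::2, 1::2])
    coarse = np.where(pw > 0, ps / np.maximum(pw, 1e-8), 0.0)
    coarse = push_pull(coarse, (pw[..., 0] > 0).astype(mask.dtype))
    # Pull: upsample the coarse level and use it only inside the holes.
    up = coarse.repeat(2, axis=0).repeat(2, axis=1)
    return np.where(w > 0, image, up)

# Usage (hypothetical inputs): filled = push_pull(warped_rgb, valid_mask)
```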

People

Kaan Akşit, Rafael Kuffner Dos Anjos, Sebastian Friston, Prithvi Kohli, Tobias Ritschel, Anthony Steed, David Swapp, David R. Walton

Acknowledgements

This project is funded by the EPSRC/UKRI project EP/T01346X.