Graphics Pipelines for Next Generation Mixed Reality Systems

The Pipelines project is an EPSRC-funded investigation into new graphics techniques and hardware. Our goal is to re-purpose and re-architect the current graphics pipeline to better support the next generation of AR and VR systems. These new systems will require far greater display resolutions and framerates than traditional TVs and monitors, resulting in greatly increased computational cost and bandwidth requirements. By developing new end-to-end graphics systems, we plan to make rendering for these displays practical and power-efficient.

A major focus of the project thus far has been on improving efficiency by rendering or transmitting only the content in each frame that a user can perceive. More detail on our work in this direction is given on the page for our paper Beyond Blur.

Publications

Beyond Blur: Ventral Metamers for Foveated Rendering, ACM Trans. Graph. (Proc. SIGGRAPH 2021)
[Project Page] | [Preprint]

To peripheral vision, a pair of physically different images can look the same. Such pairs are metamers relative to each other, just as physically different spectra of light are perceived as the same color. We propose a real-time method to compute such ventral metamers for foveated rendering where, in particular for near-eye displays, the largest part of the framebuffer maps to the periphery. This improves in quality over state-of-the-art foveation methods, which blur the periphery. Work in vision science has established that peripheral stimuli are ventral metamers if their statistics are similar. Existing methods, however, require a costly optimization process to find such metamers. We therefore propose a novel type of statistics particularly well-suited for practical real-time rendering: smooth moments of steerable filter responses. These can be extracted from images in time constant in the number of pixels, in parallel over all pixels, using a GPU. Further, we show that they can be compressed effectively and transmitted at low bandwidth. Finally, computing realizations of those statistics can again be performed in constant time and in parallel. This enables a new level of quality for foveated applications such as remote rendering, level-of-detail and Monte Carlo denoising. In a user study, we show that human task performance increases and foveation artifacts are less conspicuous when using our method compared to common blurring.
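To give a rough intuition for the statistics named above, here is a minimal CPU sketch of "smooth moments of steerable filter responses". It is not the paper's implementation (which runs on the GPU in constant time per pixel); the filter scale, pooling size, and function names are illustrative assumptions using SciPy's Gaussian-derivative filters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical illustration, not the paper's GPU implementation:
# oriented derivative-of-Gaussian responses plus locally smoothed
# first and second moments of those responses.

def oriented_response(img, angle, sigma=2.0):
    """First-order derivative-of-Gaussian response at `angle`.

    Steerability: the response at any orientation is a linear
    combination of the x- and y-axis basis responses.
    """
    gx = gaussian_filter(img, sigma, order=(0, 1))  # derivative along x
    gy = gaussian_filter(img, sigma, order=(1, 0))  # derivative along y
    return np.cos(angle) * gx + np.sin(angle) * gy

def smooth_moments(response, pool_sigma=8.0):
    """Spatially pooled (smoothed) first and second moments."""
    m1 = gaussian_filter(response, pool_sigma)       # local mean
    m2 = gaussian_filter(response ** 2, pool_sigma)  # local power
    var = np.maximum(m2 - m1 ** 2, 0.0)              # local variance
    return m1, var

rng = np.random.default_rng(0)
img = rng.random((128, 128))
resp = oriented_response(img, angle=np.pi / 4)
mean, var = smooth_moments(resp)
print(mean.shape, var.shape)  # (128, 128) (128, 128)
```

In the actual method, realizations matching such pooled statistics are then synthesized, again in parallel per pixel, which is what lets the periphery keep texture-like detail instead of being blurred.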

People

Kaan Akşit, Rafael Kuffner Dos Anjos, Sebastian Friston, Tobias Ritschel, Anthony Steed, David Swapp, David R. Walton

Acknowledgements

This project is funded by the EPSRC/UKRI project EP/T01346X.