UCL at IEEE VR 2019

IEEE VR 2019 was in Osaka. It was a chance for Anthony, who was on sabbatical in New Zealand at the time, to meet up with a couple of people in the group, Drew and Sebastian, who came out from London, and recent graduate Maria, who was now working at Facebook/Oculus.


Sebastian won the 2018 VGTC Virtual Reality Best Dissertation Award for his dissertation on Low Latency Rendering with Dataflow Architectures. He gave a short talk about his work.


Drew presented his paper Perception of Volumetric Characters' Eye-Gaze Direction in Head-Mounted Displays.

Abstract: Volumetric capture allows the creation of near-video-quality content that can be explored with six degrees of freedom. Due to limitations in these experiences, such as the content being fixed at the point of filming, an understanding of eye-gaze awareness is critical. A repeated measures experiment was conducted that explored users' ability to evaluate where a volumetrically captured avatar (VCA) was looking. Wearing one of two head-mounted displays (HMDs), 36 participants rotated a VCA to look at a target. The HMD resolution, target position, and VCA's eye-gaze direction were varied. Results did not show a difference in accuracy between HMD resolutions, while the task became significantly harder for target locations further away from the user. In contrast to real-world studies, participants consistently misjudged eye-gaze direction based on target location, but not based on the avatar's head turn direction. Implications are discussed, as results for VCAs viewed in HMDs appear to differ from face-to-face scenarios.

Anthony gave a keynote speech at PERCAR: The Fifth IEEE VR Workshop on Perceptual and Cognitive Issues in AR. He talked about Embodied Cognition in AR/VR.

Here is one of the official photographers' photos of Drew talking.


Showing a video from one of the VR group's early experiments, circa 1994.


Anthony gave an invited talk on The Impact of Avatars on Close Quarters Interaction at the Virtual Humans and Crowds in Immersive Environments (VHCIE) workshop.

Abstract: There is a compelling theory emerging of how embodiment inside immersive virtual environments enables participants to use their bodies in natural and fluid ways. In this talk, I will discuss recent work on how avatar representation and embodiment affect collaboration in social virtual reality. Our lab-based work shows how users utilize information about avatars in quite complex and surprising ways, and our studies of consumers in their homes show some of the barriers that users experience in using avatars for extended periods. I will then discuss how these set some near-term challenges for the field, and review some immediate ways forward that could have significant impact on the utility of social virtual reality.


We had a poster about some work I helped with during my stay at Microsoft, with Mar Gonzales and Parastoo Abtahi: Individual Differences in Embodied Distance Estimation in Virtual Reality.


Finally, Anthony was on a panel about Virtual Reality Curriculum.