UCL at IEEE VR 2018

We presented three papers at IEEE VR 2018.

Profiling Distributed Virtual Environments by Tracing Causality

Sebastian J Friston, Elias J Griffith, David Swapp, Alan Marshall, Anthony Steed

Abstract: In this paper we explore a new technique to profile distributed virtual environments. Profiling is a key part of the optimisation process, but techniques based on static analysis or passing meta-data have difficulty following causality in concurrent and distributed systems. Our technique is based on taking hashes of the system state in order to abstract away platform-specific details, facilitating causality tracing across process, machine and even semantic boundaries. Across three case studies, we demonstrate the efficacy of this approach, and how it supports a variety of metrics for comprehensively benchmarking distributed virtual environments.

This was a result of our CASMS project.
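The core idea — hashing observed state so the same event can be recognised in different processes without passing explicit metadata — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, log format, and use of SHA-256 over a canonical JSON serialisation are all illustrative assumptions.

```python
import hashlib
import json

def state_hash(state: dict) -> str:
    """Hash a canonical serialisation of an entity's state, so identical
    states hash identically regardless of platform or process."""
    canonical = json.dumps(state, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def trace_causality(sender_log, receiver_log):
    """Recover causal links by matching state hashes between two event
    logs of (timestamp, hash) pairs, one per process."""
    seen = {h: t for t, h in sender_log}  # hash -> time first observed
    links = []
    for t_recv, h in receiver_log:
        if h in seen:
            links.append((seen[h], t_recv))  # (cause time, effect time)
    return links

# Two processes independently observe the same entity state;
# the hashes line up, revealing the causal link.
sender = [(0.1, state_hash({"id": 7, "pos": [1, 2, 3]}))]
receiver = [(0.3, state_hash({"id": 7, "pos": [1, 2, 3]}))]
```

Because the hash depends only on the state's content, the matching works across process, machine, and implementation boundaries, which is what lets the technique follow causality where metadata-passing approaches struggle.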

A Comparison of Virtual and Physical Training Transfer of Bimanual Assembly Tasks

Mara Murcia-López, Anthony Steed

Abstract: As we explore the use of consumer virtual reality technology for training applications, there is a need to evaluate its validity compared to more traditional training formats. In this paper, we present a study that compares the effectiveness of virtual training and physical training for teaching a bimanual assembly task. In a between-subjects experiment, 60 participants were trained to solve three 3D burr puzzles in one of six conditions comprised of virtual and physical training elements. In the four physical conditions, training was delivered via paper- and video-based instructions, with or without the physical puzzles to practice with. In the two virtual conditions, participants learnt to assemble the puzzles in an interactive virtual environment, with or without 3D animations showing the assembly process. After training, we conducted immediate tests in which participants were asked to solve a physical version of the puzzles. We measured performance through success rates and assembly completion times during testing. We also measured training times as well as subjective ratings on several aspects of the experience. Our results show that the performance of virtually trained participants was promising. No statistically significant difference was found between virtual training with animated instructions and the best performing physical condition (in which physical blocks were available during training) for the last and most complex puzzle in terms of success rates and testing times. Performance in retention tests two weeks after training was generally not as good as expected across all experimental conditions. We discuss the implications of the results and highlight the validity of virtual reality systems in training.

The Effect of Transition Type in Multi-View 360° Media

Andrew MacQuarrie, Anthony Steed

Abstract: 360° images and video have become extremely popular formats for immersive displays, due in large part to the technical ease of content production. While many experiences use a single camera viewpoint, an increasing number of experiences use multiple camera locations. In such multi-view 360° media (MV360M) systems, a visual effect is required when the user transitions from one camera location to another. This effect can take several forms, such as a cut or an image-based warp, and the choice of effect may impact many aspects of the experience, including issues related to enjoyment and scene understanding. To investigate the effect of transition types on immersive MV360M experiences, a repeated-measures experiment was conducted with 31 participants. Wearing a head-mounted display, participants explored four static scenes, for which multiple 360° images and a reconstructed 3D model were available. Three transition types were examined: teleport, a linear move through a 3D model of the scene, and an image-based transition using a Möbius transformation. The metrics investigated included spatial awareness, users' movement profiles, transition preference and the subjective feeling of moving through the space. Results indicate that there was no significant difference between transition types in terms of spatial awareness, while significant differences were found for users' movement profiles, with participants taking 1.6 seconds longer to select their next location following a teleport transition. The model and Möbius transitions were significantly better in terms of creating the feeling of moving through the space. Preference was also significantly different, with model and teleport transitions being preferred over Möbius transitions. Our results indicate that trade-offs between transitions will require content creators to think carefully about what aspects they consider to be most important when producing MV360M experiences.
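For background, a Möbius transformation is a map of the complex plane of the form f(z) = (az + b) / (cz + d), with ad − bc ≠ 0 so the map is invertible. The paper applies such a map as an image-based warp between viewpoints; the sketch below shows only the underlying mathematical map, not the paper's image-warping pipeline.

```python
def mobius(z: complex, a: complex, b: complex, c: complex, d: complex) -> complex:
    """Apply the Mobius transformation f(z) = (a*z + b) / (c*z + d).

    The coefficients must satisfy a*d - b*c != 0, which guarantees
    the transformation is invertible (a bijection of the Riemann sphere).
    """
    assert a * d - b * c != 0, "degenerate coefficients"
    return (a * z + b) / (c * z + d)

# Identity map: a=1, b=0, c=0, d=1 leaves every point unchanged.
p = mobius(2 + 3j, 1, 0, 0, 1)

# Inversion: a=0, b=1, c=1, d=0 gives f(z) = 1/z.
q = mobius(2 + 0j, 0, 1, 1, 0)
```

Möbius maps preserve angles and map circles to circles, which is why they are a natural candidate for smoothly warping panoramic imagery between camera positions.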

Anthony organised a panel on "How Should Social Virtual Reality Work?". He was also a last-minute replacement for a panelist on "The future impact of neuroscience and cognitive psychology on virtual environments".