UCL at IEEE VR 2018

We presented three papers at IEEE VR 2018.

Profiling Distributed Virtual Environments by Tracing Causality

Sebastian J Friston, Elias J Griffith, David Swapp, Alan Marshall, Anthony Steed

Abstract: In this paper we explore a new technique to profile distributed virtual environments. Profiling is a key part of the optimisation process, but techniques based on static analysis or passing meta-data have difficulty following causality in concurrent and distributed systems. Our technique is based on taking hashes of the system state in order to abstract away platform-specific details, facilitating causality tracing across process, machine and even semantic boundaries. Across three case studies, we demonstrate the efficacy of this approach, and how it supports a variety of metrics for comprehensively benchmarking distributed virtual environments.

This was a result of our CASMS project.


A Comparison of Virtual and Physical Training Transfer of Bimanual Assembly Tasks

Mara Murcia-López, Anthony Steed

Abstract: As we explore the use of consumer virtual reality technology for training applications, there is a need to evaluate its validity compared to more traditional training formats. In this paper, we present a study that compares the effectiveness of virtual training and physical training for teaching a bimanual assembly task. In a between-subjects experiment, 60 participants were trained to solve three 3D burr puzzles in one of six conditions comprised of virtual and physical training elements. In the four physical conditions, training was delivered via paper- and video-based instructions, with or without the physical puzzles to practice with. In the two virtual conditions, participants learnt to assemble the puzzles in an interactive virtual environment, with or without 3D animations showing the assembly process. After training, we conducted immediate tests in which participants were asked to solve a physical version of the puzzles. We measured performance through success rates and assembly completion testing times. We also measured training times as well as subjective ratings on several aspects of the experience. Our results show that the performance of virtually trained participants was promising. A statistically significant difference was not found between virtual training with animated instructions and the best-performing physical condition (in which physical blocks were available during training) for the last and most complex puzzle in terms of success rates and testing times. Performance in retention tests two weeks after training was generally not as good as expected for all experimental conditions. We discuss the implications of the results and highlight the validity of virtual reality systems in training.


The Effect of Transition Type in Multi-View 360° Media

Andrew MacQuarrie, Anthony Steed

Abstract: 360° images and video have become extremely popular formats for immersive displays, due in large part to the technical ease of content production. While many experiences use a single camera viewpoint, an increasing number of experiences use multiple camera locations. In such multi-view 360° media (MV360M) systems, a visual effect is required when the user transitions from one camera location to another. This effect can take several forms, such as a cut or an image-based warp, and the choice of effect may impact many aspects of the experience, including issues related to enjoyment and scene understanding. To investigate the effect of transition types on immersive MV360M experiences, a repeated-measures experiment was conducted with 31 participants. Wearing a head-mounted display, participants explored four static scenes, for which multiple 360° images and a reconstructed 3D model were available. Three transition types were examined: teleport, a linear move through a 3D model of the scene, and an image-based transition using a Möbius transformation. The metrics investigated included spatial awareness, users' movement profiles, transition preference and the subjective feeling of moving through the space. Results indicate that there was no significant difference between transition types in terms of spatial awareness, while significant differences were found for users' movement profiles, with participants taking 1.6 seconds longer to select their next location following a teleport transition. The model and Möbius transitions were significantly better in terms of creating the feeling of moving through the space. Preference was also significantly different, with model and teleport transitions being preferred over Möbius transitions. Our results indicate that trade-offs between transitions will require content creators to think carefully about what aspects they consider to be most important when producing MV360M experiences.


Anthony organised a panel on "How Should Social Virtual Reality Work?" He was also a last-minute replacement for a panelist on "The Future Impact of Neuroscience and Cognitive Psychology on Virtual Environments".