Related publications, videos and repositories:

Integrated Design of Augmented Reality Spaces Using Virtual Environments [Paper PDF]
T. Scargill, Y. Chen, N. Marzen, and M. Gorlatova
To appear in Proc. IEEE ISMAR, Oct. 2022 (21% acceptance rate).

Here To Stay: A Quantitative Comparison of Virtual Object Stability in Markerless Mobile AR [Paper PDF]
T. Scargill, G. Premsankar, J. Chen, and M. Gorlatova
In Proc. IEEE/ACM Workshop on Cyber-Physical-Human System Design and Implementation, May 2022 (co-located with CPS-IoT Week 2022).

Demo: Will It Move? Indoor Scene Characterization for Hologram Stability in Mobile AR [Paper PDF] [Demo Video]
T. Scargill, S. Hurli, J. Chen, and M. Gorlatova
In Proc. ACM HotMobile 2021, Feb. 2021.

ARStats: Open-Source AR Session Measurement Application [Github repository]

Virtual-Inertial SLAM [Github repository]


A core feature of modern augmented reality (AR) platforms is the ability to align virtual content with real-world objects and have it remain in the correct position as users move around. This is key to achieving a sense of realism and a high quality of experience in many applications. Similarly, in many cases we want virtual objects to be occluded whenever their position in real-world space lies behind a physical object.
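The occlusion property above boils down to a per-pixel depth comparison: a virtual fragment should be hidden wherever a real surface sits closer to the camera. A minimal sketch of that logic (using hypothetical depth maps in metres; the function name and example values are our own, not from any AR SDK):

```python
import numpy as np

def occlusion_mask(virtual_depth, real_depth):
    """Per-pixel occlusion test: a virtual fragment is occluded wherever
    a real-world surface lies closer to the camera (smaller depth)."""
    return real_depth < virtual_depth

# Toy example: a 2x2 patch of a virtual object at 2.0 m, with a real
# object at 1.5 m covering the right column of pixels.
virtual = np.full((2, 2), 2.0)
real = np.array([[3.0, 1.5],
                 [3.0, 1.5]])
mask = occlusion_mask(virtual, real)
# Right column is True (hidden behind the real object); left column False.
```

In practice the real-world depth map comes from the platform's environment mesh or depth sensor, and the comparison happens in the rendering pipeline's depth test rather than in application code.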

Both of these properties require accurate mapping of the real world, achieved through a combination of camera images and (in AR headsets) an infrared (IR) time-of-flight sensor. They also require accurate tracking of the user's position and pose within that map, achieved by combining the current optical input with data from an onboard inertial measurement unit (IMU). A large body of existing research focuses on improving both the fidelity of the generated map and the accuracy of pose estimation.

However, far less studied is how exactly the properties of a real-world environment affect these outcomes. Some conditions, such as low lighting, highly reflective objects, and featureless surfaces, are known to make mapping and tracking considerably more challenging and are likely to result in errors. Leading AR platform providers offer general advice on how to set up an environment to achieve good results, but to date no work quantifies these effects or assesses the relative contributions of the individual factors.

Figure: World mapping inputs on ARKit (left) and the Magic Leap One (right)

Motivated by this gap in the literature, our research aims to produce the first comprehensive studies comparing a variety of AR platforms and scenes. By measuring a property known as 'drift' (how far a hologram that should remain stable moves out of position), alongside scene metrics thought to affect mapping and tracking accuracy, we can characterize the relationship between the two and develop a system that predicts the likely positional error. Doing so will help AR application and space designers achieve better experiences for their users.
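To make the drift measurement concrete: assuming an AR session logs the world-space position reported for a nominally static hologram on each frame (the function name and toy log below are illustrative, not taken from our measurement tools), drift can be quantified as the displacement from the hologram's initial anchor position:

```python
import numpy as np

def drift_per_frame(positions):
    """Euclidean displacement of a (nominally static) hologram from its
    initial position, given an (N, 3) sequence of logged world coordinates."""
    positions = np.asarray(positions, dtype=float)
    return np.linalg.norm(positions - positions[0], axis=1)

# Hypothetical log (metres): hologram anchored at the origin,
# slowly drifting along the x-axis over three frames.
log = [[0.00, 0.0, 0.0],
       [0.01, 0.0, 0.0],
       [0.03, 0.0, 0.0]]
d = drift_per_frame(log)  # per-frame drift: [0.0, 0.01, 0.03]
```

Summary statistics such as maximum or RMS drift over a session then follow directly from this per-frame series and can be correlated against scene metrics.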

Furthermore, the impact a given amount of drift has on a user's quality of experience is not uniform. It is affected by the position and size of the object relative to the user, and by the real-world reference points that exist. For example, the nature of visual perception means that movement against a plain background may be less noticeable than movement against a patterned one. An intriguing trade-off is therefore likely: the presence of visual features within a scene makes positional errors less likely, but potentially increases the perceptual impact of any error that does occur. Where, then, does the optimal balance lie? Our research will also explore this question through extensive user studies.