Related publications, videos and repositories:
Demo: Catch My Eye: Gaze-Based Activity Recognition in an Augmented Reality Art Gallery [Demo abstract PDF] [Demo Video]
T. Scargill, G. Lan, M. Gorlatova
To appear at IEEE IPSN’22, May 2022.
Demo: Will It Move? Indoor Scene Characterization for Hologram Stability in Mobile AR [Paper PDF] [Demo Video]
T. Scargill, S. Hurli, J. Chen, M. Gorlatova
In Proc. ACM HotMobile 2021, Feb. 2021.
Edge-based Provisioning of Holographic Content for Contextual and Personalized Augmented Reality [Paper PDF]
M. Glushakov, Y. Zhang, Y. Han, T. Scargill, G. Lan, M. Gorlatova
In Proc. IEEE Workshop on Smart Edge Computing and Networking (co-located with IEEE PerCom), Austin, TX, Mar. 2020.
Two major challenges stand in the way of a vision of high-quality, personalized AR content delivered instantaneously to the end user. The first is the storage and computation restrictions of a mobile, wearable device: we simply cannot store locally all the holograms that might possibly be required. However, if we store this content in the cloud, we incur a transmission latency that is likely to degrade the user’s quality of experience. The second is that maximizing the potential for personalized content requires processing highly sensitive attributes such as a user’s eye movements. How, then, do we preserve the privacy of users?
Fortunately, edge computing can provide a solution to both of these issues; as such, we see it as a key enabler of modern AR. By bringing remote resources closer to the end user, we can store a wider range of possible content and run the complex algorithms that select it, while still provisioning it within an acceptable time window. Furthermore, the edge can provide an ideal platform to implement privacy-preserving methods such as differential privacy before data is transferred to a cloud location administered by a third party. We envisage edge servers being deployed as ‘specialists’ in a particular AR area of interest, storing knowledge and performing computational tasks specific to the scene, the virtual content displayed in that area, and how users behave in that area.
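As an illustration of the edge-side privacy step described above (a minimal sketch, not our implementation), an edge server could add Laplace noise to gaze coordinates before any data leaves for the cloud. The function names, the privacy budget, and the assumption that gaze coordinates are normalized to [0, 1] are ours for the example:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample zero-mean Laplace noise via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize_gaze(gaze_points, epsilon, sensitivity=1.0, seed=None):
    """Add Laplace noise to each (x, y) gaze coordinate before upload.

    epsilon: privacy budget (smaller = stronger privacy, more noise).
    sensitivity: maximum influence of one sample on a coordinate;
    here we assume coordinates are normalized to [0, 1].
    """
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [(x + laplace_noise(scale, rng), y + laplace_noise(scale, rng))
            for (x, y) in gaze_points]

# Example: noise a short gaze trace on the edge server before it
# is forwarded to third-party cloud storage.
raw = [(0.42, 0.31), (0.45, 0.33), (0.47, 0.36)]
noised = privatize_gaze(raw, epsilon=1.0, seed=7)
```

The point of running this at the edge rather than on the device is that the noisy trace, not the raw eye movements, is what crosses the wide-area network.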
In our research we aim to develop and evaluate edge-based network architectures that support modern AR applications. We plan to demonstrate the benefits of this approach over current techniques by measuring the latency and jitter incurred in realistic applications, and to assess the resulting impact on quality of experience through user studies. How much does an edge architecture improve user satisfaction? Which applications and tasks are the most latency- or privacy-sensitive, and therefore need to be processed at the edge? Can the workload be balanced by processing others in the cloud? If poor network conditions are detected, should we process locally instead? These are exciting research directions that can contribute greatly to the development of next-generation AR platforms.
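The placement questions above can be captured as a simple decision heuristic. This is an illustrative sketch, not a description of our system; the task attributes, latency figures, and tie-breaking policy are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    privacy_sensitive: bool  # e.g. carries raw eye-tracking data
    deadline_ms: float       # interaction latency budget

def choose_tier(task, edge_rtt_ms, cloud_rtt_ms,
                local_compute_ms, edge_compute_ms, cloud_compute_ms):
    """Pick the local / edge / cloud tier for one task.

    Privacy-sensitive tasks never leave the device or the edge; among
    the remaining candidates, pick the fastest tier whose total
    latency (network RTT + compute) meets the task's deadline.
    Falls back to local processing when no tier meets the deadline.
    """
    candidates = {
        "local": local_compute_ms,
        "edge": edge_rtt_ms + edge_compute_ms,
    }
    if not task.privacy_sensitive:
        candidates["cloud"] = cloud_rtt_ms + cloud_compute_ms
    feasible = {tier: lat for tier, lat in candidates.items()
                if lat <= task.deadline_ms}
    if not feasible:
        return "local"  # degrade gracefully rather than miss remotely
    return min(feasible, key=feasible.get)

# Example: a gaze-classification task stays at the edge under good
# network conditions, but falls back to the device when RTT spikes.
gaze = Task("gaze-classification", privacy_sensitive=True, deadline_ms=50.0)
tier_good = choose_tier(gaze, edge_rtt_ms=10, cloud_rtt_ms=60,
                        local_compute_ms=80, edge_compute_ms=15,
                        cloud_compute_ms=5)
tier_poor = choose_tier(gaze, edge_rtt_ms=200, cloud_rtt_ms=300,
                        local_compute_ms=80, edge_compute_ms=15,
                        cloud_compute_ms=5)
```

A real system would of course use measured, time-varying latencies and richer utility models, but even this form separates the two axes the questions raise: where a task is *allowed* to run (privacy) and where it *can* run in time (latency).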