Many emerging mobile applications, including augmented reality (AR) and wearable cognitive assistance (WCA), aim to provide seamless user interaction. However, the complexity of benchmarking these human-in-the-loop applications limits reproducibility and makes performance evaluation difficult. In this paper, we present EdgeDroid, a benchmarking suite designed to reproducibly evaluate these applications. Our core idea rests on recording traces of user interaction, which are then replayed at benchmarking time in a controlled fashion, driven by an underlying model of human behavior. This enables an automated system that greatly simplifies benchmarking large-scale scenarios and stress-testing the application. Our results show the benefits of EdgeDroid as a tool for both system designers and application developers.
Olguín, M., Wang, J., Satyanarayanan, M., Gross, J.
Proceedings of the 20th International Workshop on Mobile Computing Systems and Applications (HotMobile ’19), Santa Cruz, CA, February 2019
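The trace record-and-replay idea described in the abstract can be pictured as a small harness that steps through a pre-recorded interaction trace, waits a model-derived "think time" before emitting each frame, and measures the system's response latency. The sketch below is purely illustrative and uses assumed names (TraceReplayer, think_time, the JSON trace format); it is not the actual EdgeDroid implementation or trace schema, which are described in the paper.

```python
import json
import random
import time

# Hypothetical trace format: a JSON list of steps, each recording the frame
# sent to the system under test and how long the user took to react to the
# previous instruction. All names here are illustrative, not EdgeDroid's API.

class TraceReplayer:
    def __init__(self, trace_path, speedup=1.0):
        with open(trace_path) as f:
            self.steps = json.load(f)   # list of {"frame": ..., "delay_s": ...}
        self.speedup = speedup          # > 1.0 emulates a faster "user"

    def think_time(self, recorded_delay):
        # Toy human-behavior model: jitter the recorded reaction delay by
        # +/- 10% rather than replaying it verbatim, so runs remain
        # controlled but not unrealistically deterministic.
        jitter = random.uniform(0.9, 1.1)
        return (recorded_delay * jitter) / self.speedup

    def replay(self, send_frame):
        # send_frame: a callable that ships one frame to the system under
        # test and blocks until the response arrives; its round-trip time
        # is the latency we record for each step.
        latencies = []
        for step in self.steps:
            time.sleep(self.think_time(step["delay_s"]))
            start = time.monotonic()
            send_frame(step["frame"])
            latencies.append(time.monotonic() - start)
        return latencies
```

Because the replay loop is fully automated, many such replayers can be launched in parallel against a backend to emulate large-scale, multi-user load without recruiting human subjects for every run.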