Announcement of a partnership between Microsoft and the Living Edge Lab at CMU
Partnership announcement featuring the OpenRTiST application
This short (4-minute) NPR radio piece on wearable cognitive assistance, with an associated web page, was broadcast in Spring 2016.
Part of a series of articles on Gabriel, a framework for wearable cognitive assistance that leverages cloudlets.
RibLoc System for Surgical Repair of Ribs: Wearable Cognitive Assistant
Unsolicited by us, this video was made by VIZR Tech (http://vizrtech.com), a startup company, to illustrate the potential of wearable cognitive assistance in medical training. The video provides background to explain the relevant concepts to the company's target audience. The company already uses Google Glass (with the camera blocked) in medical training, to show trainees videos of complex medical procedures. We created this new Gabriel application to provide a tutorial on the RibLoc system for surgical repair of ribs, which is made by AcuteInnovations, Inc. (http://acuteinnovations.com). Today, this training is delivered by an AcuteInnovations technician who travels to the doctor's site. The Gabriel application illustrates how the training could be delivered more efficiently; in addition, it remains available to the doctor for refresher training at any time. The principals of VIZR Tech appear in this video and share their thoughts about why this is a game-changing innovation. From a technical point of view, the computer vision in this application is particularly difficult because the parts are small, differ in subtle ways (e.g., screw color), and are easily confused under different lighting conditions. The object detectors are all implemented using deep neural networks.
IKEA Table Lamp Kit: Wearable Cognitive Assistant
In our talks on Gabriel, we have often mentioned assembly of IKEA kits as an example of how step-by-step guidance and prompt detection of errors could be valuable. This video shows a Gabriel application for assembling a genuine IKEA kit (a table lamp) purchased off the shelf at IKEA. An interesting first is the use of short video segments (rather than still images) in the Google Glass display to guide the user. The use of videos in this way, combined with the active, context-sensitive real-time guidance from the Gabriel application, is very effective.
RTFace: Denaturing Live Video on Cloudlets
This demo shows how cloudlets can improve the scalability of video analytics and how they can be used to enforce privacy policies based on face recognition. The demo also illustrates use of the OpenFace face recognition system that we have created. RTFace combines OpenFace with face tracking across frames to achieve the frame rate necessary for live video.
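The interleaving of expensive recognition with cheap tracking can be sketched as follows. This is an illustrative outline only: `recognize` and `track` are placeholder stand-ins rather than the actual OpenFace or RTFace APIs, and the recognition interval is a made-up number.

```python
# Sketch of the RTFace idea: run a full (slow) face-recognition pass only
# every N frames, and carry identities forward with cheap per-frame
# tracking in between, keeping the pipeline fast enough for live video.

RECOGNITION_INTERVAL = 10  # hypothetical: full recognition every 10th frame

def recognize(frame):
    """Placeholder for an OpenFace-style recognition pass (slow path)."""
    return [("alice", (40, 40, 80, 80))]  # (identity, bounding box)

def track(prev_faces, frame):
    """Placeholder for frame-to-frame tracking (fast path): identities are
    reused; only the bounding boxes would be updated in a real tracker."""
    return [(name, box) for name, box in prev_faces]

def process_stream(frames):
    faces = []
    results = []
    for i, frame in enumerate(frames):
        if i % RECOGNITION_INTERVAL == 0:
            faces = recognize(frame)      # slow path, amortized over N frames
        else:
            faces = track(faces, frame)   # fast path preserves frame rate
        results.append(faces)
    return results
```

The key design point is amortization: recognition cost is paid on a small fraction of frames, while tracking keeps identities attached to faces on every frame in between.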
Making a Sandwich: Google Glass and Microsoft HoloLens Versions of a Wearable Cognitive Assistant
This demo shows two things. First, it shows how Gabriel can use much more sophisticated computer vision (based on convolutional neural networks) than the simple algorithms used in demos such as Lego and Ping-Pong. Second, it shows how different kinds of wearable devices (Google Glass and Microsoft HoloLens) can be used for the same application with the same Gabriel back-end.
Gabriel on CBS 60 Minutes
Wearable Cognitive Assistance can be viewed as "Augmented Reality Meets Artificial Intelligence". This 90-second excerpt from the October 9, 2016 CBS 60 Minutes special edition on Artificial Intelligence highlights the table-tennis wearable cognitive assistant on Google Glass.
The Fall 2016 offering of 15-821/18-843 "Mobile and Pervasive Computing" course included many 3-person student projects based on cloudlets and wearable cognitive assistance. Examples include wearable cognitive assistance for use of an AED device, cloudlet-based privacy mediator for audio data, etc. This web page contains brief descriptions of the projects, and videos of the student projects captured on the final day of class. The PDFs of the posters used by the students to explain their projects are also included.
FaceSwap: Cloud versus Cloudlet Comparison of User Experience
This demo shows the difference between using a cloud and a cloudlet for an application in which the impact of latency is easily perceived by users. We have created an Android application called "FaceSwap" that is available in the Google Play Store. A back-end VM image for an Amazon cloud site is also available; the same image can also be run on a cloudlet.
TPOD System for Creating Deep Neural Net Object Detectors for Cloudlets
Creating object detectors for wearable cognitive assistance is difficult. TPOD is a web-based system that we have created to simplify the creation of training data sets for object detectors based on deep convolutional neural networks. This demo shows an early version of TPOD.
Drawing Assistant with Google Glass
Can a legacy application for training be modified to use Gabriel? This demo shows how a Drawing Assistant created by researchers at INRIA in France has been modified to use a wearable device (Google Glass). In its original form, a user receives instruction to improve their drawing skills on a desktop display, and provides input using a pen-based tablet. This demo shows how the system has been modified to retain the application logic for instruction, but to accept any writable surface (e.g., paper or a whiteboard) for input. Computer vision on the video stream from Google Glass is used to generate input and display streams that are indistinguishable from the originals.
PingPong Assistant with Google Glass
This conceptually simple demo has proved especially popular because it brings out the importance of low latency. A person wearing Google Glass plays ping-pong with a human opponent. The video stream from the Glass device is sent to a cloudlet, where each frame is analyzed to detect the ball and the opponent, compare their positions with those in the previous frame, and infer their trajectories. Based on this, the application guides the user to hit to the left or to the right in order to increase the chances of beating the opponent. To avoid annoying the user, the application offers advice only sparingly, and only when it is confident of that advice.
The Fall 2015 offering of 15-821/18-843 "Mobile and Pervasive Computing" course included many 2-person student projects based on cloudlets and wearable cognitive assistance. Examples include wearable cognitive assistance for gym exercises, using cloudlets for Google Street View hyper-lapse viewing, real-time cloudlet-based super-resolution imaging, etc. This web page contains brief descriptions of the projects, and videos of the student projects captured on demo day. The PDFs of the posters used by the students to explain their projects are also included.
Task Assistance Demo with Lego Assembly on Google Glass
This is the world's very first wearable cognitive assistance application! Since it was our first attempt, we chose a deliberately simplified task: assembling a 2D Lego model. The demo looks easy, but the code to implement it reliably was challenging, especially in coping with flexible user actions and varied lighting conditions.
Satyanarayanan, M., Bahl, P., Caceres, R., Davies, N.
"The Case for VM-based Cloudlets in Mobile Computing"
IEEE Pervasive Computing, Volume 8, Number 4, October-December 2009
Satyanarayanan, M., Schuster, R., Ebling, M., Fettweis, G., Flinck, J., Joshi, K., Sabnani, K.
"An Open Ecosystem for Mobile-Cloud Convergence"
IEEE Communications Magazine, Volume 53, Number 3, March 2015
Ha, K., Chen, Z., Hu, W., Richter, W., Pillai, P., Satyanarayanan, M.
"Towards Wearable Cognitive Assistance"
Proceedings of the Twelfth International Conference on Mobile Systems, Applications and Services (MobiSys 2014), Bretton Woods, NH, June 2014
Satyanarayanan, M.
"The Emergence of Edge Computing"
IEEE Computer, Volume 50, Number 1, January 2017
Ha, K.
"System Infrastructure for Mobile-Cloud Convergence"
PhD Thesis, Electrical and Computer Engineering Department, Carnegie Mellon University, December 2016
Hu, W., Gao, Y., Ha, K., Wang, J., Amos, B., Chen, Z., Pillai, P., Satyanarayanan, M.
"Quantifying the Impact of Edge Computing on Mobile Applications"
Proceedings of APSys 2016, Hong Kong, China, August 2016
Lewis, G. A.
"Software Architecture Strategies for Cyber-Foraging Systems"
PhD Thesis, Vrije Universiteit, Amsterdam, Netherlands, June 2016
Amos, B., Ludwiczuk, B., Satyanarayanan, M.
"OpenFace: A General-purpose Face Recognition Library with Mobile Applications"
Technical Report CMU-CS-16-118, Computer Science Department, Carnegie Mellon University, June 2016
For a complete list of publications related to edge computing, please click this link.