In recent years, breakthroughs in deep learning (DL) have transformed how sensor data (e.g., images, audio, ambient light, and even accelerometer and GPS readings) can be interpreted to extract the high-level information needed by bleeding-edge sensor-driven systems such as smartphone apps, wearable devices, and self-driving cars. Today, the state-of-the-art computational models that, for example, recognize a face, track user emotions, or monitor physical activities are increasingly based on deep learning principles and algorithms. Unfortunately, deep models typically place severe demands on local device resources, which has conventionally limited their adoption within mobile and embedded platforms. As a result, in far too many cases existing systems process sensor data with machine learning methods that deep learning superseded years ago.
At the same time, two new families of applications have emerged that aim to push the boundaries of the user experience of mobile devices in everyday life. On the one hand, hardware and software progress in augmented and virtual reality (AR/VR) technologies has sparked the promise of unified mixed-reality environments, such as the recently coined concepts of the "metaverse" and "telepresence", where users will be able to interact through new forms of digital communication. On the other hand, cyber-physical systems have started gaining traction with the gradual maturation of robot platforms and DL-based perceptual models, fueling growing interest in embodied intelligence in the form of home robots, autonomous vehicles, and drones. Despite the application-level opportunities, both types of systems pose significant engineering challenges: robust operation, responsiveness, energy efficiency, and user-oriented design form a multi-objective design and engineering problem that calls for radically novel solutions beyond today's conventional approaches.
In this context, the mobile computing community is in a unique position to begin the careful study of two core technical questions: 1) how to design robust and energy-efficient DL models, or adapt existing ones, to meet the stringent needs of mixed-reality mobile devices and cyber-physical embedded systems, and 2) how to exploit the capabilities of embedded DL through software and hardware innovations in order to enable the emerging applications of AR/VR and robotics. As such, we particularly encourage submissions on these two topics. More specific topics of interest include, but are not limited to:
- Applications of deep neural networks (DNNs) with real-time requirements, including vision, audio or other input modalities
- Compression of DNN architectures for mobile devices (quantization, pruning, etc.; see the sketch after this list)
- Systems and networking techniques for DL-driven VR/AR applications
- Resource-efficient deep models for VR/AR systems
- Neural models or sensors for modeling user activities and behavior
- Mobile continuous vision systems supported by DNNs
- Optimizing commodity processors (GPUs, DSPs, NPUs) for DNNs
- Embedded hardware accelerators for DNNs
- Distributed training approaches, including Federated Learning
- Network-level optimizations for improved edge and cloud offloading
- Resource & energy management for embedded DL systems
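As a minimal sketch of one compression technique named in the list above, the snippet below applies post-training dynamic quantization to a toy PyTorch model; the layer sizes and input shape are purely illustrative and not tied to any particular submission or system.

```python
import torch
import torch.nn as nn

# Toy stand-in for a mobile-oriented DNN (layer sizes are illustrative only).
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Post-training dynamic quantization: Linear-layer weights are stored as int8
# and dequantized on the fly, shrinking the model and speeding up CPU inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The inference API is unchanged; only the internal weight representation differs.
x = torch.randn(1, 128)
print(quantized_model(x).shape)  # torch.Size([1, 10])
```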
Keynote Speakers
Maria Gorlatova, Duke University
Nikolas Kourtellis, Telefonica Research

Important Dates
- Paper Submission Deadline: April 8th, 11:59PM AOE
- Paper Submission Deadline (Final): April 22nd, 11:59PM AOE
- Workshop Event: July 1st, 2022