About

Considering movement-based interaction beyond the mouse-keyboard paradigm, the ANR project ELEMENT (2018-2021) proposes to shift the focus from intuitiveness/naturalness towards learnability: new interaction paradigms might require users to develop specific sensorimotor skills compatible with – and transferable between – digital interfaces (including video, mobile, Internet of Things, and game interfaces). With learnable embodied interactions, novice users should be able to approach a new system at a difficulty adapted to their expertise; the system should then carefully adapt to their improving motor skills, eventually enabling complex, expressive and engaging interactions.

Our project addresses both methodological and modelling issues. First, we need to elaborate methods to design learnable movement vocabularies, whose units are easy to learn and can be composed to create richer and more expressive movement phrases. Since movement vocabularies proposed by novice users are often idiosyncratic, with limited expressive power, we propose to capitalize on the knowledge and experience of movement experts such as dancers and musicians. For example, dance practitioners commonly use the notion of movement qualities (i.e. describing “how” a movement is performed [Fdili Alaoui et al., 2012]), which can be key to describing movements, as well as methods to memorize choreographic phrases such as marking (i.e. performing a movement sequence with simplified gestures). Second, we need to conceive computational models able to analyze users’ movements in real time to provide various multimodal feedback and guidance mechanisms (e.g. visual and auditory feedback). Importantly, these movement models must take into account the user’s expertise and learning development. We argue that computational movement models able to adapt to user-specific learning pathways are key to facilitating the acquisition of motor skills.
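To make the notion of model-driven adaptation concrete, here is a minimal, hypothetical sketch in Python of such a feedback loop: it tracks a single movement feature over a sliding window and drifts a difficulty target towards the learner's recent performance. The class name, the feature, and the thresholds are illustrative assumptions only, not the project's actual models.

```python
# A minimal, hypothetical sketch of model-driven adaptive feedback (not the
# project's actual implementation). It scores one movement feature per sample
# and adapts the feedback threshold to the user's recent performance, so that
# guidance stays matched to the learner's current skill level.

from collections import deque
from statistics import mean


class AdaptiveFeedbackModel:
    """Toy model: score each movement sample, adapt the target to the user."""

    def __init__(self, window: int = 20, adapt_rate: float = 0.05):
        self.scores = deque(maxlen=window)   # recent performance scores
        self.target = 0.5                    # current difficulty target in [0, 1]
        self.adapt_rate = adapt_rate

    def update(self, feature: float) -> str:
        """Ingest one movement feature in [0, 1]; return a feedback label."""
        self.scores.append(feature)
        # Adapt: if the user consistently exceeds the target, raise it;
        # if they consistently fall short, lower it (scaffolding).
        if len(self.scores) == self.scores.maxlen:
            self.target += self.adapt_rate * (mean(self.scores) - self.target)
        # Map the gap between performance and target to discrete feedback.
        gap = feature - self.target
        if gap > 0.1:
            return "advance"    # e.g. unlock a richer movement vocabulary
        if gap < -0.1:
            return "guide"      # e.g. trigger auditory/visual guidance
        return "reinforce"      # e.g. neutral confirmatory feedback


if __name__ == "__main__":
    model = AdaptiveFeedbackModel()
    # Simulated learning curve: the feature improves over 60 movement samples.
    for t in range(60):
        feedback = model.update(min(1.0, 0.3 + 0.01 * t))
    print(f"final target: {model.target:.2f}, last feedback: {feedback}")
```

The design choice this sketch illustrates is that the target is never fixed: it follows the learner, so the same system can scaffold a novice and still challenge an expert.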

Research questions and aims

We propose to address three main research questions:

  • How to design body movement as an input modality whose components are easy to learn, yet allow for complex/rich interaction techniques that go beyond simple commands?
  • What computational movement modelling can account for sensorimotor adaptation and/or learning in embodied interaction?
  • How to optimize model-driven feedback and guidance to facilitate skill acquisition in embodied interaction?

Use cases

We consider complementary use cases where learnability is essential for the development of expressive or efficient interaction. The project involves three main use cases, each led by one of the project partners.

Use case 1: Learning gesture-based interaction techniques for communication.

This use case takes a general HCI perspective and will be coordinated by LRI. In particular, it will build on the lab’s work on gesture-based interaction in text-entry communication applications [Alvina et al., 2017] and on non-verbal communication in computer-supported cooperative work through wide displays [Wagner et al., 2013]. In this use case, we will assess how adaptive technologies for gesture-based interaction can facilitate the transition from novice to expert use, where the users form a heterogeneous, uncontrolled population of mobile application end-users.

Use case 2: Music and dance, from novice to experts.

This use case takes a more specific perspective on expressive gestures, as typically found in music interfaces and dance. It will be coordinated by IRCAM, which has expertise in the performing arts (music, dance) and recently led a project on sensorimotor learning with audio feedback (the ANR project Legos). In particular, this use case will involve new gestural interfaces for playing electronic music that can be used by either novice or expert musicians. In close collaboration with LRI, we will also consider the learning of expressive gestures in dance using audio-visual feedback, leveraging the expertise of Sarah Fdili Alaoui at LRI (a certified Laban Movement Analyst).

Use case 3: Assistive technology.

This use case focuses on assistive technologies for people with sensorimotor disabilities. It will be coordinated by LIMSI, where the AMI team has expertise in multimodal interaction and assistive technologies. In particular, the AMI team recently conducted research on new spatial interfaces for the blind [Jacquet et al., 2005] and is investigating a novel touch-based interface for wheelchair control designed for people with motor disabilities [Guedira et al., 2016]. In this use case, individual adaptation is critical because of the large variety of sensorimotor abilities. We will consider how adaptive technologies and multimodal feedback can facilitate the learning and/or appropriation of gesture-based interfaces by people with disabilities.