Enabling Learnability in Movement Interaction
Considering movement-based interaction beyond the mouse-keyboard paradigm, the ANR project ELEMENT (2018-2022) proposes to shift the focus from intuitiveness and naturalness towards learnability. With learnable embodied interactions, novice users should be able to approach a new system at a level of difficulty adapted to their expertise; the system should then carefully adapt to their improving motor skills, eventually enabling complex, expressive, and engaging interactions. The project addresses three research questions:
How can body movement be designed as an input modality whose components are easy to learn, yet allow for rich, complex interaction techniques that go beyond simple commands?
What computational movement models can account for sensorimotor adaptation and/or learning in embodied interaction?
How can model-driven feedback and guidance be optimized to facilitate skill acquisition in embodied interaction?