ELEMENT Days: Symposium and Workshop

October 13-14, 2022: Symposium with invited talks, IRCAM, Paris (Salle Stravinsky) and online

October 21, 2022: Workshop and demos, IRCAM, Paris (Studio 5)

To mark the end of our project ELEMENT (Enabling Learnability in Movement Interaction), we present the main project results: two days of presentations alongside invited keynote speakers (Oct. 13-14), followed by one day of workshops and demos (Oct. 21).

The presentations address movement-based interaction from several perspectives: user-centered and participatory design studies, interactive machine learning tools, and applications spanning the arts (dance and music), education, and health.

October 13

Morning

Overview

  • 10:30 - 11:00 Welcome
  • 11:00 - 12:00 Overview of the ELEMENT project and key results (Frédéric Bevilacqua)

Afternoon

Theme 1: Interactive Machine Learning

Invited speaker: Rebecca Fiebrink

  • 14:00 - 15:00 Keynote Speaker: Rebecca Fiebrink
  • 15:00 - 15:30 Interactive Machine Teaching (T. Sanchez, B. Caramiaux)
  • 15:30 - 16:00 Marcelle: Composing Interactive Machine Learning Workflows and Interfaces (J. Françoise, B. Mohammadzadeh)
  • 16:00 - 17:00 Break

Séminaire Technologies, Inclusion et Société (Technologies, Inclusion and Society seminar)

Invited speaker: Paola Tubaro

  • 17:00 - 18:00 Keynote Speaker: Paola Tubaro

October 14

Morning

Theme 2: Learning or Re-Learning Movements

Invited speaker: Ana Tajadura-Jiménez

  • 10:30 - 11:00 Welcome
  • 11:00 - 12:00 Keynote Speaker: Ana Tajadura-Jiménez
  • 12:00 - 13:00 From education to rehabilitation using sensory feedback
    • Tools: the CoMo Family (F. Bevilacqua)
    • Learning and Exploring with sonification (A. Loriette, A. Liu, V. Paredes)

Afternoon

Theme 3: Movement-Based Interaction: Artistic Applications

Invited speaker: Andrew McPherson

  • 14:30 - 15:30 Keynote Speaker: Andrew McPherson
  • 15:30 - 16:00 Probing dance practice learning and transmission (S. Fdili Alaoui, T. Grimstad Bang, J.P. Rivière, E. Walton)
  • 16:00 - 16:30 Break

Theme 4: Beyond Elementary Technology: Ethical and Societal Issues

  • 16:30 - 17:00 Practice and Politics of AI in Visual Arts (B. Caramiaux)
  • 17:00 - 18:00 Roundtable discussion

October 21

10:00 - 13:00: Marcelle: A Toolkit for Composing Interactive Machine Learning Workflows and Interfaces

https://marcelle.dev/

Marcelle is an open-source toolkit for composing interactive machine learning workflows and interfaces in the browser; it can also be integrated into Python pipelines. It is designed for applications in which several stakeholders interact with ML pipelines, from curating data and tweaking model parameters to choosing metrics and exploring a model's capacities.
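
To give a flavor of what the hands-on session covers, the sketch below assembles a minimal webcam gesture classifier with an interactive training set and dashboard. The component names follow the examples in the Marcelle documentation (marcelle.dev), but the exact API may vary across versions, so treat this as an illustrative sketch rather than a definitive implementation.

```ts
import {
  button, dashboard, dataset, datasetBrowser, dataStore,
  mlpClassifier, mobileNet, textInput, trainingProgress, webcam,
} from '@marcellejs/core';

// Input pipeline: webcam images, embedded with a pre-trained MobileNet.
const input = webcam();
const featureExtractor = mobileNet();

// Widgets for interactive data capture.
const label = textInput();
label.title = 'Instance label';
const capture = button('Hold to record');

// Instances persist in the browser (a server-side store is also possible).
const store = dataStore('localStorage');
const trainingSet = dataset('training-set-demo', store);

// While the capture button is held, turn each webcam frame into a
// (features, label) instance and add it to the training set.
const $instances = input.$images
  .filter(() => capture.$pressed.get())
  .map(async (img) => ({
    x: await featureExtractor.process(img),
    y: label.$value.get(),
  }))
  .awaitPromises();
trainingSet.capture($instances);

// A small neural network classifier, retrained on demand.
const classifier = mlpClassifier({ layers: [64, 32], epochs: 20 });
const trainButton = button('Train');
trainButton.$click.subscribe(() => classifier.train(trainingSet));

// The interface is composed from the same components as the pipeline.
const dash = dashboard({ title: 'Gesture classifier (sketch)', author: 'ELEMENT' });
dash.page('Data')
  .sidebar(input, featureExtractor)
  .use([label, capture], datasetBrowser(trainingSet));
dash.page('Training').use(trainButton, trainingProgress(classifier));
dash.show();
```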

Demonstration and hands-on workshop by Jules Françoise and Baptiste Caramiaux. The first hour will be dedicated to a participatory demo showcasing various features of the Marcelle toolkit. Participants will then be guided through the basics of programming Marcelle applications. The session will end with a discussion of participants' projects with Marcelle.

Please register by sending an email to Jules Françoise.

14:30 - 17:30: Demonstration of the CoMo family

https://apps.ismm.ircam.fr/como

CoMo is an ecosystem of web applications for gesture/movement interaction with sound. Its applications span artistic creation, education, and rehabilitation. The family includes CoMo-Elements, CoMo.education, CoMo-Rééducation, and CoMo-Vox; a minimal sketch of the kind of browser-based motion-sound mapping these apps build on follows the schedule below.

  • 14:30 CoMo-Elements: gesture recognition and motion-sound mapping using mobile phones (Frédéric Bevilacqua)
  • 15:10 CoMo.education: telling stories with sound and movement in kindergarten (Marion Voillot)
  • 15:50 CoMo-Rééducation: movement sonification for physical rehabilitation (Iseline Peyre)
  • 16:30 CoMo-Vox: an application for practicing choir conducting
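
None of the CoMo code is reproduced here, but the underlying technique, mapping live motion-sensor data to sound synthesis in a web page, can be illustrated with standard browser APIs. The sketch below (an illustration, not CoMo's implementation) drives an oscillator's pitch and loudness from the phone's acceleration using the DeviceMotion and Web Audio APIs; the mapping constants are arbitrary choices.

```ts
// Minimal browser motion-sound mapping sketch (not CoMo's actual code).
const ctx = new AudioContext(); // must be created/resumed from a user gesture on mobile
const osc = ctx.createOscillator();
const gain = ctx.createGain();
gain.gain.value = 0; // silent until the device moves
osc.connect(gain).connect(ctx.destination);
osc.start();

// Overall acceleration magnitude (m/s^2), close to 0 when the phone is still.
function magnitude(a: DeviceMotionEventAcceleration | null): number {
  if (!a) return 0;
  return Math.hypot(a.x ?? 0, a.y ?? 0, a.z ?? 0);
}

// Map motion energy to pitch and loudness, smoothed to avoid zipper noise.
// (On iOS, DeviceMotionEvent.requestPermission() must be called first.)
window.addEventListener('devicemotion', (e: DeviceMotionEvent) => {
  const m = magnitude(e.acceleration);
  const now = ctx.currentTime;
  osc.frequency.setTargetAtTime(200 + 40 * m, now, 0.05); // more motion, higher pitch
  gain.gain.setTargetAtTime(Math.min(m / 20, 1), now, 0.05); // more motion, louder
});
```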

Keynote Speakers

Rebecca Fiebrink

Using interactive machine learning to expand possibilities for new musical instruments (and more)
Professor @ University of the Arts London, Creative Computing Institute, London, United Kingdom
https://researchers.arts.ac.uk/1594-rebecca-fiebrink

Abstract: In this talk, I'll give a tour through 14 years of my research exploring how interactive machine learning can facilitate new approaches to designing new musical instruments and other real-time interactions. I'll describe how tools like Wekinator, Sound Control, and InteractML have been used to make new music and creative experiences. I'll also discuss the research discoveries arising from the design and use of these tools, suggesting new ways of thinking about data and machine learning as creative tools, new approaches to supporting personalisation and accessibility, and new perspectives on making machine learning usable and useful for more people.

Bio: Professor Rebecca Fiebrink makes new accessible and creative technologies. As a Professor at the Creative Computing Institute at University of the Arts London, her teaching and research focus largely on how machine learning and artificial intelligence can change human creative practices. She is the developer of the Wekinator creative machine learning software, which is used around the world by musicians, artists, game designers, and educators. She is the creator of the world's first online class about machine learning for music and art. Much of her work is driven by a belief in the importance of inclusion, participation, and accessibility: current and recent projects include creating new accessible technologies with people with disabilities, and working with the Decolonising Arts Institute and Tate to build machine learning tools for addressing bias and uncovering hidden connections in art collections across the UK. Prof. Fiebrink previously taught at Goldsmiths, University of London, and Princeton University, and she has worked with companies including Microsoft, Smule, and Imagine Research. She holds a PhD in Computer Science from Princeton University.

Paola Tubaro

Learners in the loop: hidden humans and the ethics of artificial intelligence
Research Director @ CNRS (CREST, LISN)
https://cv.archives-ouvertes.fr/paola-tubaro

Abstract: Today’s artificial intelligence (AI), largely based on data-intensive machine learning models, relies heavily on the contributions of invisibilized and precarized 'humans-in-the-loop' who perform a variety of functions, from data preparation and verification of results to impersonation when algorithms fail. Using original data, I show that the 'loop' is one in which both machines and humans learn, but humans are at a disadvantage: they lack recognition and cannot build a personal and professional development trajectory. I conclude with the implications of these findings for AI ethics charters, which currently lack provisions to improve these workers' conditions and quality of life.

Bio: Paola Tubaro is Research Professor in sociology at the National Centre for Scientific Research (CNRS) in Paris. A specialist in social and organizational networks, she is currently researching the place of human labour in the global production networks of artificial intelligence, and the social conditions of digital platform work. Her interests also include data methodologies and research ethics.

Ana Tajadura-Jiménez

The Hearing Body: Sound-driven Body Transformation Experiences and Applications for Emotional and Physical Health
Researcher @ i_mBODY lab, Universidad Carlos III de Madrid, Leganés, Spain, and UCL Interaction Centre (UCLIC), University College London, UK
https://www.imbodylab.com

Abstract: Body perceptions are important for people’s motor, social and emotional functioning. Critically, current neuroscientific research has shown that body perceptions are not fixed, but are continuously updated through sensorimotor information. In this talk I will present work from our group on how sound and other sensory feedback on one’s body and actions can be used to alter body perception, creating Body Transformation Experiences. I will discuss how the neuroscientifically grounded insight that body perceptions are continuously updated through sensorimotor information may contribute to the design of novel body-centred technologies that support people’s needs and behaviour change. I will then present various studies from our current project, Magic OutFit, aimed at informing the design of wearable technology in which sensory-driven changes in body perception are used to enhance behavioural patterns and emotional state in the context of exertion. I will also discuss how, beyond real-life applications, novel technologies for body sensing and sensory feedback may become research tools for investigating how emotional and multisensory processes shape body perception. I will conclude by identifying new challenges and opportunities that this line of work presents, some of which we are addressing in our recently started ERC project BODYinTRANSIT.

Bio: Ana Tajadura-Jiménez is an Associate Professor at Universidad Carlos III de Madrid (UC3M) and Honorary Research Associate at the University College London Interaction Centre (UCLIC). She leads the i_mBODY lab, whose research focuses on understanding how sensory-based interaction technologies can be used to alter people’s perceptions of their own body, their emotional state and their motor behaviour patterns. This research is empirical and multidisciplinary, combining perspectives from psychoacoustics, neuroscience and Human-Computer Interaction (HCI). She is currently Principal Investigator of the AEI-funded Magic OutFit project, which aims to inform the design of technology to make people feel better about their bodies and sustain healthy lifestyles. She has recently been awarded a Consolidator Grant from the European Research Council to conduct research for the next 5 years. Prior to this she obtained a PhD in Applied Acoustics at Chalmers University of Technology (Sweden), was a post-doctoral researcher in the Lab of Action and Body at Royal Holloway, University of London, an ESRC Future Research Leader and Principal Investigator of The Hearing Body project at University College London Interaction Centre (UCLIC), and a Ramón y Cajal fellow at Universidad Loyola Andalucía. Her work earned her the 2021 Fundación Banco Sabadell Award for Science and Engineering.

Andrew McPherson

Sensorimotor skill and cultural factors in digital musical instrument design
Professor @ Centre for Digital Music at Queen Mary University of London
https://andrewmcpherson.org/

Abstract: This talk will examine two specific human facets of new instrument design: sensorimotor skill and sociocultural factors. In the first case, it takes many years to achieve proficiency on an instrument, and a trained performer is unlikely to want to repeat the process afresh with an unfamiliar instrument. This talk will explore ways of designing new instruments which extend and repurpose existing expertise on familiar instruments. In the second case, instrumental performance is the locus of a rich set of aesthetic and social practices whose values should be considered at an early stage of any instrument design. The musical instrument thus takes on many different roles: as transducer of action to sound, as a mediator of the performer’s intentions, and as the product of a larger musical ecology.

Bio: Andrew McPherson is Professor of Musical Interaction in the Centre for Digital Music (C4DM) at Queen Mary University of London. A composer (PhD U.Penn 2009) and electronic engineer (MEng MIT 2005) by training, his research focuses on digital musical instruments, especially those which extend the capabilities of traditional musical instruments. Within C4DM, he leads the Augmented Instruments Laboratory, a research team investigating musical interface design, performer-instrument interaction and embedded hardware systems. Notable projects include the magnetic resonator piano, an electromagnetically-augmented acoustic grand piano which has been used by dozens of composers and performers worldwide; TouchKeys, a sensor overlay which transforms the electronic keyboard into a nuanced multi-touch control surface; and Bela, an open-source embedded hardware platform for ultra-low-latency audio and sensor processing which spun out into a company in 2016. Andrew holds a research fellowship from the Royal Academy of Engineering which is co-funded by Bela. He will soon start a Consolidator Grant from the European Research Council (funded by UKRI) entitled “RUDIMENTS: Reflective understanding of digital instruments as musical entanglements”, which looks at the cultural implications of engineering decisions in music technology.