Designing Interactions for and through Movement with User-Centered Machine Learning
Jules Françoise, Postdoctoral Fellow, SIAT, SFU
Jan 18, 2017, 12:30 - 14:20, SFU Surrey Room 5380
About the talk:
The constant expansion of motion capture technologies leads us to reconsider how movement can support experience in applications such as virtual reality, gaming, and creative practices. In this context, there is a need for methodologies and frameworks to help designers and users create meaningful movement experiences. My research takes a user-centered perspective on machine learning for the design of multimodal interactions "by demonstration", that is, using a set of example gestures associated with feedback or resulting actions.
In this presentation, I will describe a conceptual and computational framework for designing interactions between movement and sound feedback. I will present several probabilistic models that can learn from a few examples, and discuss the role and interpretation of the model parameters in user-centered design. I will then address the scientific challenges raised by the use of machine learning by novice users for interaction design, which involves reciprocal learning mechanisms between the user and the system. Finally, I will present a set of applications that either focus on creative and artistic practices or are meant to support movement learning.
Jules Françoise is a postdoctoral fellow in Human-Computer Interaction (HCI) at Simon Fraser University (SFU) in Vancouver. He holds a Master's degree in acoustics and a PhD in computer science from Université Pierre et Marie Curie, which he completed at Ircam. His research interests lie at the intersection of HCI and machine learning for expressive movement analysis and modelling. In particular, he is interested in understanding how continuous multimodal feedback (especially auditory feedback) can support movement learning and expression. He co-founded the International Conference on Movement and Computing (MOCO) and serves on its steering committee.