Analysis of expressive gestures

What kind of information is relevant for recognizing expressive gestures?
How can this information be extracted?
What kind of processing is needed to obtain qualitative information about the expressive content conveyed by the users?


These are some of the questions that research on expressive content analysis seeks to answer.

The work on the analysis side is intended to investigate problems such as:

(i) the choice of sensor systems, possibly developed within the project, that provide the analysis algorithms with low-level information about the users and the environment;

(ii) the development of algorithms to process such low-level information in order to identify higher-level parameters related to the expressiveness conveyed by the users;

(iii) the development of models and algorithms for the extraction of high-level, qualitative information about the recognized expressive content.

Further, the analysis of the users' expressive gestures has to be performed both within a single modality (e.g., recognizing expressive information in human movement and gesture) and from a multimodal perspective (e.g., using information coming from the analysis of expressive content in human movement to perform a better and deeper analysis of expressive content in music performances, and vice versa).
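The low-level-to-higher-level mapping described in step (ii) can be illustrated with a motion-activity cue of the kind used in expressive movement analysis. The sketch below is only an illustration, not the project's actual algorithm: it assumes grayscale silhouette frames as the low-level sensor input, and the function name and threshold are hypothetical.

```python
import numpy as np

def quantity_of_motion(frames, threshold=0.1):
    """Estimate a motion-activity cue from a sequence of grayscale
    silhouette frames (pixel values in [0, 1]).

    For each pair of consecutive frames, the cue is the number of
    pixels whose intensity changed beyond `threshold`, normalized by
    the silhouette area so that the value is roughly invariant to
    body size and distance from the camera.
    """
    qom = []
    for prev, curr in zip(frames, frames[1:]):
        changed = np.abs(curr - prev) > threshold        # moving pixels
        area = np.count_nonzero(curr > threshold) or 1   # silhouette size
        qom.append(changed.sum() / area)
    return np.array(qom)
```

A per-frame cue such as this can then be aggregated over time (mean, peaks, pauses) to feed the higher-level, qualitative analysis of step (iii).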

A coordinated research effort has been carried out on the analysis of expressive gestures in dance, music, and visual media. The work mainly consisted of (i) identifying (in year 1) and further extending (in years 2 and 3) a “palette” of expressive cues for audio (music) and video (dance), (ii) performing statistical analysis on the values of expressive cues extracted from reference microdances and audio excerpts, (iii) validating the obtained results through spectators’ ratings, and (iv) using the extracted values in interactive performances and events (e.g., for automatic generation of audio and visual content depending on expressive gesture analysis). Results have been published in leading international journals and in the proceedings of several international conferences.
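The statistical analysis of step (ii) typically asks whether a cue separates performances conveying different expressive intentions. A minimal sketch of such a comparison, using invented illustrative numbers (not project data) and a standard pooled-standard-deviation effect size:

```python
import statistics

# Hypothetical mean values of a single motion cue, one per microdance,
# grouped by the expressive intention the dancer was asked to convey.
# The numbers are invented for illustration, not measured project data.
cue_values = {
    "heavy": [0.21, 0.18, 0.25, 0.22],
    "light": [0.42, 0.39, 0.47, 0.44],
}

def cohens_d(a, b):
    """Pooled-standard-deviation effect size (Cohen's d) between
    two groups of cue values."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b))
                 / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

d = cohens_d(cue_values["heavy"], cue_values["light"])
print(f"Cohen's d between intentions: {d:.2f}")
```

A large absolute effect size suggests the cue discriminates well between the intentions and is a good candidate for the validation against spectators' ratings in step (iii).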
As a concrete output, the research work produced a collection of software modules for the analysis and synthesis of expressive gestures, integrated into or connected with the MEGA System Environment.