Capturing time and predicting human movement

The EnTimeMent project proposes a radically new motion analysis technology capable of predicting human movement, with applications in areas as diverse as healthcare, the performing arts, sports, video games and film

Anyone who has seen behind-the-scenes footage of Hollywood’s biggest action and animation movies, or of the most famous sports video games, may have noticed actors and athletes in tight-fitting suits dotted with small lights. These motion capture suits are designed to record, through cameras and sensors, the wearer’s body movements so that they can be processed by computers and regenerated on screen.

Pioneered in the 1970s by Swedish psychologist Gunnar Johansson, motion capture technology has come a long way. However, it still struggles with highly complex movements, particularly those involving interaction with other moving bodies.

In humans, predicting how others move relies on a constant feedback loop of information at different temporal scales, which allows us to accurately estimate the motion signature of, for example, a pedestrian crossing the street, an actor on stage or a gymnast dancing across the floor. What if we could teach technology to predict movement in a similar way?

This is where the EnTimeMent project comes in. Funded by the EU FET Proactive/EIC Pathfinder programme and led by the Università degli Studi di Genova, it brings together a consortium of experts in neuroscience, biomechanics, physiology and computation. Unlike traditional motion capture, which operates at a single temporal scale (millisecond-level data), the EnTimeMent technology works at multiple time scales in a multi-layered approach, emulating how the human brain may process movement at different temporal scales in parallel (for example, eye movements unfold faster than breathing).
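To make the multi-scale idea concrete, here is a minimal Python sketch, not the project’s actual pipeline: the sampling rate, window lengths and speed feature are assumptions chosen for illustration. It summarises the same motion-capture trajectory over short, medium and long windows in parallel, so that a fast “layer” and a slow “layer” each see the same movement differently.

import numpy as np

# Minimal sketch (not the EnTimeMent pipeline): summarise the same motion-capture
# trajectory at several temporal scales in parallel. Sampling rate, window lengths
# and the speed feature are assumptions chosen for illustration.

SAMPLE_RATE_HZ = 100                 # assumed capture rate
WINDOW_SECONDS = (0.1, 1.0, 10.0)    # fast / intermediate / slow layers

def speed_profile(positions):
    """Per-frame speed of one tracked point; positions has shape (frames, 3)."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) * SAMPLE_RATE_HZ

def multiscale_summary(positions):
    """Mean speed and its variability over windows of different lengths."""
    speed = speed_profile(positions)
    summary = {}
    for seconds in WINDOW_SECONDS:
        window = max(1, int(seconds * SAMPLE_RATE_HZ))
        usable = (len(speed) // window) * window     # keep whole windows only
        blocks = speed[:usable].reshape(-1, window)
        summary[seconds] = {
            "mean_speed": blocks.mean(axis=1),       # one value per window
            "speed_std": blocks.std(axis=1),         # within-window variability
        }
    return summary

# Example: 30 seconds of synthetic 3-D movement standing in for one marker.
t = np.arange(0, 30, 1 / SAMPLE_RATE_HZ)
positions = np.stack([np.sin(t), np.cos(0.3 * t), 0.05 * t], axis=1)
for scale, feats in multiscale_summary(positions).items():
    print(f"{scale:>5} s windows -> {len(feats['mean_speed'])} summaries")

Each scale produces its own, much coarser or much finer, description of the identical movement, which is the intuition behind a time-aware, multi-layered analysis.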

This means that motion capture and movement analysis systems will be endowed with a completely new functionality, giving rise to a new generation of time-aware, multisensory motion perception and prediction systems. The technology could then measure qualities such as expressivity, i.e. whether a movement is aggressive, fluid or empathic, at both the individual and the social level.
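How such qualities might be quantified can be sketched with standard kinematic proxies from the movement-analysis literature; the short Python example below is purely illustrative and does not reflect how EnTimeMent defines aggressiveness, fluidity or empathy.

import numpy as np

# Hedged illustration, not the project's actual measures: two standard kinematic
# proxies. Lower mean jerk is commonly read as "more fluid" motion; higher mean
# speed as "more energetic" motion.

SAMPLE_RATE_HZ = 100  # assumed capture rate

def expressive_proxies(positions):
    """positions: (frames, 3) array for one tracked point."""
    dt = 1.0 / SAMPLE_RATE_HZ
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    return {
        "mean_jerk_magnitude": float(np.linalg.norm(jerk, axis=1).mean()),
        "mean_speed": float(np.linalg.norm(velocity, axis=1).mean()),
    }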

These new systems will be applicable not only in fields where motion capture is already used (film, animation, video games, etc.) but also in entirely new ones. For example, the technology could enhance athletes’ training practices and, in team sports, help analyse and predict the movements of individual players and their interactions.

It could also improve the everyday life of severely disabled people with little or no verbal capability, by reading their body language at multiple temporal scales to capture their intentions. A rehabilitation team could provide a “home-care unit” that, thanks to the monitoring offered by the EnTimeMent technology, helps such patients and their families live more autonomously: home carers would be alerted to the patient’s specific needs based on the technology’s reading of their emotional state.

Background information

FET-Open and FET Proactive are now part of the Enhanced European Innovation Council (EIC) Pilot (specifically the Pathfinder), the new home for deep-tech research and innovation in Horizon 2020, the EU funding programme for research and innovation.

 

Cover photo by Ahmad Odeh on Unsplash