SONICOM: Making virtual interactions more real through AI-informed immersive audio technologies

A FET/EIC Pathfinder project aiming to transform auditory-based social interaction and communication

Immersive audio is what we experience in everyday life: we hear and interact with sounds coming from different positions around us. This interactive auditory experience can be simulated within Virtual Reality (VR) and Augmented Reality (AR). Picture yourself dynamically changing the positions of the various participants in a virtual conversation, while also modifying the acoustic characteristics of the simulated environment. Then extend this to an interaction where some participants are present in person in the same environment and others join remotely; imagine ‘blending’ the real and the virtual so that, from an auditory point of view, it is impossible to distinguish between the two. Finally, take this to a whole new level and imagine using Artificial Intelligence (AI) to predict the reactions of the various participants to your spoken voice, and to other sounds and acoustic features of the surrounding environment (e.g. reverberation).

SONICOM is a Horizon 2020 FET-PROACT project aiming to transform auditory-based social interaction and communication in AR and VR. As many of us continue to work from home during the COVID-19 pandemic, emulating real-life scenarios and interactions more accurately could help rebuild the conversational nuances and social cues that are often lost during online communication. Several major challenges still need to be tackled before this level of simulation and control can be achieved: doing so will require not only significant technological advances, but also the use of AI to measure, model and understand both low-level psychophysical (sensory) and high-level psychological (social interaction) perception.

Lead investigator Dr Lorenzo Picinali of Imperial College London said: “Our current online interactions are very different to real-life scenarios: social cues and tone of voice can be lost, and all the sound comes from one direction. Our technology could help users have more real, more immersive experiences that convey the nuanced feelings and intentions of face-to-face conversations.”

SONICOM will revolutionise the way we interact socially within AR and VR environments and applications by leveraging AI methods to design a new generation of immersive audio technologies and techniques, focusing in particular on the personalisation and customisation of the audio rendering. Using a data-driven approach, it will explore, map and model how the physical characteristics of spatialised auditory stimuli influence the observable behavioural, physiological, kinematic, and psychophysical reactions of listeners within social interaction scenarios.

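To give a flavour of what personalised spatial audio rendering involves, here is a minimal Python sketch of binaural rendering: a mono sound is convolved with a pair of head-related impulse responses (HRIRs), one per ear, so that over headphones it appears to come from a particular direction. The HRIRs below are random placeholders for illustration only; in practice they would come from a measured or, as SONICOM investigates, AI-personalised HRTF set.

    import numpy as np
    from scipy.signal import fftconvolve

    fs = 48000  # sample rate in Hz

    # Placeholder HRIRs (head-related impulse responses) for one source
    # direction; real ones would be measured or AI-personalised.
    rng = np.random.default_rng(0)
    hrir_left = rng.standard_normal(256) * np.hanning(256)
    hrir_right = rng.standard_normal(256) * np.hanning(256)

    # One second of a 440 Hz test tone as the mono source signal.
    mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)

    # Binaural rendering: convolving the source with each ear's HRIR
    # makes it sound, over headphones, as if it came from that direction.
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    binaural = np.stack([left, right], axis=1)  # one column per ear

Dynamic, interactive scenes of the kind described above additionally require updating the HRIR pair in real time as the listener and the sound sources move.
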
The project includes researchers from: Imperial College London (UK), Sorbonne University (France), Austrian Academy of Sciences (Austria), University of Milan (Italy), National and Kapodistrian University of Athens (Greece), University of Malaga (Spain), University of Glasgow (UK), Dreamwaves (Austria), Reactify (UK), and USound (Austria).

Background information

FET-Open and FET Proactive are now part of the Enhanced European Innovation Council (EIC) Pilot (specifically the Pathfinder), the new home for deep-tech research and innovation within Horizon 2020, the EU funding programme for research and innovation.

Photo by Klaus Pichler, ÖAW