Even though an android’s appearance can be so realistic that it could be mistaken for a human in a photograph, watching it move in person can be unsettling. It may smile, frown, or display other familiar expressions, but identifying a coherent emotional state behind these expressions can be difficult, leaving you unsure of what it is really feeling and creating a sense of unease.
Until now, robots capable of moving many parts of their face, such as androids, have displayed facial expressions for extended periods using a “patchwork method”: several pre-arranged action scenarios are prepared, and the system switches between them as needed while ensuring that unnatural facial movements are excluded during the transitions.
However, this poses practical challenges, such as preparing complex action scenarios in advance, minimizing noticeable artificial movements during transitions, and fine-tuning movements to subtly control the expressions conveyed.
In a study published in the Journal of Robotics and Mechatronics, lead author Hisashi Ishihara and his research group developed a dynamic facial expression synthesis technology using “waveform movements”, which represent the various gestures that constitute facial movements, such as “breathing”, “blinking” and “yawning”, as individual waves.
These waves propagate to the relevant facial areas and are superimposed to generate complex facial movements in real time. This method eliminates the need to prepare complex and diverse action data while avoiding noticeable motion transitions.
Additionally, by introducing “waveform modulation”, which adjusts individual waveforms based on the internal state of the robot, changes in internal conditions, such as mood, can be instantly reflected in the form of variations in facial movements.
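The idea of superimposing decaying gesture waves and modulating them with an internal state can be sketched as follows. This is a minimal illustration, not the authors' actual model: the wave shape, the parameter names, and the rule that scales amplitude and frequency with an assumed “arousal” value are all simplifying assumptions.

```python
import math

def decaying_wave(t, amplitude, frequency, decay, onset=0.0):
    """Value at time t of one gesture wave: a sinusoid that decays after its onset."""
    if t < onset:
        return 0.0
    dt = t - onset
    return amplitude * math.exp(-decay * dt) * math.sin(2 * math.pi * frequency * dt)

def modulate(wave_params, arousal):
    """Hypothetical 'waveform modulation': scale amplitude and frequency with an
    internal arousal state in [0, 1], so mood changes reshape every gesture."""
    amplitude, frequency, decay, onset = wave_params
    return (amplitude * (0.5 + arousal), frequency * (0.5 + arousal), decay, onset)

def actuator_command(t, waves, arousal):
    """Superimpose all modulated gesture waves mapped to one facial actuator."""
    return sum(decaying_wave(t, *modulate(w, arousal)) for w in waves)

# Two overlapping gestures on one actuator: slow "breathing" and a quick "blink".
# (amplitude, frequency [Hz], decay rate, onset time [s]) -- illustrative values.
waves = [
    (0.3, 0.25, 0.05, 0.0),   # breathing: small, slow, barely decaying
    (1.0, 4.0, 3.0, 1.0),     # blink: large, fast, decays quickly after t = 1 s
]
command = actuator_command(1.1, waves, arousal=0.5)
```

Because every gesture is just a wave added into the same command signal, new gestures can start while others are still decaying, which is what removes the need for hand-authored transitions between scenarios.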
“Advancing this research into dynamic facial expression synthesis will enable robots capable of performing complex facial movements to display more vivid expressions and convey mood changes that respond to surrounding circumstances, including interactions with humans,” says senior author Koichi Osuka. “This could greatly enrich emotional communication between humans and robots.”
Ishihara adds: “Rather than creating superficial movements, developing a system in which internal emotions are reflected in every detail of an android’s actions could lead to the creation of androids perceived as having a heart.”
By enabling adaptive adjustment and emotion expression, this technology is expected to significantly improve the value of communication robots, allowing them to exchange information with humans in a more natural, human-like way.
More information:
Hisashi Ishihara et al, Automatic generation of dynamic arousal expression based on decaying wave synthesis for robot faces, Journal of Robotics and Mechatronics (2024). DOI: 10.20965/jrm.2024.p1481
Provided by
Osaka University
Citation: Crossing the uncanny valley: Researchers develop technology for lifelike facial expressions in androids (December 23, 2024) retrieved December 24, 2024 from https://techxplore.com/news/2024-12-uncanny-valley-technology-lifelike-facial.html
This document is subject to copyright. Except for fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for informational purposes only.