A two-hour conversation with an artificial intelligence (AI) model is enough to create an accurate replica of a person’s personality, researchers have found.
In a new study, published November 15 to the preprint database arXiv, researchers from Google and Stanford University created “simulation agents” – essentially AI replicas – of 1,052 people based on two-hour interviews with each participant. These interviews were used to train a generative AI model designed to mimic human behavior.
To assess the accuracy of the AI replicas, each participant completed a round of personality tests, social surveys, and logic games, then repeated the process two weeks later. When the AI replicas underwent the same tests, they matched the responses of their human counterparts with 85% accuracy.
The paper proposes that AI models that mimic human behavior could be useful in various research scenarios, such as evaluating the effectiveness of public health policies, understanding responses to product launches, or even modeling reactions to major societal events that might otherwise be too costly, difficult, or ethically complex to study with human participants.
Related: AI speech generator ‘achieves human parity’ – but it’s too dangerous to release, scientists say
“A general-purpose simulation of human attitudes and behaviors – where each simulated person can engage in a range of social, political or informational contexts – could enable a laboratory of researchers to test a broad set of interventions and theories,” the researchers wrote in the paper. Simulations could also help pilot new public interventions, develop theories around causal and contextual interactions, and increase our understanding of how institutions and networks influence people, they added.
To create the simulation agents, the researchers conducted in-depth interviews covering participants’ life stories, values, and opinions on societal issues. This allowed the AI to capture nuances that might be missed in traditional surveys or demographic data, the researchers explained. Most importantly, the structure of these interviews gave participants the freedom to highlight what they personally considered most important.
The scientists used these interviews to generate personalized AI models that could predict how individuals might respond to survey questions, social experiments, and behavioral games. This included responses to the General Social Survey, a well-established tool for measuring social attitudes and behaviors; the Big Five personality inventory; and economic games, such as the Dictator Game and the Trust Game.
Although the AI agents closely resembled their human counterparts in many areas, their accuracy varied across tasks. They were particularly good at replicating responses to personality surveys and measures of social attitudes, but were less accurate at predicting behavior in interactive games involving economic decision-making. The researchers explained that AI typically struggles with tasks that involve social dynamics and contextual nuance.
They also acknowledged the potential for abuse of this technology. AI and “deepfake” technologies are already being used by malicious actors to deceive, impersonate, abuse, and manipulate others online. Simulation agents could also be misused, the researchers said.
However, they said the technology could allow us to study aspects of human behavior in a way that was previously impractical, by providing a highly controlled testing environment without the ethical, logistical or interpersonal challenges of working with humans.
In a statement to MIT Technology Review, the study’s lead author Joon Sung Park, a computer science student at Stanford, said: “If you can have a bunch of little ‘yous’ running around and actually making the decisions that you would have made – that, I think, is ultimately the future.”