The choices we make in large group settings, such as in online forums and on social media, might seem fairly automatic to us. But our decision-making process is more complicated than we know. So researchers have been working to understand what's behind that seemingly intuitive process.
Now, new University of Washington research has discovered that in large groups of essentially anonymous members, people make choices based on a model of the "mind of the group" and an evolving simulation of how a choice will affect that theorized mind.
Using a mathematical framework with roots in artificial intelligence and robotics, UW researchers were able to uncover the process by which a person makes choices in groups. They also found that their framework predicted a person's choice more often than more traditional descriptive methods did. The results were published Wednesday, Nov. 27, in Science Advances.
"Our results are particularly interesting in light of the increasing role of social media in dictating how humans behave as members of particular groups," said senior author Rajesh Rao, the CJ and Elizabeth Hwang professor in the University of Washington's Paul G. Allen School of Computer Science & Engineering and co-director of the Center for Neurotechnology.
"In online forums and social media groups, the combined actions of anonymous group members can influence your next action, and conversely, your own action can change the future behavior of the entire group," Rao said.
The researchers wanted to find out what mechanisms are at play in settings like these.
In the paper, they explain that human behavior relies on predictions of future states of the environment, a best guess at what might happen, and that the degree of uncertainty about that environment increases "drastically" in social settings. To predict what might happen when another human is involved, a person makes a model of the other's mind, called a theory of mind, and then uses that model to simulate how one's own actions will affect that other "mind."
While this approach works well for one-on-one interactions, modeling the individual minds in a large group is much harder. The new research suggests that humans create an average model of a "mind" representative of the group, even when the identities of the others are not known.
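The idea of tracking one "average mind" instead of many individual minds can be illustrated with a toy belief-update sketch (this is an illustration of the concept, not the study's actual model): the player maintains a single probability that a typical group member will contribute, and nudges that belief after each round using a simple Beta-Bernoulli update.

```python
# Toy sketch of an "average group member" belief (illustrative only,
# not the study's actual model): a Beta(alpha, beta) distribution over
# the probability that a typical member contributes, updated after
# observing how many of the group contributed in a round.

def update_group_belief(alpha, beta, contributions, group_size):
    """Update Beta(alpha, beta) after one round of observations."""
    alpha += contributions               # observed contributions
    beta += group_size - contributions   # observed free-rides
    return alpha, beta

alpha, beta = 1.0, 1.0  # uniform prior over the contribution rate
for observed in [3, 2, 4]:  # contributions seen over three 5-player rounds
    alpha, beta = update_group_belief(alpha, beta, observed, group_size=5)

print(alpha / (alpha + beta))  # posterior mean contribution probability
```

Because every member's action feeds into the same two counters, the cost of the model stays constant no matter how large the group is, which is what makes this kind of averaging attractive for anonymous crowds.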
To investigate the complexities that arise in group decision-making, the researchers focused on the "volunteer's dilemma task," wherein a few individuals endure some costs to benefit the whole group. Examples of the task include guarding duty, blood donation and stepping forward to stop an act of violence in a public place, they explain in the paper.
To mimic this situation and study both behavioral and brain responses, the researchers put subjects in an MRI scanner, one by one, and had them play a game. In the game, the subject's contribution to a communal pot of money influences others and determines what everyone in the group gets back. A subject can decide to contribute a dollar or to "free-ride," that is, not contribute, in the hopes that others will contribute enough to the pot for everyone to get the reward.
If the total contributions exceed a predetermined amount, everyone gets two dollars back. The subjects played dozens of rounds with others they never met. Unbeknownst to the subject, the others were actually simulated by a computer mimicking previous human players.
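The payoff structure described above can be sketched in a few lines of code. The threshold and player count here are illustrative assumptions, not the study's actual parameters:

```python
# Hypothetical sketch of one round of the group game described above.
# threshold, reward and cost values are illustrative assumptions.

def play_round(contributions, threshold=3, reward=2, cost=1):
    """Return each player's net payoff for one round.

    contributions: list of booleans, True if that player put in $1.
    If total contributions meet the threshold, every player receives
    the reward; contributors pay their $1 cost either way.
    """
    pot = sum(contributions)
    success = pot >= threshold
    return [(reward if success else 0) - (cost if contributed else 0)
            for contributed in contributions]

# Free-riders do best when enough others volunteer...
print(play_round([True, True, True, False, False]))   # [1, 1, 1, 2, 2]
# ...but if too many free-ride, contributors lose and nobody gains.
print(play_round([True, False, False, False, False])) # [-1, 0, 0, 0, 0]
```

The two example rounds show the dilemma: free-riding is individually best in any single round, but if everyone reasons that way the group fails to reach the threshold.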
"We can almost get a glimpse into a human mind and analyze its underlying computational mechanism for making collective decisions," said the study's lead author, a doctoral student in the Allen School. "When interacting with a large number of people, we found that humans try to predict future group interactions based on a model of an average group member's intention. Importantly, they also know that their own actions can influence the group. For example, they are aware that even though they are anonymous to others, their selfish behavior would decrease collaboration in the group in future interactions and possibly bring undesired outcomes."
In their study, the researchers were able to assign mathematical variables to these actions and create their own computer models for predicting what decisions the person might make during play. They found that their model predicts human behavior significantly better than reinforcement learning models (that is, models in which a player learns to contribute based on how the previous round did or didn't pay out, regardless of other players) and more traditional descriptive approaches.
Given that the model provides a quantitative explanation for human behavior, Rao wondered whether it might be useful when building machines that interact with humans.
"In scenarios where a machine or software is interacting with large groups of people, our results may hold some lessons for AI," he said. "A machine that simulates the 'mind of a group' and simulates how its actions affect the group may lead to a more human-friendly AI whose behavior is better aligned with the values of humans."
Co-authors include Seongmin A. Park, Center for Mind and Brain at UC Davis and Institut des Sciences Cognitives Marc Jeannerod, France; Saghar Mirbagheri, Department of Psychology, New York University; Remi Philippe, Mariateresa Sestito and Jean-Claude Dreher at the Institut des Sciences Cognitives Marc Jeannerod. This research was funded by the National Institute of Mental Health, National Science Foundation, and the Templeton World Charity Foundation.
For more information, contact Rao at rao@cs.washington.edu.