AI Social Media Users Are Not Always Such a Stupid Idea


Meta caused a stir last week when it announced that it wants to fill its platforms with AI-generated users as soon as possible.

“We expect these AIs, over time, to be on our platforms, just like accounts,” Connor Hayes, Meta’s vice president of product for generative AI, told The Financial Times. “They’ll have bios and profile pictures and be able to create and share AI-generated content on the platform … that’s where we see all of this going.”

Although Meta seems happy enough to hasten the “enshittification” of the internet as we know it, the announcement prompted some people to point out that Facebook already hosts AI-generated characters, many of which stopped posting a while ago. These included “Liv,” a “proud Black queer momma of 2 & truth-teller, your realest source for life’s ups & downs,” a character who went viral as people marveled at her awkwardness. Meta began deleting these fake profiles after they failed to attract engagement from real users.

Meta-bashing aside, it is worth noting that AI-generated personas can also be a valuable research tool for scientists who want to investigate how AI can mimic human behavior.

An experiment called GovSim, run in late 2024, shows how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore how cooperation arises among people who share resources such as grazing land. Decades ago, the Nobel Prize-winning economist Elinor Ostrom showed that, rather than depleting such resources, real communities tend to work out how to share them through informal communication and collaboration, without any imposed rules.

Max Kleiman-Weiner, a professor at the University of Washington and one of those involved in the GovSim project, says it was inspired in part by a Stanford project called Smallville, which I previously wrote about in AI Lab. Smallville is a Farmville-like simulation in which characters interact with one another under the control of large language models.

Kleiman-Weiner and his colleagues wanted to see whether AI characters could engage in the kind of cooperation Ostrom observed. The team tested 15 different LLMs, including those from OpenAI, Google, and Anthropic, on three hypothetical scenarios: a community of fishers with access to the same lake; herders who share grazing land for their flocks; and a group of factory owners who must limit their collective pollution.
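To make the setup concrete, here is a minimal sketch, in Python, of the kind of shared-resource loop such an experiment involves. This is not the actual GovSim code: the function ask_agent_for_catch is a hypothetical stand-in for prompting an LLM persona, and the numbers are purely illustrative. Each round, agents request a catch from a shared lake that regrows between rounds, and the run ends early if the stock collapses.

    import random

    # Illustrative sketch of a GovSim-style shared-resource simulation.
    # Not the actual GovSim implementation; the agent function is a stub.

    LAKE_CAPACITY = 100      # maximum tons of fish the lake can hold
    REGROWTH_RATE = 0.2      # fraction by which the remaining stock regrows each round
    NUM_AGENTS = 5
    NUM_ROUNDS = 12

    def ask_agent_for_catch(agent_id: int, stock: float) -> float:
        """Placeholder for prompting an LLM persona with the current stock.

        A real experiment would send a natural-language description of the
        lake to the model and parse its stated catch; here a random request
        stands in so the sketch runs end to end.
        """
        return random.uniform(0, stock / NUM_AGENTS)

    def run_simulation() -> None:
        stock = float(LAKE_CAPACITY)
        for round_num in range(1, NUM_ROUNDS + 1):
            catches = [ask_agent_for_catch(i, stock) for i in range(NUM_AGENTS)]
            stock -= sum(catches)
            if stock <= 0:
                print(f"Round {round_num}: the lake collapsed -- cooperation failed.")
                return
            # The remaining fish reproduce, up to the lake's carrying capacity.
            stock = min(LAKE_CAPACITY, stock * (1 + REGROWTH_RATE))
            print(f"Round {round_num}: total catch {sum(catches):.1f}, stock now {stock:.1f}")
        print("The community sustained the resource for the full simulation.")

    if __name__ == "__main__":
        run_simulation()

In a real run of this kind, the stub would be replaced by calls to each model under test, and keeping the stock above zero for the full simulation would count as sustained cooperation.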

In 43 out of 45 simulations, they found that the AI personas failed to share resources sustainably, although smarter models did better. “We saw a strong correlation between how powerful the LLM was and its ability to sustain cooperation,” Kleiman-Weiner told me.


