Humans are social animals, but there appear to be hard limits to the number of relationships we can maintain at once. New research suggests AI may be capable of collaborating in much larger groups.

In the 1990s, British anthropologist Robin Dunbar suggested that most humans can maintain stable social relationships with only about 150 people. While there is considerable debate about the reliability of the methods Dunbar used to reach this number, it has become a popular benchmark for the optimal size of human groups in business management.

There is growing interest in using groups of AIs to solve tasks in various settings, which prompted researchers to ask whether today’s large language models (LLMs) are similarly constrained when it comes to the number of individuals that can effectively work together. They found the most capable models could cooperate in groups of at least 1,000, an order of magnitude more than humans.

“I was very surprised,” Giordano De Marzo at the University of Konstanz, Germany, told New Scientist. “Basically, with the computational resources we have and the money we have, we [were able to] simulate up to thousands of agents, and there was no sign at all of a breaking of the ability to form a community.”

To test the social capabilities of LLMs, the researchers spun up many instances of the same model and assigned each one a random opinion. Then, one by one, the researchers showed each copy the opinions of all its peers and asked if it wanted to update its own opinion.
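The sketch below illustrates the general shape of that setup, not the researchers' actual code: the group size, the two opinion labels, and the `query_model` function are all assumptions, and the LLM call is stubbed with a simple majority-following rule so the example runs without API access. In a real experiment, `query_model` would prompt the language model with its own opinion and the tally of its peers' opinions.

```python
# Minimal sketch of an opinion-convergence experiment (an assumed setup,
# not the authors' implementation): N agents each hold a binary opinion;
# in random order, each agent sees the tally of its peers' opinions and
# decides whether to keep or switch its own.
import random

N_AGENTS = 50          # group size to test (the study scaled this up to 1,000)
MAX_ROUNDS = 20        # stop early if consensus is reached
OPINIONS = ("A", "B")  # two arbitrary opinion labels

def query_model(own_opinion: str, tally: dict[str, int]) -> str:
    """Hypothetical stand-in for an LLM call.

    A real version would prompt the model, e.g. "You hold opinion
    {own_opinion}. Of your peers, {tally['A']} hold A and {tally['B']}
    hold B. Which opinion do you hold now?" Here we simply follow the
    majority so the sketch is self-contained and runnable.
    """
    return max(OPINIONS, key=lambda o: tally[o])

def run_simulation() -> int:
    """Return the round at which consensus was reached, or -1 if it wasn't."""
    opinions = [random.choice(OPINIONS) for _ in range(N_AGENTS)]
    for round_num in range(1, MAX_ROUNDS + 1):
        order = list(range(N_AGENTS))
        random.shuffle(order)
        for i in order:
            # Tally every peer's opinion, excluding agent i itself.
            tally = {o: 0 for o in OPINIONS}
            for j, op in enumerate(opinions):
                if j != i:
                    tally[op] += 1
            opinions[i] = query_model(opinions[i], tally)
        if len(set(opinions)) == 1:  # everyone agrees
            return round_num
    return -1

if __name__ == "__main__":
    rounds = run_simulation()
    if rounds > 0:
        print(f"Consensus reached after {rounds} rounds")
    else:
        print("No consensus within the round limit")
```

With the majority-following stub, consensus arrives almost immediately; the interesting question in the study is whether a given LLM, given the same information in natural language, behaves coherently enough for the group to converge at all as the number of agents grows.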

The team found that the likelihood of the group reaching consensus was directly related to the power of the underlying model. Smaller or older models, like Claude 3 Haiku and GPT-3.5 Turbo, were unable to come to agreement, while the 70-billion-parameter version of Llama 3 reached agreement if there were no more than 50 instances.

But for GPT-4 Turbo, the most powerful model the researchers tested, groups of up to 1,000 copies could achieve consensus. The researchers didn’t test larger groups due to limited computational resources.

The results suggest that larger AI models could potentially collaborate at scales far beyond humans, Dunbar told New Scientist. “It certainly looks promising that they could get together a group of different opinions and come to a consensus much faster than we could do, and with a bigger group of opinions,” he said.

The results add to a growing body of research into “multi-agent systems,” which has found that groups of AIs working together can do better than individual models at a variety of math and language tasks. However, even if these models can operate effectively in very large groups, the computational cost of running so many instances may make the approach impractical.

Also, agreeing on something doesn’t mean it’s right, Philip Feldman at the University of Maryland told New Scientist. It perhaps shouldn’t be surprising that identical copies of a model quickly form a consensus, but there’s a good chance that the solution they settle on won’t be optimal.

However, it does seem intuitive that AI agents are likely to be capable of larger-scale collaboration than humans, as they are unconstrained by biological bottlenecks on speed and information bandwidth. Whether current models are smart enough to take advantage of that is unclear, but it seems entirely possible that future generations of the technology will be able to.

Image Credit: Ant Rozetsky / Unsplash
