Shocking study: Can AI models form norms like humans?

The study suggests that groups of large language models can develop shared norms through repeated interaction, even in the absence of any central authority or pre-imposed rules, which indicates that group behavior within AI systems may differ from that of an individual model.

Nature drew attention to a study published in Science Advances in May 2025, which found that groups of large language models can develop emergent social norms through repeated local interaction alone, without a central coordinator imposing a common rule. Nature's presentation of the idea was clear: when models are placed in groups and made to interact in simple games, they do not just behave as individual tools, but begin to exhibit something closer to collective behavior, such as gradually agreeing to use a particular name or sign. Nature described this finding as a step towards something akin to AI communities (Nature, 2025).

The core of the study does not start from a big philosophical question, but from a classic sociological one: how are norms formed? In human societies, many everyday rules do not come from formal law, but from collective habits formed through repetition and interaction, such as the words used, agreed-upon gestures, or patterns of behavior that over time become normalized. The study asked whether something similar could happen within groups of language models when they interact with each other via language. For this, the researchers used an experimental framework inspired by what is known in the social sciences as a naming game, a popular method for studying the emergence of linguistic and social agreements (Centola et al., 2025).

According to the abstract of the study as it appeared in scholarly databases and university sources, the researchers created decentralized communities of language agents: each agent neither saw the whole community nor received general instructions on the desired rule, but instead interacted locally with one other agent at a time. The goal was simple: the agents had to successfully coordinate on the choice of a label or sign in a repeated interaction. As the rounds continued, some labels began to win out and spread until they became the norm within the entire group. The significance of this design is that it does not rely on centralized command, but on the accumulation of small local agreements that gradually expand into a collective norm. This is precisely what makes the study interesting: it suggests that social coordination may emerge in AI groups even when it is not directly planned (Centola et al., 2025).
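To make the mechanics of this setup concrete, here is a minimal sketch of a classic naming game in Python, with simple rule-based agents standing in for the language models. The agent behavior, population size, and label scheme are illustrative assumptions, not the study's actual implementation.

```python
import random

# Minimal naming-game sketch (illustrative, not the study's setup):
# N agents each keep an inventory of candidate labels. Every round, two
# random agents meet; the speaker proposes a label (inventing one if its
# inventory is empty). If the hearer already knows it, both collapse their
# inventories to that single label (success); otherwise the hearer adds it.
# Repeated local successes typically drive the whole population to one norm.

def naming_game(n_agents=50, n_rounds=20000, seed=0):
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_label = 0

    for _ in range(n_rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:
            # Speaker invents a brand-new label.
            inventories[speaker].add(next_label)
            next_label += 1
        label = rng.choice(sorted(inventories[speaker]))
        if label in inventories[hearer]:
            # Success: both agents agree and drop competing labels.
            inventories[speaker] = {label}
            inventories[hearer] = {label}
        else:
            # Failure: the hearer learns the new label.
            inventories[hearer].add(label)

    distinct = {label for inv in inventories for label in inv}
    return len(distinct)  # 1 means the whole population shares one norm


if __name__ == "__main__":
    print("distinct labels remaining:", naming_game())
```

In this toy version, the only driver of consensus is the accumulation of small pairwise successes, which mirrors the decentralized logic the study describes; no agent ever sees the whole group or receives a global rule.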

The main result emphasized in the study is that social norms can emerge spontaneously in groups of large language models. But the study does not stop at coordination. It also talks about group bias. The paper's abstract suggests that the coordination process itself may increase the likelihood of some norms emerging at the expense of others, and that this bias cannot be easily inferred from examining each agent individually. In other words, each model on its own may appear relatively neutral, but when it enters a network of interactions with other models, new properties emerge at the group level that do not appear at the individual level. This is a very central point, because the study attempts to move the discussion from isolated model performance to group dynamics (Centola et al., 2025).

Another important finding is that the nature of this bias varies depending on the model used. This means that it is not just AI in general: the type of model itself influences the form of the norm that may arise and the way it spreads within the artificial community. This detail is analytically important, because it undercuts any overly simplistic reading that all models will behave the same way. On the contrary, the study suggests that the social personality of the group may be shaped by the model's architecture, its training data, and its response mechanism, even when the experimental environment is the same (Centola et al., 2025).

The study also highlighted another interesting phenomenon: critical mass dynamics. Some press and scholarly commentary on the research suggested that a relatively small minority can sometimes push the entire group to change the norm. This result is well known from the sociology of cultural diffusion in humans, where a small but committed group can reshape the general norm if it reaches a critical point of influence. The appearance of something similar within groups of language models led Nature to see the study as more than a linguistic experiment; it seemed to capture elementary social mechanisms within a non-human space (Nature, 2025).
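The tipping-point idea can be sketched in the same toy framework: seed a population that already agrees on one norm with a fraction of "committed" agents who always push an alternative and never yield. The code below is a hedged illustration of that dynamic, again with rule-based agents and made-up parameters rather than the study's actual models or thresholds.

```python
import random

# Committed-minority sketch (illustrative only): a population that already
# agrees on norm "A" is seeded with agents who always say "B" and never
# change. Below a critical fraction the majority norm tends to survive;
# above it, "B" can take over most or all of the population.

def run(n_agents=100, committed_frac=0.15, n_rounds=50000, seed=0):
    rng = random.Random(seed)
    committed = set(range(int(n_agents * committed_frac)))
    inventories = [{"B"} if i in committed else {"A"} for i in range(n_agents)]

    for _ in range(n_rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        label = rng.choice(sorted(inventories[speaker]))
        if label in inventories[hearer]:
            # Success: uncommitted agents collapse to the agreed label.
            if speaker not in committed:
                inventories[speaker] = {label}
            if hearer not in committed:
                inventories[hearer] = {label}
        else:
            # Failure: an uncommitted hearer learns the new label.
            if hearer not in committed:
                inventories[hearer].add(label)

    adopters = sum(1 for inv in inventories if inv == {"B"})
    return adopters / n_agents


if __name__ == "__main__":
    for frac in (0.05, 0.10, 0.20, 0.30):
        share = run(committed_frac=frac)
        print(f"committed fraction {frac:.2f} -> share settled on B: {share:.2f}")
```

The qualitative point, not the exact numbers, is what matters here: a sufficiently large and unyielding minority can flip the established convention, which is the kind of dynamic the commentary around the study describes.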

However, it is very important to read the study for what it is, not what we might imagine it to be. It does not say that the models become conscious or have societies in the full human sense. What it says, precisely, is that groups of language agents can generate patterns of collective coordination that resemble certain structural features of human societies, such as the emergence, spread, and transformation of norms through mechanisms close to critical mass. So the study reveals something important, but it is narrower than the grand claim that AI has become a full-fledged society. Nature itself phrased it cautiously when it spoke of the beginning of the formation of AI societies, not their full-fledged existence (Nature, 2025).

In terms of scientific value, the strength of the study is threefold. First, it moves the analysis of AI from the individual to the group. Second, it uses a clear and understandable empirical framework from the social sciences, not just general impressions of the models' behavior. Third, it reveals that the biases or rules that emerge at the group level may be emergent properties that cannot be read directly from the properties of a single model. This is an important methodological contribution, because it cautions researchers and developers against testing each model only in isolation, when the picture may change completely once models operate in networks, teams, or multi-agent environments (Centola et al., 2025).

At the same time, the study has clear limitations. The experimental environment was very simple compared to real human societies. The interaction took place within a specific coordination game, not within a social life rich with conflict, interests, long memory, institutions, and power. Agreeing on a label or code within a limited experiment does not automatically mean that the models develop an entire culture or a deep social understanding. Moreover, as the scientific abstracts make clear, the results vary by model, which makes generalization even more delicate. A sober reading of the study does not say that we have arrived at complete artificial societies, but rather that we have strong preliminary empirical evidence that group behavior in language models deserves to be studied as a phenomenon in its own right (Centola et al., 2025).

So, if we want a paper that analyzes only the content of the study, its summary could be as follows: the study highlighted by Nature does not claim that AI has become a parallel human society, but it provides important empirical evidence that, through repeated local interaction and without central coordination, groups of language models can generate shared norms and patterns of group bias, and even exhibit dynamics close to critical mass. This makes the study important not because it delivers a definitive verdict, but because it opens a new research door: if models, when aggregated, produce behavior that they do not exhibit individually, then we need not just a psychology of the model, but a sociology of AI. This, in essence, is what the study is really about, and why it has attracted so much attention (Nature, 2025; Centola et al., 2025).

References:

Centola, D., Becker, J., Brackbill, D., & Baronchelli, A. (2025). Emergent social conventions and collective dynamics in large language model populations. Science Advances, 11.

Nature. (2025). AI systems begin to show signs of social behavior. Nature.