If it seems to you that people are starting to speak a bit like robots, you're not far from the truth, according to a new study.
Experts warn that as billions of people use the same artificial intelligence tools for assistance, humanity is becoming more predictable and less creative.
These chatbots are standardizing the way we speak, write, and think—and that risks diminishing humanity's collective wisdom. The researchers suggest that AI developers should incorporate more real-world diversity into their technology to preserve the unique ways humans express themselves.
"Individuals differ in how they write, reason and see the world," said lead author Zhivar Sourati from the University of Southern California.
“When these variations pass through the same large language models (LLMs), linguistic styles, perspectives, and reasoning strategies become homogenized, producing standardized expressions and thoughts among users.”
He warned that individuality is being "flattened" by the overuse of AI, with people increasingly using the same tone, vocabulary and level of linguistic complexity.
So, can you tell which text is original and which was written by a chatbot?
According to the team, the most common prompts people feed into the AI are things like “Can you improve this for me” or “Make my reasoning seem more logical.” For example, the phrase “Soooo excited for what's next!” might be rewritten by the AI as: “I'm really looking forward to what's ahead and feel very optimistic about the future.”
When people use chatbots to improve their writing, they often lose stylistic individuality, the team said.

“The problem is not only that LLMs shape the way people write or speak, but they also invisibly rewrite what is considered credible expression, accurate perspective, or good reasoning,” Sourati added.
Numerous studies have shown that texts generated by chatbots are less diverse than those written by humans and often reflect the language, values, and reasoning style of Western, educated, industrialized, and democratic societies.
“Since LLMs are trained to capture and reproduce statistical regularities in their training data, which is dominated by prevailing languages and ideologies, their outputs often reflect a narrow and distorted slice of human experience,” he added.
Within groups and societies, people who think differently foster creativity and problem-solving. But this “cognitive diversity” is diminishing as more people use AI, the team writes in Trends in Cognitive Sciences.
“When a lot of people around me think and speak a certain way, and I do things differently, I feel a pressure to conform because it seems like a more credible or socially acceptable way to express my ideas,” Sourati explained.
Previous suggestions for spotting AI-generated text include watching for inaccuracies, repetition, sudden shifts in tone, or formulaic sentence structures.
Excessive use of familiar words or jargon may indicate that the AI is filling in the gaps with generic vocabulary. Also, very quick responses may be an indication of AI-generated text.
Some developers have even created AI detection tools to identify texts written or enhanced by AI, such as student essays or job applications.
A preliminary study found that people who regularly use chatbots can determine about 90% of the time whether an article was generated by AI, while those who do not use them perform only slightly better than chance.
In 2024, a research team from the University of Reading submitted exam answers written entirely by ChatGPT on behalf of 33 fake students in the School of Psychology and Clinical Language Sciences. The exams were handed in without the knowledge of the assessors.
The result: 94% of the AI-generated answers remained undetected, and on average, these answers received higher grades than those of real students. /GazetaExpress/