To gauge AI’s actual intelligence, you need to understand linguistics – and large language models.
During artificial intelligence’s historic ascent, much has been made of how much intelligence the technology actually brings to the table.
While the media has given that concept wide latitude, and technology companies promise a massive increase in accumulated knowledge thanks to AI, one new study claims the technology is as likely to spin a yarn as to spit out a fact.
That’s the take from Anthony Chemero, a psychology professor at the University of Cincinnati. His new study claims AI is “muddled” by linguistic limitations. “While indeed intelligent, AI cannot be intelligent in the way that humans are, even though it can lie and BS like its maker,” the study notes.
Language Models an Issue
There’s a reason for that, Chemero says.
Linguistics is the study of language, covering both the properties of specific languages and the characteristics of language in general. Like AI, linguistics aims to expand human knowledge and help people better understand the world they live in, with plenty of practical applications.
To understand AI and the knowledge it accumulates, you need to understand how large language models (LLMs) work, Chemero says. These models are the lifeblood of artificial intelligence, trained on massive amounts of data mined from the internet, and that data often “shares the biases of the people who post the data,” he states.
“LLMs generate impressive text but often make things up whole cloth,” Chemero notes. “They learn to produce grammatical sentences but require much, much more training than humans get. They don’t actually know what the things they say mean. LLMs differ from human cognition because they are not embodied.”
While AI advocates acknowledge that large language models tend to “hallucinate,” producing output that merely has to read sensibly to end users, the more accurate term is “bullshitting,” the study concludes.
“It would be better to call it ‘bullshitting,’” because LLMs just make sentences by repeatedly adding the most statistically likely next word, “and they don’t know or care whether what they say is true,” the UC study reports.
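That mechanism is easy to sketch. The toy Python below is not from the study; the tiny training sentence and the simple bigram model are illustrative assumptions. It mimics “repeatedly adding the most statistically likely next word”: it counts which word most often follows another in its training text, then strings words together greedily, with no notion of whether the result is true.

```python
# A minimal sketch, assuming a toy bigram model: at each step, append the word
# that most often followed the previous word in the training text.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev_word, next_word in zip(training_text, training_text[1:]):
    following[prev_word][next_word] += 1

def generate(start_word: str, length: int = 6) -> str:
    """Greedily append the most statistically likely next word."""
    words = [start_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # most likely next word
    return " ".join(words)

# Prints fluent-looking but meaningless text, something like
# "the cat sat on the cat sat" — grammatical, with no regard for truth.
print(generate("the"))
```

A real LLM replaces the bigram counts with a neural network trained on internet-scale text, but the generation loop is the same idea: pick a likely next word, append it, repeat.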
That’s the difference between AI and human knowledge, Chemero adds. One is inanimate; the other is human, always surrounded by and shaped by other people. That’s a problem for AI, since its models can be hijacked to say “nasty things that are racist, sexist and otherwise biased,” he says.
“(Being human) makes us care about our own survival and the world we live in,” Chemero says, adding that language models don’t “live” in the human world and don’t care about the things humans care about.
In short, AI “doesn’t give a damn,” Chemero says, adding that things matter to humans. “We are committed to our survival. We care about the world we live in,” he concludes.

Brian O’Connell, a former Wall Street bond trader and best-selling author, is a prominent figure in the finance industry. He has written two best-selling books, ‘The 401k Millionaire’ and ‘CNBC’s Creating Wealth’, reflecting his deep knowledge of finance and investing.
Brian is also a finance and business writer for esteemed national platforms and publications, including CNN, TheStreet.com, CBS News, The Wall Street Journal, U.S. News & World Report, Forbes, and Fox News.