Even AI suffers from "brain rot," the web-induced stupor

Artificial intelligence models, like humans, can suffer from "brain rot," Oxford's word of the year for 2024. The term refers to intellectual deterioration caused by compulsive consumption of low-quality online content, especially the kind circulating on social media. This is according to a study by the University of Texas at Austin and Purdue University, which ran an experiment on two large open-source language models: Meta's Llama and Qwen, from China's Alibaba.
"We live in an age where information is growing faster than attention span, and much of it is designed to capture clicks, not convey depth. We wondered what happens when AIs are trained on junk content," says Junyuan Hong, a professor at the National University of Singapore who collaborated on the research.
So Hong and his colleagues trained the large language models from Meta and Alibaba on different types of text and examined what happened when the training data included clickbait posts or sensational phrases like "wow," "look," or "only today."
They recorded a sort of "brain rot" in the models: cognitive decline, reduced reasoning skills, and degraded memory. The models also became less ethically aligned and showed more psychopathic traits.
The findings are important for the field, Hong says, because those building AI models might assume that social media posts are a good source of training data. "Training AI on viral or attention-grabbing content may seem like a way to scale up data," the researcher says. "But it can erode reasoning, ethics, and attentiveness."
ANSA




