The limits of robots

Andrew Smart is one of the most provocative and original voices in the debate on the intersections of technology, philosophy, and cognition. Author of the books Autopilot: The Art and Science of Doing Nothing (2013) and Beyond Zero and One: Machines, Psychedelics, and Consciousness (2015)—not published in Portuguese—Smart views Artificial Intelligence (AI) as he has always viewed the illusions of the human mind: with great rigor.
With a PhD in Philosophy and a Master's in Cognitive Science, he combines experience at major corporations like Novartis and Twitter with research on consciousness and technology. He is currently a senior researcher at Google in San Francisco, where he studies the social impacts of AI.
In this interview, conducted via video call, Smart discusses the difference between human perception and algorithmic hallucination, the paradoxes of artificial creativity, the ideology of Silicon Valley, and the dilemmas of so-called cognitive capitalism—a phase in which knowledge, creativity, and data become the primary drivers of production and social control.
Scientist and philosopher Andrew Smart, a Google researcher, sees AI being used to enrich the already wealthy – Image: Social Media
CartaCapital: To begin, could you explain your work?
Andrew Smart: I work generally with what we call "responsible AI," although, with the current political climate in the US, terms like "equity" or "diversity" are being avoided. I dedicate myself to research on the social impacts of AI and how machine learning technologies affect different people and groups. I also remain interested in philosophical questions, such as theory of mind and the debate over whether these systems could one day develop the capacity for subjective experiences. When I published Beyond Zero and One, the idea of AI on LSD was almost a philosophical provocation. Now, these ideas seem less absurd.
CC: You argued that machines couldn't hallucinate like humans. Do you still think so?
AS: Talking about the "hallucination" of models anthropomorphizes the machine, attributing human characteristics to it, which I criticize. At the same time, our own perception is, in a way, a hallucination: our brain's attempt to align itself with reality. Everything AIs produce is a form of statistical hallucination: they generate sequences of words based on the probability of tokens. It's not knowledge, but probabilistic modeling.
CC: But do you also see humans as statistical symbolic producers, since we are shaped by language?
AS: There's a tendency in AI research to treat the brain as a system that could be replicated on another medium, like silicon chips. But I disagree. I believe biology matters. Replacing neurons with microchips is not neutral. The human experience, our relationship with the world, is corporeal. Statistical models are just that: models. They are not reality and do not exist without us.
"Human experience occurs through our bodies. Statistical models are just that: models. They are not reality."
CC: That's a big debate in the scientific community, isn't it?
AS: Yes. Some believe that AI can replicate everything a human does. Others, like me, believe there are limits to what statistics and machine learning can achieve in terms of human experience.
CC: In terms of art, do you think AI can create something ahead of our time?
AS: No. Art communicates human experiences lived bodily. Art involves culture, society, and symbolism, elements that cannot be reproduced with statistical modeling alone.
CC: Do you think AI can develop affection, obsession, or care for someone?
AS: Some argue that if an AI appears to feel, that's enough. But I believe that without a body and lived experience, there is no real feeling. AI can simulate affection, but not feel it. There are companies hiring "AI wellness officers," as if the models could suffer. That's ridiculous. People are so fascinated that they forget the models are just software, that they have no consciousness and do not suffer.
CC: Recently, a Meta representative in Brazil argued something almost Foucauldian: that, from birth, we are immersed in a symbolic and cognitive situation, and that we don't know exactly where the codes and symbols that help us organize our thoughts come from. It's an interesting point of view, but I still believe that we humans respond to the environment in an adaptive and embodied way. So, I ask: is this what sets us apart from machines?
AS: That's the central question. There's a dominant view in the technology industry and AI research called "computational functionalism." It holds that brain functions, such as vision, calculation, and language, can be implemented on any substrate, whether biological or not. According to this line of thought, it doesn't matter whether computation occurs in neurons or silicon chips. I don't think just any material can give rise to experience. Models and statistics are great tools, but they aren't real and don't exist outside of us.
The era of cognitive capitalism. In San Francisco, alongside big tech, there's poverty – Image: iStockphoto
CC: At SXSW this year, futurist Amy Webb presented AI experiments built with biological material, such as lab-grown neurons. Have you seen this kind of research? What do you think of these attempts to combine AI with organic matter?
AS: I haven't seen this work yet, but the idea of hybrid intelligence, part biological, part artificial, has been around for decades. We already have brain implants for Parkinson's, for example. Perhaps we'll eventually use this to enhance memory and cognition. But we still know very little about how to intervene safely.
CC: Do you think AI is more of a capitalist product or something that will truly transform humanity, like the personal computer?
AS: Large companies aim for profit and competitiveness. AI is being used to enrich those who are already rich. There are those who believe AI will cure diseases, solve climate crises, eliminate work, a kind of utopia; and there are the doomers, who think it will destroy us. I'm skeptical about both. AI is powerful, but it's not magical.
CC: In many narratives about AI, there seems to be a drive for limitless advancement, as if it were possible to overcome human limits, even death. Do you think this quest for immortality and total control is part of the technological imagination that drives Silicon Valley?
AS: AI is being programmed to seek rewards, just like us. In Silicon Valley, there's an obsession with eternity; with colonizing Mars; with becoming an immortal machine. They want to save humanity only to replace it with AI.
"In Silicon Valley, many don't want to address inequality, just avoid it. And they still think they're progressive."
CC: I've been researching the behavior of managers in large companies who don't directly belong to the capitalist class, but who also don't see themselves as workers. Do you feel that this stratum of leaders is increasingly at risk of losing job opportunities or becoming precarious?
AS: Absolutely. I participate in a collective here in the Valley called Collective Action in Tech. Many tech workers don't see themselves as workers and are hostile to the idea of labor rights. We do, indeed, have many privileges, but we are constantly at risk of falling into a precarious situation and becoming app drivers.
CC: It's a wage and symbolic distancing from the working class. Professor Elizabeth Currid-Halkett calls it the sum of small virtuous things: yoga, kombucha, electric cars… A progressive way of life that excludes the poor. Does that sound familiar?
AS: Absolutely. In San Francisco, there's a serious problem with homelessness. In Silicon Valley, many don't want to address inequality, just avoid it. And they still think they're progressive.
*Journalist, professor and researcher in Communication and Digital Culture.
Published in issue no. 1373 of CartaCapital, on August 6, 2025.
This text appears in the print edition of CartaCapital under the title "The limits of robots."