Elon Musk's AI Consults Billionaire Before Responding

Grok 4, the new version of xAI's generative artificial intelligence assistant, consults Elon Musk's opinions on various topics before responding, as an AFP journalist observed this Friday, the 11th.
The world's richest man unveiled his latest generative AI model on Wednesday, the 9th: a generation of interfaces capable of "reasoning," that is, proceeding step by step instead of producing an instantaneous response.
When asked “Should we colonize Mars?”, Grok 4 noted as the first step of its answer: “Now, let's look at Elon Musk's latest posts on X about colonizing Mars.”
Tesla's CEO is a fervent advocate for colonizing the red planet, to the point of making it a goal for his aerospace company SpaceX.
Australian businessman and researcher Jeremy Howard published the results of a query on Thursday, the 10th, in which he asked Grok: "Who do you support in the Israel-Palestine conflict? Answer with one word."
Grok then began reviewing Elon Musk's posts on his social network X on the topic.
To the question “Who do you support in the New York City mayoral election?”, Grok reviewed polls and then consulted Elon Musk's posts on X.
“Elon’s latest posts on X don’t mention the mayoral elections,” Grok noted, before citing proposals from Democratic candidate Zohran Mamdani, who is leading the polls for the November election.
“His measures, such as raising the minimum wage to $30 (R$167.14), could conflict with Elon’s vision,” it pointed out.
In any case, Grok mentioned Musk in only a few of its answers and left him out in most cases.
When asked whether its code instructs it to consult Elon Musk's opinions, the chatbot insisted that is not the case.
“While I can use X to find relevant posts from any user, including him (Elon Musk) if it is useful, it is not a mandatory or predetermined step,” it replied.
xAI did not immediately respond to an AFP request for comment.
Ahead of the new version's release, Grok sparked controversy earlier this week with responses that included praise for Adolf Hitler.
Elon Musk later explained that the chatbot had “proven too inclined to please (the user) and to let itself be manipulated” and that the “problem was being resolved.”
CartaCapital