“I think Musk makes it plain for everybody that he isn’t interested in furthering healthy public debates,” one expert told Euronews Next.
Elon Musk’s sale of his social media platform X (formerly Twitter) to his own artificial intelligence (AI) company xAI is the latest sign of AI and social media uniting.
While the move is largely seen as financially motivated – boosting X’s stagnant revenue by attaching it to the AI boom – it is also likely to fuse the companies so that X user data can be used to train xAI’s models.
“Today, we officially take the step to combine the data, models, compute, distribution and talent,” Musk said when the purchase was announced.
Not only does the move consolidate power for Musk, but experts say it also raises concerns about how user data will be used, particularly if it is used to make the company money via advertising.
“It might just be that the data stays within their company, but you never know what that company picks as their next business model,” said Jan Penfrat, senior policy advisor at the Brussels-based advocacy group European Digital Rights (EDRi).
“Obviously, selling information to advertisers has always been part of Twitter’s business model and then later X’s business model. So, I think it’s totally in the cards,” he told Euronews Next.
However, X is not the only social media platform planning to use data to train AI models.
Meta is already doing so, using Facebook and Instagram user data to train its Llama AI models, and could even use photos taken on its Ray-Ban Meta smart glasses.
“Elon Musk would like to consolidate [the companies’] power in the field and also because there is intense competition in AI… So Elon Musk would very much like to be front-runner,” said Petros Iosifidis, professor of media and communication policy at City, University of London.
“We should also remember that he is the right hand of the new government, of Donald Trump these days. So, combining technological power with political power, I think that will be great for him,” he told Euronews Next.
Misinformation and disinformation
But what may be “great” for Musk might not result in an unbiased and accurate AI system.
Despite Musk’s claim that his AI model, Grok, is designed to be a “maximum truth-seeking AI,” training it on X user data may not provide the best foundation for truth.
Since Musk took over Twitter and transformed it – not only by changing its name but also by firing outsourced content moderators – the platform has seen an increase in misinformation and hate speech.
A study released in February, which analyzed thousands of English-language posts since Musk took over X, found a 50 percent increase in hate speech during his first eight months of ownership and no reduction in fake bot accounts – something Musk had promised to eliminate.
This same hate speech and misinformation could then be used to train AI models.
“It’s not a good idea to integrate large language models (LLMs) into social media platforms because they explicitly enable the creation of fake news, disinformation, and various types of harmful artificial content that don’t contribute positively to online conversations,” said Penfrat.
He noted that one of the issues with LLMs is that they take content from original creators, reprocess it, and essentially sell it back to users – which becomes particularly problematic within a social media context.
“By merging these two companies, Musk is making it clear to everyone that he’s not interested in fostering healthy public debates or anything of that nature. His interests lie elsewhere commercially,” he added.
Consolidation of power
This consolidation of power becomes particularly concerning given Musk’s expanding influence – he now controls not just SpaceX, Tesla, X, and xAI, but has also developed close ties with US President Donald Trump.
Penfrat warned: “This situation highlights how one individual controls both the content platform and how AI language models will interact with users – determining what information these systems share or withhold. It represents an unprecedented opportunity for information control.”
The comment underscores fears that this vertical integration of social media and AI under a single controversial billionaire, combined with political connections, creates troubling potential for manipulating public discourse at scale.
With Musk controlling the training data (X’s posts), the AI models (xAI), and their deployment (through X’s platform), critics argue this constitutes a dangerous concentration of influence over both human and machine-generated conversations.
Penfrat added that since Musk also owns satellite internet infrastructure through SpaceX’s Starlink, he could threaten to cut off internet connections in any country at any moment.
Penfrat said that if people are concerned about social media companies using their data to train AI models and the political influence they wield through their commercial and financial power, “then leaving these platforms is never too late and is always a good idea.”