The social media platform uses pre-ticked consent boxes, in apparent breach of UK and EU GDPR rules.
Elon Musk’s X platform faces scrutiny from data regulators after it was revealed that users unknowingly consent to having their posts used to build artificial intelligence systems due to a default setting on the app.
The UK and Irish data watchdogs have contacted X over the apparent attempt to obtain users' consent to data harvesting without their knowledge.
An X user brought attention to the issue on Friday, noting that a default setting on the app allowed the account holder’s posts to be used for training Grok, an AI chatbot developed by Musk’s xAI company.
Under UK GDPR, which mirrors EU data regulations, companies cannot use “pre-ticked boxes” or any method of default consent.
The setting, which includes a pre-ticked box, indicates that you “allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning.” According to the X user, this setting can only be disabled on the web version of X.
Data regulators quickly raised concerns about this default setting. In the UK, the Information Commissioner’s Office (ICO) stated that it is “making enquiries” with X.
“Platforms that wish to use their users’ data to train their AI foundation models must be transparent about their activities,” said an ICO spokesperson.
“They should take steps to proactively inform users well in advance of using their data for these purposes and provide a clear and simple process for users to opt out.”
The Data Protection Commission (DPC) in Ireland, which oversees X across the European Union, stated that it had already been discussing data collection and AI models with Musk’s company this week and was “surprised” to discover the default setting.
“The DPC has been engaging with X on this issue for several months, with our most recent interaction occurring just yesterday, so we are surprised by today’s developments. We have followed up with X today and are awaiting a response. We anticipate further engagement early next week,” said Graham Doyle, a deputy commissioner at the DPC.
Large language models, which are the foundation for chatbots like ChatGPT and Grok, are trained on vast amounts of data collected from the internet to identify patterns in language and develop a statistical understanding of it. This allows chatbots to generate convincing responses to queries.
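The "statistical understanding" described above can be illustrated with a deliberately simplified sketch. The toy Python bigram model below (an assumption for illustration only; real systems like Grok or ChatGPT use neural networks trained on vastly more data) counts which word tends to follow which, then predicts the most frequent continuation:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "vast amounts of data collected
# from the internet" that real models are trained on.
corpus = "the cat sat on the mat . the cat ate . the dog ran .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training data."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" most often
```

The same principle, scaled up enormously and applied by a neural network rather than raw counts, is what lets a chatbot generate convincing responses: it has learned, statistically, what text tends to come next.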
However, the approach has drawn opposition from regulators as well as from news publishers and authors, who argue that it breaches copyright law.
This month, Meta, the parent company of Facebook and Instagram, announced it would not release an advanced version of its AI model in the EU, citing the “unpredictable” behavior of regulators as the reason for the decision.