Meta's AI Model Training Comes Under European Scrutiny
Austrian Privacy Group Lodges Complaints With 11 European Regulators Against Meta

A plan by social media giant Meta to train artificial intelligence with data generated by Facebook and Instagram users faces friction in Europe after a rights group alleged it violates continental privacy law.
Meta in May announced plans to expand AI across Facebook and Instagram for European and British customers. "These features and experiences need to be trained on information that reflects the diverse cultures and languages of the European communities who will use them," it said - meaning that it will use publicly shared posts, photos and their captions dating back to 2007 for AI training purposes.
Citing concerns over potential violations of the European General Data Protection Regulation, Austrian privacy organization None of Your Business (NOYB) on Thursday said it had lodged complaints against Meta with 11 European data regulators.
With just days left before Meta's latest data processing changes are set to take effect, NOYB invoked the "urgency procedure" under the General Data Protection Regulation, which calls on supervisory authorities to take an immediate binding decision.
"Meta is basically saying that it can use 'any data from any source for any purpose and make it available to anyone in the world,' as long as it's done via 'AI technology.' This is clearly the opposite of GDPR compliance," said Max Schrems, the founder of NOYB. "Meta doesn't say what it will use the data for, so it could either be a simple chatbot, extremely aggressive personalized advertising or even a killer drone," he added.
NOYB alleges that by failing to provide adequate information on how it plans to use customer data, Meta is violating at least 10 GDPR requirements. These include the obligations to process user data lawfully and transparently, to obtain explicit consent from data subjects and to ensure that data subjects' rights are protected.
A Meta spokesperson on Thursday said the company only uses publicly shared data from users over the age of 18 and that its upcoming measures comply with existing laws.
The company, which cites a clause in the GDPR that allows processing for the purposes of "legitimate interests" as its legal basis, said European users can opt out of having their data included in the AI training dataset by filling out an objection form, the spokesperson told Information Security Media Group.
Schrems said the Court of Justice of the European Union "has already made it clear that Meta has no 'legitimate interest' to override users' right to data protection when it comes to advertising."
A spokesperson for the Norwegian data protection agency Datatilsynet, which has already received other complaints about Meta's use of personal data for training AI, said on Thursday that the agency is taking the complaint "very seriously and giving it priority."
The complaints come as European regulators are trying to bring more transparency into how companies are using personal data to train their AI models. Currently, ChatGPT maker OpenAI is being probed by Italian, French and German data regulators for its web-scraping practices and use of personal data for powering its AI models (see: German Data Regulator to Intensify ChatGPT Probe).
The European Data Protection Board's ChatGPT task force, which published its initial report in May, said GDPR requirements such as transparency, lawfulness and data accuracy should be evaluated by AI companies during web scraping, data filtering, training and prompting, as well as for the ChatGPT outputs.
The agency said the data subjects for ChatGPT input and output training must be "clearly and demonstrably informed" that their information will be used for AI model training purposes.