Meta has recently updated its privacy information in relation to AI, with the changes due to take effect on 26 June 2024.
Meta plans to use content created by users, messages with businesses, Meta and its AI tools, messages in public features (including metadata), activity in apps, and information from third parties to develop and provide artificial intelligence technology. The data privacy activists noyb note that this can include data from as far back as 2007.
Meta is relying on legitimate interests as the lawful basis for this processing.
Perhaps unsurprisingly, noyb are already taking action to challenge these changes, having filed complaints with 11 data protection regulators and requested that the complaints be dealt with under the urgency procedure, given the imminent launch date for the changes. They make a number of allegations, including that Meta is ignoring the EDPB’s previous ruling on the use of legitimate interests for its activities and that Meta has failed to be clear about what the technology is and how it will be used, with Max Schrems saying “Meta doesn’t say what it will use the data for, so it could either be a simple chatbot, extremely aggressive personalised advertising or even a killer drone.”
On the purpose limitation point, the ICO’s second call for evidence, issued earlier this year, relates to purpose limitation and AI. It makes clear that developers should set out sufficiently specific, explicit and clear purposes for each stage of the AI lifecycle, explain what personal data is processed at each stage, and explain why that data is needed to meet the stated purpose.
On 14 June the Irish Data Protection Commissioner confirmed that Meta has paused its proposed changes in the EU/EEA, presumably as a result of the pressure from noyb. The announcement does not, however, refer to the UK.