The European Data Protection Board (EDPB) published an opinion on Wednesday that explores how artificial intelligence (AI) developers might use personal data to develop and deploy AI models, such as large language models (LLMs), without falling foul of the bloc’s privacy laws.
Areas the EDPB opinion covers include whether AI models can be considered anonymous (which would mean privacy laws don’t apply); whether a “legitimate interests” legal basis can be used to lawfully process personal data for developing and deploying AI models (which would mean individuals’ consent need not be sought); and whether AI models that were developed with unlawfully processed data could subsequently be deployed lawfully.
The question of what legal basis is appropriate for AI models, in particular to ensure they comply with the General Data Protection Regulation (GDPR), remains a hot and open one. Failing to abide by the privacy rules could lead to penalties of up to 4% of global annual turnover and/or orders to change how AI tools work.
On the question of model anonymity, which the Board defines to mean an AI model that is “very unlikely” to “directly or indirectly identify individuals whose data was used to create the model” and very unlikely to let users extract such data from the model through prompt queries, the opinion stresses this must be assessed “on a case-by-case basis.”
A whole host of design and development choices AI developers make could influence regulatory assessments of the extent to which the GDPR applies to a particular model. Only truly anonymous data, where there is no risk of re-identification, falls outside the scope of the regulation; in the context of AI models, the Board is setting the bar at a “very unlikely” risk of identifying individuals or extracting their data.