CNIL issues an action plan for the use of AI systems

On 16 May 2023, the French data protection authority, the Commission Nationale de l'Informatique et des Libertés (the CNIL), published an action plan detailing how it will investigate the privacy issues posed by AI systems, in particular generative AI models, large language models and chatbot AI systems (the AI Action Plan). The AI Action Plan follows the CNIL's announcement, in January 2023, of a dedicated department within the CNIL to investigate issues raised by AI.

The AI Action Plan is split into four action points:

1. Understanding AI systems and their impacts

The CNIL will investigate the main data protection issues posed by AI, such as:

  • fairness and transparency in the collection of personal data
  • the scraping of publicly accessible information on the internet
  • protecting data transmitted by users when they use these tools (including the reuse of that data)
  • the interplay between data subject rights requests and AI models, including how these rights can be managed given the complex way in which AI training models are built. For example, given that AI systems cannot ‘forget’, the CNIL will be investigating how the right to erasure can be respected by those creating AI systems
  • protection against bias and discrimination
  • the security challenges

2. Enabling and guiding the development of AI

In the second action point, the CNIL signals its willingness to assist those working in the field of AI through a set of publications and guidance notes, including guidance on the sharing and reuse of data and on the design of AI systems, building on the library of guidance the CNIL published on the topic of AI in 2022.

3. Supporting innovative players

The CNIL has stated its intention to support key actors in the AI ecosystem by launching a regulatory sandbox, which will provide support in the context of developing AI models, and by engaging in sustained dialogue with R&D centres and companies wishing to develop AI systems, with enhanced support in some instances.

4. Auditing and controlling AI systems

The CNIL intends to develop a tool to audit AI systems and, in 2023, will focus on the use of enhanced video surveillance and the use of AI to combat fraud. The CNIL also revealed that it has received a number of complaints against ChatGPT and has opened investigative proceedings against OpenAI. That investigation runs in parallel with an investigation into OpenAI by a dedicated working group within the European Data Protection Board.

The full AI Action Plan can be accessed here, and don't forget to read our article on ChatGPT and the issues with generative AI here if you missed it the first time round!
