UK and US develop guidelines for secure AI development

The UK's National Cyber Security Centre (NCSC) reports that 18 countries have agreed to endorse guidelines on secure AI development drawn up by the UK and US.

“The new UK-led guidelines are the first of their kind to be agreed globally. They will help developers of any systems that use AI make informed cyber security decisions at every stage of the development process – whether those systems have been created from scratch or built on top of tools and services provided by others.”

The guidelines are broken down into four key areas:

  1. secure design – understanding risks and threat modelling, as well as specific topics and trade-offs to consider on system and model design;
  2. secure development – including supply chain security, documentation, and the management of assets and technical debt (the extra work that accumulates when short-term fixes are chosen over more labour-intensive but more robust solutions);
  3. secure deployment – including protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release; and
  4. secure operation and maintenance – guidelines on actions including logging and monitoring, update management and information sharing.
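The guidelines themselves are high-level prose rather than code, but the logging and monitoring actions in the fourth area map onto familiar engineering practice. The sketch below is a minimal, hypothetical Python example of structured audit logging around a model inference call; the `predict` function, the field names and the demo user identifier are illustrative assumptions and are not taken from the guidelines.

```python
import json
import logging
import time
import uuid

# Minimal structured logger for model inference requests. JSON-formatted
# events are easy to forward to a central monitoring pipeline.
logger = logging.getLogger("model_inference")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def predict(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned response."""
    return "example response"


def handle_request(prompt: str, user_id: str) -> str:
    """Run inference and emit a structured audit record for the request."""
    request_id = str(uuid.uuid4())
    start = time.monotonic()
    try:
        response = predict(prompt)
        status = "ok"
        return response
    except Exception:
        status = "error"
        raise
    finally:
        logger.info(json.dumps({
            "event": "inference_request",
            "request_id": request_id,
            "user_id": user_id,
            "prompt_chars": len(prompt),  # log sizes, not raw content
            "status": status,
            "latency_ms": round((time.monotonic() - start) * 1000, 1),
        }))


if __name__ == "__main__":
    handle_request("What is the capital of France?", user_id="demo-user")
```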

The full guidelines are available on the NCSC website.
