Guidelines for Secure AI System Development: An International Collaboration
Cogent Integrated Business Solutions Inc.
Simple Solutions - Better Ideas
Hello Subscribers,
As many as 18 countries, including the United States, Britain and Germany, released ‘Guidelines for secure AI system development’ on 26 November to promote the safe and reliable development and use of AI. The 20-page document contains recommendations aimed at helping providers of AI systems keep their products secure from tampering and misuse.
While the agreement is not legally binding, US Cybersecurity and Infrastructure Security Agency director Jen Easterly was encouraged by the participation of 23 international agencies.
Push Towards AI Security
While efforts to address the problem of secure AI system development have been made previously, this agreement has resulted in a more fruitful collaboration than before. The US government introduced an executive order in October requiring developers of AI systems to share the results of product safety tests in the interest of national security. Beyond that, however, progress has been limited: despite the Biden administration's calls for stricter AI regulation, the issue has proved divisive in the US Congress.
While the European Union has shown more willingness to tackle the issue of AI safety, finding the right approach has proved challenging. Despite several meetings, EU member states have yet to fully agree on AI regulations. However, countries such as France, Italy and Germany are trying to find common ground through self-regulation.
The Document: Guidelines for secure AI system development
‘Guidelines for secure AI system development’ was published by the UK National Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency and 21 other international agencies, including the NSA and FBI. Organisations such as Amazon, IBM, Google, Microsoft and OpenAI also contributed to developing the guidelines.
For the scope of the document, all machine learning applications are referred to as AI. The document defines machine learning applications as ones that “involve software components that allow computers to recognise and bring context to patterns in data without the rules having to be explicitly programmed by a human; and generate predictions, recommendations, or decisions based on statistical reasoning.”
It states, “AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way.” Given its fast pace of development, AI can be susceptible to threats such as unauthorised information extraction and performance regression. The document notes that AI system providers must remain vigilant to prevent the exploitation of their products’ vulnerabilities.
The roles of AI providers and users are also discussed in the document. AI providers now often work with multiple partners, making the supply chain more complex than ever. With the lines between providers and users blurring, it is recommended that providers take responsibility for ensuring their products are secure for users further down the supply chain. The document states that providers should have security measures in place to mitigate risks for users. It also recommends that they inform users of the risks they are accepting by using their products and advise them on how to operate them securely.
Overall, the guidelines focus on four key areas of the AI system development life cycle: design, development, deployment, and operation and maintenance. Each area contains a set of relevant recommendations, including raising staff awareness of threats and risks, securing the supply chain, developing incident management procedures, and monitoring system behaviour.
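Purely as an illustration, and not something prescribed by the guidelines themselves, the sketch below shows one hypothetical way a provider might track the four life-cycle areas and the recommendations mentioned above as an internal review checklist. The class names, the mapping of recommendations to areas, and the structure are all assumptions made for this example.

```python
# Hypothetical internal checklist built around the four life-cycle areas named
# in the guidelines. The mapping of recommendations to areas and all names here
# are illustrative assumptions, not content prescribed by the document.
from dataclasses import dataclass, field


@dataclass
class LifecycleArea:
    name: str
    recommendations: list[str] = field(default_factory=list)
    completed: set[str] = field(default_factory=set)

    def mark_done(self, recommendation: str) -> None:
        # Record that a recommendation has been addressed for this area.
        if recommendation not in self.recommendations:
            raise ValueError(f"Unknown recommendation: {recommendation}")
        self.completed.add(recommendation)

    def outstanding(self) -> list[str]:
        # List recommendations not yet addressed.
        return [r for r in self.recommendations if r not in self.completed]


CHECKLIST = [
    LifecycleArea("Secure design", ["Raise staff awareness of threats and risks"]),
    LifecycleArea("Secure development", ["Secure the supply chain"]),
    LifecycleArea("Secure deployment", ["Develop incident management procedures"]),
    LifecycleArea("Secure operation and maintenance", ["Monitor system behaviour"]),
]

if __name__ == "__main__":
    CHECKLIST[0].mark_done("Raise staff awareness of threats and risks")
    for area in CHECKLIST:
        print(f"{area.name}: {len(area.outstanding())} recommendation(s) outstanding")
```

In practice, an organisation adopting the guidelines would expand each area with the full set of recommendations relevant to its own systems; the point of the sketch is only to show how the four-stage structure lends itself to a simple per-area checklist.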
Conclusion
As AI continues to advance and play a bigger role in our lives, the threats associated with it also grow. Job loss, fraud and government disruption are only some of the concerns around AI system development. The growing need to keep AI system development secure is evident in the recent efforts of governments around the world. ‘Guidelines for secure AI system development’ is one such collaboration, and it will hopefully provide some guidance to providers of AI systems worldwide.
At CogentIBS, we value your presence within our community. Follow us for smart tech solutions and updates. Share your feedback in the comment section and subscribe to our newsletters to stay tuned for timely tech insights.