Safeguarding against malicious attacks on AI code is crucial to ensuring the integrity, reliability, and security of AI systems. The U.S. National Institute of Standards and Technology (NIST) released a new paper warning of hackers looking to manipulate or "poison" AI data sets for malicious purposes.
The Jan. 4 paper warns of adversaries targeting AI and machine learning systems to create real-world problems, attacks that have been steadily increasing with the growth of AI technology in both the private and public sectors. The NIST paper provides an overview of attack techniques and methodologies across all types of AI systems, and it introduces a taxonomy of adversarial machine learning attacks on predictive AI (PredAI) systems. The taxonomy depicts the attacker's objectives as disjoint circles, with the attacker's goal at the center of each circle: availability breakdown, integrity violation, and privacy compromise.
Not all AI systems are created equal when it comes to functionality, and the same holds for safeguarding PredAI systems against attack. Here are some steps my company and other "good" AI citizens habitually employ to ensure the quality of our products and to help protect against attack.
- Code Review and Testing: Conducting thorough code reviews and testing throughout the development lifecycle to identify and address potential vulnerabilities, loopholes, and security flaws in the AI algorithms and software (see the first sketch after this list).
- Secure Development Practices: Implementing secure coding practices and standards to minimize the risk of common security vulnerabilities such as injection attacks, buffer overflows, and authentication bypasses. This includes input validation, parameterized queries, and secure authentication mechanisms (second sketch below).
- Access Control and Authentication: Implementing robust access control mechanisms to restrict access to AI code, data, and infrastructure based on the principle of least privilege, and utilizing strong authentication methods such as multi-factor authentication (MFA) to verify the identity of users and prevent unauthorized access (third sketch below).
- Data Privacy and Confidentiality: Implementing strong encryption to protect sensitive data at rest and in transit, and utilizing data anonymization and pseudonymization techniques to minimize the risk of data breaches and unauthorized access to personal or confidential information (fourth sketch below).
- Monitoring and Logging: Implementing comprehensive monitoring and logging mechanisms to track access, usage, and changes to AI code, models, and data, and watching for suspicious activities, unauthorized access attempts, and anomalies that may indicate a potential security breach (fifth sketch below).
- Patch Management and Updates: Keeping software and dependencies up to date with the latest security patches to address known vulnerabilities and weaknesses, and implementing a robust patch management process to ensure timely deployment of those updates.
- Secure Deployment and Configuration: Following best practices for secure deployment, including network segmentation, firewall configuration, and proper configuration of access controls.
- Security Training and Awareness: Mandating security training and awareness programs for developers, data scientists, and other personnel involved in AI development and deployment to educate staff about evolving security threats, attack vectors, and best practices for secure coding and deployment.
- Incident Response and Contingency Planning: Developing and regularly testing incident response plans and contingency measures to effectively respond to security incidents and mitigate their impact.
- Third-party Risk Management: Conducting thorough security assessments and due diligence when working with third-party vendors, suppliers, or partners to ensure they adhere to security standards and best practices, minimizing the risk of supply chain attacks or vulnerabilities.
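
To make the code review and testing item concrete, here is a minimal pytest sketch of the kind of behavioral regression tests that can run in CI alongside every review. The `classify` stand-in, the malformed inputs, and the pinned label are hypothetical placeholders for your own inference code and expectations, not a prescribed test suite.

```python
# Minimal pytest sketch: behavioral regression tests run in CI alongside
# code review. classify() is a stand-in for a real model call.
import pytest

def classify(text: str) -> str:
    """Hypothetical inference wrapper; replace with your model call."""
    return "suspicious" if "attack" in text.lower() else "benign"

@pytest.mark.parametrize("malformed", ["", "\x00\x00", "A" * 1_000_000])
def test_malformed_input_does_not_crash(malformed):
    # Hostile or junk input should fail gracefully, never raise.
    assert classify(malformed) in {"benign", "suspicious"}

def test_known_bad_sample_stays_flagged():
    # Pin expected behavior so a poisoned retrain that silently flips
    # this label breaks the build instead of shipping.
    assert classify("simulated attack payload") == "suspicious"
```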
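
For the secure development item, the sketch below pairs input validation with a parameterized query. It assumes a SQLite table of model metadata whose name and columns are purely illustrative.

```python
# Input validation plus a parameterized query; the "models" table and
# its columns are illustrative, not a real schema.
import re
import sqlite3

MODEL_NAME_RE = re.compile(r"^[A-Za-z0-9_\-]{1,64}$")

def get_model_record(conn: sqlite3.Connection, model_name: str):
    # Reject anything outside the expected shape before it reaches SQL.
    if not MODEL_NAME_RE.fullmatch(model_name):
        raise ValueError("invalid model name")
    # The ? placeholder keeps user input out of the SQL text entirely,
    # blocking injection even if validation were somehow bypassed.
    cur = conn.execute(
        "SELECT name, version, checksum FROM models WHERE name = ?",
        (model_name,),
    )
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE models (name TEXT, version TEXT, checksum TEXT)")
conn.execute("INSERT INTO models VALUES ('fraud-v3', '1.2', 'abc123')")
print(get_model_record(conn, "fraud-v3"))
```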
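
The access control item can be enforced in code as well as in infrastructure. Below is a least-privilege sketch in which every sensitive operation declares the single permission it requires; the role names and permission strings are assumptions for illustration, and MFA itself would live in your identity provider rather than in application code.

```python
# Least-privilege sketch: each sensitive operation names the one
# permission it needs. Roles and permission strings are hypothetical.
from functools import wraps

ROLE_PERMISSIONS = {
    "data-scientist": {"model:read"},
    "ml-engineer": {"model:read", "model:deploy"},
}

def requires(permission):
    def decorator(fn):
        @wraps(fn)
        def wrapper(user_role, *args, **kwargs):
            # Deny by default: unknown roles get an empty permission set.
            if permission not in ROLE_PERMISSIONS.get(user_role, set()):
                raise PermissionError(f"{user_role} lacks {permission}")
            return fn(user_role, *args, **kwargs)
        return wrapper
    return decorator

@requires("model:deploy")
def deploy_model(user_role, model_id):
    print(f"deploying {model_id}")

deploy_model("ml-engineer", "fraud-v3")       # allowed
# deploy_model("data-scientist", "fraud-v3")  # raises PermissionError
```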
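
For the data privacy item, one common pseudonymization technique is a keyed hash (HMAC) applied to direct identifiers before data enters the training pipeline. The sketch assumes the key arrives via an environment variable; in production it should come from a key manager, and the variable name here is illustrative.

```python
# Pseudonymization sketch: replace direct identifiers with a keyed hash
# before training. The env-var name is an assumption; store the real
# key in a key manager, never in source code.
import hashlib
import hmac
import os

PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode()

def pseudonymize(identifier: str) -> str:
    # HMAC-SHA256 is deterministic (same input -> same token, so record
    # joins still work) but irreversible without the secret key.
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "label": 1}
record["email"] = pseudonymize(record["email"])
print(record)
```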
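
Finally, the monitoring and logging item: the sketch below emits one structured audit record per model access so downstream tooling can alert on anomalies. The field names and the naive in-process threshold are assumptions; a real deployment would run detection in a SIEM over the shipped logs.

```python
# Audit-logging sketch: one structured JSON record per model access.
# Field names and the alert threshold are assumptions, not a standard.
import json
import logging
import time
from collections import Counter

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("audit")
access_counts = Counter()

def log_model_access(user: str, model: str, action: str) -> None:
    audit.info(json.dumps(
        {"ts": time.time(), "user": user, "model": model, "action": action}
    ))
    access_counts[user] += 1
    # Naive in-process check for bulk access; real systems would alert
    # from a SIEM over the shipped logs instead.
    if access_counts[user] > 100:
        audit.warning(json.dumps({"alert": "excessive access", "user": user}))

log_model_access("alice", "fraud-v3", "download")
```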
When looking to acquire AI solutions, make certain to add requirements for enhancing the resilience of those systems and protecting against malicious attacks on AI code. Additionally, ongoing monitoring, threat intelligence, and collaboration with cybersecurity experts can further strengthen the security posture of AI solutions.