Keep AI data security in mind at all times. Ensure model training data is locked inside the production environment and never copied to lower, less secure environments outside of production (QA, test, user training, sandbox). Models will be continually tuned and retrained, so maintain full version control and fallback procedures for both training data and models. Treat versioning of your training data snapshots the same way you treat application version control.
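As a minimal sketch of what data snapshot versioning can look like in practice, the Python example below records a SHA-256 manifest of every training data file alongside a model version tag, so a release can be traced back to the exact data it was trained on and rolled back if needed. The directory paths, version tags, and function names are illustrative assumptions, not a prescribed tool or workflow.

```python
import hashlib
import json
import time
from pathlib import Path


def file_sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def snapshot_training_data(data_dir: str, model_version: str, out_file: str) -> dict:
    """Write a versioned manifest tying a model version to the exact
    training data files (and their hashes) it was built from."""
    manifest = {
        "model_version": model_version,
        "created_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "files": {
            str(p.relative_to(data_dir)): file_sha256(p)
            for p in sorted(Path(data_dir).rglob("*"))
            if p.is_file()
        },
    }
    Path(out_file).write_text(json.dumps(manifest, indent=2))
    return manifest


# Illustrative usage (paths and version tag are placeholders):
# snapshot_training_data("prod/training_data", "fraud-model-2.3.1",
#                        "manifests/fraud-model-2.3.1.json")
```

Checking the manifest into the same repository that tracks your application code keeps data snapshots and model releases under one audit trail, which is what makes a fallback to a known-good training set possible.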
NIST recently warned of four types of artificial intelligence poisoning attacks. In an interview with CSO Online, ISACA Emerging Trends Working Group member Mary Carmichael says, "This is an emergent risk, and once AI technology scales, the poisoning threat will become more apparent." Read the article here: https://lnkd.in/gjkiGkwH