AI-based systems are highly flexible and data-intensive, which creates new challenges for securing this technology in the year ahead. Microsoft has just published best practices that can help mitigate these risks.

Best practices for AI security risk management – Microsoft Security Blog

GitHub – Azure/AI-Security-Risk-Assessment

Today, we are releasing an AI security risk assessment framework as a step toward empowering organizations to reliably audit, track, and improve the security of their AI systems. In addition, we are providing new updates to Counterfit, our open-source tool for simplifying assessments of the security posture of AI systems.

There is a marked interest in securing AI systems from adversaries. Counterfit has been heavily downloaded and explored by organizations of all sizes—from startups to governments and large enterprises—to proactively secure their AI systems. From a different vantage point, the Machine Learning Evasion Competition we organized to help security professionals exercise their muscles defending and attacking AI systems in a realistic setting saw record participation, doubling the number of participants and techniques compared to the previous year.

The deficit is clear: according to the Gartner® Market Guide for AI Trust, Risk and Security Management, published in September 2021, “AI poses new trust, risk and security management requirements that conventional controls do not address.”