The most common types of AI systems are still only as good as their training data. If there’s no historical data that mirrors our current situation, we can expect our AI systems to falter, if not fail. To date, at least 1,200 reports of AI incidents have been recorded in various public and research databases. That means now is the time to start planning for AI incident response: how organizations react when things go wrong with their AI systems.
While incident response is a field that’s well developed in the traditional cybersecurity world, it has no clear analogue in the world of AI. What is an incident when it comes to an AI system? When does AI create liability that organizations need to respond to? This article answers these questions, based on our combined experience as both a lawyer and a data scientist responding to cybersecurity incidents, crafting legal frameworks to manage the risks of AI, and building sophisticated interpretable models to mitigate risk.
This article explains when and why AI creates liability for the organizations that employ it, and outlines how organizations should react when their AI causes major problems. A first question to ask: how material is the threat? Materiality is a widely used concept in the world of model risk management, a regulatory field that governs how financial institutions document, test, and monitor the models they deploy.
Broadly speaking, materiality is the product of the impact of a model error and the probability of that error occurring: it relates both to the scale of the harm and to the likelihood that the harm will take place. Data sensitivity tends to be a helpful measure of the materiality of any incident. From a data privacy perspective, sensitive data, like consumer financials, tends to carry higher risk and therefore a greater potential for liability and harm. Additional real-world considerations that increase materiality include threats to health, safety, and third parties; legal liability; and reputational damage.
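The impact-times-probability definition above can be sketched as a simple scoring function. This is an illustrative sketch, not a regulatory formula: the function name, the dollar-denominated impact scale, and the example error scenarios are all assumptions for demonstration.

```python
def materiality(impact: float, probability: float) -> float:
    """Illustrative materiality score: the impact of a model error
    times the probability of that error occurring.

    impact: estimated harm on some chosen scale (here, dollars)
    probability: likelihood of the error, in [0, 1]
    """
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return impact * probability

# Two hypothetical model errors: a rare but very costly failure
# versus a frequent but cheap one.
rare_costly = materiality(impact=1_000_000, probability=0.001)
frequent_cheap = materiality(impact=500, probability=0.10)
```

Note that under this definition a rare, high-impact error can still outrank a common, low-impact one: here the rare failure scores 1,000 against 50, which is why both the scale of harm and its likelihood have to be estimated before triaging an incident.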