The Financial Services Information Sharing and Analysis Center (FS-ISAC) announced the release of six white papers designed to help financial services institutions understand the threats, risks, and responsible use cases of artificial intelligence (AI).
The papers are the first of their kind to provide standards and guidance curated specifically for the financial services industry. They provide additive resources that build on the expertise of government agencies, standards bodies, academic researchers, and financial services partners, as well as NIST’s AI Risk Management Framework.
“While AI promises breakthroughs in the financial services industry, there are a plethora of risk factors that the sector needs to be aware of, both when integrating AI into internal processes as well as building cyber defenses against threat actors utilizing AI tools,” said Michael Silverman, vice president of Strategy and Innovation at FS-ISAC, in a statement. “It is integral to operational safety and the very foundation of trust in the financial services industry that the sector aligns on how to counteract the risks that AI poses.”
The six white papers identify the threats and risks associated with AI, and provide frameworks and tactics that financial services firms can customize based on their size, needs, and risk appetites:
- Adversarial AI Frameworks: Taxonomy, Threat Landscape, and Control Frameworks: Defines and maps the existing threats associated with AI, characterizes the types of attacks and vulnerabilities the technology introduces to the financial services industry, and identifies security controls that can be used to address those risks.
- Building AI into Cyber Defenses: Highlights financial services’ key considerations and use cases for leveraging AI in cybersecurity and risk technology.
- Combating Threats and Reducing Risks Posed by AI: Outlines the mitigation approaches necessary to combat the external and internal cyberthreats and risks posed by AI.
- Responsible AI Principles: Examines the principles and practices that ensure the ethical development and deployment of AI in alignment with industry standards.
- Generative AI Vendor Evaluation and Qualitative Risk Assessment: A customizable tool designed to help financial services organizations assess, select, and make informed decisions about generative AI vendors while managing associated risks.
- Framework of Acceptable Use Policy for External Generative AI: Guidelines to assist financial services organizations in developing an acceptable use policy when incorporating external generative AI into security programs.
“The financial services industry is facing increased pressure to capitalize on AI integration, while simultaneously ensuring security and resilience against AI-enhanced cyber risks,” said Benjamin Dynkin, executive director at Wells Fargo and Chair of FS-ISAC’s AI Risk Working Group. “These papers provide point-in-time guidance on using AI securely, responsibly, and effectively, while offering tangible steps the sector can take to counteract the rising risks associated with AI.”
“Threat actors are not only constantly looking to exploit vulnerabilities caused by the adoption of new technology but will also leverage the same technology to enhance the efficacy of their attacks,” said Hiranmayi Palanki, principal engineer at American Express and vice chair of FS-ISAC’s AI Risk Working Group. “The multi-faceted nature of AI is both compelling and ever-changing, and the education of the financial services industry on these risks is imperative to the safety of our sector.”