AI risks bring new scrutiny to cyber insurance policies
Cyber underwriters want to know how CFOs are mitigating AI risks.
Alex Zank is a reporter with CFO Brew who covers risk management and regulatory compliance topics. Prior to CFO Brew, he covered the property/casualty insurance industry.
Don’t think for a second that cyber insurance underwriters didn’t notice your fancy new generative AI tools. Nothing gets past them.
Just as companies think through the risks associated with the booming technology, so too do the insurance companies covering those risks. AI risks touch on numerous property/casualty policies—such as employment practices liability, product liability, and errors and omissions, to name a few. In addition, cyber underwriters are looking keenly at risks such as intellectual property infringement, data integrity, prompt injection, and contracts with AI vendors, according to experts who spoke with CFO Brew, as well as a new Cyber Insurance Market Outlook report from insurance broker Gallagher.
Cyber underwriters “are much more interested in how companies are viewing their own risk when it comes to AI, and how they are attempting to mitigate that through the use of these [AI governance] frameworks,” Dan Burke, national cyber practice leader at Woodruff Sawyer, a subsidiary of Gallagher, told CFO Brew.
Underwriters are still in the “early days” of scrutinizing organizations’ AI governance frameworks and sussing out best practices, Burke said. But for now, underwriters look for a “good accounting” of the ways a company uses AI, the risks associated with those uses, and any changes to the company’s incident-response policies to account for them. They also want to see contractual protections between companies and their AI developers that limit the companies’ liability.
News built for finance pros
CFO Brew helps finance pros navigate their roles with insights into risk management, compliance, and strategy through our newsletter, virtual events, and digital guides.
Legislation ahead. AI risks are coming into focus for lawmakers, too. Four states—California, Utah, Colorado, and Texas—have signed AI-related legislation into law, with more legislation potentially on the way, according to the Gallagher report. There are also “over 200 active legal cases” involving AI-related “data bias, intellectual property and trademark infringement, privacy liability, discrimination, and regulatory risks,” the Gallagher report said.
Organizations can limit exposure to AI risks in several ways, according to Burke.
For example, companies should understand how much risk they’re willing to retain internally rather than transfer to insurance.
In determining how much insurance coverage to buy, organizations should benchmark themselves against their peers, Burke said. He also recommended “limit modeling”—identifying potential loss scenarios and “scal[ing them] to a company of your size” to better understand the financial risks.
“Loss modeling actually informs much more accurately how companies should be thinking about their risk, and allows them to make an informed decision about where they are on their own risk tolerance spectrum,” Burke said.
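The loss-modeling idea Burke describes—taking potential loss scenarios and scaling them to a company of your size—can be sketched in a few lines of code. The figures, scenario names, and revenue-based scaling rule below are hypothetical assumptions for illustration only, not Burke’s or Gallagher’s actual methodology:

```python
# Sketch of "limit modeling": scale peer-company loss scenarios to our
# size, then compare the worst scaled loss against candidate coverage
# limits. All numbers are made up for illustration.

PEER_REVENUE = 500_000_000  # revenue of the peers behind the scenarios (hypothetical)
OUR_REVENUE = 50_000_000    # this company's revenue (hypothetical)

# Hypothetical loss scenarios observed at peer scale, in USD
peer_scenarios = {
    "ransomware_outage": 20_000_000,
    "data_breach_litigation": 35_000_000,
    "ai_vendor_failure": 10_000_000,
}

# Simple revenue-ratio scaling; a real model would use exposure-specific drivers
scale = OUR_REVENUE / PEER_REVENUE
scaled = {name: loss * scale for name, loss in peer_scenarios.items()}
worst_case = max(scaled.values())

for limit in (1_000_000, 2_500_000, 5_000_000):
    verdict = "covers" if limit >= worst_case else "falls short of"
    print(f"${limit:,} limit {verdict} worst-case scaled loss of ${worst_case:,.0f}")
```

Running a sketch like this makes the trade-off concrete: it shows which coverage limits would absorb the worst modeled scenario and which would leave the company retaining the excess, which is the “informed decision about where they are on their own risk tolerance spectrum” Burke refers to.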