A new survey of corporate insurance buyers reveals widespread interest in buying insurance to cover Gen AI risks, along with a willingness to pay 10-20 percent more in organization-wide insurance premiums for the coverage.
But the most willing buyers are those that are heavy users of Gen AI or that have already been involved in serious AI-related incidents, according to the report, “Gen AI Risks for Businesses: Exploring the role for insurance,” published by the Geneva Association, which warns insurance underwriters of the high possibility of adverse selection.
Much of the report is based on a survey of 600 corporate insurance decision-makers across China, France, Germany, Japan, the UK, and the United States, providing useful breakdowns of potential buyers, their willingness to pay for insurance, their experiences with AI, and their views of their biggest AI risks by country, firm size, and industry. The report also provides an overview of insurance coverage being offered today, and recommendations about how insurers should proceed in tapping into the opportunity to offer coverage while staying mindful of adverse selection and other risks.
Among the key takeaways:
- High insurance demand. Over 90 percent of businesses surveyed show interest in insurance cover for Gen AI risks.
- How much will they pay? More than two-thirds of respondents would pay at least 10 percent higher premiums for such coverage. A graphic in the report reveals that about 50 percent would pay 10-20 percent more, and more than 15 percent would accept an increase of more than 20 percent over their organizations’ current insurance costs for Gen AI coverage.
- Risks identified. Survey respondents most commonly selected cybersecurity as the Gen AI-related risk they wished to cover (selected by somewhere between 50 and 60 percent of respondents). Coverage for third-party liability (liability to customers and suppliers) and business operations risks were next on the list, each selected by more than 50 percent of respondents. Reputational risk was much less important to respondents from a coverage standpoint (selected by 30 percent).
- Rapid adoption. 71 percent of surveyed businesses have implemented Gen AI in at least one function.
“Few technologies in history have spread as rapidly as Gen AI, yet its risks are complex and poorly understood,” said Jad Ariss, Managing Director of the Geneva Association, in a media statement about the report.
Demand Highest in U.S. and China
The survey results indicate that demand for insurance is highest in the U.S. and China, where respondents were also most likely to indicate that Gen AI was useful in their day-to-day work.
In both nations, between 60 and 70 percent of respondents said that Gen AI was “very useful” in their daily work. In stark contrast, less than 20 percent of respondents in Japan said Gen AI is “very useful,” and perceptions of usefulness were also low for respondents in France and Germany. The text of the report indicates that slower uptake in these three countries reflects lower trust, regulatory or cultural hurdles, and organizational resistance.
What Insurance Policies Should Cover Gen AI Risks?
Asked what types of insurance they thought should cover Gen AI risks, more than 50 percent of respondents to the Geneva Association’s business insurance customer survey said cyber insurance.
More than 40 percent said that there should be standalone AI-specific coverage instead.
Others saw the potential for extended coverage under existing property and liability policies, with just over 40 percent eyeing AI coverage in property or business interruption policies and about 35 percent seeking coverage for AI risks in existing liability policies.
Corresponding to the high response levels about Gen AI usefulness in the U.S. and China, respondents in these two countries indicated the greatest willingness to pay for insurance coverage. While about 15 percent of respondents across all countries said they’d be willing to absorb an increase of more than 20 percent in their organizations’ overall insurance costs for Gen AI coverage, more than one-quarter of U.S. respondents said that such a hike would be acceptable. About another 35 percent of U.S. respondents said they’d accept a 10-20 percent cost increase.
In China, more than 70 percent said they would be willing to pay 10-20 percent more, and roughly another 10 percent said they’d pay more than 20 percent more.
Tallying the responses by size of company, roughly 70 percent of respondents at “very large” companies (with more than 1,000 employees) and nearly 80 percent at “large” companies (with 251–1,000 employees) said they would accept at least a 10 percent jump in insurance premiums for Gen AI coverage, with more than 20 percent of respondents for both company sizes saying that a hike of more than 20 percent would be acceptable.
On the other end of the size spectrum, fewer than 40 percent of respondents from small firms (fewer than 20 people), which tend to be more cost-sensitive and perceive lower risk exposure, said they are willing to pay even 10 percent more.
While the report doesn’t reveal exact response percentages by industry, it does state that respondents in the technology sector show significantly higher demand for Gen AI insurance than other industries, “probably because Gen AI is embedded in the product (or is the product itself).” Demand is also strong in the finance and manufacturing sectors, the report notes.
One graphic in the report shows relative insurance demand and relative willingness to pay for Gen AI insurance across different industries. The chart reveals that demand for Gen AI insurance in the healthcare and education sectors is roughly 40 percent lower than demand in the tech sector, but willingness to pay is only slightly lower for education and healthcare respondents than for those in the finance and tech sectors.
Insurability Challenges and Supply
A section of the report addressing the supply of insurance presents insurability criteria, such as randomness of loss occurrences, acceptable maximum loss, and legal clarity. It notes that meeting these criteria is much more challenging for Gen AI than for traditional AI.
“Businesses are embedding Gen AI into products and internal processes to drive innovation and efficiency. However, these capabilities introduce novel risks and amplify existing risks associated with traditional AI,” the report notes, highlighting the fact that Gen AI models sometimes hallucinate, confidently producing false or misleading output, and can also inadvertently replicate copyrighted content.
Aside from potentially greater exposure to cybersecurity risk, firms that deploy Gen AI to steer their businesses face operational risks, including incorrect or biased decision-making, operational inefficiencies, and financial losses. Meanwhile, tech providers of Gen AI models may be liable for mistakes that cause users financial harm.
Underscoring the unpredictability of Gen AI risks, the report refers to the magnitude of potential maximum losses to insurers that challenge insurability. “Wrong or malicious code generated by AI can lead to massive service disruption, potentially causing systemic risk. Gen AI failures like spreading misinformation, IP violations, and deepfake-driven fraud in critical sectors (e.g. healthcare, finance) can also lead to large losses, particularly when the failure persists for a long time or is subject to regulatory penalties.”
The report offers the example of a malfunction in a Gen AI-driven healthcare system related to medical diagnosis, treatment planning, or automated patient communications “that could result in widespread harm, overwhelming traditional insurance capacities and challenging premium affordability,” explaining that the maximum potential loss from an AI failure must be manageable within the insurer’s capacity.
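To make that capacity constraint concrete, here is a minimal sketch in Python of how an underwriter might screen a proposed Gen AI cover against a probable-maximum-loss (PML) limit. The function name, loss scenarios, reinsurance share, and capacity figure are all hypothetical illustrations, not figures or methods taken from the report.

```python
# Hypothetical screen: is the probable maximum loss (PML) of a proposed
# Gen AI cover manageable within the insurer's available capacity?
# All figures below are illustrative, not from the Geneva Association report.

def pml_within_capacity(scenario_losses, capacity, reinsurance_share=0.0):
    """Return (pml, retained_pml, insurable_flag) for a set of loss scenarios.

    scenario_losses   -- modeled worst-case losses per scenario, in USD
    capacity          -- maximum loss the insurer can absorb on this line
    reinsurance_share -- fraction of each loss ceded to reinsurers
    """
    pml = max(scenario_losses)                       # worst modeled outcome
    retained_pml = pml * (1.0 - reinsurance_share)   # net of reinsurance
    return pml, retained_pml, retained_pml <= capacity


if __name__ == "__main__":
    # Illustrative Gen AI failure scenarios for a healthcare client:
    # misdiagnosis wave, faulty treatment planning, bad patient communications.
    scenarios = [12_000_000, 35_000_000, 8_500_000]

    pml, retained, ok = pml_within_capacity(
        scenarios, capacity=25_000_000, reinsurance_share=0.4
    )
    print(f"PML: ${pml:,.0f}  retained: ${retained:,.0f}  insurable: {ok}")
```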
Apart from catastrophic losses in critical sectors, even average losses may be high, encompassing potential financial and reputational damage from Gen AI incidents like misinformation or regulatory fines.
Another question mark for insurability relates to information asymmetry—“insured parties may neglect AI system integrity (moral hazard), while riskier AI systems may seek coverage (adverse selection).” In addition, “insurers may struggle to verify Gen AI risks and how businesses manage them.”
Despite these obstacles, insurers are starting to respond to demand for Gen AI-related coverage by extending coverage in existing policies, deploying new underwriting strategies, and even piloting standalone products.
- Some cyber policies now encompass AI-driven cyber attacks or data leaks, and some E&O policies might cover errors from AI-generated content.
- In terms of underwriting adjustments, some insurers are experimenting with parametric triggers—paying a preset amount for specific AI failure events (see the sketch after this list).
- To mitigate information asymmetry, some insurers will scrutinize insureds’ AI systems and governance practices before granting coverage.
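As a rough illustration of the parametric approach mentioned above, the sketch below pays a preset amount when a predefined AI failure event is reported, with no loss adjustment. The event names, payout amounts, and schedule structure are hypothetical examples, not drawn from any carrier’s actual product.

```python
# Minimal sketch of a parametric payout: a preset amount is paid when a
# predefined AI failure event occurs, with no loss adjustment required.
# Event types and payout amounts are hypothetical examples.

PARAMETRIC_SCHEDULE = {
    "model_outage_over_4h": 250_000,      # service-disruption trigger
    "confirmed_data_poisoning": 500_000,  # manipulated training data
    "ip_infringement_notice": 100_000,    # AI output infringes third-party IP
}

def parametric_payout(reported_events):
    """Sum the preset payouts for every scheduled event that was reported."""
    return sum(PARAMETRIC_SCHEDULE.get(event, 0) for event in reported_events)

# Example: a quarter with one outage and one IP notice pays a fixed 350,000.
print(parametric_payout(["model_outage_over_4h", "ip_infringement_notice"]))
```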
There are also some standalone AI insurance products available from a few carriers, which may bundle coverage for multiple AI risks. As an example, an insurer might offer a policy to cover an AI developer’s liability for algorithmic errors and IP infringement by AI outputs in one package, the report notes.
The report includes sidebars describing specific carrier offerings—both the extensions and the standalone covers. Among those described are:
- AXA XL’s cyber insurance policy endorsement, announced last year, to cover Gen AI risks linked to clients’ own Gen AI initiatives, covering first- and third-party Gen AI risks, including data poisoning (attackers manipulating or contaminating the training data used to develop machine learning models), usage rights infringement (negligently failing to obtain appropriate permissions to use particular items or data), and regulatory violations.
- Munich Re’s AI performance coverage, aiSure, launched at the end of 2018, allowing AI providers and creators of home-grown AI tools to transfer the risk that machine learning models fail, perform below contracted levels, or fuel discrimination lawsuits. (Related articles: AI Insurance Takes a Step Toward Becoming a Market; Writing Cyber Is Key To Survival, Munich Re Exec Says)
- Performance guarantees from MGA Armilla Assurance. Armilla Assurance offers verification and assessment of AI model quality, and since late 2023 has offered performance warranty products backed by Swiss Re, Greenlight Re, and Chaucer that indemnify the performance of AI models.
- Liability coverage for AI startups from Vouch Insurance covering lawsuits resulting from product errors, allegations of algorithmic bias and discrimination, regulatory violations, and intellectual property disputes.
Additional providers are discussed in the report.
The Path Forward for Insurers
“The task for insurers now is to define clear risk boundaries and pilot modular coverage models that can adapt to this evolving technology,” said Ruo (Alex) Jia, Director Digital Technologies at the Geneva Association and lead author of the report, in a media statement, summarizing some of the authors’ recommendations for insurers going forward.
Noting that other breakthrough technologies (electricity, the Internet, and mobile phones) ”all faced uncertain pathways to insurability, with coverage evolving only as risks became clearer,” the report predicts that the development of Gen AI insurance will follow a similar trajectory to cyber insurance. As with cyber insurance, insurers will start cautiously and then expand as insight and confidence grow.
Related article: Is AI Risk Insurance the Next Cyber for Insurers?
“We urge insurers to actively engage with insuring Gen AI risks by starting with scenario modeling and piloting products, instead of waiting for perfect data,” the report says.
“This means introducing controlled policy extensions or trial products for AI risks and using these to gather experience. By starting small (as with cyber insurance) and iterating, underwriters can learn about loss patterns and client needs in real time.”
“Early engagement will allow insurers to scale up coverage intelligently as the Gen AI risk landscape matures.”
In an abbreviated research summary, the authors outline these additional steps for insurers to consider:
- Collaborate with AI developers, clients, and regulators to establish governance standards covering bias testing, output validation, data safeguards, and accountability. Shared standards and industrywide incident data will reduce uncertainty, clarify liability, and improve insurability.
- Promote risk mitigation and preparedness by pairing insurance with strong AI risk management. In addition to requiring safeguards such as human oversight, bias checks, cybersecurity controls, and contingency plans, insurers can also provide value-added services like AI risk audits, the report states.
The report findings, Ariss said, “underline the urgency for insurers, regulators, and technology providers to work together in developing frameworks that can safeguard businesses while enabling innovation to flourish across economies.”