While March Madness spotlighted college basketball champions, another bracket was catching the attention of the insurance world. Verisk, a global leader in data analytics, recently wrapped up its “emerging issues” bracket competition, highlighting topics expected to impact the property and casualty insurance sector in the years ahead.

During a webinar on June 10, Verisk unveiled the final rankings of this year’s key concerns. Despite stiff competition from issues like climate change, infrastructure, and microplastics, artificial intelligence (AI) and generative AI (Gen AI) once again emerged as the top concern among industry professionals. As advancements in AI continue and new laws begin to take shape, the risks—such as hallucinations and unpredictable behaviors—are becoming more apparent.
Legislative Push Grows with AI Advancements
The rapid evolution of AI technologies has spurred legislative responses across the U.S., as lawmakers scramble to regulate tools and systems that remain poorly understood.
Laura Panesso, Associate VP of Government Relations at Verisk, noted during the webinar that the pace of proposed legislation has been “remarkably quick.” Although many states introduced AI-related bills in 2024, only a few have successfully passed them into law.
“This trend shows that regulators are trying to catch up with a technology advancing faster than we can keep pace with,” Panesso explained.
The National Association of Insurance Commissioners (NAIC) has issued guidance to help insurers govern AI use in business decisions, encouraging transparency, oversight, and testing to detect bias and discrimination. So far, 24 jurisdictions have adopted this guidance, while states like California, Colorado, and New York have gone further by enacting their own AI-specific rules.
With no overarching federal regulation for AI, states are forging ahead individually. As of now, 40 states have introduced or enacted AI-related laws, ranging from exploratory study commissions to detailed governance requirements.
Key concerns include:
- Requirements for AI system deployers
- Rights over training data and outputs
- Algorithmic discrimination and pricing fairness
Generative AI, in particular, has drawn attention for issues such as deepfake content and non-consensual image creation. States are beginning to define how Gen AI tools can legally be used.
For instance, Utah’s Senate Bill 26, passed in the most recent session, regulates the use of generative AI in customer-facing settings. It mandates transparency when Gen AI is used, sets liability standards for deceptive practices, and offers safe harbors for companies that disclose AI use upfront.
Edge Cases & AI Hallucinations Raise Red Flags
AI is powerful and innovative—but it’s far from flawless. Greg Scoblete, a principal on Verisk’s emerging issues team, highlighted two notable vulnerabilities: edge cases and hallucinations in generative AI systems.
Edge cases are rare or unusual scenarios for which an AI model has seen little or no training data. These outliers can expose serious weaknesses in AI behavior, especially in life-critical applications like autonomous vehicles.
Scoblete cited real-world examples:
- In the UK, a high-end car with adaptive cruise control accelerated to over 100 mph in a 30-mph zone after misreading a road marking.
- In the U.S., another vehicle with similar tech collided with the top of an overturned truck—a shape its AI wasn’t trained to recognize as an obstacle.
“These edge cases illustrate a major difference between human judgment and AI,” Scoblete said. “Humans instinctively recognize danger, even in unfamiliar situations. AI, on the other hand, can make critical errors because it lacks that intuition.”
Alongside edge cases, generative AI hallucinations (confidently delivered but factually incorrect outputs) are another concern. Because Gen AI systems generate content by predicting plausible language rather than retrieving verified facts, they can produce fluent answers that are simply wrong.
This has real consequences: more than 120 legal filings have included mistakes caused by generative AI tools. Stanford researchers have found that AI-generated legal content often contains factual or logical inaccuracies.
Scoblete warned, “If lawyers—who must be accurate and precise—are facing these problems, we must assume other industries under pressure to improve efficiency with AI could face similar risks.”
He also noted that at least 11 product liability lawsuits tied to generative AI have been filed, per George Washington University data. A key question now is whether existing product liability frameworks—designed for physical goods—can or should apply to virtual AI products.
“The injury caused by AI is sometimes intangible,” he said. “But as AI becomes embedded in physical products like cars, its ability to cause real harm—both physical and financial—will only grow.”
Looking Ahead
Verisk’s emerging issues bracket made it clear: AI isn’t just a trend—it’s a transformative force that’s raising complex questions for regulators, insurers, and tech developers alike.
With rapid technological advancement outpacing legal and ethical standards, the industry is now grappling with how to responsibly integrate AI while safeguarding against its most unpredictable risks. As the landscape continues to evolve, AI’s place at the top of Verisk’s list seems well-deserved—and far from temporary.

