When an industry realizes the rules have changed and no memo has been sent, a certain kind of silence descends upon it. If you’ve been paying attention, that silence has hung over the financial and technology sectors for months. Lawsuits have been filed. Regulatory clocks are running out. And somewhere inside OpenAI’s legal department, uncomfortable conversations about insurance are taking place.

The figures being floated are astounding. According to reports, OpenAI’s insurance coverage is far short of what a wave of multibillion-dollar claims would require. Even that coverage figure is contested internally, with one account putting the real amount well below what has been acknowledged.

| Information Category | Details |
| --- | --- |
| Central Legal Case | Mobley v. Workday (AI hiring discrimination class action) |
| Estimated Class Size | Hundreds of millions of applicants (since 2020) |
| Certification Date | May 2025 |
| Related Case | Garcia v. Character.AI (product liability for AI outputs) |
| Key Regulation | EU AI Act — high-risk provisions enforceable August 2026 |
| Maximum EU Penalty | €35 million or 7% of global annual turnover |
| U.S. Jobs at Risk (mid-range) | ~9.3 million |
| Top Vulnerable Roles | Writers, programmers, web designers, financial analysts |
| Projected City Income Losses | $20B+ annually in NYC, LA, SF, Chicago, Dallas, Boston |
| Insurance Gap | OpenAI exploring “self-insurance” and captive vehicles |
| Key Insurance Source | Aon (Kevin Kalinich, Head of Cyber Risk) |
| Reference Website | World Law Forum – WLF Litigation Summit |

The insurance industry just doesn’t have enough capacity for AI model providers, according to Kevin Kalinich, head of cyber risk at Aon.

The fact that a company the size of OpenAI is investigating “self-insurance”—basically putting investor funds in a ringfenced vehicle to cover its own risks—tells you something. It indicates that traditional financial safety nets were not designed to handle this.

Observing this from the outside, it’s hard to miss how much the legal environment changed without anyone announcing it. No single law or landmark decision served as the turning point. Instead there was a gradual accumulation: Mobley v. Workday, certified as a class action in May 2025, and Garcia v. Character.AI, where First Amendment protections for chatbot outputs were rejected. Courts across several nations are now posing the same awkward question: who really controlled this system, and can they prove it?

The case that ought to be keeping HR and finance executives up at night is Mobley v. Workday. The court ruled that Workday, as the provider of an AI-powered hiring platform, could be held accountable as an “agent” even though it had no direct employment relationship with the individuals its system excluded. The potential class includes hundreds of millions of job applicants turned down by automated systems since 2020. This is not a specialized technical disagreement.

For every business that used algorithmic hiring tools and believed the vendor absorbed all the risk, this is the start of a reckoning. Vendors don’t. Not anymore.

Garcia v. Character.AI followed, carrying a different kind of weight. A mother filed suit after her 14-year-old son died by suicide following prolonged interaction with a chatbot. The court’s decision to treat AI outputs as a “product” rather than protected speech undermined the tech industry’s most dependable legal defense; it was the kind of ruling that completely changes legal strategy. If AI outputs are products, they can be defective. And defective products carry strict liability.

The larger picture is an unplanned global convergence. Attorneys from Brazil, Turkey, the United Arab Emirates, Hong Kong, and the United States participated in a panel at the WLF Litigation Summit in Dubai last January. They continued to reach the same conclusions despite operating in drastically different legal systems. Brazil is incorporating AI liability into its traditions of constitutional rights. Human dignity frameworks are being applied to algorithmic rulings by Turkish courts.

Even the United Arab Emirates, a country known for its innovation-first governance, is seeing procurement disputes arise from smart city initiatives. The routes differ. The end result, enforceable accountability, is the same.

When the high-risk provisions of the EU AI Act become fully enforceable in August 2026, the change will be more than regulatory. It will be evidentiary. Businesses that use AI systems for tasks like credit scoring, hiring, and access to essential services must document human oversight, carry out impact assessments, and demonstrate accuracy.

A defendant with no paper trail enters a lawsuit with nothing. The “black box” defense, the silent presumption that algorithmic complexity provided its own shield, no longer works. Plaintiffs’ counsel already knows that compliance documentation is where discovery should focus.
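To make the “paper trail” point concrete, here is a minimal sketch of what a per-decision audit record might look like for an automated hiring system. This is a hypothetical schema, not anything prescribed by the EU AI Act or used by any vendor named above; the field names and structure are illustrative assumptions only.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class DecisionRecord:
    """One auditable entry for an automated decision (hypothetical schema)."""
    applicant_id: str
    model_version: str
    decision: str                          # e.g. "advance" or "reject"
    features_used: list                    # inputs the model actually saw
    human_reviewer: Optional[str] = None   # None = no human oversight occurred
    rationale: str = ""                    # reviewer's recorded reasoning
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # A stable, serialized form suitable for append-only audit storage.
        return json.dumps(asdict(self), sort_keys=True)

# A rejection logged with a named reviewer and an explicit rationale:
rec = DecisionRecord(
    applicant_id="A-1042",
    model_version="screening-v3.2",
    decision="reject",
    features_used=["years_experience", "skills_match_score"],
    human_reviewer="j.doe",
    rationale="Below threshold; reviewer confirmed features contain no protected-class proxy.",
)
print(rec.to_json())
```

The design point is the `human_reviewer` field: a `None` there is itself evidence, because it records that no human oversight took place for that decision, which is exactly what discovery would probe.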

Many financial institutions may still be underestimating what’s coming. According to Tufts University’s American AI Jobs Risk Index, 9.3 million American jobs could be significantly disrupted by AI in a mid-range scenario. The most vulnerable roles, including writers, programmers, web designers, and financial analysts, are concentrated in large financial centers.

New York, Los Angeles, San Francisco, Chicago, Dallas, and Boston are projected to lose more than $20 billion in income annually. These projections are not abstract. The people who develop and implement AI systems live and work in these cities.

The days of using AI tools and discreetly shifting responsibility to suppliers are over. Courts on three continents now operate on the same premise: if you use it, profit from it, and control how it is used, responsibility follows. Technology, it turns out, can automate many things. Responsibility is not one of them.
