The 'High-Variance' Problem

Why McKinsey's recent Agentic AI report validates the AgLabs vision for Specialty Insurance


After a year of explosive hype, the industry is getting a much-needed reality check on agentic AI. A recent, pragmatic report from McKinsey's AI division, QuantumBlack, cuts through the noise.

Titled "One year of agentic AI: Six lessons from the people doing the work," the report analyses over 50 agentic AI builds. It confirms what many are discovering: building valuable agents is hard work, and many companies are struggling to see real value.

But for us, the report was a powerful validation. It essentially provides a blueprint for exactly what we are building at AgLabs and why specialty insurance is the perfect fit.


Lesson 1: Agentic AI Shines in 'High-Variance' Workflows

The report's most crucial observation is about where to deploy agents.

They contrast "low-variance, high-standardisation" workflows (like regulatory disclosures) where agents can add complexity, with their ideal use case:

"By contrast, high-variance, low-standardisation workflows could benefit significantly from agents."

"High-variance, low-standardisation" is indeed the perfect description of the specialty insurance market.

Every risk is bespoke. Every broker's submission has different data formats. The entire placement process is a "long tail of highly variable inputs", managed through ad-hoc "ping-pong" and clarifications.

This is precisely the environment where rigid, rules-based automation fails, but where an agent that can reason, interpret, and clarify ambiguous data can thrive.

Lesson 2: It's About the Workflow, Not the Agent

The McKinsey report warns that many projects fail because they "focus too much on the agent or the agentic tool" instead of the entire business workflow.

This directly supports our core thesis: the value isn't "task automation" (like parsing a document), it's "transaction throughput" (automating the entire end-to-end placement flow).

McKinsey notes that in complex workflows (like claims handling), agents act as "the glue that unifies the workflow". This is exactly our vision for the A2A (Agent-to-Agent) interaction layer: a secure protocol where a Broker Agent and Underwriter Agent can autonomously negotiate, clarifying the details of a risk until it moves from "initial inquiry" to "all data collected and decision-ready".
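To make this concrete, here is a minimal sketch of what such an agent-to-agent negotiation loop might look like. The message fields, agent names, and the `negotiate` helper are illustrative assumptions for this post, not the actual AgLabs protocol:

```python
from dataclasses import dataclass

# Illustrative message type for a broker/underwriter exchange.
# Field names here are assumptions for this sketch, not the real A2A schema.
@dataclass
class A2AMessage:
    sender: str   # e.g. "broker_agent" or "underwriter_agent"
    intent: str   # e.g. "submit_risk", "request_clarification", "provide_data"
    body: dict

def negotiate(submission: dict, required_fields: list[str]) -> list[A2AMessage]:
    """Toy negotiation loop: the underwriter agent asks for each missing
    field until the submission is decision-ready."""
    transcript = [A2AMessage("broker_agent", "submit_risk", dict(submission))]
    known = dict(submission)
    for f in required_fields:
        if f not in known:
            transcript.append(
                A2AMessage("underwriter_agent", "request_clarification", {"field": f})
            )
            # In a real system the broker agent would resolve this from its
            # own data sources; here we stub in a placeholder answer.
            known[f] = f"<value for {f}>"
            transcript.append(A2AMessage("broker_agent", "provide_data", {f: known[f]}))
    transcript.append(A2AMessage("underwriter_agent", "decision_ready", known))
    return transcript

msgs = negotiate({"insured": "Acme Marine"}, ["insured", "limit", "territory"])
```

The point of the sketch is the shape of the flow, not the implementation: the risk moves from "initial inquiry" to "decision-ready" through explicit, structured clarification messages rather than email ping-pong.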

Lesson 3: Trust Requires Verification

The report highlights a common user complaint: non-sensical or low-quality outputs that destroy trust (a.k.a. "AI slop").

Their solution?

  1. Invest in evaluations: Treat agents like new employees who need training and "continual feedback".

  2. Verify every step: They warn that "many companies track only outcomes," which makes it impossible to find out "precisely what went wrong" when a mistake occurs.

This is why the AgLabs vision, in which agents interact through a fully transparent A2A layer, is fundamentally safer than a "black box" model.

The "continual feedback" isn't a separate, manual process — it's the conversation itself. The entire agent-to-agent negotiation is a "plain-text, auditable conversation". Every query, every response, and every clarification is logged. We don't just see the final outcome: we can "verify every step" of the reasoning, which is essential for compliance, audit, and building human trust.
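As a rough sketch of what "verify every step" could mean in practice, an append-only, plain-text audit trail for the agent conversation might look like this (the class and field names are hypothetical, chosen for illustration only):

```python
import json
import time

# Minimal sketch of an append-only, plain-text audit trail for an
# agent-to-agent conversation. All names here are illustrative assumptions.
class AuditLog:
    def __init__(self):
        self.entries: list[str] = []

    def record(self, sender: str, intent: str, payload: dict) -> None:
        # Each step is serialised as one JSON line, so the whole
        # negotiation can be replayed and inspected after the fact.
        self.entries.append(json.dumps({
            "ts": time.time(),
            "sender": sender,
            "intent": intent,
            "payload": payload,
        }))

    def replay(self) -> list[dict]:
        """Reconstruct every step for compliance, audit, or debugging."""
        return [json.loads(line) for line in self.entries]

log = AuditLog()
log.record("broker_agent", "submit_risk", {"insured": "Acme Marine"})
log.record("underwriter_agent", "request_clarification", {"field": "limit"})
steps = log.replay()
```

Because every message is a self-describing line of text, "what went wrong" at any step can be answered by reading the transcript rather than reverse-engineering a model's internals.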

Lesson 4: Underwriters and Brokers Remain Essential

Finally, the report confirms our "human-in-the-loop" philosophy. It states clearly: "humans will remain an essential part of the workforce equation".

The new roles for people are to "oversee model accuracy, ensure compliance, use judgment, and handle edge cases".

This is the very definition of the AgLabs model. We are not building a system to replace brokers or underwriters. We are building a system that handles the 80% of mundane discovery and data work, specifically so it can "surface only decision-ready outcomes" for them.

The agent does the admin; the human makes the judgment call.

The McKinsey report makes it clear: the path to value with agentic AI is to focus on complex, high-variance workflows; to design for the entire process; to build in auditability from day one; and to design for human-agent collaboration.

We couldn't agree more.