A Fortune commentary by Thomson Reuters CPO David Wong captures a pivotal market moment — and carries a pointed warning for lawyers still treating AI as interchangeable. Our analysis, with key observations for legal practitioners.
By LawFuel Editors | March 2026
“The companies that understand this will win. The rest will eventually learn the hard way.” — David Wong, CPO, Thomson Reuters
Two announcements landed within hours of each other last week and, taken together, they crystallised something the legal profession has been circling for months without quite grasping: legal AI has formally split into two distinct categories, and treating them as interchangeable is increasingly a professional hazard.
Thomson Reuters announced that its AI platform CoCounsel had reached one million users across 107 countries. Almost simultaneously, Anthropic launched a suite of enterprise plugins for Claude Cowork, including a legal-specific tool capable of contract review, NDA triage, risk flagging, and compliance tracking.
The Fortune commentary by Thomson Reuters Chief Product Officer David Wong — bluntly titled “Legal AI is splitting in two — and most people miss the difference” — is, in effect, a market positioning document dressed up as analysis. But that doesn’t make its central argument wrong. And for lawyers, the distinction Wong draws is not academic.
The Wikipedia Moment That Started It All
A few weeks before these announcements, a screenshot went viral on LinkedIn: a general counsel had used Anthropic’s Claude for contract review, and the AI had pulled from Wikipedia. AI sceptics declared it proof that foundation models can’t handle legal work. AI optimists shrugged it off as a growing pain. Wong argues — persuasively — that both camps missed the point.
The issue was not Claude’s intelligence but the absence of authoritative legal infrastructure. Claude did what it was designed to do: draw on available sources. There was no law database, no curated regulatory content, no firm precedents for it to draw on.
The output reflected the inputs — or the lack of them. Wong calls this an architecture problem, not an intelligence problem, and he is right.
For lawyers, however, architecture problems have professional consequences. As LawFuel has reported, the hallucination epidemic in legal practice is accelerating, not slowing. Stanford researchers found that even purpose-built legal AI tools produce incorrect information more than 17% of the time.
With over 300 documented AI hallucination cases in court filings since 2023, and sanctions mounting in courts from New York to the UK, the stakes of getting this wrong are not theoretical.
The Two-Category Framework (And Why It Matters)
Wong’s core thesis is that legal work divides into two broad categories: work that requires authoritative legal sources, and work that doesn’t. The legal profession has long understood this intuitively, even if vendors have muddied the water.
| DIMENSION | AUTHORITATIVE AI (CoCounsel / Westlaw) | OPERATIONAL AI (Cowork / Harvey / Legora) |
|---|---|---|
| Primary Purpose | High-stakes research & citable work product | Workflow automation & document operations |
| Data Sources | Westlaw, Practical Law, statutes, case law | Internal docs, email, Drive, DocuSign |
| Risk Tolerance | Near-zero — output must be citable | Higher — errors caught downstream |
| Key Products | CoCounsel, Lexis+ AI | Claude Cowork, Harvey, Legora |
| Hallucination Risk | Reduced via retrieval-augmented generation (RAG) + editorial curation | Higher with general-purpose LLMs |
| Cost to Build Moat | Decades + billions (Westlaw) | Replicable; LLM access sufficient |
| Professional Liability | Designed for work attached to a lawyer’s signature | Not designed for authoritative work product |
| Competitive Position | Protected by content moat; durable | Exposed to foundation model competition |
Operational Legal Work (no authority required):
- Standardising document formatting
- Comparing contracts against internal playbooks
- Managing billing, timesheets, NDA triage
- Automating internal workflows and approvals
- Integrating emails, cloud storage, e-signatures
Authoritative Legal Work (requires curated law):
- Researching unsettled or novel legal questions
- Developing and validating legal arguments
- Cross-jurisdictional statutory analysis
- Producing work product that must be cited, audited, or defended in court
- Any output attached to a professional’s signature
This second category is where professional liability attaches. And it is where the Wikipedia problem — or any hallucination problem — becomes a career risk.
What Anthropic’s Cowork Plugin Actually Does
Anthropic’s legal plugin for Claude Cowork connects to Google Drive, Gmail, DocuSign, and other enterprise systems. It handles contract review, risk flagging, NDA management, and compliance tracking. It is, as Wong concedes, an extremely capable tool for operational legal work — and a direct competitive threat to vertical legal AI startups like Harvey and Legora.
As LawFuel reported when the plugin launched in late January, the market reaction was swift and brutal. Thomson Reuters shares fell more than 30% in the initial selloff as investors processed what it meant for specialised legal AI platforms. The LawFuel analysis at the time described it as the end of the “wrapper economy” — the model where startups built businesses by layering legal-specific prompts on top of OpenAI or Anthropic’s foundation models and charging firms premium subscription fees.
Wong’s subsequent framing is a deliberate repositioning of that narrative. Anthropic’s plugin, he argues, doesn’t threaten CoCounsel — it clarifies what CoCounsel is actually for. The 11% stock jump on the CoCounsel one-million-user announcement appears to validate that framing, at least in market terms.
The Uncomfortable Question for Harvey (and Legora)
Wong does not spare the legal AI startups. He states plainly that Harvey and Legora now sit in a strategically uncomfortable position: squeezed between Thomson Reuters and LexisNexis (incumbents with authoritative content moats built over decades), and Anthropic, which can now handle operational legal work at foundation model scale.
This is not a new concern. LawFuel’s coverage of Harvey’s $8 billion valuation noted that Harvey CEO Winston Weinberg himself had acknowledged the company’s biggest long-term competitor would not be other legal tech startups but OpenAI itself. Anthropic’s Cowork plugin makes that concern concrete. Harvey has responded in part by partnering with LexisNexis to add authoritative content — an acknowledgement that operational AI without authoritative infrastructure has limits. But it also reveals a dependency on the very incumbents whose moat Harvey cannot easily replicate.
The Moat That Took Billions to Build
Wong’s most pointed argument concerns the irreproducibility of Thomson Reuters’ content infrastructure. Westlaw’s database — encompassing millions of court decisions, statutes, and regulations curated by legal experts over 175 years — cannot be rebuilt through fine-tuning alone. Nor can Practical Law’s thousands of attorney-drafted practice notes.
Thomson Reuters has indicated it invests more than $200 million annually in productised AI and has approximately $11 billion in capital capacity through 2028. As we reported when CoCounsel Legal launched with agentic Deep Research capabilities, the system doesn’t just answer legal questions — it plans, reasons through them, and sources every answer from Westlaw and Practical Law content, with human oversight baked in. That’s a meaningfully different product from a general-purpose LLM given a legal-flavoured prompt.
The question every law firm CTO should now be asking: which category does the work I’m automating actually fall into?
Observations for Lawyers
The Fortune piece is written from Thomson Reuters’ perspective, and should be read as such. Wong is making a market argument that flatters his own platform. But the underlying framework is sound, and has practical implications for law firms and in-house legal teams evaluating their AI stack:
- The AI tool that drafts internal policies and the AI tool used for novel statutory analysis are not the same product category — even if they share a similar chat interface. Conflating them creates both professional risk and budget waste.
- Professional responsibility obligations do not flex for architecture problems. If a hallucinated citation ends up in a court filing, the question asked of disciplinary bodies will not be which vendor’s product was used — it will be why the lawyer signed the document without verifying it.
- The bar councils and law societies are moving. The UK Bar Council, the Law Society, and the Judicial Office have all issued guidance that effectively codifies AI literacy as baseline professional competence. “The machine may hallucinate, but the advocate must not” is now jurisprudence, not just a warning.
- General-purpose tools have a role — but that role is defined by the absence of authority requirements, not by their capability. Anthropic’s Cowork plugin is a legitimate and powerful tool for legal operations. It is not a substitute for Westlaw. Firms that have not yet thought through this distinction should start now.
The Market, Bifurcated
The legal AI market has arrived at a point of structural clarity that was absent twelve months ago. There are now two distinct product categories, each with its own competitive dynamics:
Operational AI — dominated by Anthropic’s Cowork at the foundation model level, with Harvey and Legora competing in the middle market. Consolidation likely. Margins under pressure.
Authoritative AI — dominated by Thomson Reuters (CoCounsel/Westlaw) and LexisNexis, with significant barriers to entry. Durable, high-value, defensible. The category where professional liability demands performance.
The firms that choose tools without understanding which category their work falls into will, as Wong puts it, learn the hard way. The legal profession has enough cautionary tales of AI-generated fictional citations to know that the hard way has real consequences.
Related LawFuel Coverage: