AI and the legal services ecosystem

We wanted smarter legal tech, but instead got an expensive dependency.

The legal industry poured billions into artificial intelligence with a seductive promise: faster reviews, leaner operations, sharper insights. What it got, increasingly, looks like the same old work wearing a new interface, and a steeper invoice to match.

That observation stings because it should not be surprising. Across the enterprise technology landscape, the gap between what AI vendors promised and what organisations have actually received is widening into a chasm that even the most optimistic chief technology officers cannot ignore. Forrester’s 2026 predictions put it bluntly: enterprises will defer 25 per cent of their planned AI spending into 2027, as financial rigour catches up with the hype. Only 15 per cent of AI decision-makers reported any measurable lift to their organisation’s EBITDA over the preceding 12 months, according to the same research. Fewer than one in three could tie AI’s value to changes on their profit-and-loss statement.

PwC’s 29th Global CEO Survey, released in January 2026, delivered an even starker verdict. Fifty-six per cent of CEOs worldwide, across 4,454 respondents in 95 countries, said their companies had realised neither revenue gains nor cost reductions from AI investments. Just one in eight reported achieving both. PwC Global Chairman Mohamed Kande attributed the shortfall to organisations chasing AI deployment while neglecting foundational work: data infrastructure, process redesign, and governance frameworks. The unsexy plumbing that determines whether any technology actually delivers results.

The legal industry sits squarely inside this reckoning. According to the 2026 Report on the State of the US Legal Market from Thomson Reuters and Georgetown Law’s Center on Ethics and the Legal Profession, law firms increased technology spending by 9.7 per cent and knowledge management spending by 10.5 per cent—growth rates the report described as likely the fastest the legal industry has ever experienced. Firms scrambled to deploy generative AI capabilities while simultaneously managing a 2.5 per cent increase in billable hours. The money flowed. Whether the returns followed is a different question entirely.

Here is the uncomfortable math. While Clio’s data shows AI adoption among legal professionals surged from 19 per cent to 79 per cent between 2023 and 2024, the 2025 Legal Trends Report revealed that the figure had flatlined at 79 per cent, a plateau signalling that the transition from adoption to productive implementation has stalled against the friction of legacy billing models and outdated data infrastructure. Meanwhile, the share of legal professionals using legal-specific AI tools actually dropped from 58 per cent to 40 per cent, suggesting much of the industry’s AI activity involves general-purpose tools rather than purpose-built legal technology. Axiom’s 2026 In-House Legal Budgeting Survey, conducted by The Harris Poll across 530 senior legal decision-makers, found that 78 per cent of legal departments have been directed to implement AI without dedicated funding, an unfunded mandate that undermines the careful integration AI requires.

And even where AI is deployed, only six per cent of law firms pass efficiency gains to clients through reduced fees, while 34 per cent actually charge premium rates for AI-enhanced work, according to Axiom’s separate research on general counsel. A joint survey from the Association of Corporate Counsel and Everlaw, drawing on 657 in-house professionals across 30 countries, sharpened the picture: nearly 60 per cent of in-house counsel reported no noticeable savings from their outside counsel’s use of AI. Fifty-eight per cent pointed to a deeper structural issue: law firms have not adjusted their pricing to reflect generative AI-driven efficiencies.

This is the legal industry’s AI paradox. Firms deploy technology capable of completing in minutes what once took hours of associate time—and then try to bill for it by the hour anyway. Everlaw’s 2025 eDiscovery Innovation Report found that nearly half of legal professionals reclaim one to five hours per week through generative AI—time savings that, at the top of that range, compound to more than 30 working days a year. Yet 90 per cent of respondents in the same survey said that AI has either already altered conventional billing practices or will do so within two years, an acknowledgment that the billing model has not kept pace with the technology. Ninety per cent of legal spending still flows through standard hourly rate arrangements, according to the Georgetown report, creating a structural tension so acute that the report itself called it “almost absurd”. The efficiency gains exist in a vacuum. They accrue to firm profitability, not to the clients who ultimately fund the technology through rising rates.
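That 30-day figure holds only at the top of Everlaw’s range, which is easy to verify. A minimal back-of-envelope sketch, assuming a 48-week working year and eight-hour days (neither assumption comes from the report):

```python
# Back-of-envelope check on the reclaimed-time figure cited above.
# Assumptions (not from the Everlaw report): 48 working weeks per year,
# eight-hour working days.
WEEKS_PER_YEAR = 48
HOURS_PER_DAY = 8

for hours_per_week in (1, 5):  # Everlaw's reported range
    hours_per_year = hours_per_week * WEEKS_PER_YEAR
    working_days = hours_per_year / HOURS_PER_DAY
    print(f"{hours_per_week} h/week -> {hours_per_year} h/year "
          f"= {working_days:.0f} working days")

# 1 h/week -> 48 h/year = 6 working days
# 5 h/week -> 240 h/year = 30 working days
```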

Where the gains are real, and where they aren’t

The eDiscovery sector illustrates both the promise and the trap. Per-document AI review costs have dropped to between 0.11 and 0.50 US dollars, down from the 1.50 to 3.00 US dollars that human reviewers commanded as recently as two years ago, according to eDiscovery industry pricing surveys. Relativity reported that its aiR product line—adopted by hundreds of customers across over 2,000 projects, with over 190 million review decisions as of early 2026—has delivered time savings of 50 to 70 per cent in certain review and data breach response workflows. In October 2025, Relativity announced it would fold its aiR for Review and aiR for Privilege generative AI tools into the standard RelativityOne package starting in early 2026, a move that effectively commoditises a capability vendors have been pricing as premium.
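To see what that pricing shift means at matter scale, here is an illustrative calculation for a hypothetical one-million-document review, using the per-document ranges cited above (the corpus size is an assumption, not a figure from the pricing surveys):

```python
# Illustrative review-cost comparison at the per-document rates cited above.
# The one-million-document corpus is hypothetical, chosen only for scale.
DOCS = 1_000_000

human_rate = (1.50, 3.00)  # USD per document, human review (circa two years ago)
ai_rate = (0.11, 0.50)     # USD per document, AI-assisted review (current)

human_cost = [rate * DOCS for rate in human_rate]
ai_cost = [rate * DOCS for rate in ai_rate]

print(f"Human review: ${human_cost[0]:,.0f} to ${human_cost[1]:,.0f}")
print(f"AI review:    ${ai_cost[0]:,.0f} to ${ai_cost[1]:,.0f}")

# Human review: $1,500,000 to $3,000,000
# AI review:    $110,000 to $500,000
```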

Those are real gains in specific, well-defined tasks, and an important distinction applies here. Technology-assisted review and continuous active learning have over a decade of case law validation and measurable performance data behind them. Courts have accepted TAR methodologies since Judge Andrew Peck’s landmark ruling in Da Silva Moore v. Publicis Groupe in 2012, and subsequent decisions have reinforced their defensibility. Nobody disputes that mature, well-understood AI-assisted review can reduce the volume of documents requiring human eyes by 80 to 90 per cent when properly deployed. The global eDiscovery market already exceeds 15 billion US dollars and is forecast to grow at eight to 11 per cent annually through 2032, driven in large part by AI-enabled review and analytics.

The ROI challenge is sharper for the newer generative AI capabilities now being layered on top of those established workflows—summarisation, privilege detection, document drafting, and case strategy extraction. These tools are 18 months into enterprise deployment, not a decade. Exception handling, quality control, and contract structures around generative AI services in eDiscovery remain underdeveloped by the industry’s own admission. The question hanging over every AI-accelerated review is whether the cost savings are being reinvested in human quality control or simply pocketed—with oversight reduced in the name of efficiency. When the technology makes errors at scale, the consequences compound at scale too. A missed privileged document in a review of millions carries the same risk it always did; the only thing that changed is how fast the mistake was made.

The verification tax nobody measures

Beyond eDiscovery, the productivity claims that vendors attach to generative AI tools across industries deserve serious scrutiny—and the pattern they reveal has direct implications for legal work. A randomised controlled study by METR, a nonprofit AI research organisation, published in mid-2025, recruited 16 experienced open-source software developers and randomly assigned 246 real coding tasks to be completed with or without AI tools. The developers using AI—primarily Cursor Pro with Claude 3.5 and 3.7 Sonnet—actually completed their tasks 19 per cent slower than those working without it. The perception gap was jarring: those same developers estimated before starting that AI would make them 24 per cent faster, and still believed they were 20 per cent faster after completing the tasks. They felt productive. The stopwatch said otherwise.

The METR study measured software engineering, not legal drafting—and those are different disciplines with different complexity profiles. But the underlying dynamic it exposed is domain-agnostic: when professionals trust AI output without fully verifying it, they feel faster while actually losing time to the hidden costs of correction. Workday’s January 2026 research confirmed this across a broader workforce, finding that 37 per cent of time supposedly saved by AI gets consumed by reviewing, correcting, and verifying AI-generated output. Only 14 per cent of employees consistently achieved clear, positive net outcomes from their AI use.

In legal work, where precision carries professional liability, those verification costs are likely higher, not lower. Consider an illustrative scenario: a mid-size litigation firm that invested six figures in a generative AI drafting platform last year. An associate uses it to produce a motion to compel in 30 minutes instead of three hours. The partner reviewing it spends 90 minutes verifying every citation, checking for hallucinated case law, and rewriting passages that sound confident but say nothing. The net time saving is 60 minutes, and that assumes the verification catches every error. No published study has yet measured this net-of-verification cost in legal practice specifically, which is itself part of the problem. The firms absorbing these verification costs rarely quantify them, which means the productivity metrics they report to clients and in industry surveys systematically overstate AI’s net contribution.
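The accounting in that scenario is worth laying out, since it generalises to any verification-heavy workflow. A minimal sketch using the scenario’s numbers (all of them hypothetical):

```python
# Net-of-verification time accounting for the illustrative motion-to-compel
# scenario above. Every figure is hypothetical, not measured data.
baseline_draft_min = 180   # associate drafting without AI (three hours)
ai_draft_min = 30          # associate drafting with the AI platform
verification_min = 90      # partner verifying citations and rewriting

gross_saving = baseline_draft_min - ai_draft_min   # 150 minutes
net_saving = gross_saving - verification_min       # 60 minutes

print(f"Gross saving: {gross_saving} min")
print(f"Net saving:   {net_saving} min "
      f"({net_saving / gross_saving:.0%} of the gross figure survives)")

# Gross saving: 150 min
# Net saving:   60 min (40% of the gross figure survives)
```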

The gap between perceived and actual productivity is not a minor inconvenience. It distorts investment decisions. When a firm’s leadership believes AI is saving 20 per cent of associate time but the actual saving—net of verification—is closer to five per cent, the return on their six-figure AI investment looks very different.
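How different? A rough sketch of the distortion under stated assumptions, where the headcount, billable hours, internal cost per hour, and platform spend are all illustrative rather than drawn from any survey above:

```python
# How the perception gap distorts a firm's AI business case.
# All inputs are illustrative assumptions, not figures from the text's sources.
associates = 50          # assumed headcount
hours_each = 1_800       # assumed annual billable hours per associate
cost_per_hour = 150.0    # assumed fully loaded internal cost, USD
ai_spend = 250_000.0     # assumed annual platform and integration cost, USD

def net_annual_return(saving_rate: float) -> float:
    saved_hours = associates * hours_each * saving_rate
    return saved_hours * cost_per_hour - ai_spend

for rate in (0.20, 0.05):  # perceived saving vs net-of-verification saving
    print(f"{rate:.0%} time saving -> net return ${net_annual_return(rate):,.0f}")

# 20% time saving -> net return $2,450,000
# 5% time saving -> net return $425,000
```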

The information governance blind spot

Information governance professionals face a variant of the same problem, one that rarely makes trade publication headlines. Vendors have aggressively marketed AI-powered records classification, automated retention scheduling, and defensible disposition workflows. The pitch is compelling: let machine learning sort through decades of accumulated data, classify it according to retention policies, and flag what can be deleted.

In practice, training these models on organisation-specific retention schedules—which vary by jurisdiction, by record type, and by regulatory framework—remains a labour-intensive and error-prone process. An AI system that confidently classifies a document as eligible for disposition when it should have been held under a litigation hold creates a spoliation risk that no efficiency gain can offset. The audit trail requirements for defensible disposition mean that every AI classification decision must be traceable, explainable, and reviewable—adding layers of governance overhead that partially negate the time savings the technology was supposed to deliver.
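To make that overhead concrete, here is a minimal sketch of the kind of audit record a defensible-disposition workflow would need to retain for every AI classification decision. The field names are illustrative, not drawn from any standard or product:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class DispositionAuditRecord:
    """Illustrative audit-trail entry for one AI classification decision.

    Hypothetical fields; actual requirements vary by jurisdiction,
    record type, and the organisation's retention schedule.
    """
    document_id: str
    model_version: str              # which model and prompt produced the call
    predicted_category: str         # retention category the model assigned
    confidence: float               # model confidence score
    retention_rule_id: str          # policy clause the decision maps to
    litigation_hold_checked: bool   # hold status verified at decision time
    human_reviewer: Optional[str]   # who reviewed the call, if anyone
    rationale: str                  # explanation retained for defensibility
    decided_at: datetime
```

Every one of those fields represents work: capture, storage, and review processes that did not exist before the AI did the classifying.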

The problem compounds for organisations operating across multiple jurisdictions. A multinational’s retention policy might touch GDPR’s right to erasure, US state privacy laws, SEC record-keeping requirements, and industry-specific regulations simultaneously. Training an AI model to navigate those overlapping obligations reliably—and documenting that it did so correctly—is a compliance challenge that vendors’ marketing materials consistently understate.

The regulatory cost nobody budgeted for

Compounding the investment challenge, a wave of regulatory requirements around AI in legal practice is adding compliance costs that most firms did not factor into their AI budgets. The American Bar Association’s Formal Opinion 512 established a national baseline requiring lawyers to verify all AI-generated legal citations before filing. California’s State Bar issued practical guidance mandating that attorneys understand large language model limitations (including hallucination risks and data privacy exposure) before deploying them. The New York State Bar Association’s AI Task Force produced a phased roadmap for secure AI adoption that creates ongoing compliance obligations.

The judiciary is charting its own uneven course. Since the Mata v. Avianca sanctions in 2023—where attorneys were fined for submitting ChatGPT-hallucinated case citations—federal and state judges have issued hundreds of standing orders governing AI use in court filings, with no uniform standard. Some require disclosure of which AI tool was used and where; others demand certification that a human verified every citation; still others impose no requirements at all. Federal judges are themselves experimenting with AI in their chambers—even as the rules they impose on practitioners vary courtroom to courtroom. For litigants, the patchwork means that AI-assisted work product acceptable in one jurisdiction may trigger sanctions in the next, adding yet another compliance variable to the cost of deployment.

Those domestic requirements arrive alongside the EU AI Act, which becomes fully applicable on August 2, 2026. AI systems used in legal contexts—particularly those involved in access to justice or interpretation of law—plausibly face classification as high-risk under the Act, though how regulators will apply those categories to specific legal technology tools is still being interpreted through guidance documents and early enforcement decisions. Where a tool does fall into a high-risk category, the obligations are substantial: risk management frameworks, conformity assessments, technical documentation, and registration in the EU database. Industry estimates from early compliance analyses place implementation costs for high-risk AI systems at 2 million to 15 million US dollars, depending on organisational size—figures the regulation itself does not prescribe but that reflect the operational burden of meeting its requirements. Penalties for non-compliance, by contrast, are statutory: up to 35 million euros or seven per cent of global turnover.

For legal technology buyers already struggling to demonstrate ROI from their AI investments, these regulatory costs represent a new line item that further erodes the business case. A firm that deployed generative AI tools in 2024, expecting quick efficiency gains, now faces the prospect of spending additional resources to ensure those same tools meet evolving ethical and regulatory standards—before they have recouped the original investment.

The broader reckoning

Gartner added its own sobering projection in June 2025: over 40 per cent of agentic AI projects across all industries will be cancelled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls. The research firm estimated that only about 130 of the thousands of vendors marketing agentic AI capabilities are building genuine agent technology. The rest are engaged in what Gartner called “agent washing”—rebranding existing automation products, chatbots, and robotic process automation tools with an agentic label. The legal technology market, already fragmented and prone to buzzword adoption, is especially vulnerable to this dynamic.

At the enterprise level, the numbers reinforce the pattern. Forty-two per cent of companies abandoned most of their AI initiatives in 2025, up from 17 per cent the prior year, according to S&P Global’s Voice of the Enterprise survey. A report from MIT’s NANDA initiative, “The GenAI Divide,” found that roughly 95 per cent of AI pilots delivered no measurable impact on profit-and-loss statements, though critics note the study’s six-month evaluation window may undercount longer-horizon returns. A Deloitte survey of director-to-C-suite leaders found 66 per cent claiming productivity gains from AI, but only 20 per cent reporting revenue growth—a gap that suggests much of the reported productivity either does not translate to financial results or gets absorbed by the cost of the AI infrastructure itself.

None of this means AI in legal technology is worthless. It means the industry has been measuring the wrong things, or measuring the right things in ways that flatter the technology rather than testing it. A parallel from outside the legal world is instructive. Estonia built one of the most celebrated digital government platforms on earth—a national e-state initiative that became a case study in public-sector technology, drawing delegations from dozens of countries eager to replicate its model. But in an April 2026 opinion piece for ERR News, the English-language service of Estonian Public Broadcasting, journalist Nils Niitra argued that the programme’s real legacy was an expensive dependency: IT spending and government staffing both grew rather than shrank, and the promised leaner, cheaper state never materialised. Estonia’s digital investment created what Niitra described as a new layer of bureaucratic fat atop the old one—the paper folder became a digital folder, the stamp became a digital stamp, the queue became a portal, but nothing substantive changed.

Legal technology risks the same trajectory. A keyword-and-filter document review becomes an AI-intensive document review. A template-driven contract analysis becomes an AI-assisted contract analysis. A records clerk’s classification judgment becomes an algorithm’s classification judgment. The vocabulary changes; the underlying workflow stays remarkably similar. And layered on top are new costs—licensing fees, integration expenses, training hours, quality assurance processes for AI outputs, regulatory compliance overhead, and the specialised staff required to manage and prompt the systems. The old process has not been replaced. It has been supplemented at a premium.

General counsel offices are noticing—and the structural inertia is becoming harder to defend. Axiom’s 2026 GC Survey of 516 senior in-house legal leaders across eight countries found that 61 per cent continue sending work to law firms out of habit rather than strategic choice, even as 80 per cent plan to move certain firm work in-house or to alternative providers within 24 months. When 94 per cent of in-house leaders express interest in alternative legal service models that combine flexible talent with vetted AI tools—as Axiom’s research found—that is not enthusiasm for technology. That is a market signal from buyers who feel they are paying for someone else’s AI experiment.

The path forward requires a level of honesty the industry has so far resisted. Firms and legal technology vendors need to separate measurable, repeatable productivity gains from the warm glow of novelty. They need to publish net time savings—accounting for verification, correction, and oversight—rather than gross figures that ignore the human labour still required downstream. They need to address the billing model contradiction head-on rather than pocketing efficiency gains while raising rates. And they need to factor regulatory compliance costs into ROI calculations from the outset, not as an afterthought when the ethics opinion or the enforcement notice arrives.

For eDiscovery professionals, information governance specialists, and cybersecurity teams who increasingly intersect with legal workflows, the stakes extend beyond billable hours. An AI-driven review that sacrifices thoroughness for speed creates data governance risks. Privilege calls made by algorithms without adequate human oversight expose organisations to waiver arguments. Automated classification systems that have not been validated against jurisdiction-specific requirements generate compliance liabilities that may take years to surface. And AI-powered disposition workflows that lack defensible audit trails turn a records management tool into a spoliation time bomb.

The technology itself is not the problem. The problem is an industry that adopted AI with the enthusiasm of a convert but the rigour of a bystander—spending freely, measuring loosely, and deferring the hard question of whether any of it makes the practice of law better, cheaper, or more accessible for the people who ultimately pay for it.

If the legal industry cannot answer that question with data rather than anecdotes, then what it has built is not innovation. It is an expensive dependency dressed in a smarter interface—and the invoice, as always, lands on someone else’s desk.

What would it take for your organisation to require net productivity metrics—verification costs included—before renewing a single AI contract?

Read the complete article We Wanted Smarter Legal Tech, but Instead Got an Expensive Dependency.


Photo: Dreamstime.