I continue to prepare our May 22 webinar on AI tools for contract drafting and, in the process, I’m testing those tools.
I’ve seen a lot of impressive systems, and I remain optimistic about these tools. I think they’re going to make our work easier and free us to focus on the most interesting tasks. I also think they’ll eventually make legal services less expensive, helping to close some of the Western world’s justice gap.
However, these tools have serious problems. And it’s crucial that we understand them. In particular, we need to understand the types of errors they make. A great example came up in my testing.
Reproducing Typical Errors about Reciprocal Indemnities
I’ve been uploading a SaaS agreement on vendor paper and asking AI tools to review it on behalf of the customer. Just about every system red-flags the fact that the indemnity is not reciprocal or mutual. The customer indemnifies the vendor against third party claims but not vice versa. The AI systems recommend reversing that: the same indemnity protecting the customer. And often the AI drafts reciprocal language (sometimes badly, sometimes well). But there’s a problem.
Why would you ever want a reciprocal or “mutual” indemnity?
We don’t need reciprocal indemnities. We need indemnities addressing the key risks of third party lawsuits resulting from the deal. So if you’re the customer, you don’t need an indemnity against claims by your end-users just because the vendor has one. If you’re not worried about those claims, the indemnity doesn’t do much for you. Maybe you’re worried instead about third party IP or data claims. Or maybe you’re worried that the vendor’s subcontractors will sue you for compensation that the vendor didn’t pay. If so, you want indemnities against those claims, not a reciprocal indemnity about end-user claims.
But that’s not what the AIs recommend.
On the other hand, maybe you’re not worried about third party suits at all. If so, why waste your powder on indemnities? Just to be reciprocal? Instead, how about swapping the vendor’s requested indemnities for something of real value, like a lower price or a higher limit of liability?
Where AI Does Better
AI does better with more straightforward issue-spotting and redlining. Your AI has good odds of catching and fixing problematic termination for convenience terms, for instance – or an overly low or high limit of liability, or one party’s unilateral right to amend the contract.
The systems also catch and recommend fixes for “non-reciprocal” language where balance between the two parties does make sense. If the limit of liability protects the other party but not you, you probably want a reciprocal limit. And you can count on a lot of these systems to catch and fix that issue.
Subtle Issues
AI doesn’t do as well with subtle issues, which unfortunately litter most contracts. I chose the reciprocal indemnity for that reason – because it’s subtle: easily missed – but also because of its stark consequences: low-value terms, important terms left out, opportunities lost, and wasted negotiation capital. But the indemnity issue isn’t unique. Contract-drafting AIs rack up a lot of these more subtle errors.
The AIs get these issues wrong because a lot of contract-drafters get them wrong, including experienced lawyers. So their training data suggests we need reciprocal indemnities.
Even curated systems – AI with human “editing” – can get this wrong. If a lot of lawyers get it wrong, there are reasonable odds that the AI company’s human curators got it wrong too.
Unfortunately, subtle errors don’t necessarily have subtle costs. Choosing a reciprocal indemnity instead of one you need could cost orders of magnitude more than the deal’s value.
What’s the solution?
First, pick your AI vendor with care.
- Learn all you can about how it trained its AI and about the role of human experts.
- Confirm quality.
- We’ll talk more about that in our May 22 webinar on AI tools.
Second, learn the stuff yourself. So far, AI has not offered a substitute for your expertise.
Conflict of Interest
This article needs a conflict of interest statement because I have several biases – three in particular:
- As you’ve no doubt realized, I’m encouraging you to attend our webinar, and raising this issue helps.
- Through Tech Contracts Academy, I provide training, and that’s just what this article implies you need.
- I’m considering offering products and services – e.g., playbooks and related consulting – meant to solve this problem, and this article helps seed that market.
So take my views with a grain of salt. But please think it through.
Our webinar on this and related issues is The Best AI and Other Software for Contract Drafting – on May 22, 2025 at 10:00 a.m. PDT. It’s for everyone, but for California lawyers, it provides required credit under the new CLE subfield: Technology in the Practice of Law. (For other jurisdictions, we provide self-reporting resources for CLE.)
THIS ARTICLE IS NOT LEGAL ADVICE. IT IS GENERAL IN NATURE AND MAY NOT BE SUFFICIENT FOR A SPECIFIC CONTRACTUAL, TECHNOLOGICAL, OR LEGAL PROBLEM OR DISPUTE, AND IT IS NOT PROVIDED WITH ANY GUARANTEE, WARRANTY, OR REPRESENTATION. LEGAL SITUATIONS VARY, SO BEFORE ACTING ON ANY SUGGESTION IN THIS ARTICLE, YOU SHOULD CONSULT A QUALIFIED ATTORNEY REGARDING YOUR SPECIFIC MATTER OR NEED.
© 2025 Tech Contracts Academy, LLC