Artificial Intelligence Contracts: An Issue-Spotter – Part 3 of 3

This is the third and last in our series of posts about contracts for the purchase and sale of AI. See the first and second posts in the series for the earlier installments.

[Illustration: bridge-construction disaster resulting from bad artificial intelligence output. Caption: “Maybe that AI-driven engineering software wasn’t such a good idea …”]

D. Errors in Outputs

  1. Errors in Outputs – Provider Disclaimers: Software vendors often warrant or otherwise promise that their systems will work according to a set of specifications. In most cases, that promise addresses operation of the system, not its outputs or other results. The system could work but produce bad results because of customer error or bad data. AI providers draw the same line, and they have an extra incentive to do so. AI can “work” but still produce bad outputs because of problems with prompts and input data from the customer. And generative AI and other machine learning systems can produce bad outputs because of issues with training data, which could come from the customer, the provider, or both. So an AI provider should take two precautions. First, consider a broad disclaimer: “Except as specifically set forth in Section __ (Compliance with Specifications), CUSTOMER ACCEPTS ALL OUTPUTS ‘AS IS,’ WITH NO REPRESENTATION OR WARRANTY WHATSOEVER, EXPRESS OR IMPLIED.” (Also add the more detailed warranty disclaimer language recommended for all software deals, which you can find in our clause library.[mfn]Specifically, see Typical Warranty and Indemnity Disclaimers and Additional Disclaimers.[/mfn]) Second, review the AI’s specifications to make sure they don’t make promises about outputs – or at least that they don’t make inappropriate ones.
  2. Errors in Outputs – Customer Response/Concerns: The customer should consider pushing back against total disclaimers related to outputs. Ask questions that test what the provider can promise. What DO you promise about outputs, regardless of underlying weaknesses in the data? Based on the provider’s answer, consider creating or citing specifications that specifically address outputs, and use them for performance promises. “Provider warrants that Outputs will conform to the requirements of Section __ (Output Specifications).” Also, explore whether the provider can stand behind the quality or accuracy of its training data or input data and, as a result, behind the quality of outputs. Obviously, the provider can’t promise accurate outputs for a generative AI system trained on massive data drawn from the Internet, with all its errors and issues – or on data provided by the customer. But most AI draws on smaller datasets. In some cases, promises about output quality could work.

E. Liability Related to Third Parties: Indemnities and Warranties

  1. Third Party IP and Privacy in Outputs – Provider Disclaimers: Customers typically want IP warranties and indemnities covering their use of software. When those terms address the customer’s right to reproduce or use AI software, they don’t raise unusual concerns, though that doesn’t mean the provider will grant them. However, IP warranties and indemnities related to outputs, as opposed to use of software, often do raise unusual concerns. The same goes for indemnities against privacy suits related to outputs, as well as warranties that outputs won’t include personal information (PI). As discussed in bullet 18, outputs could rely on prompts and input data from the customer. And machine learning outputs could rely on training data from the Internet, third parties, or again the customer. Any of those sources could import content subject to third party IP or privacy rights. So in those cases, the provider should avoid IP and privacy warranties and indemnities related to outputs. In fact, it should consider all-caps disclaimers like those in bullet 18. And consider adding more specifics: “PROVIDER DOES NOT REPRESENT OR WARRANT THAT OUTPUTS WILL BE FREE OF CONTENT THAT INFRINGES THIRD PARTY RIGHTS, INCLUDING WITHOUT LIMITATION PRIVACY AND INTELLECTUAL PROPERTY RIGHTS.” The provider’s argument: This risk is inherent in use of our type of AI, and the parties should share it. Some providers, in fact, go further. They grant no IP or privacy indemnities whatsoever, even related to use of their software.[mfn]Don’t fall into the trap of treating indemnities as remedies for infringement or wrongdoing. They’re not. See The Tech Contracts Handbook, Ch. II.L. We also cover this topic in The Tech Contracts Master Class™ and in our Oct. 12, 2023 webinar, Key Liability Terms in Contracts about AI, the Cloud, and other Software: Warranty, Indemnity, Limit of Liability, and More.[/mfn]
  [Illustration: robot sharing vicious office gossip. Caption: “You’ll never guess what I heard last night about Jordan and a certain someone from accounting.”]

  2. Third Party IP and Privacy in Outputs – Customer Response re Sources: Before accepting the provider’s arguments above in bullet 20, the customer should ask a question. If outputs could reproduce data from third parties, are those third parties the provider’s suppliers? If so, can’t the provider take responsibility for that data? That gives the customer an argument for IP and privacy indemnities and warranties covering outputs. However, the provider still might not offer those terms if the system also relies on customer training data or other input data. In generative AI and some other systems, the parties would never know which side’s data led to the output. So they’d never know whether the warranty or indemnity applies. (That’s a recipe for a lawsuit.)

  3. Third Party IP and Privacy in Outputs – Customer Response re Use of Data: Again before accepting the arguments in bullet 20, the customer should ask: Do the system’s outputs actually reproduce training data or other information from the customer, third parties, or the Internet? Or does that information just guide creation of outputs, without actually appearing within them? If so, the customer again has an argument for IP/privacy warranties and indemnities related to outputs. But think this through before making that argument. The provider might request the same warranties and indemnities from you, the customer – in this case about IP and privacy related to customer-provided prompts and data. See bullet 24 below.
  4. Defamation, Discrimination, and Similar Torts by Outputs: Outputs could harm third parties in other ways. Generative AI outputs sometimes defame third parties. And much AI can produce outputs that encourage ethnic, gender, religious, or other discrimination. (AI-guided hiring systems, for instance, have discriminated on the basis of race and gender.) So the customer should seek warranties against those errors, as well as indemnities against resulting third party lawsuits. However, those terms raise the same set of issues as third party IP and privacy warranties and indemnities, discussed above. So the provider should resist those requests. See bullets 20 to 22 above.
  5. Third Party Rights in Customer-Provided Data – Provider Concerns: A cloud service customer could upload infringing, private, defamatory, or otherwise harmful content to the provider’s computers. If so, the (innocent) provider might face liability for hosting or publicizing customer content. AI providers face those risks too, of course, when they provide their software via the cloud. So the provider should seek warranties and indemnities related to prompts, customer input data, and customer training data. “Customer warrants that: (a) it has and will collect Customer Data in compliance with all applicable laws, including without limitation laws on intellectual property, privacy, and disclosure of personal information; and (b) it has and will obtain such intellectual property licenses and other consents as are required by applicable law for Provider to access and use Customer Data as authorized by this Agreement.” This request, of course, reverses all the issues discussed above in bullets 20 and 23, with the customer arguing that it can’t be responsible for certain data.

F. Security, Privacy, and Responsible Use

  [Illustration: giant AI-driven robot looms over city. Caption: “Don’t worry; I have only the best of intentions.”]

  1. Security and Privacy Terms in General: AI raises the same security concerns as other software, along with a few unique issues. For instance, some AI systems access unusually large datasets. So they risk high-impact data breaches. Also, AI processes are often invisible and impossible to reconstruct. So it’s hard to know whether the system has been tampered with or whether it has misused personal information (PI). These problems don’t come with easy solutions, and I have little to offer, except awareness. The parties should start by identifying system vulnerabilities. From there, the customer should ask for security-related specifications, as well as warranties and indemnities related to data breach. In some cases, the provider should ask for those same terms from the customer – particularly where the provider hosts the AI and makes it available through the cloud. The provider might request terms requiring the customer to protect its own computers, assuming they can access the AI. The provider might also request promises that the customer has protected the security of input data and training data. Of course, even as it requests those provisions from the other party, each party should consider also protecting itself through the opposite set of terms: security-related disclaimers. “PROVIDER DOES NOT REPRESENT OR WARRANT THAT THE SYSTEM WILL BE FREE FROM THIRD PARTY INTERFERENCE OR OTHERWISE SECURE.”

  2. Personal Information in Prompts and Other Customer Data – DPAs, SCCs, BAAs, etc.: If the customer gives personal information (PI) to the provider, privacy law may require that the parties execute a data protection addendum (DPA) or some other set of data terms. For instance, if customer prompts, training data, or input data include “protected health information,” then HIPAA[mfn]The U.S. Health Insurance Portability and Accountability Act.[/mfn] requires that the parties execute a business associate agreement (BAA). If the customer’s data includes European PI, then GDPR[mfn]The EU’s General Data Protection Regulation.[/mfn] may require execution of “standard contractual clauses” (SCCs) before the data moves to the U.S. or certain other jurisdictions. Various U.S. state laws may require special terms too.[mfn]See, for instance, the California Consumer Privacy Act (CCPA) and the Virginia Consumer Data Protection Act (VCDPA).[/mfn] In most cases, the law imposes these obligations on the customer. It’s the data controller (probably), and it’s required to get the necessary contract promises from the data processor: the provider.[mfn]CCPA uses “business” and “service provider” instead of “controller” and “processor.”[/mfn] But some privacy laws impose contracting rules directly on the processor. In that case, the provider violates the law if it doesn’t sign onto the required contract terms. Check the law(s) in question.[mfn]This post addresses contract terms required by privacy law, not broader compliance questions. But don’t forget that your compliance obligations probably don’t end with signing the right terms.[/mfn]
  3. Personal Information in Outputs – DPAs, SCCs, BAAs, etc.: Personal information could flow in the other direction, from provider to customer. Could outputs include PI from the provider’s training data or input data? If so, privacy law may consider the provider the data controller and may require that it secure a DPA or other contract terms from the customer. See the example laws and issues discussed above in bullet 26.
  4. Requirements for Responsible Use – AUPs and Other Conduct Terms: Because AI can do so much harm, providers should consider codes of conduct for their customers. Often, a typical acceptable use policy (AUP) will do the trick. The sample AUP here at TechContracts.com, for instance, forbids harassment, defamation, violation of privacy and IP rights, hacking, and fraud. But the provider should also consider terms specific to its form of AI. A machine learning provider, for instance, might add the following to its AUP: “Do not use the System: (a) to reverse engineer AI outputs in order to generate underlying information, including without limitation training data (model inversion attacks); or (b) to generate, transmit, or otherwise manage fake or intentionally misleading training data for any AI system.” And the provider might go further: “If outputs generated by the System include material that would violate the AUP, do not distribute or publicize those outputs, and do not use them in any way that could cause harm to a third party.” (That last clause raises many of the issues discussed above under parts D and E.) For its part, the customer should consider demanding similar codes of conduct governing future use of prompts and customer training data, assuming the system uses that information to serve other customers. “Provider warrants that it will not authorize use of a Further-Trained Model (as defined below) by a third party that does not first agree in writing to conduct restrictions consistent with those of Attachment __ (AUP). (‘Further-Trained Model’ means any artificial intelligence software trained on Prompts or Training Data provided by Customer pursuant to this Agreement.)”

© 2023 by Tech Contracts Academy, LLC. All rights reserved.

Illustrations created by the author using generative AI from NightCafe.
