Restricting Vendor Use of AI in Client Engagements

When a client engages a third-party vendor, they extend trust: trust that their data will be protected, that qualified humans are doing the work, and that the vendor operates within the same legal and ethical boundaries as the client. AI introduces a layer of opacity that can quietly undermine all of that. Most clients will not learn that AI was involved until something goes wrong, and by then the damage, whether legal, reputational, or financial, may already be done.
AI-generated content has an uncertain copyright status in many jurisdictions. If the vendor secretly uses AI, the client may pay for work they ultimately do not own. Contract language stating that the vendor warrants that all deliverables are human-authored and that full intellectual property ownership vests in the client, free of any third-party AI platform claims, is the floor.
This is not legal advice, but these are issues that everyone who engages vendors should be discussing with their legal counsel.
Why Clients Restrict Vendor Use of AI
There are six common reasons clients want to restrict or prohibit vendors from using AI on their projects. Each is more substantive than a checkbox on a procurement form.
Data Privacy and Security
When a vendor pastes client information into a public AI tool, that data may be used to train the underlying model, retained indefinitely on third-party servers, surfaced in a competitor’s output, or breached in an incident the client never hears about. For regulated data, including protected health information under HIPAA, personal data under the GDPR, student records under FERPA, and attorney-client privileged communications, a single prompt can constitute a reportable breach.
Accuracy and Accountability
Generative AI produces output that sounds confident but can be biased or outright wrong. Citations are invented, statistics are fabricated, and edge cases are quietly smoothed over. When a credentialed human professional makes a mistake, there is a license, a malpractice policy, and a chain of accountability. When an AI hallucination makes it into a deliverable, accountability is murky, and the client is often left holding the bag.
Vendor Substitution Without Consent
Clients hire specific firms for named experts, methodologies, and track records. If the vendor quietly routes the work through a large language model, the client is not getting what they paid for. They are paying senior consultant rates for output generated by a twenty-dollar monthly subscription.
Regulatory and Compliance Risk
AI use is increasingly regulated, and the regulatory landscape is moving fast. The EU AI Act, Colorado's Artificial Intelligence Act (SB 24-205), NYC Local Law 144 on automated employment decision tools, sector-specific guidance from federal regulators, and state attorney general enforcement priorities all create exposure that may flow back to the client even if the AI use was the vendor's decision.
Bias and Reputational Risk
AI models reflect the biases in their training data. Outputs used in hiring, lending, healthcare triage, content moderation, or accessibility evaluation can produce discriminatory results that violate civil rights laws and cause reputational harm long after the underlying decision is made.
Lack of Transparency and Auditability
Most commercial AI tools do not produce reliable logs of prompts, outputs, or edits made before delivery. Without that record, the client cannot verify what was generated by AI, defend the deliverable in litigation, or remediate effectively if a problem surfaces later.
Can You Block a Vendor From Using AI?
With the right contract clauses, yes. The remainder of this article walks through the provisions that, taken together, give a client meaningful control.
Pre-Contract Disclosure
The strongest position is set before the signature. RFPs and bid documents should require respondents to disclose, in writing, every AI tool they intend to use in performance of the contract, the specific tasks for which the tools will be used, the data classifications that may be exposed to those tools, and the data residency and training data policies of each platform. This places vendors on a level playing field, lets the client compare proposals on a like-for-like basis, and prevents the awkward conversation at execution where the lowest bidder turns out to have priced the work assuming heavy AI use.
A related point on pricing: a true human-only deliverable will cost more than one produced with AI assistance. Clients who require AI restrictions should expect higher quotes and budget accordingly. Pretending otherwise creates pressure on the vendor to cheat.
Defining AI in the Contract
Definitions matter. Courts and arbitrators will look for clear ones. The contract should define AI to cover generative AI, large language models, machine learning models, neural networks, automated decision systems, and AI-assisted coding tools. It should also distinguish clearly among the three categories of use:
Prohibited uses: generative drafting of deliverables, code generation, data analysis where outputs flow into the deliverable, summarization of meetings or documents, and any task that produces text, code, images, or analysis that becomes part of the work product.
Permitted assistive features: spell-check, traditional grammar correction in non-generative form, basic search, and calculator and unit-conversion functions.
Approval-required uses: anything that falls between the two, including AI features now embedded in tools the vendor already uses.
The Main Prohibition
The core prohibition will look something like:
Vendor shall not use, deploy, or incorporate any artificial intelligence, machine learning, large language models, generative AI, or automated decision-making tools in the performance of services under this Agreement, including but not limited to tools such as ChatGPT, GitHub Copilot, Claude, Gemini, or similar technologies, without the prior written consent of Client. This prohibition applies to all phases of the engagement, including discovery, analysis, drafting, review, and delivery.
Sample Carve-Out Language
AI is everywhere. Grammarly's generative features, Adobe's generative fill, IDE autocomplete, AI companions, note-takers, meeting summarizers, and AI-enhanced search results inside otherwise familiar tools all sit in the gray zone. Locally hosted or self-hosted models present yet another set of facts, with different data exposure but similar accountability questions. The contract should give the client a clear approval mechanism for these edge cases rather than pretending a one-line prohibition resolves them. Vendors may request that these items be carved out as allowable AI use. A workable carve-out reads something like this:
Notwithstanding the foregoing prohibition, Vendor may use the following Permitted Assistive Tools in performance of services: spell-check and grammar correction features that operate locally and do not transmit Client Confidential Information to third parties; basic search engines used solely for general reference; and calculator and unit conversion utilities. Any AI feature embedded in a tool used by Vendor that is not listed above is prohibited unless and until specifically approved in writing by Client. The fact that an AI feature is enabled by default in a tool does not constitute Client approval.
Flow-Down to Subcontractors and Personnel
The restriction needs to flow down to every subcontractor, freelancer, and offshore resource the vendor uses. If the contract has a key personnel clause, it should require named personnel to certify their own non-use of AI at signing and at each delivery milestone. Flow-down language should look something like this:
Use of Subcontractors. Vendor shall ensure that all subcontractors, consultants, and third parties engaged in performance of this Agreement are bound by the same restrictions and shall provide Client, on request, with copies of the relevant contractual provisions.
Data Security, Confidentiality, and Training Data
This is where the practical harm usually occurs. Most clients focus on the prohibition itself but forget that the real exposure lies in a vendor feeding sensitive data into a third-party AI platform.
The confidentiality clause should explicitly prohibit entry of client data, deliverables, or work product into any AI or third-party machine learning platform, regardless of whether the platform is being used to produce the deliverable. It should also separately and explicitly prohibit the use of client data, deliverables, or work product as training data for any AI model by the vendor, the AI provider, or any downstream entity. Even AI tools that promise not to retain inputs may use them to fine-tune or evaluate models. The contract should require the vendor to obtain written assurances from any AI provider it uses that client data will not be used for training, evaluation, or improvement of any model, and to make those assurances available to the client on request.
Insurance and Cyber Liability Coverage
Most vendor errors and omissions and cyber liability policies were written before generative AI became commonplace and may exclude or fail to cover AI-related incidents. The contract should require the vendor to confirm in writing that its insurance covers unauthorized AI use, hallucination-related errors, and intellectual property claims arising from AI training data, and to name the client as an additional insured. Certificates of insurance should be delivered annually and on request, and the vendor should be required to give the client notice of any policy change that would affect AI-related coverage.
Records Retention and Evidence Preservation
Audit rights are only useful if the records still exist. The contract should require the vendor to preserve prompts, outputs, API logs, browser history relevant to the engagement, tool licenses, and software inventories for the duration of the engagement and for a defined period afterward, commonly the limitation period for the governing law, plus a buffer. In extreme cases, companies may ask for proof that documents were created without using large copy-and-paste blocks. On notice of a dispute or potential litigation, the vendor must suspend any deletion or rotation that would destroy responsive records.
Audit Rights
The audit clause should specifically reference the right to inspect tools, software licenses, browser history, API logs, prompt and output histories, and workflows to verify compliance, not just financial records.
Right to Audit. Client reserves the right, on reasonable notice, to audit Vendor's processes, tools, software inventories, prompt and output logs, and workflows to verify compliance with this section. Vendor shall cooperate with such audits and provide access to records, systems, and personnel as reasonably required.
Representations, Warranties, and Disclosure
A one-time warranty at signing is not enough. The vendor should warrant at signing and at each delivery milestone that no AI tools were used in producing the deliverables, or that only those approved in a written carve-out were used. The contract should also impose an affirmative disclosure obligation:
Disclosure Obligation. Vendor shall promptly disclose to Client any use of AI tools, whether intentional or inadvertent, upon discovery, and shall include the date, scope, deliverables affected, data exposed, and remediation steps taken or proposed.
Notice, Cure, Look-Back, and Termination
Unauthorized AI use should be a termination-for-cause trigger, not just a general breach. Because the harm from secret AI use is often already complete by the time it surfaces, the right to cure should be expressly waived for AI-related breaches. The contract should also include a look-back provision allowing the client to require the vendor, on request, to recertify that prior deliverables were AI-free, and to bear the cost of any human re-performance of work found to have been produced with unauthorized AI.
Liquidated Damages
Actual damages from secret AI use are notoriously difficult to prove. How does a client value the loss of human authorship, the unquantified data exposure, or the reputational or intellectual property risk of an AI-generated deliverable? Liquidated damages provisions tied to per-incident or per-deliverable amounts give the client a meaningful remedy without the burden of proving precise harm. Liquidated damages should be calibrated to be enforceable as a reasonable estimate, not punitive, under applicable law.
Indemnification and Intellectual Property Allocation
The vendor should indemnify the client against third-party claims arising from AI use, including intellectual property infringement claims arising from content scraped into AI training data, errors introduced by AI that make it into deliverables, and regulatory claims arising from AI use that violates applicable law. The vendor should also warrant that no deliverable contains AI-generated content and assume full liability for any IP infringement, hallucination-related errors, or regulatory violations arising from unauthorized AI use.
Governing Law and Dispute Resolution
AI-specific regulation varies significantly by jurisdiction. The EU AI Act imposes obligations on providers and deployers, including transparency and risk management requirements. Colorado's Artificial Intelligence Act, signed in 2024, regulates high-risk AI systems used in consequential decisions. NYC Local Law 144 governs automated employment decision tools and requires bias audits. California has enacted multiple AI laws covering disclosure, training data transparency, deepfakes, and generative AI in employment. The governing law and venue clauses should be intentional, choosing a jurisdiction favorable to the client's enforcement position and accounting for the regulatory regime that will apply to a dispute.
Survival
Confidentiality, indemnification, audit rights, records retention, the AI non-use warranty, and the look-back provision should all survive termination for a defined period. Without explicit survival language, these protections may evaporate at the moment they are most needed, which is typically after the engagement ends and a problem emerges in a deliverable.
Conclusion
AI is not going away. Many vendors will use it whether the contract permits it or not, unless the contract makes the consequences of unauthorized use unambiguous. The clauses described above (definitions, prohibitions, carve-outs, pre-contract disclosure, ongoing disclosure obligations, audit rights, insurance requirements, records retention, indemnification, liquidated damages, survival, and explicit termination triggers) work together. Any one of them, in isolation, can be argued about. Together, they create a framework in which the vendor's incentives align with the client's expectations.
Three practical next steps for clients:
Review existing vendor contracts at the next renewal cycle and add AI provisions to the master services agreement, the data processing agreement, and any statements of work.
Update RFP and bid templates to require AI disclosure before contract award, and train procurement staff to evaluate the disclosures received.
Consult outside counsel on jurisdiction-specific provisions, particularly when operating in regulated industries or in states with active AI legislation.
The goal is not to prevent vendors from ever touching AI. It is to ensure that any use of AI is disclosed, approved, scoped, logged, insured, and accountable. Trust, but verify, and put the verification in writing.