Applied AI

AI Licensing Tip – Sculpting Confidentiality Protections for AI Transactions

Written by Vorys | Dec 23, 2025 2:39:02 PM

On Applied AI, we will regularly share AI licensing tips from Craig Auge, a partner in the Vorys Columbus office and a member of the firm’s technology and commercial transactions practice.  This tip focuses on confidential information and associated provisions.

Traditional non-disclosure and non-use prohibitions pertaining to “Confidential Information,” along with their exceptions and exclusions, require sculpting and extension in AI-related agreements.

Approached conceptually, the key questions are: (i) how is “Confidential Information” defined; (ii) what are the restrictions and the scope of permitted use; and (iii) most importantly, what are the exceptions or exclusions?

As in SaaS agreements, a Provider will want an exception to the non-use prohibition, making clear it can “use” Customer’s (or a third-party supplier’s) Confidential Information to provide the AI service or tool to Customer. But in an AI context, better practice also includes:

  • expanding on what “use” means or cross-referencing a robust “ingestion license” that grants the right to: “receive, access, use, reproduce, store, display, process, analyze, de-identify, combine with other data, integrate into other technology-related assets, create data sets, distribute, and create derivative works of training data (if any) and prompts (collectively, Input).” The foregoing includes rights beyond those addressed in the U.S. Copyright Act.

Also as with SaaS or similar hosting arrangements, Customer will want to except out Customer’s exercise of rights and licenses granted to use the tool or service from the broad restrictions on Provider’s Confidential Information. But in an AI context, best practices also include:

  • intentionally addressing the applicability of common exceptions to non-disclosure and non-use obligations, such as the exclusions for information rightfully received from a third party (e.g., input from one of Provider’s other customers that is similar to Customer’s Input) or independently developed information (e.g., output for others that nearly replicates Output to Customer).

Customers may:

  • instinctively ask that all forms of Input they provide and all Output they receive be treated as Customer “Confidential Information”; indeed, preserving trade secret protection (e.g., in a prompt) requires such efforts to protect it; and
  • ask for special treatment of personal information or other regulated data; and
  • expressly prohibit Provider from using Input or Output to further train the underlying model or AI tool; and
  • prohibit Provider personnel from “human review” of prompts or Output, except for security or legal compliance purposes.

But Providers may:

  • soften the literal equivalence of Input and Output as Customer Confidential Information by adding: “Customer’s Inputs and Outputs may be similar or identical to content inputted by or generated for others (such as other customers) or even publicly available”; and
  • except out or otherwise gain broader rights to use Customer’s Input and Output, if anonymized (or pseudonymized) or de-identified, for maintaining or improving the AI services; and
  • untangle definitions of Input and Output for more specific confidentiality treatment, such as:
    • Provider’s third-party supplier training data being (as between the parties) Provider’s Confidential Information or not being either party’s Confidential Information, and
    • Customer’s enhanced training data and prompts being Customer’s Confidential Information, and
    • Output being bifurcated such that only the derivative portion is Customer’s Confidential Information (but not any Input elements contained in it).

Thoughtfully applied in an AI context, these contractual provisions buttress positions on ownership and licensing and complement security and regulatory commitments.

By: Craig Auge