12 Contract Redlining Examples Every Security Team Should Get

Twelve real vendor clauses that show up in DPAs, MSAs, and security addenda — paired with the exact redline language to send back. Steal them, adapt them, save your team hours per contract.

April 20, 2026

4 min read


I watched a deal die over a single line of contract language last quarter.

A vendor had sent over a DPA. The breach notification clause read, word for word: "Vendor will use commercially reasonable efforts to notify Customer of security incidents promptly." The CISO flagged it. Asked for 48 hours. Vendor's lawyer pushed back. Three weeks of email tag, two escalations, one missed quarter-end. Deal walked.

Here's the painful part. That whole mess could have been avoided if the security team had a redlined version ready to send back the first time the contract crossed their desk. They didn't. So they negotiated from scratch. And lost.

That's what this guide fixes. Twelve specific contract clauses that show up over and over in DPAs, MSAs, and security addenda — paired with the exact redline language you can adapt. Save it. Steal it. Build your playbook around it.

A note on usage before we start

Every example below is starting language, not final language. Your industry, your data sensitivity, your customer commitments, and your appetite for friction will shape the version you actually send. A healthcare company holding PHI redlines harder than a marketing analytics startup. A vendor you've worked with for three years gets gentler treatment than one you've never heard of.

Treat these as a launching point. Adapt the rationale, keep the structure, change the specifics to fit your context.

One more thing. I've paraphrased every original vendor clause to protect everyone involved, but the patterns are real. If a clause below feels eerily familiar, that's because some version of it is sitting in your inbox right now.

1. The "commercially reasonable" breach notification

What the vendor sent:

Vendor shall use commercially reasonable efforts to notify Customer of any security incident promptly.

Why it's a problem: Promptly according to whom? Commercially reasonable for whom? This clause means whatever the vendor's lawyers want it to mean during an actual breach. Which is exactly when you need clarity, not flexibility.

GDPR Article 33 gives you 72 hours to notify a supervisory authority. If your vendor takes 30 days to tell you about an incident, you're in violation through no fault of your own. Regulators don't care that your vendor moved slowly. They care that you missed the window.

Send back:

Vendor shall notify Customer of any confirmed or reasonably suspected Security Incident within forty-eight (48) hours of discovery. Initial notification shall include (a) the nature of the incident, (b) the categories and approximate volume of Customer Data affected or potentially affected, (c) the steps Vendor is taking to investigate and remediate, and (d) a single named point of contact for follow-up.

Comment to leave in the margin:
"Aligning to GDPR Article 33 timing. Open to discussing notification format, but the 48-hour window from discovery is non-negotiable for our compliance posture."

2. Blanket subprocessor authorization

What the vendor sent:

Customer authorizes Vendor to engage third-party subprocessors as Vendor deems necessary to provide the Services.

Why it's a problem: You just gave the vendor permission to send your customer data to anyone they want, in any jurisdiction, with no notice. If they hire a subcontractor in a country that your customers explicitly prohibited, you find out when something goes wrong.

Send back:

Vendor shall maintain a current list of all subprocessors at [URL] and shall provide Customer with at least thirty (30) days' prior written notice (or notice via the listed URL) before engaging any new subprocessor. Customer may object to any new subprocessor for reasonable security, compliance, or data protection concerns within the notice period. If the objection cannot be resolved within thirty (30) days, Customer may terminate the affected Services without penalty and receive a pro-rata refund of any prepaid fees.

Comment: "Standard subprocessor governance language. We need notification + objection rights, but we'll work with you on which subprocessors trigger formal review vs. the published list."

3. Audit rights "at vendor's discretion"

What the vendor sent:

Vendor may, in its sole discretion and upon reasonable request, provide Customer with information regarding Vendor's security practices.

Why it's a problem: Translation: you have no audit rights. Whatever the vendor decides to share, you get. Whatever they don't, you don't. Try explaining that to your auditor during your next SOC 2 review.

Send back:

Upon Customer's reasonable written request (no more than once per twelve-month period), Vendor shall provide: (a) a copy of Vendor's most recent SOC 2 Type II report or equivalent; (b) a summary of Vendor's most recent independent penetration test results; (c) responses to Customer's reasonable security questionnaire; and (d) evidence of Vendor's continued compliance with the security and privacy commitments in this Agreement. For Security Incidents materially affecting Customer Data, Vendor shall additionally cooperate with Customer's reasonable forensic investigation requests.

Comment: "Once-per-year baseline access plus incident cooperation. We're not asking for unlimited audit rights — just enough to satisfy our own compliance obligations."

4. No data deletion certification at termination

What the vendor sent:

Upon termination of the Services, Vendor will delete or return Customer Data in accordance with its standard practices.

Why it's a problem: "Standard practices" is doing a lot of work in that sentence. Does it cover backups? Disaster recovery copies? Subprocessor-held data? You don't know. Worse, you'll never know — there's no certification requirement, so you have no proof anything got deleted.

Send back:

Within thirty (30) days following termination or expiration of the Services, Vendor shall, at Customer's option, delete or return all Customer Data in Vendor's possession or control, including all copies held by subprocessors and copies in backup, disaster recovery, and archival systems (subject to commercially reasonable timelines for backup expiration not to exceed ninety (90) days from termination). Vendor shall provide written certification of deletion within sixty (60) days of termination, signed by an officer of Vendor.

Comment: "Need explicit deletion timing including backups, plus signed certification. Happy to discuss the backup window — most vendors land at 90 days for technical reasons."

5. The general liability cap that swallows breach incidents

What the vendor sent:

Vendor's total aggregate liability under this Agreement shall not exceed the fees paid by Customer to Vendor in the twelve (12) months preceding the claim.

Why it's a problem: This is the clause that bankrupts you. If a vendor's negligence causes a data breach that triggers $4M in regulatory fines, $1M in notification costs, and $2M in customer churn, this clause caps their liability at maybe $50,000. Your problem now.

The general cap is fine for routine commercial disputes. It's a disaster for security incidents.

Send back:

Notwithstanding the foregoing, the limitation in this Section shall not apply to: (a) Vendor's breach of its confidentiality obligations; (b) Vendor's breach of its data protection and security obligations under the Data Processing Addendum or this Agreement; (c) Vendor's indemnification obligations; or (d) damages arising from a Security Incident caused by Vendor's negligence, willful misconduct, or breach of this Agreement. For the matters in clauses (b) and (d), Vendor's aggregate liability shall be capped at the greater of three times (3x) the fees paid by Customer in the twelve (12) months preceding the claim, or one million dollars ($1,000,000).

Comment: "We need a security-specific carve-out. The 3x / $1M floor is what we're seeing across enterprise deals — happy to share a few comparables if useful."

6. Missing encryption specification

What the vendor sent:

Vendor will use industry-standard encryption to protect Customer Data.

Why it's a problem: "Industry standard" is whatever the vendor says it is. In 2026, that should mean AES-256 at rest and TLS 1.2 or higher in transit. In practice, "industry standard" lets vendors run TLS 1.0 on a forgotten server and call it compliant.

If you handle regulated data and your vendor's contract doesn't name the encryption standards, your auditor will flag it. Save yourself the cycle.

Send back:

Vendor shall encrypt all Customer Data (a) at rest using AES-256 or an equivalent industry-recognized encryption standard, and (b) in transit using TLS 1.2 or higher. Vendor shall use industry-standard key management practices, including separation of encryption keys from encrypted data and regular key rotation. Vendor shall not store Customer Data in any unencrypted database, file system, or backup.

Comment: "Naming the specific standards so neither side is guessing during an audit."
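And if you'd rather verify the transit side yourself than take the vendor's word for it, the check is easy to script. Here's a minimal sketch in Python — the function names and the 5-second timeout are my choices for illustration, not anything from the clause above:

```python
import socket
import ssl


def strict_tls_context() -> ssl.SSLContext:
    """Build a client context that refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx


def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Handshake with the host and report the TLS version it negotiated.

    If the server only speaks TLS 1.0 or 1.1, the strict context
    aborts the handshake instead of silently downgrading.
    """
    with socket.create_connection((host, port), timeout=5) as sock:
        with strict_tls_context().wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

Point negotiated_tls_version at the vendor's API endpoint during diligence: a clean handshake tells you what they actually negotiate, and a handshake failure tells you they're below TLS 1.2 regardless of what the contract says.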

7. The vague "security incident" definition

What the vendor sent:

"Security Incident" means any unauthorized access to Customer Data that materially impacts Vendor's systems.

Why it's a problem: Two big problems. First, it requires unauthorized access to be confirmed before the clock starts. So a suspicious event that might be a breach doesn't trigger notification — only a confirmed one does. That's exactly backwards from how good incident response works.

Second, "materially impacts Vendor's systems" lets the vendor decide what counts. A breach that exposes your customer PII but doesn't touch the vendor's billing system? Not material to them. Definitely material to you.

Send back:

"Security Incident" means any actual or reasonably suspected: (a) unauthorized access to, acquisition of, use of, disclosure of, alteration of, loss of, or destruction of Customer Data; (b) compromise of the systems Vendor uses to Process Customer Data; or (c) violation of Vendor's security policies that affects or has the potential to affect Customer Data. A Security Incident shall be deemed to occur upon Vendor's discovery of facts giving rise to a reasonable suspicion, regardless of whether the incident is later confirmed.

Comment: "Aligning the definition to how our IR team actually works — suspicion triggers process, confirmation comes later."

8. Open-ended data retention

What the vendor sent:

Vendor may retain Customer Data for as long as necessary to provide the Services and for Vendor's legitimate business purposes.

Why it's a problem: "Legitimate business purposes" is a phrase that means everything and nothing. Marketing analytics? Fraud detection? AI model training? Future product development? They can all be argued under this language.

If you're handling PII subject to GDPR, CCPA, or any state-level privacy law, this clause violates your data minimization obligations.

Send back:

Vendor shall retain Customer Data only for the duration of the Services and for the limited additional period required to fulfill Vendor's documented legal obligations. Vendor shall not retain Customer Data for any secondary purpose, including but not limited to analytics, product development, marketing, or training of any artificial intelligence or machine learning model, without Customer's express prior written consent.

Comment: "Need to align this with our data minimization commitments to our own customers. The AI training restriction is non-negotiable."

9. The buried unilateral termination right

What the vendor sent:

Vendor reserves the right to suspend or terminate the Services at any time, in its sole discretion, with or without notice.

Why it's a problem: This usually lives near the end of the contract, somewhere in a section called "General." It's easy to miss. And it lets the vendor pull the rug whenever they want — which becomes a security and continuity issue, not just a commercial one.

If you've integrated this vendor into your stack and they shut you off without notice, that's an availability incident. Your customers feel it. So does your SLA.

Send back:

Vendor may terminate this Agreement for material breach by Customer only after providing Customer with at least thirty (30) days' written notice and opportunity to cure. Vendor may suspend the Services without notice only to the extent reasonably necessary to address an immediate security threat, and shall restore Services as soon as the threat is mitigated. Vendor shall not terminate or suspend the Services for any other reason without providing Customer with at least ninety (90) days' prior written notice.

Comment: "We've been burned before by silent shutdowns. Need a notice window we can plan around."

10. The unilateral terms-change clause

What the vendor sent:

Vendor may modify these terms at any time by posting an updated version on its website. Continued use of the Services constitutes acceptance.

Why it's a problem: You signed a contract for one set of terms. The vendor can change those terms whenever they want, and your only recourse is to stop using the service, which, after you've integrated it into your stack, is rarely a real option.

This is especially common in the AI vendor space right now. Terms change. AI training rights appear. Data sharing expands. By the time you notice, you've been operating under the new terms for months.

Send back:

Vendor may modify the terms of this Agreement only by mutual written agreement of the parties. Vendor may update its standard terms of service applicable to other customers, but such updates shall not apply to Customer unless Customer expressly agrees in writing. Vendor shall provide Customer with at least sixty (60) days' written notice before any change to security practices, data handling commitments, or subprocessor arrangements that materially affect Customer.

Comment: "Standard enterprise carve-out. We can't operate under terms that change without our agreement."

11. Missing GDPR / cross-border transfer mechanism

What the vendor sent:

Vendor may transfer and process Customer Data globally as required to provide the Services.

Why it's a problem: If your customer data leaves the EU and ends up in a country without an adequacy decision, you need a valid transfer mechanism — Standard Contractual Clauses, Binding Corporate Rules, or one of the approved alternatives. This clause acknowledges no mechanism, names no jurisdictions, and gives you nothing to point to during a regulatory inquiry.

If you want a deeper dive on the GDPR side, our piece on AI redlining for GDPR and IT compliance covers the full framework.

Send back:

Vendor shall process Customer Data only in the jurisdictions identified in Schedule [X]. Any transfer of Customer Data from the European Economic Area, United Kingdom, or Switzerland to a country not deemed to provide adequate protection shall be governed by the European Commission's Standard Contractual Clauses (Module Two: Controller-to-Processor) attached as Schedule [Y], or such alternative transfer mechanism as is approved by Customer in writing. Vendor shall provide Customer with at least sixty (60) days' notice before adding any new processing jurisdiction.

Comment: "Need explicit jurisdictions and SCCs in place. Happy to use the EU Commission's standard module — just need it referenced and attached."

12. AI / ML training rights on customer data (the 2026 sleeper)

What the vendor sent:

Vendor may use Customer Data, including queries, prompts, inputs, and outputs, to train, develop, and improve Vendor's products and services, including Vendor's artificial intelligence and machine learning models.

Why it's a problem: This is the clause that's quietly appearing in every AI vendor's standard terms in 2026. And if you accept it as written, you've just given the vendor the right to bake your customer data into a model that other customers — including potentially your competitors — will benefit from.

For most regulated industries and any organization handling sensitive customer data, this is a Red-tier clause. Treat it that way.

The trickier bit: this clause often shows up in the standard terms of service, not the negotiated DPA. So your DPA can look pristine while the underlying TOS quietly authorizes everything you thought you'd prevented. Always check both.

Send back:

Vendor shall not use Customer Data, including any queries, prompts, inputs, outputs, embeddings, or derivatives thereof, to train, fine-tune, or otherwise improve any artificial intelligence or machine learning model, whether such model is used by Vendor or made available to other customers, third parties, or the public. This restriction applies regardless of whether the data is anonymized, aggregated, or otherwise transformed. Any exception to this restriction requires Customer's express prior written consent on a per-use-case basis.

Comment: "Hard line. We have parallel commitments to our own customers that prohibit this. If your platform requires customer data for model training, let's talk about whether there's a path with explicit per-customer opt-in."

How to actually use these examples

Bookmarking this page isn't a strategy. Here's what to do instead.

Pull these into your playbook. Take the twelve patterns above, drop them into whatever document holds your team's redlining standards, and tag each one with a tier (Green / Yellow / Red, per your security team's framework). Most of the examples above land in Yellow or Red for security-conscious organizations. Set the threshold once, apply it consistently.
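To make the tiering concrete, here's one way a playbook could be structured so a script (or an AI reviewer) can flag clauses automatically. This is an illustrative sketch under my own assumptions — the field names, trigger phrases, and two sample entries are mine, not a standard schema:

```python
from dataclasses import dataclass


@dataclass
class PlaybookEntry:
    clause: str   # pattern name, e.g. "breach notification timing"
    tier: str     # "green" (accept), "yellow" (negotiate), "red" (hard line)
    trigger: str  # lowercase phrase that flags the clause during review
    redline: str  # fallback position to send back


PLAYBOOK = [
    PlaybookEntry(
        clause="breach notification timing",
        tier="red",
        trigger="commercially reasonable efforts to notify",
        redline="48-hour notification from discovery, per GDPR Art. 33",
    ),
    PlaybookEntry(
        clause="AI/ML training on customer data",
        tier="red",
        trigger="train, develop, and improve",
        redline="no training on Customer Data without per-use-case consent",
    ),
]


def flag(contract_text: str) -> list[PlaybookEntry]:
    """Return playbook entries whose trigger phrase appears in the contract."""
    lowered = contract_text.lower()
    return [entry for entry in PLAYBOOK if entry.trigger in lowered]
```

Naive substring matching like this misses paraphrased clauses — which is exactly the comparison work the AI-native tools below handle — but even this crude version catches the verbatim boilerplate that vendors reuse across templates.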

Make them findable. The playbook is useless if it lives in a Google Doc no one opens. The teams that move fastest have their playbook embedded in the tool they use to redline — meaning when a reviewer opens a contract, the Yellow and Red language is already flagging itself. That's the reason AI-native redlining tools are eating the manual workflow. Speed of retrieval matters as much as quality of position.

Track which clauses come back. Every quarter, look at which redlines vendors fight hardest. Those are your friction points. Sometimes the answer is to soften your position. Sometimes it's to harden your justification. Either way, you can't tune the playbook without the data.

The math on doing this manually vs. with AI

I'll close with the case that always lands hardest with finance teams.

Manually applying a 12-clause redline to a single DPA takes a security reviewer somewhere between 90 minutes and 3 hours, depending on document length and how many of the clauses need attention. Multiply that by the 30, 50, sometimes 100+ contracts a growing SaaS company processes monthly, and the math gets ugly fast.

The Augment Code security team — where Cyberbase co-founder Jon McLachlan is CISO — tested the alternative. They loaded their playbook into Cyberbase's Context Engine, ran 155 contracts through it over one period, and tracked the time savings.

Augment Code Case Study: Using Cyberbase

The result: 743 hours returned to higher-leverage security work. Not because the AI replaced anyone's judgment — it didn't — but because it eliminated the comparison work. The team still made every decision. They just stopped reading every line.

If you want help evaluating tools, we've published a buyer's guide to redlining software and a comparison of the 7 best options for security teams. Both worth reading before any procurement conversation.

And if you want the broader argument for why this work belongs to the security org in the first place — not legal — our piece on contract redlining as a security operations problem makes that case.

Otherwise, save this page. Steal the examples. Adapt them. And the next time a vendor sends you "commercially reasonable efforts," you'll have a redline ready before the email even hits your inbox.

Open your workspace and start redlining contracts in minutes — not weeks.

Frequently Asked Questions: Contract Redlining

What are common contract redlining examples for security teams?

The most common security-team redlines target vague breach notification language ('commercially reasonable' becomes a defined 48-hour window), blanket subprocessor authorization (becomes prior notification with objection rights), open-ended audit rights (becomes specific SOC 2 and pen test access), missing data deletion certification at termination, general liability caps that swallow security incidents, vague encryption language ('industry standard' becomes 'AES-256 at rest, TLS 1.2+ in transit'), and increasingly in 2026, vendor rights to use customer data for AI model training. Each of these has a standard redline pattern security teams can adapt to their playbook.

How do you redline a breach notification clause in a DPA?

Replace any vague timing language ('without undue delay,' 'commercially reasonable,' 'in a timely manner') with a hard 48-hour or 72-hour window measured from discovery, not confirmation. The redline should specify what the initial notification must include: nature of the incident, categories of customer data affected or potentially affected, remediation steps in progress, and a single point of contact. Anchor your position to GDPR Article 33's 72-hour supervisory authority window. Most vendors accept 72 hours without significant pushback; 48 hours requires more negotiation but is increasingly standard for enterprise deals.

What is an example redline for subprocessor clauses?

Vendor templates typically grant blanket authorization to engage subprocessors. The standard security-team redline requires (a) prior written notification of any new subprocessor at least 30 days before engagement, (b) maintenance of a current public subprocessor list, (c) the right to object to any new subprocessor for reasonable security or compliance concerns, and (d) the right to terminate the affected services without penalty if the objection cannot be resolved. This pattern is common enough that most enterprise-focused vendors expect it and have accepted versions ready.

Should AI training rights on customer data be redlined?

Yes, almost always. As of 2026, many vendor contracts include provisions allowing the vendor to use customer data, prompts, inputs, or interactions to train AI or machine learning models. For most regulated industries and any organization handling sensitive customer data, this is a non-negotiable redline. The standard redline strips the AI training right entirely or restricts it to fully anonymized, aggregated data with explicit opt-in consent. Security teams should treat this as a Red-tier clause in their playbook and check for it in every new vendor agreement, especially with AI-native vendors where it often appears in the standard terms of service rather than the negotiated DPA.

How long does it take to redline a contract using these examples?

Manually applying a 12-clause redline pattern to a single DPA takes a security reviewer between 90 minutes and 3 hours, depending on the document length and how many clauses need attention. Using AI-native redlining tools that have your playbook loaded — Cyberbase, for example — the same review drops to under 15 minutes of human time, with the AI handling the comparison and first-pass markup. The Augment Code security team reviewed 155 contracts and saved 743 hours using this approach, a 13:1 ROI compared to manual review.


Compliance shouldn't kill your pipeline

One workspace. Agentic AI. Trust center, DDQs, and contract redlining — done. Start free, see results this week.