
AI Governance Newsletter

The risk we identified that keeps leadership up at night.


Hey Reader,

In the workshop, we identified 5 risks for the HireRight AI system. Risk #5, third-party accountability, is the one I want to go deeper on.

Because this risk is about to explode.

Here's what's happening:

Right now, the average enterprise uses 30-50 AI-embedded vendor tools. Salesforce Einstein. ServiceNow AI. Workday's AI recommendations. Microsoft Copilot. ChatGPT Enterprise. Zoom AI Companion.

Most of these were purchased through standard IT procurement. No AI-specific risk assessment. No AI clauses in the contract. No audit rights over the model. No change notification when the vendor updates the algorithm.

And under the EU AI Act, the DEPLOYER, not the vendor, is responsible for ensuring compliance.

Read that again: the deployer is responsible.

Your company buys an AI tool. The vendor's model discriminates. Your company gets the regulatory action, the lawsuit, and the headline. Not the vendor.

Air Canada learned this the hard way. The tribunal told them: "You deployed it. You're responsible."

This is why AI GRC practitioners who understand third-party AI risk are in such demand. Every organization with AI vendors needs someone who can:

→ Conduct AI-specific vendor due diligence
→ Negotiate AI clauses into contracts (audit rights, change notification, data restrictions)
→ Monitor vendor AI performance post-deployment
→ Build the escalation and exit framework

This is one of 16 sessions in the 8-week cohort program. Session 14 goes deep on third-party AI risk management, including a ready-to-use vendor questionnaire and 15+ contract clause templates.

But even without the program, here's something you can do this week: pick ONE AI vendor your organization uses. Ask yourself the 5 questions from the workshop:

  1. Do we have audit rights over their AI model?
  2. Are we notified when they update the model?
  3. Can they use our data to train models for other clients?
  4. Can we explain the AI's decisions to regulators if asked?
  5. Do we have an exit strategy if they can't meet our governance requirements?

If the answer to most of those is "no" or "I don't know," you just found your first AI governance quick win.
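If you want to repeat this exercise across several vendors, the five questions can become a lightweight tracking sheet. Here's a minimal Python sketch; the vendor name, the "3 or more gaps" threshold, and the rephrasing of question 3 (so that "yes" is always the good answer) are my own illustrative assumptions, not part of the workshop material:

```python
# Sketch: score one AI vendor against the five workshop questions.
# Question 3 is rephrased so "yes" is the desirable answer for every question.
QUESTIONS = [
    "Do we have audit rights over their AI model?",
    "Are we notified when they update the model?",
    "Is our data barred from training models for other clients?",
    "Can we explain the AI's decisions to regulators if asked?",
    "Do we have an exit strategy if they can't meet our governance requirements?",
]

def assess_vendor(name: str, answers: list[str]) -> dict:
    """answers: one of 'yes', 'no', 'unknown' per question, in order."""
    # Anything that is not a clear "yes" is a governance gap.
    gaps = [q for q, a in zip(QUESTIONS, answers) if a != "yes"]
    return {
        "vendor": name,
        "gaps": gaps,
        # "Most of those" unanswered -> a quick-win candidate (threshold is an assumption).
        "quick_win": len(gaps) >= 3,
    }

result = assess_vendor("ExampleVendor", ["no", "unknown", "yes", "no", "unknown"])
print(result["quick_win"])  # True: 4 of the 5 answers are not "yes"
```

Running it on one real vendor this week gives you a concrete gap list to bring to procurement or legal.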

More tomorrow.

— François


AI Governance Newsletter

Every week I break down the real-world frameworks, regulations, and strategies organizations are using to govern AI responsibly.
