---
name: ai-governance-and-responsible-use
description: "Guides marketers on how to adopt AI tools responsibly, govern their use, and build organizational capability — trigger when a user asks about AI rollout strategy, team enablement, data privacy, ethical use, or building an AI-ready culture."
version: "2026-04-20"
episode_count: 14
---

# AI Governance and Responsible Use

## Overview
This skill covers how B2B marketing teams should adopt, govern, and scale AI tools responsibly — including rollout strategy, data privacy, ethical disclosure, team enablement, and organizational culture change. All practices are sourced exclusively from Exit Five podcast guests across 14 episodes. Do not supplement with general knowledge not represented here.

---

## Adoption Strategy and Phasing

When helping a user plan an AI rollout, structure the journey in phases rather than attempting full transformation at once.

**Phase 1 — Enablement and Learning:**
- Meet team members where they are. Provide budget for tools, allocate dedicated learning and development time, and run show-and-tells and hack sessions. (Source: Erin May, Episode #337)
- Accept that not everything will be perfect initially. Prioritize progress over perfection during the testing phase. Ensure all tools pass privacy, safety, and regulatory requirements before deployment. (Source: Davang Shah, Episode #338)
- Focus Phase 1 on productivity gains and efficiency: identify repeatable, process-oriented work and measure success by hours reclaimed per day. (Source: Bill Glenn, Episode #328)

**Phase 2 — Acceleration and Transformation:**
- Once foundational skills are in place, accelerate expectations and move faster. (Source: Erin May, Episode #337)
- In Phase 2, use AI as a true copilot for decision-making: extracting customer insights, leveraging machine learning to augment decisions, and identifying market trends. Phase 1 becomes table stakes; Phase 2 is where competitive advantage emerges. (Source: Bill Glenn, Episode #328)

**Framing AI adoption as behavior change, not a tools problem:**
- Recognize that the primary barrier to AI adoption is not tool availability but organizational behavior change. Humans adopt technology logarithmically while AI advances exponentially. Address this through leadership modeling, psychological safety, and change management — not tool procurement. (Source: Bill Glenn, Episode #328)

> **Note:** How to drive adoption — top-down mandates vs. bottom-up peer evangelism — is actively contested among guests. See **Where Experts Disagree** before recommending a single approach.

---

## Leadership Alignment and Governance Structures

**Executive alignment before company-wide rollout:**
- Before rolling out AI adoption across the company, align the executive leadership team on approach, guardrails, and policy. Assign one executive leadership member as an AI coordinator (not sole owner) to collect departmental use cases and best practices, but require each functional leader to develop their own AI strategy. Meet collectively to avoid siloed AI strategies. (Source: Bill Glenn, Episode #328)
- Position the Chief People Officer as a co-leader with the CEO on AI adoption strategy. This ensures rollout is treated as a change management and human behavior problem, not just a technology problem. (Source: Bill Glenn, Episode #328)

**Formal committee and champion structures:**
- Create a formal AI committee and designate AI champions in each department (marketing, sales, product, etc.). Champions lead discussions about how AI could improve workflows, test new tools, and share learnings across the organization. This distributes responsibility beyond the CMO and creates a culture of experimentation. (Source: Jennifer Delevante-Moulen, Episode #288)

**Dedicated AI champion role:**
- Rather than assigning AI adoption as an additional responsibility on top of existing jobs (not sustainable past the pilot phase), create a dedicated full-time role focused on AI strategy, adoption, and training across the marketing organization. This person should have authority to set guidelines, train teams, evangelize use cases, and measure impact. The role sits at the intersection of marketing operations, content strategy, and organizational change management. (Source: Jessica Hreha, Episode #136)

**Presenting AI strategy to leadership:**
- When advocating for AI adoption to a hesitant manager or leadership, don't ask for permission — present a formal strategy. Include: the business case, why this is a trend, specific areas in the business that are holding you back, 3+ concrete ways to use AI, implementation plan, cost, measurement approach, and expected outcomes. This positions you as a strategic leader rather than someone asking for a favor. (Source: Dave Gerhardt, Episode #279)

> **Note:** The degree to which top-down governance structures should drive adoption (vs. organic bottom-up evangelism) is contested. See **Where Experts Disagree**.

---

## Team Enablement and Culture Building

**Embedding AI into organizational rhythms:**
- Require all team members to set at least one annual goal related to AI, tailored to their function. Goals can range from adopting a specific tool or use case, to exploring and learning, to building capacity or solving a problem with AI. This signals that AI adoption is a company priority and ensures accountability. (Source: Jennifer Cannizzaro, Episode #267)
- Embed AI adoption into the organizational culture at multiple levels simultaneously: individual annual goals, team meeting spotlights, one-on-one conversations, small group problem-solving sessions, and cross-functional initiatives. This creates redundancy and reinforcement so AI adoption becomes part of how the team operates, not a one-time initiative. (Source: Jennifer Cannizzaro, Episode #267)
- Dedicate time in every team meeting for one person to share what they're doing with AI. Spotlights can cover: building an agent for a specific use case, comparing outputs across different LLMs, testing specific prompts, or exploring new tools. (Source: Jennifer Cannizzaro, Episode #267)

**Peer learning and show-and-tell:**
- Establish a recurring weekly meeting (e.g., "How We AI") where different team members present one AI use case they're experimenting with. No formal preparation required — just a casual presentation of what they're trying. This creates a low-pressure learning environment, reduces impostor syndrome, and surfaces new use cases organically. (Source: Tara Robertson, Episode #288)

**Identifying and activating power users:**
- Create a dashboard to identify which team members are using AI tools most actively. Recruit these power users as evangelists by asking them to record demos and share their workflows with the broader team. (Source: Drew Pinta, Episode #346)
- Rather than hiring external AI specialists, identify team members already experimenting with AI tools and empower them as internal change agents. Have them surface their experiments, share learnings, and lead adoption within their functions. (Source: Jess Lytle, Episode #328)

> **Note:** Whether peer evangelism or top-down mandates should be the primary adoption driver is contested. See **Where Experts Disagree**.
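The power-user identification described above can be sketched as a simple aggregation over tool-usage events. The event shape, field names, and function name below are illustrative assumptions; the episode does not specify a data model or dashboard implementation:

```python
from collections import Counter

# Hypothetical sketch: rank AI power users from a usage event log.
# The (user, tool, date) event format is an assumption, not from the episode.

def top_power_users(events, n=3):
    """Return the n most active users by event count, most active first."""
    counts = Counter(user for user, _tool, _date in events)
    return [user for user, _ in counts.most_common(n)]

events = [
    ("alice", "chatgpt",    "2026-04-01"),
    ("bob",   "claude",     "2026-04-01"),
    ("alice", "claude",     "2026-04-02"),
    ("alice", "midjourney", "2026-04-03"),
    ("bob",   "chatgpt",    "2026-04-04"),
    ("cara",  "claude",     "2026-04-05"),
]
print(top_power_users(events, n=2))  # the two most active users
```

In practice the events would come from tool admin APIs or SSO logs rather than a hardcoded list; the point is that a few lines of aggregation are enough to surface evangelist candidates.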

**Cross-functional hackathons:**
- Organize internal hackathons that mix departments (engineering, marketing, operations, etc.) and ask teams to identify business problems that AI could solve. Use no-code tools to prototype solutions quickly. This surfaces real use cases, builds cross-functional relationships, and demonstrates AI's practical value. (Source: Bill Glenn, Episode #328)

**Giving teams permission to learn:**
- Establish a top-down mandate that teams can temporarily reduce performance on existing initiatives to invest time in learning and building with AI tools. Leadership should explicitly communicate that short-term metric dips are acceptable during the transition period. (Source: Drew Pinta, Episode #346)

> **Note:** This practice is part of the top-down vs. bottom-up adoption disagreement. See **Where Experts Disagree**.

**Framing AI as empowerment, not replacement:**
- When introducing AI tools to your team, frame them as capabilities that make employees more powerful and productive, not as replacements for human work. Explain specific use cases and connect the benefit to employee goals. Avoid language that suggests technology can replace human judgment or value. (Source: Rachel Weeks, Episode #273)

**Addressing skepticism:**
- Skeptics (especially experienced writers and designers) won't be convinced by arguments alone. Get them to actually log in and use the tools themselves. Skilled practitioners can push AI tools further than non-specialists because they understand craft and can iterate on outputs. Frame it as: "Let's apply your editorial and design skills to this tool and see what you can do with it." (Source: Jessica Hreha, Episode #136)

---

## AI Literacy and Guidelines

**Prioritize literacy before tool training:**
- Before training teams on specific AI tools, invest in foundational AI literacy: what generative AI is and isn't, how it works, what bias and hallucination are, ethical considerations, and responsible use principles. Tool training alone without literacy training leads to misuse and skepticism. Literacy training should be broad-based across the organization, not just for content creators. (Source: Jessica Hreha, Episode #136)

**Establish organizational AI guidelines:**
- Create clear guidelines for how your organization uses generative AI tools, covering: what data can and cannot be input (proprietary information, customer data), requirement for human editorial review of all outputs, awareness of bias and hallucination risks, and ethical use principles. Pair these guidelines with AI literacy training. Include a charter that articulates company values around AI (e.g., time savings benefit employees as much as the company). (Source: Jessica Hreha, Episode #136)

**Managing expectations honestly:**
- Counter inflated expectations about AI by documenting what you've tried, what's working, what's not, and the measurable benefits (e.g., time savings, cost reductions, quality improvements). Socialize these findings continuously across the organization so stakeholders understand realistic ROI. This prevents the "AI will do 10x more work with no additional budget" trap. (Source: Jennifer Cannizzaro, Episode #267)

---

## Data Privacy and Security

**Use enterprise/business versions of AI tools:**
- When uploading strategy documents, sales transcripts, and other company materials to AI tools, use the business or enterprise version — not the free version. Business versions typically do not use your data for training purposes and offer better data privacy protections. This is a non-negotiable prerequisite if you're uploading any proprietary company information. The cost of a business subscription is minimal compared to the risk of exposing sensitive information. (Source: Pranav Piyush, Episode #285)

**Review security certifications before uploading sensitive data:**
- Before uploading sales transcripts, customer data, or other sensitive company information to an AI platform or workflow tool, conduct an internal review of the platform's data handling practices. Look for SOC 2 reports or similar security certifications. Verify whether the platform trains on your data and whether you can opt out. Document your review process and get approval from your IT or legal team before proceeding. (Source: Dan, Episode #290)

**Structured tool evaluation with cross-functional vetting:**
- Establish a formal process for employees to propose and evaluate new AI tools before adoption. Create a committee including IT/security and people team leaders to vet tools. Allow employees to pilot tools on free trials without ingesting proprietary company data, then bring promising tools to the committee for fast-track evaluation decisions. (Source: Bill Glenn, Episode #328)

**Tech stack audit with AI requirement:**
- When evaluating your marketing tech stack, establish a corporate mandate that any new tool brought on must have some component of AI built in. During procurement, require both a business case and a team need statement for each tool, and evaluate how it connects to your existing stack. (Source: Sara Ajemian, Episode #288)

---

## Ethical Use and Disclosure

**Disclose AI in direct customer interactions:**
- When using AI to generate content in direct interactions with customers (chat, email, etc.), disclose that the content was AI-created. Never mimic a human with AI in a conversation where the other person thinks they're talking to a human. (Source: Kieran Flanagan, Episodes #318 and #257)

**The disclosure question for functional/informational content is contested:**
- Kieran Flanagan argues that for functional content (like how-to articles or informational pieces), disclosure is less critical if the content is high quality and solves the person's problem. The key distinction he draws is personal/relational versus functional/informational. (Source: Kieran Flanagan, Episodes #318 and #257)
- Jessica Hreha recommends organizational AI guidelines requiring human editorial review of all AI outputs and a charter articulating company values around AI use — implying a more comprehensive governance stance that goes beyond limiting disclosure only to direct interactions. (Source: Jessica Hreha, Episode #136)

> **Note:** This is a genuine disagreement. See **Where Experts Disagree** for the full breakdown before advising a user on their disclosure policy.

---

## AI Agent Guardrails

**Classify agent workflows by risk level:**
- Classify AI agent workflows by risk level: low-risk (internal work like creative generation) can run freely; high-risk (external interactions like sending emails or managing ad accounts) require guardrails. For high-risk workflows, route interactions through internal middleware that enforces constraints (e.g., preventing duplicate emails to the same person, rate-limiting API calls). This prevents costly mistakes like account lockdowns or customer spam. (Source: Drew Pinta, Episode #346)
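
As a rough illustration of the middleware pattern described above, the sketch below classifies actions by risk level and enforces two of the named constraints: duplicate-email prevention and rate limiting. All names, actions, and thresholds are hypothetical; the episode describes the pattern, not an implementation:

```python
import time

# Hypothetical risk classification (illustrative, not from the episode).
RISK_LEVELS = {
    "creative_generation": "low",   # internal work: runs freely
    "send_email": "high",           # external interaction: guarded
    "manage_ad_account": "high",
}

class GuardedSender:
    """Middleware sitting between an AI agent and external actions."""

    def __init__(self, max_calls_per_minute=10):
        self.max_calls = max_calls_per_minute
        self.call_times = []   # timestamps of recent high-risk calls
        self.emailed = set()   # recipients already contacted

    def allow(self, action, recipient=None):
        # Unknown actions are treated as high-risk by default.
        if RISK_LEVELS.get(action, "high") == "low":
            return True  # low-risk internal work is unrestricted
        now = time.monotonic()
        # Rate limit: keep only timestamps from the last 60 seconds.
        self.call_times = [t for t in self.call_times if now - t < 60]
        if len(self.call_times) >= self.max_calls:
            return False
        # Deduplication: never email the same person twice.
        if action == "send_email" and recipient in self.emailed:
            return False
        self.call_times.append(now)
        if action == "send_email":
            self.emailed.add(recipient)
        return True
```

Routing all external actions through one chokepoint like this is what makes the constraints enforceable regardless of how the agent itself misbehaves.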

---

## AI Design Tools and Brand Governance

**Build a reusable brand skill for self-service design:**
- Create a Claude skill that encodes your brand guidelines, visual styles, and tone into reusable instructions. Input your brand guide and design playbook into Claude Enterprise (or team space), then use the skill creator to build instructions for repeatable design tasks (infographics, sales decks, dashboards). Store the skill's markdown instructions in Notion and set up a Claude-Notion integration so the skill pulls live data. Establish a governance process: team members submit skill change requests through Notion, your design team vets and approves changes, and one-off requests are handled separately. (Source: Liz, Episode #345)

**Test AI design skills internally before organization-wide rollout:**
- Before deploying an AI design skill to your entire organization, conduct thorough internal testing with your marketing and design teams. Document edge cases and issues that arise (e.g., distorted charts, layout problems), then iterate on the skill instructions to address them. Expect some edge cases to persist, but measure the efficiency gains against the frequency of issues. (Source: Liz, Episode #345)

**Set realistic expectations for AI design output:**
- Recognize that AI design tools will not produce perfect output on the first try. Instead of viewing edge cases as failures, treat them as part of an iterative process. Set organizational expectations that AI-generated designs are starting points, not final products. (Source: Liz, Episode #345)

---

## Where Experts Disagree

### 1. Should AI adoption be driven top-down or bottom-up?

**Support summary: 6 (top-down) vs. 4 (bottom-up)**

This is one of the most substantive disagreements across the source material. Multiple guests hold strong, explicit positions on opposite sides.

---

**Position A: Top-down mandates, governance, and structured rollout are essential**

Supporters: Drew Pinta (Ep. #346), Bill Glenn (Ep. #328), Jennifer Cannizzaro (Ep. #267), Jennifer Delevante-Moulen (Ep. #288), Jessica Hreha (Ep. #136), Dave Gerhardt (Ep. #279)

- Drew Pinta recommends a top-down mandate explicitly giving teams permission to deprioritize current work to learn AI, with leadership communicating that short-term metric dips are acceptable.
- Bill Glenn recommends aligning the executive leadership team before company-wide rollout, assigning an AI coordinator, requiring each functional leader to develop their own AI strategy, and partnering the CPO with the CEO on change management.
- Jennifer Cannizzaro recommends embedding AI adoption into annual goal-setting for all team members and building AI culture through multi-level integration including goals, meetings, and one-on-ones.
- Jennifer Delevante-Moulen recommends creating a formal AI committee with designated champions in each department to drive adoption through structured governance.
- Jessica Hreha recommends creating a dedicated full-time AI champion role with authority to set guidelines and train teams, and establishing organizational AI guidelines before broad rollout.
- Dave Gerhardt recommends presenting a formal, structured AI adoption strategy to leadership with a business case, implementation plan, and measurement approach.

---

**Position B: Bottom-up peer evangelism is the more effective primary driver**

Supporters: Drew Pinta (Ep. #346), Jess Lytle (Ep. #328), Tara Robertson (Ep. #288), Jessica Hreha (Ep. #136)

- Drew Pinta recommends creating a dashboard to identify power users and recruiting them as evangelists to record demos and share workflows, arguing explicitly that bottom-up adoption driven by peer examples is more effective than top-down training.
- Jess Lytle recommends identifying team members already experimenting with AI and empowering them as internal change agents rather than relying on top-down mandates.
- Tara Robertson recommends a recurring weekly show-and-tell where team members casually share AI experiments with no formal preparation, creating organic peer learning rather than structured top-down training.
- Jessica Hreha recommends getting skeptics hands-on with tools directly rather than convincing them through arguments or mandates.

---

**Notable nuance:** Drew Pinta appears on both sides — he advocates for a top-down mandate to create permission and space for learning, *and* for bottom-up peer evangelism as the mechanism that actually spreads adoption. This suggests the two approaches may be complementary rather than mutually exclusive, even if other guests frame them as competing priorities.

**Context dependency:** Top-down approaches may be more appropriate for larger organizations with formal governance needs, while bottom-up may work better in smaller or more agile teams. However, several guests explicitly argue their preferred approach is universally more effective, making this a genuine disagreement in framing and emphasis even when company size is held constant.

**Trend note:** More recent guests (2026) like Drew Pinta explicitly advocate for bottom-up peer evangelism as the primary driver, while earlier guests (2024–2025) tended to emphasize formal governance structures. This may reflect a maturation of AI adoption thinking from "build the infrastructure" to "activate the people already using it."

**Why it matters for your user:** Choosing the wrong adoption strategy wastes time and budget. Over-investing in governance before people are ready creates bureaucracy without behavior change; relying purely on organic adoption leaves most of the team behind. When advising a user, surface both approaches and help them assess which fits their organization's size, culture, and current AI maturity.

---

### 2. Must you disclose AI generation for functional/informational content (not direct interactions)?

**Support summary: 1 guest across 2 episodes (disclosure required for direct interactions only) vs. 1 guest (broader disclosure and guidelines required)**

---

**Position A: Disclosure is ethically required for direct interactions; optional for functional content**

Supporters: Kieran Flanagan (Ep. #318 and #257)

- Kieran Flanagan explicitly distinguishes between direct interactions (must disclose — never mimic a human with AI where the other person thinks they're talking to a human) and functional content (disclosure less critical if quality is high and the content solves the user's problem).
- He reiterates this same position across two separate episodes, making it a consistent and deliberate stance.
- The key distinction he draws: personal/relational interactions require disclosure; functional/informational content does not.

---

**Position B: Organizations should establish comprehensive guidelines covering all AI-generated content**

Supporters: Jessica Hreha (Ep. #136)

- Jessica Hreha recommends organizational AI guidelines requiring human editorial review of *all* AI outputs and a charter articulating company values around AI use — implying a more comprehensive governance posture that goes beyond limiting disclosure only to direct interactions.
- Her framework does not carve out a "functional content" exception.

---

**Context dependency:** Flanagan's nuanced position may apply more to content marketing contexts, while Hreha's broader framework may reflect enterprise compliance needs. However, both are speaking to general B2B marketing practice, making this a genuine disagreement on where the ethical line sits.

**Why it matters for your user:** Getting this wrong in either direction has real consequences. Over-disclosing on all content may undermine trust in your brand unnecessarily; under-disclosing in the wrong contexts creates ethical and reputational risk. When advising a user on disclosure policy, present both positions and flag that this is an unsettled question in the practitioner community. Do not present either position as settled consensus.

---

## What NOT To Do

- **Do not treat AI adoption as purely a tools problem.** The primary barrier is organizational behavior change. Selecting the right tools without addressing culture, psychological safety, and change management will not produce adoption. (Source: Bill Glenn, Episode #328)
- **Do not assign AI adoption as an add-on to existing roles past the pilot phase.** This is not sustainable. Create a dedicated full-time role if you want serious organizational transformation. (Source: Jessica Hreha, Episode #136)
- **Do not skip AI literacy training and jump straight to tool training.** Teams that don't understand what generative AI is, how it works, and what its failure modes are (bias, hallucination) will misuse tools and become skeptical. (Source: Jessica Hreha, Episode #136)
- **Do not upload proprietary company data, sales transcripts, or strategic documents to free-tier AI tools.** Use enterprise or business versions with documented data privacy protections. (Source: Pranav Piyush, Episode #285)
- **Do not deploy AI tools without reviewing the platform's data handling practices.** Look for SOC 2 reports, verify training data policies, and get IT or legal approval before proceeding. (Source: Dan, Episode #290)
- **Do not run AI agents that interact with external systems (sending emails, managing ad accounts) without guardrails.** Route high-risk workflows through middleware that enforces constraints to prevent costly mistakes. (Source: Drew Pinta, Episode #346)
- **Do not mimic a human with AI in direct customer interactions.** Always disclose when an interaction is AI-powered. (Source: Kieran Flanagan, Episodes #318 and #257)
- **Do not expect AI design tools to produce perfect output on the first try.** Set organizational expectations that AI-generated designs are starting points, not final products. (Source: Liz, Episode #345)
- **Do not try to convince AI skeptics through arguments alone.** Get them hands-on with the tools. Skilled practitioners often discover use cases they didn't anticipate once they experience the tools firsthand. (Source: Jessica Hreha, Episode #136)
- **Do not allow siloed AI strategies across departments.** Align the executive leadership team collectively before company-wide rollout to prevent rogue adoption and inconsistent governance. (Source: Bill Glenn, Episode #328)
- **Do not frame AI to your team as a replacement for human work.** Frame it as employee empowerment. Connect benefits to employee goals, not just company efficiency. (Source: Rachel Weeks, Episode #273)
- **Do not let inflated AI expectations go unchallenged.** Document what you've tried, what's working, and what's not. Socialize realistic ROI findings to prevent the "AI will do 10x more with no additional budget" trap. (Source: Jennifer Cannizzaro, Episode #267)

---

## Sources

| Episode | Guest | Date |
|---------|-------|------|
| Episode #136 | Jessica Hreha | 2024-04-29 |
| Episode #257 | Kieran Flanagan | 2025-06-23 |
| Episode #267 | Jennifer Cannizzaro | 2025-07-24 |
| Episode #273 | Rachel Weeks | 2025-08-14 |
| Episode #279 | Dave Gerhardt | 2025-09-04 |
| Episode #285 | Pranav Piyush | 2025-09-25 |
| Episode #288 | Jennifer Delevante-Moulen, Sara Ajemian, Tara Robertson | 2025-10-06 |
| Episode #290 | Dan | 2025-10-13 |
| Episode #318 | Kieran Flanagan | 2026-01-05 |
| Episode #328 | Bill Glenn, Jess Lytle | 2026-02-11 |
| Episode #337 | Erin May | 2026-03-12 |
| Episode #338 | Davang Shah | 2026-03-17 |
| Episode #345 | Liz | 2026-04-09 |
| Episode #346 | Drew Pinta | 2026-04-13 |