---
name: ai-adoption-and-team-enablement
description: "Guidance for marketing leaders on adopting AI tools, enabling teams, structuring workflows, and navigating the human-vs-AI creative tradeoff in B2B marketing contexts"
version: "2026-04-20"
episode_count: 31
---

# AI Adoption and Team Enablement

## Overview
This skill covers how B2B marketing teams should adopt AI tools, structure enablement programs, select the right tools, and think about the evolving relationship between human creativity and AI-generated output. All practices are sourced exclusively from Exit Five podcast guests across 31 episodes. Where guests disagree, those disagreements are surfaced explicitly rather than resolved.

---

## Phase Your AI Adoption Journey

Two guests independently proposed compatible two-phase models. Bill Glenn (Episode #328) framed these as productivity vs. impact phases; Erin May (Episode #337) framed them as Year 1 enablement vs. Year 2+ acceleration. The practices below draw from both.

**Phase 1 — Enablement and Productivity:**
Focus on identifying repeatable, process-oriented work and automating it. Measure success by hours reclaimed per day. This phase is about building foundational skills and confidence, not perfection. (Source: Bill Glenn, Episode #328)

Specific Year 1 tactics:
- Meet team members where they are — don't assume baseline competency
- Provide budget for tools and allocate dedicated L&D time
- Run show-and-tells and hack sessions so everyone begins using AI tools in low-stakes settings
- Accept that not everything will work initially; prioritize progress over perfection
(Source: Erin May, Episode #337)

**Phase 2 — Acceleration and Impact:**
Once Phase 1 is table stakes, move to using AI as a true copilot for decision-making: extracting customer insights, leveraging machine learning to augment decisions, and identifying market trends. This is where competitive advantage emerges. (Source: Bill Glenn, Episode #328)

Year 2+ expectations: accelerate the pace and raise the bar. The team now has foundational skills; push for faster adoption and higher-impact use cases. (Source: Erin May, Episode #337)

Before deploying any AI tools, ensure they pass privacy, safety, and regulatory requirements. Work cross-functionally with legal, IT, and engineering to maintain guardrails while pushing boundaries. (Source: Davang Shah, Episode #338)

---

## Framing AI for Your Team and Leadership

When communicating about AI internally, frame it as an enabler of human creativity and efficiency — not a replacement for marketers. AI can improve content creation, media optimization, and insights generation, but it cannot replace human creativity, judgment, authentic connection, and trust-building. This framing helps retain talent, manage change anxiety, and set realistic expectations. Note: Davang Shah (Episode #338) describes this as a framing principle rather than a step-by-step communication plan.

One concrete application: at your next team all-hands, open with a specific example of a task AI handled this week and what the team did with the time saved. (Source: Davang Shah, Episode #338)

AI amplifies what's already working — it does not fix weak strategy, poor positioning, or unclear customer insights. Before investing in AI-powered content generation or automation, ensure your core strategy, competitive positioning, and customer research are sound. (Source: Chris Walker, Episodes #281 and #211)

Do not use AI to outsource thinking, strategy, or creative direction. AI lacks lived experience and observational skills, so output is mediocre without human curation. Use AI to speed up research, generate templates, or handle repetitive tasks so humans can focus on taste-making, strategy, and authentic creative work. (Source: Dmitry Shamis, Episode #238)

*(Note: whether human creativity or AI-generated content is the stronger competitive moat is contested — see Where Experts Disagree)*

---

## Building Team AI Capability

### Protect Experimentation Time
Allocate and protect a minimum of one to two hours per week on team members' calendars specifically for AI tool exploration and learning. Without protected time, learning gets crowded out by urgent work. This also signals organizational permission to invest in AI capability building. (Source: Bill Glenn, Episode #328)

### Use Teaching Assignments
When introducing a new AI tool, require team members to use it and then teach it back to peers. This forces deeper learning, surfaces gaps in understanding, and builds organizational expertise. The teacher learns more than the student. (Source: Jess Lytle, Episode #328)

Extend this further: have team members who have mastered a tool explain what they learned, what failed, and how they'll iterate. This reinforces that failure is part of the learning process and creates psychological safety. (Source: Bill Glenn, Episode #328)

### Run Cross-Functional Hackathons
Organize internal hackathons that mix departments (engineering, marketing, operations) and ask teams to identify business problems AI could solve. Use no-code tools like Lovable to prototype solutions quickly. This surfaces real use cases, builds cross-functional relationships, and demonstrates AI's practical value. Winning ideas can be evaluated for actual implementation. (Source: Bill Glenn, Episode #328)

### Build AI Literacy Before Tool Training
Before training teams on specific AI tools, invest in foundational AI literacy: what generative AI is and isn't, how it works, what bias and hallucination are, ethical considerations, and responsible use principles. Tool training alone — how to use Jasper, ChatGPT, etc. — without literacy training leads to misuse and skepticism. Literacy training should be broad-based across the organization, not just for content creators. (Source: Jessica Hreha, Episode #136)

*(Note: whether this literacy-first approach should be led by a dedicated role or distributed across the team is contested — see Where Experts Disagree)*

### Address Skepticism Through Hands-On Experience
Skeptics — especially experienced writers and designers — won't be convinced by arguments alone. Get them to actually log in and use the tools. Skilled practitioners can push AI tools further than non-specialists because they understand craft and can iterate on outputs. Frame it as: "Let's apply your editorial and design skills to this tool and see what you can do with it." (Source: Jessica Hreha, Episode #136)

### Onboard Junior Marketers with an AI-First Mindset
Teach junior marketers from day one that AI is part of their operating system, not an optional tool. Emphasize that success requires being more curious and strategic than AI itself — reading widely, building reference points, questioning outputs, and knowing which tool to use for which task. The goal is to shift from "finding information" to "discerning and questioning," so junior marketers can leverage AI to start their careers in more meaningful, strategic roles rather than doing rote research work. (Source: Jennifer Cannizzaro, Episode #267)

### Build Taste, Not Just Proficiency
Use AI-generated outputs as teaching tools to build taste on your team. When an AI email or piece of content performs well, break down its anatomy: What context was provided? How is it relevant to the user? What makes it stand out? Show the difference between generic AI output and contextual, creative AI output. This helps team members understand what good looks like and builds their ability to recognize and create quality work even when using AI tools. (Source: Aditya Vempaty, Episode #304)

### Coach Against Over-Reliance
When you detect that a team member has over-relied on AI and the output is obviously AI-generated, provide direct feedback: "It was obvious that you used AI here — let's work on this." Validate your assessment by running the output back through AI to confirm it matches AI's typical patterns. Use this as a coaching moment to reinforce the importance of human judgment, editing, and strategic thinking. (Source: Jennifer Cannizzaro, Episode #267)

### Model AI Learning as a Leader
As a CMO or marketing leader, personally invest time in learning new AI applications and tools (e.g., image generation with specific prompts, Canva with AI inputs, ChatGPT for specific workflows). This keeps you sharp, helps you understand what's possible, and models the behavior you want from your team. It also prevents you from being blindsided by capabilities your team is discovering. (Source: Kelly Hopping, Episode #255)

---

## Selecting and Using AI Tools

*(Note: whether to prefer general-purpose or specialized AI tools is contested — see Where Experts Disagree)*

### Evaluate Tools by Specific Use Case
Work backwards from the specific problem you're trying to solve rather than choosing based on general capability or brand familiarity. Identify what a tool excels at doing — not what it can do generally — then combine complementary tools for better output. Example: GenSpark is excellent at following detailed instructions and using JSON structure; Gamma is a rendering tool that makes output visually polished. Combining both produces better slide decks than either alone. Avoid the trap of using one tool for everything just because it's familiar. (Source: Jess Lytle, Episode #319)

### Use AI Tools as Starting Points, Not Final Output
Use ChatGPT, Gemini, and similar tools to generate initial drafts, brainstorm ideas, and get unstuck — but always apply your own judgment and context. The mistake many marketers make is copying AI output directly without editing. Use AI to move from zero to one, then layer in your own knowledge, brand voice, and strategic thinking. Compare outputs across multiple tools to find the best elements and remix them with your own work. (Source: Holly Xiao, Episode #270)

### Prioritize "Boring AI" Over Flashy Use Cases
When evaluating AI tools and use cases, prioritize those that eliminate repetitive, manual, low-value work (spreadsheet analysis, deck building, data formatting) over flashy or predictive analytics features. These "boring" applications free up cognitive capacity for strategic thinking and decision-making. Measure success by hours reclaimed and quality of output, not novelty. (Source: Jess Lytle, Episode #328)

### Categorize AI Opportunities Into Three Buckets
When looking for places to apply AI in your marketing, organize opportunities into three categories:
1. **Speed to value** — automate repetitive tasks to get from zero to one faster (e.g., webinar production, guest research)
2. **Enhance experience** — use AI to improve an existing process that already works well (e.g., auto-generate restaurant recommendations for event attendees based on venue address)
3. **Increase scope without increasing bandwidth** — do more of something without hiring more people (e.g., scale webinars from 1 to 3 per month)

This framework helps identify high-impact use cases and avoid building automation for tasks that don't fit these patterns. (Source: Dan, Episode #290)
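The three-bucket framework above lends itself to a lightweight shared inventory that a team can review when prioritizing. A minimal Python sketch (the data structure and helper function are illustrative assumptions, not from the episode; the example entries are the episode's own):

```python
from dataclasses import dataclass
from enum import Enum

class Bucket(Enum):
    """The three AI opportunity categories described above."""
    SPEED_TO_VALUE = "speed to value"
    ENHANCE_EXPERIENCE = "enhance experience"
    INCREASE_SCOPE = "increase scope without increasing bandwidth"

@dataclass
class Opportunity:
    name: str
    bucket: Bucket
    note: str = ""

# Seed the backlog with the episode's own examples.
backlog = [
    Opportunity("webinar production", Bucket.SPEED_TO_VALUE),
    Opportunity("guest research", Bucket.SPEED_TO_VALUE),
    Opportunity("restaurant recs for event attendees", Bucket.ENHANCE_EXPERIENCE),
    Opportunity("scale webinars from 1 to 3 per month", Bucket.INCREASE_SCOPE),
]

def by_bucket(items: list[Opportunity], bucket: Bucket) -> list[str]:
    """Filter the backlog to one category for a prioritization review."""
    return [o.name for o in items if o.bucket is bucket]
```

An idea that fits none of the three buckets is, per the framework, a candidate to drop rather than automate.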

### Expand AI Use Cases Beyond Copywriting
Systematically explore how AI can improve efficiency and quality across all marketing functions, not just content writing. Documented use cases include:
- Marketing ops: data analysis, decision frameworks, meeting prep
- PR and comms: speech writing, news releases
- Ad campaigns: A/B testing analysis, performance analysis
- Partner enablement: verticalization and personalization
- Events: email writing, logistics
- Internal communications: QBR prep, metrics updates
(Source: Jessica Hreha, Episode #136)

### Use AI as a Strategic Thought Partner
When preparing for board meetings or strategic presentations, use an LLM as a thought partner by instructing it to ask you one question at a time rather than providing solutions. This forces you to think through your strategy before building the output and helps you anticipate board-level objections. (Source: Tara Robertson, Episode #288)

Configure custom AI assistants to challenge your assumptions and poke holes in your ideas rather than simply affirming what you want to hear. Explicitly instruct your AI to "tell me all the reasons why I shouldn't do this" or "tell me all the questions my CEO will ask about this proposal." This transforms AI from a yes-man into a sparring partner. (Source: Ali Orlando Wert, Episode #307)

When working through a strategic decision, use AI iteratively by asking "why" repeatedly to each suggestion. Rather than accepting the first output, challenge the AI's reasoning by providing additional parameters or constraints. This forces both you and the AI to drill down to a source of truth. (Source: Sara Ajemian, Episode #288)
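The three thought-partner patterns above (one question at a time, devil's advocate, iterative "why") can be standardized as a reusable system prompt rather than retyped each session. A minimal Python sketch; the function name and exact wording are illustrative assumptions, except the two quoted instructions, which come from the guests:

```python
def sparring_partner_prompt(topic: str, audience: str = "CEO") -> str:
    """Build a system prompt that configures an LLM as a challenger,
    not an affirmer, combining the patterns described above."""
    return (
        "You are a skeptical strategy sparring partner for a B2B marketing leader.\n"
        f"Topic under discussion: {topic}.\n"
        "Rules:\n"
        "1. Ask me exactly ONE question at a time, then wait for my answer.\n"
        "2. Do not propose solutions unless I explicitly ask for them.\n"
        "3. Tell me all the reasons why I shouldn't do this.\n"
        f"4. List the questions my {audience} will ask about this proposal.\n"
        "5. After each of my answers, challenge my reasoning with a 'why' "
        "before moving to the next question."
    )
```

Usage: paste the returned string into the system/custom-instructions field of whatever assistant you use, swapping in the stakeholder (board, CEO, CFO) you need to anticipate.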

### Use AI for Meeting and QBR Preparation
Before attending a meeting, upload relevant materials (proposals, decks, vendor information) to a secure AI tool and ask it to generate questions you should ask, identify what you might be missing, or surface competitor information. (Source: Jessica Hreha, Episode #136)

Use AI to help with data analysis, summarization, and presentation preparation for quarterly business reviews. This can reduce prep time and improve the quality of insights presented. (Source: Jessica Hreha, Episode #136)

### Use AI to Train Brand Teams on Data Literacy
Help brand and communications teams develop data literacy by uploading their weekly or bi-weekly performance reports (social metrics, engagement data, post performance) into ChatGPT. Ask the AI to identify patterns, suggest what might have caused changes, and recommend experiments. This teaches team members to think analytically about their work and builds confidence in reporting to leadership. (Source: Sara Ajemian, Episode #288)

### Use AI to Structure Coaching Feedback
When reviewing work from junior or mid-level team members, input their submission and your shorthand notes into a custom ChatGPT that understands your feedback style. Have the AI structure your feedback to explain the reasoning behind changes rather than just rewriting the work. This teaches team members your decision-making process and saves time on one-on-one coaching. (Source: Sara Ajemian, Episode #288)

---

## Data Privacy and Security

Before uploading sales transcripts, customer data, or other sensitive company information to an AI platform, conduct an internal review of the platform's data handling practices. Look for SOC 2 reports or similar security certifications. Verify whether the platform trains on your data and whether you can opt out. Document your review process and get approval from your IT or legal team before proceeding. (Source: Dan, Episode #290)

Use the enterprise or business version of AI tools — not the free version — when uploading any proprietary company information, sales call transcripts, or strategic documents. Business versions typically do not use your data for training purposes. The cost of a business subscription is minimal compared to the risk of exposing sensitive information. (Source: Pranav Piyush, Episode #285)

---

## Organizational Design for AI-First Marketing

### Enable Self-Sufficient Team Members
Reduce handoffs and waiting time by enabling individual team members to be fully self-sufficient in their roles. Use AI tools and pre-built templates to allow team members to handle tasks they would normally hand off (e.g., social media content creators using AI or templates for design instead of waiting for the design team). The goal is for each person to own and complete their work end-to-end. (Source: Sylvia Lepoidevin, Episodes #283 and #199)

### Expand Marketer Roles Across Disciplines
AI tools enable marketers to blur traditional role boundaries by handling design work (image generation, slide creation), engineering work (building interactive tools with Lovable), and product work (creating calculators, dashboards) without waiting for specialized teams. This creates "hybrid operators" who can execute across multiple disciplines within a single day. (Source: Jess Lytle, Episode #319)

### Consider Outcome-Based Org Structure
In AI-first companies, consider replacing traditional functional silos (demand gen, product marketing, brand) with outcome-focused work streams (inbound, outbound, communications). Each work stream owns the full set of tactics needed to achieve its outcome. This structure enables faster shipping, clearer accountability, and better integration of AI tools. Example: inbound owns website, demand generation, content, and lifecycle marketing together; outbound owns events and field enablement together. (Source: Kady Srinivasan, Episode #276)

*(Note: whether AI adoption within this structure should be led by a dedicated role or distributed across the team is contested — see Where Experts Disagree)*

### Reinvest Time Freed by AI Into Higher-Value Work
When AI tools reduce time spent on tactical work, actively work with employees to identify how they should reinvest that time. Options include: client conversations, strategic learning, exploring new initiatives, or supporting other organizational priorities. Don't assume employees will automatically find high-value uses for freed time. Leaders should actively discuss and structure how that time gets used. (Source: Rachel Weeks, Episode #273)

### Measure AI Productivity Gains and Trace Downstream Impact
Don't stop at measuring time savings (e.g., "video scripts now take 5 minutes instead of 2 hours"). Dig deeper to understand what that time savings enables: Can you now apply verticalization to content? Can you increase personalization? What downstream performance improvements result? Frame time savings in terms of business outcomes: "This 20 hours of monthly savings allows us to create 10 additional personalized landing pages, which increased conversion by X%." (Source: Jessica Hreha, Episode #136)
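The measurement chain above is simple arithmetic, and writing it down keeps the reporting honest. A minimal Python sketch using the episode's video-script numbers; the monthly volume and per-page hours are illustrative assumptions, not figures from the episode:

```python
def minutes_saved_per_month(before_min: float, after_min: float, volume: int) -> float:
    """Raw time reclaimed by an AI workflow across a month's volume."""
    return (before_min - after_min) * volume

def downstream_capacity(saved_minutes: float, minutes_per_unit: float) -> int:
    """Translate reclaimed time into whole units of new work
    (e.g. additional personalized landing pages)."""
    return int(saved_minutes // minutes_per_unit)

# Episode example: video scripts drop from 2 hours to 5 minutes.
# Assumed: 10 scripts per month, 2 hours per landing page.
saved = minutes_saved_per_month(before_min=120, after_min=5, volume=10)
pages = downstream_capacity(saved, minutes_per_unit=120)
```

The last step, connecting those extra pages to a conversion lift, has to come from your own analytics; the arithmetic only gets you to the capacity claim.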

### Conduct AI Impact Assessments for Customer Roles
Move beyond tactical AI use to strategic use by evaluating how your customers' roles and priorities will change in the next 12–18 months due to AI adoption. Ask: What will their role look like? What will they be focused on? Does your solution still serve them? If not, this signals a need for product roadmap changes, not just messaging changes. (Source: Lindsay O'Brien, Episode #304)

---

## Career and Skill Development in an AI-First World

Be an early adopter of AI tools and workflows. In every company, there are typically 1–2 people willing to experiment with new tools and break things. These early adopters gain outsized advantages because they learn the tools faster and can implement them before competitors catch up. Treat this like venture investing — make multiple bets, expect most to fail, but the winners will return significant value. (Source: Dave Gerhardt, Episode #279)

Place AI tools (ChatGPT, Claude, Perplexity, etc.) in your browser's top bookmarks bar as a visual reminder to consider using them for tasks throughout your day. This creates a habit of asking "Is there an AI tool I could use for this?" rather than defaulting to manual processes. (Source: Dave Gerhardt, Episode #279)

Kieran Flanagan (Episodes #318 and #257) argued that as AI commoditizes middle-ground marketing skills, the most defensible positions are at one of two extremes: either highly technical (deploying AI across marketing workflows, thinking like an engineer about systems and automation) or exceptionally creative (ideation, storytelling, copywriting, taste). He did not provide a step-by-step framework for making this transition, so treat this as a directional signal rather than an immediately actionable plan.

One way to apply it: audit your last 10 work outputs and note what percentage required engineering-style systems thinking versus creative judgment. Use that ratio to identify where your natural strengths already lie, then identify one concrete skill in that lane to develop over the next quarter — for example, learning to build a Zapier workflow for the technical lane, or completing a copywriting course for the creative lane.

---

## Where Experts Disagree

### 1. Should marketers invest in human-created content over AI-generated content as a competitive moat?

**Support summary: 2 (human creativity as moat) vs. 6 (AI as content accelerator)**

**Position A — Human creativity is the primary competitive moat:**
Chris Cunningham (Episode #347, April 2026) argued that audiences still prefer human-created content and that real actors, genuine scripts, and authentic human moments create emotional connection AI cannot replicate. In a world where product differentiation is increasingly difficult, human creativity and brand personality become the primary competitive advantage. Invest in real creators and writers rather than automating content creation with AI.

Dmitry Shamis (Episode #238, April 2025) argued that AI lacks lived experience and observational skills, so output is mediocre without human curation. He explicitly stated "Don't outsource your thinking" and positioned AI as appropriate only for tedious execution work, while humans retain taste-making, strategy, and authentic creative work.

**Position B — AI is a legitimate accelerator for content creation:**
Ross Simmonds (Episodes #209 and #121) argued that AI tools speed up execution for great marketers — a great marketer using ChatGPT produces great content faster. He recommended using AI to translate content, convert voice notes to text, generate headline options, and create variations, comparing AI to calculators: they didn't devalue math, they made it faster.

Madhav Bhandari (Episode #183) advocated using AI to rapidly generate campaign ideas and accelerate time from concept to execution, reducing 4-week timelines to 5 days.

Gurdeep Dhillon (Episode #280) explicitly recommended on-brand content generation from prompts and campaign asset creation as core AI use cases, positioning AI-first execution as an imperative for marketing teams.

Holly Xiao (Episode #270) recommended using ChatGPT and Gemini to generate initial drafts and brainstorm ideas, framing AI as a legitimate starting point for content creation.

Dave Gerhardt (Episode #199) advocated for using AI tools to enable marketers to execute full-stack work across design, copy, and development.

**Context dependency:** Chris Cunningham's argument is strongest for brand advertising and video content where emotional resonance is paramount. The AI-accelerator position is stronger for written content, variations, and execution-heavy tasks. However, both camps are making claims about the same general content creation question, so a genuine disagreement remains even after accounting for format differences.

**Trend note:** The human-creativity-as-moat position (Cunningham, April 2026) is the most recent, potentially reflecting a backlash against AI content saturation. The AI-accelerator position dominated earlier episodes (2024–2025). This could indicate a field shift toward valuing human authenticity as AI content becomes commoditized.

**What this means for you:** If you're deciding where to invest budget and team capacity, consider the content format and channel. For brand advertising and video where emotional resonance drives results, the case for human creators is stronger. For written content, variations, and execution-heavy tasks, AI acceleration has broader support. The trend toward valuing human authenticity is worth monitoring as AI content becomes more ubiquitous.

---

### 2. Should marketers prefer general-purpose AI tools or specialized AI marketing tools?

**Support summary: 1 (general-purpose) vs. 2 (use-case-specific/specialized)**

**Position A — Prefer general-purpose tools:**
Lindsay O'Brien (Episode #304) explicitly recommended general-purpose tools (Claude, ChatGPT, Notebook LM, Zapier/Clay) over specialized AI marketing tools, citing cost-effectiveness, versatility, and reduced tool sprawl. She advised investing in niche tools only when there's a specific, high-impact use case that general tools can't address.

**Position B — Select tools by specific use case, including specialized tools:**
Jess Lytle (Episode #319) recommended working backwards from the specific problem to identify which tool excels at that task, then combining complementary tools. Her example: GenSpark (good at following detailed instructions) combined with Gamma (rendering/visual polish) produces better slide decks than either alone. She explicitly warned against using one tool for everything just because it's familiar.

Mychelle Mollot (Episode #182) argued that AI capabilities are being integrated into existing specialized platforms (Gong, HubSpot, etc.), shifting the evaluation to "which tools have the best AI features for our use case" rather than general-purpose standalone tools.

**Context dependency:** Team size and budget may influence this. Smaller teams with limited budgets may benefit more from general-purpose tools, while larger teams with specific high-volume use cases may justify specialized tools. However, both Jess Lytle and Lindsay O'Brien appear to be speaking to similar mid-market B2B marketing contexts, making this a genuine disagreement.

**What this means for you:** If you're building a lean stack with limited budget, the general-purpose approach reduces sprawl and cost. If you have specific, high-volume use cases (e.g., slide deck production at scale, or AI features already embedded in tools you own like Gong or HubSpot), the use-case-specific approach may yield meaningfully better output.

---

### 3. Should AI adoption be led by a dedicated full-time role, or distributed across the team?

**Support summary: 1 (dedicated role) vs. 5 (distributed enablement)**

**Position A — Create a dedicated, full-time AI champion role:**
Jessica Hreha (Episode #136) explicitly recommended creating a dedicated full-time role focused on AI strategy, adoption, and training — distinct from marketing ops. She argued this role is necessary because AI adoption as an add-on responsibility is not sustainable beyond the pilot phase. The role should have authority to set guidelines, train teams, evangelize use cases, and measure impact, sitting at the intersection of marketing operations, content strategy, and organizational change management. She also recommended that foundational AI literacy training precede tool training, and that this literacy program be broad-based across the organization — an undertaking she argued requires dedicated ownership to execute consistently.

**Position B — Distribute AI adoption across the team:**
Erin May (Episode #337) recommended a phased team-wide approach: Year 1, meet team members where they are with budget for tools, L&D time, show-and-tells, and hack sessions. Year 2+, accelerate expectations. No mention of a centralized AI champion role.

Bill Glenn (Episode #328) recommended calendar-blocked experimentation time for all team members, cross-functional hackathons, and empowering team members to teach peers — a distributed model of AI capability building.

Jess Lytle (Episode #328) recommended teaching assignments where team members learn AI tools and teach them back to peers, distributing expertise organically across the team.

Dave Gerhardt (Episode #279) advocated for individual early adopters within companies to experiment and gain advantage, framing AI adoption as something driven by 1–2 motivated individuals rather than a centralized program.

Jessica Hreha (Episode #136) also recommended getting skeptics to actually log in and use tools themselves — a hands-on, distributed approach that is in some tension with her own literacy-first, centralized-role recommendation.

**Context dependency:** Jessica Hreha's recommendation for a dedicated role may be more appropriate for larger enterprise marketing organizations where AI adoption at scale requires dedicated program management. The distributed approach may work better for smaller teams. However, Hreha explicitly argued the dedicated role is necessary even past the pilot phase, making this a genuine disagreement for mid-to-large teams.

**Trend note:** The distributed enablement approach dominates more recent episodes (2025–2026), while the dedicated role recommendation came from an earlier episode (April 2024). This may reflect a maturation of the field where AI adoption is now expected to be embedded in everyone's role rather than managed by a specialist.

**What this means for you:** For large enterprise marketing organizations managing AI adoption at scale, a dedicated role may be warranted. For smaller or mid-market teams, a distributed model with protected experimentation time, hack sessions, and teaching assignments may be more practical. The trend toward distributed ownership is worth noting as AI literacy becomes a baseline expectation rather than a specialty.

---

## What NOT To Do

- **Do not use AI to scale spammy outbound tactics.** Outbound performance is down 71% (per Bridge Group data cited by Jaleh Rezaei), and 50% of buyers are less likely to recommend brands using detectable AI emails. If a tactic only works because it's novel or because few competitors use it, it will fail once everyone adopts it. Build with the buyer in mind. (Source: Jaleh Rezaei, Episode #248)

- **Do not use AI to replace strategic thinking.** AI cannot fix weak strategy, poor positioning, or unclear customer insights. Using AI to generate volume without strategic direction produces more of the wrong thing faster. (Source: Chris Walker, Episodes #281 and #211)

- **Do not copy AI output directly without editing.** The mistake many marketers make is using AI output as final output. Always apply your own judgment, brand voice, and strategic thinking. (Source: Holly Xiao, Episode #270)

- **Do not outsource your thinking to AI.** AI lacks lived experience and observational skills. Output is mediocre without human curation. (Source: Dmitry Shamis, Episode #238)

- **Do not assign AI adoption as an add-on responsibility past the pilot phase without structural support.** This is not sustainable. Whether through a dedicated role or a structured distributed program, AI adoption requires intentional organizational investment. (Source: Jessica Hreha, Episode #136)

- **Do not upload sensitive company data to free versions of AI tools.** Always use enterprise or business versions when working with proprietary information, sales transcripts, or strategic documents. (Source: Pranav Piyush, Episode #285)

- **Do not assume employees will automatically reinvest time freed by AI into high-value work.** Leaders must actively discuss and structure how that time gets used. (Source: Rachel Weeks, Episode #273)

- **Do not use one AI tool for everything just because it's familiar.** Work backwards from the specific problem to find the best tool for that job. (Source: Jess Lytle, Episode #319)

- **Do not let AI experimentation get crowded out by urgent work.** Without protected calendar time, learning will not happen. (Source: Bill Glenn, Episode #328)

- **Do not coast in the middle of the technical-creative spectrum without a plan.** Kieran Flanagan (Episodes #318 and #257) argued AI will commoditize competent-but-not-exceptional work. A concrete first step: audit your last 10 work outputs and identify whether your strongest contributions were systems-oriented or creative. Use that as a starting point for deciding where to invest your development time.

---

## Sources

| Episode | Guest | Date |
|---------|-------|------|
| #121 | Ross Simmonds | 2024-02-29 |
| #136 | Jessica Hreha | 2024-04-29 |
| #182 | Mychelle Mollot | 2024-10-07 |
| #183 | Madhav Bhandari | 2024-10-10 |
| #199 | Dave Gerhardt, Sylvia Lepoidevin | 2024-12-05 |
| #209 | Ross Simmonds | 2025-01-09 |
| #211 | Chris Walker | 2025-01-16 |
| #238 | Dmitry Shamis | 2025-04-17 |
| #248 | Jaleh Rezaei | 2025-05-22 |
| #255 | Kelly Hopping | 2025-06-16 |
| #257 | Kieran Flanagan | 2025-06-23 |
| #267 | Jennifer Cannizzaro | 2025-07-24 |
| #270 | Holly Xiao | 2025-08-04 |
| #273 | Rachel Weeks | 2025-08-14 |
| #276 | Kady Srinivasan | 2025-08-25 |
| #279 | Dave Gerhardt | 2025-09-04 |
| #280 | Gurdeep Dhillon | 2025-09-08 |
| #281 | Chris Walker | 2025-09-11 |
| #283 | Sylvia Lepoidevin | 2025-09-18 |
| #285 | Pranav Piyush | 2025-09-25 |
| #288 | Tara Robertson, Sara Ajemian | 2025-10-06 |
| #290 | Dan | 2025-10-13 |
| #304 | Lindsay O'Brien, Aditya Vempaty | 2025-11-17 |
| #307 | Ali Orlando Wert | 2025-11-27 |
| #318 | Kieran Flanagan | 2026-01-05 |
| #319 | Jess Lytle | 2026-01-08 |
| #328 | Bill Glenn, Jess Lytle | 2026-02-11 |
| #337 | Erin May | 2026-03-12 |
| #338 | Davang Shah | 2026-03-17 |
| #347 | Chris Cunningham | 2026-04-16 |