AI Drivers vs. AI Passengers
By AJ Teachout
TL;DR: AI is splitting marketing teams into two camps: AI drivers (who use AI as a starting point and apply their own judgment) and AI passengers (who copy-paste and ship). The problem: passengers look productive. Their output is up, their turnaround is fast, but the thinking behind the work is eroding. If your team's last five deliverables all sound the same, you’ve probably got an AI content quality problem hiding inside your productivity metrics.
The AI Content Promise vs. Reality
10x your content marketing! Fully automate your outbound with 10 sub-agents!
I know you've seen the posts on LinkedIn promising a hands-off process that will simplify your life and the leads will just come pouring in. I know because my feed is filled with them, too.
And, if you set up your Claude or ChatGPT environment right, the emails, case studies, social posts, graphics, and landing page content are all created in about 40 seconds. It's super-efficient and grammatically clean. You spent time setting up guardrails, so you feel confident the sources being cited are all recent and real. No hallucinations here… probably.
Yet, every single piece of content is completely forgettable and bland.
Everything reads the same way. Polished but flat. Professional but interchangeable. You could swap your company name for a competitor and nobody would notice the difference. Your content is brand-blind or brand-bland (I'm workshopping the term, hang in there with me). You know exactly what I mean, though.
If you're there, welcome aboard. You've entered passenger mode.
What's the Difference Between an AI Driver and an AI Passenger?
Greg Shove, CEO of Section, introduced a framework that every marketing leader needs to think deeply about: every knowledge worker using AI is becoming either a driver or a passenger.
Drivers manage their AI. They prompt with intent, verify the output, and make judgment calls about what stays and what gets rewritten. When the AI produces something, they pause, evaluate, and decide before moving forward. This is more than just the human-in-the-loop approach. Human-in-the-loop means someone reviewed it before it shipped. Driving means someone thought about it before it shipped. They questioned the framing, challenged the angle, and decided what the AI got wrong or didn't fully explain.
Review is a checkpoint. Driving is a mindset.
Passengers defer. They paste a prompt, copy the result, and ship it. They prioritize speed. They let the AI make the call and hope that it works.
What makes this tricky to detect is that passengers don’t look like they’re slacking. They look productive. Output is up. Turnaround times are down. Deliverables hit every checkbox. The problem only becomes visible when you step back and realize that the quality of thinking behind the work has quietly eroded.
How Do You Spot Passenger Mode on a Marketing Team?
Passenger mode shows up as technically correct, professionally polished work that says nothing distinctive — and it's hard to spot because the work still ships on time. But there are patterns worth paying attention to:
The content is technically correct but says nothing distinctive. Blog posts hit every SEO checkbox but read like they were written by the same algorithm that wrote your competitor’s blog posts. Because they were.
Social copy is polished but has no point of view. The posts are well-structured and grammatically clean. They also lack the kind of specific, opinionated perspective that makes someone stop scrolling.
Campaign briefs feel like summaries of the last three briefs. The structure is right. The strategy sections are filled in. But the thinking feels recycled rather than original.
First drafts never get meaningfully revised. The person who wrote it can’t articulate what’s missing because they didn’t do the thinking that would help them see it. The AI did the thinking for them, and they accepted the result.
One of the warning signs from Shove’s original article applies directly here: watch for team members who are suddenly calm and ahead of schedule when they used to scramble. If throughput went up but the work all sounds the same, it's likely cognitive outsourcing, not efficiency.
It's a Quality Problem Masquerading as a Technology Problem
The problem isn't AI — it's what happens to human judgment when AI delivers a good-enough first draft 90% of the time.
What happens is that people stop checking the work. Not out of carelessness, but because the output is "good enough" so often that the habit of critical evaluation fades. And in marketing, that erosion shows up in a few very specific ways:
Brand voice flattens. AI defaults to a middle-of-the-road professional tone. When your team stops rewriting AI output in your actual voice, everything starts sounding like “default AI.” Multiply that across an industry and you get what we’re already seeing all across LinkedIn (and everywhere else): a sea of sameness.
Strategic thinking weakens. There's a difference between a strong strategy and a plausible-sounding one. AI is very good at producing the latter. When your team can't tell the difference, they stop pushing for the former.
Your quality filter disappears. If the people producing the work can’t identify the gap between “fine” and “good,” you’ve lost your internal quality standard. And you won’t see it in output metrics because the work still looks professional. Stanford researchers call this “workslop”: polished output that looks finished but lacks substance. It’s the B2B marketing equivalent of an empty suit.

So what does it actually look like to “drive” AI on a marketing team?
Drivers use AI to generate raw material, then rewrite with their own voice and perspective. The AI draft is a starting point, not the final version. They add the specific industry knowledge, the opinionated framing, and the distinctive language that AI can’t produce on its own.
Drivers fact-check, push back on generic framing, and add specificity. When AI suggests "innovative solutions that drive results" (cue the snoring), a driver rewrites it to say something concrete and verifiable. They know that specificity builds credibility and vagueness destroys it.
Drivers know when to turn AI off. Brand voice development, strategic positioning, and anything requiring taste or judgment about what not to say requires human thinking that AI can’t replicate. Drivers recognize those moments and do the work themselves.
Drivers treat the output as a starting point for thinking. They read the AI’s draft and ask: what’s missing? What would I (or the brand) say differently? Where is this playing it safe when we should be taking a position? The draft accelerates their process, but their judgment shapes the final product.
To demonstrate the difference, take a straightforward task like writing a case study introduction:
Passenger version: “Company X partnered with us to transform their digital presence, resulting in significant improvements across key metrics and positioning them as an industry leader.”
Driver version: “Company X came to us with five disconnected brands and a website that hadn’t been updated since their last acquisition. Within months of launch: 10.4% increase in monthly users, 11% boost in engaged organic visits, and 16.4% more events per session.”
Same task. Same AI as a starting point. Completely different output. Which one sounds more trustworthy?
| Traits | AI Driver | AI Passenger |
| --- | --- | --- |
| Uses AI for | Generating a starting point for thinking. | One-shot prompting and final-product delivery. |
| Brand voice | Rewrites AI output in their own/brand voice. | Thinks "this looks pretty good" and ships the default AI tone. |
| Quality check | Adds original perspective and unique examples. | Recycles plausible-sounding strategy and broad claims without citation. |
| When AI is down | "So what?" | "So, what now?" |
How Should Marketing Leaders Evaluate Their Team's AI Use?
If you manage a team that uses AI daily, the question you need to ask yourself is simple: Is AI sharpening your team's judgment or dulling it?
The difference compounds. A team of drivers gets better at using AI every quarter. They learn what to prompt for, what to override, and where human thinking adds the most value. A team of passengers gets more dependent. One Claude outage sends them into a tailspin, and they go dark waiting for it to come back. Their ability to evaluate and improve AI output weakens with each cycle.
This isn’t about banning AI or adding more review layers. It’s about whether the humans on your team are still doing the cognitive work that makes marketing effective.
Here’s a simple diagnostic: Look at the last five deliverables your team produced with AI assistance. Can you tell which parts reflect human judgment and which were generated? If everything reads the same way, with the same tone and the same level of generic competence, that’s worth paying attention to. And no, using an em dash is not a tell-tale sign of AI-generated content. It is proper punctuation that AI tends to use a lot — so do many human writers.
Because your competitors' AI has access to the same training data, the same prompts, and the same patterns as yours, the only differentiator left is the quality of human thinking your team brings to the table.
The only truly sustainable advantage in AI-assisted marketing is the quality of human judgment applied to AI output. Make sure your team is still bringing it.

FAQ
What is an AI Driver vs. AI Passenger?
An AI driver uses AI as a starting point — they prompt with intent, evaluate the output, and rewrite with their own judgment, voice, and expertise. An AI passenger copies the AI's output and ships it with minimal revision. The framework, introduced by Greg Shove of Section, describes a growing divide in how knowledge workers use AI — one that directly impacts content quality, brand voice, and strategic thinking.
How can you tell if your marketing team is over-reliant on AI?
Look at the last five deliverables your team produced with AI. If they all read the same way — same tone, same structure, same level of generic competence — that's a sign. Other red flags: team members who are suddenly ahead of schedule but producing interchangeable work, first drafts that never get meaningfully revised, and campaign briefs that feel like summaries of the last three briefs rather than original thinking.
What is workslop in B2B marketing?
Workslop is a term coined by Stanford researchers to describe AI-generated output that looks polished and professional but lacks the substance to meaningfully advance a task. In B2B marketing, workslop shows up as blog posts that hit every SEO checkbox but say nothing distinctive, social copy with no point of view, and case studies filled with vague claims like "significant improvements across key metrics" instead of specific results.
How can you maintain your brand voice when using AI?
Treat every AI draft as raw material, not a finished product. AI defaults to a middle-of-the-road professional tone — what we call "default AI." To maintain your brand voice, rewrite AI output in your actual voice, add industry-specific language and opinionated framing the AI can't generate on its own, and know when to turn AI off entirely. Brand voice development, strategic positioning, and anything requiring judgment about what not to say still requires human thinking.
What are the signs your team is in AI Passenger mode?
Five patterns to watch for:
Content is technically correct but says nothing distinctive.
Social copy is polished but has no point of view.
Campaign briefs feel like summaries of previous briefs.
First drafts never get meaningfully revised because the writer can't articulate what's missing.
Throughput went up but everything sounds the same — that's not efficiency, it's cognitive outsourcing.