AI has made marketing drafts cheap, but it has made approved marketing more obviously expensive.
This is the part most AI forecasts got wrong. They treated marketing as a production problem. It is not. Marketing is a trust, timing, taste, and distribution problem with production attached.
The first wave of generative AI attacked the visible work. Blog posts. Landing pages. Ad variants. Email sequences. LinkedIn posts. Sales scripts. It collapsed drafting latency. A blank page that used to take four hours now takes four minutes.
That is real productivity. It is also not the same as business output.
The useful distinction is simple: AI produces plausible artifacts. Companies need accepted work. Accepted work is accurate, on brand, strategically aligned, legally safe, usable by sales, relevant to customers, fitted to the channel, and tied to a measurable business outcome.
That conversion from artifact to accepted work is the last mile. It is where most of the cost still lives.
The 80 Percent Is Drafting Latency
The common line is that AI gets you 80 percent of the way there instantly, then the final 20 percent takes 80 percent of the time. The line is directionally right. The explanation is usually wrong.
AI does not complete 80 percent of the work. It completes 80 percent of the visible artifact.
In a writing task studied by MIT researchers, ChatGPT reduced completion time by about 40 percent and increased evaluated quality by 18 percent. That is meaningful. But the tasks did not require deep company context, precise factual accuracy, customer sensitivity, legal review, or downstream stakeholder approval.
That matters because most commercial marketing does.
A landing page is not a document. It is an interface between a business model and a buyer's hesitation. A case study is not a story. It is a negotiated asset involving a customer, legal boundaries, proof, numbers, brand risk, and sales utility. An ad is not a clever line. It is a controlled test inside a budget allocation system.
AI can draft all of these. It cannot automatically make them safe to ship.
The Bottleneck Moved
Before AI, the bottleneck was often creation. A marketer needed to write the first draft, build the structure, generate options, and polish the language. That consumed time.
After AI, the bottleneck moves to specification and judgment.
The team now has to answer harder questions earlier:
- Who exactly is this for?
- What belief are we trying to change?
- Which claim can we prove?
- Which words would legal reject?
- Which objection does sales hear every week?
- What is the difference between sounding polished and sounding like us?
- What would make this asset worth approving?
These questions existed before. AI just removes the excuse of slow drafting and exposes them.
Bad brief plus human writer creates delay. Bad brief plus AI creates fast mediocrity. The output arrives quickly, but the strategic debt remains. Someone still has to decide what the work is supposed to do.
Generation Is Cheap. Selection Is Not.
AI lowers the cost of producing options. That sounds like an obvious win. It is, up to a point.
But markets do not reward option volume. They reward selection quality.
If a team can generate 50 headlines in 30 seconds, the scarce resource becomes the person who knows which three are worth testing. If a model can produce six positioning territories, the value shifts to the executive who can identify the one that fits the company's wedge, pricing power, and sales motion.
This is a classic substitute-and-complement dynamic. AI substitutes for low-context drafting and complements high-context decision making. It does not replace taste, because taste is not decorative. Taste is compressed market judgment.
More options can also increase cost. Review fatigue rises. Version sprawl expands. Stakeholders debate language they did not ask for. Teams mistake abundance for progress. Production accelerates while approval slows down.
This is why AI can make an individual feel faster while the organization does not ship much more.
The Jagged Frontier Is the Operating Map
The strongest evidence for AI in knowledge work also contains the warning label.
In the BCG, Harvard, MIT, and Wharton jagged frontier experiment, GPT-4 improved speed by more than 25 percent, quality by more than 40 percent, and completion by more than 12 percent for tasks inside the AI frontier. But on a business analysis task outside the frontier, AI users were 19 percentage points less likely to reach the correct answer while still producing better-looking recommendations.
That is the commercial danger: wrong work with premium formatting.
Inside the frontier, AI is strong at summarization, rewriting, ideation, pattern-based copy, extraction, classification, and first-pass variants. Outside the frontier, it gets shaky: ambiguous strategy, contradictory inputs, tacit customer knowledge, novel positioning, high-stakes claims, and internal politics.
Marketing is full of outside-frontier work disguised as writing.
A founder wants a point of view that does not sound like category consensus. A sales team needs messaging that handles the real objection buyers will not put in a survey. A regulated company needs language that sells without triggering compliance risk. A premium brand needs restraint, not volume.
The model can help. It cannot know the tradeoff unless the organization has encoded it.
Verification Debt Is Now a Budget Line
Every generated claim creates verification debt.
A statistic needs provenance. A customer quote needs permission. A product feature needs current accuracy. A competitive comparison needs legal review. A superlative needs evidence. A medical, financial, or security claim needs a different review path than a newsletter intro.
The fluent surface makes this worse. A rough human draft invites scrutiny. A polished AI draft can suppress it. The work looks done, so reviewers lower their guard. That is exactly how risk enters the system.
Research published in Nature and elsewhere has continued to document hallucination as a structural feature of large language models. The practical implication is not that AI is useless. It is that unverified AI output is not a business asset. It is a liability with nice typography.
Companies need mechanical verification, as sketched in code below:
- Extract every factual claim.
- Classify each claim by risk.
- Attach sources or approved internal proof.
- Flag unsupported numbers.
- Flag legal-sensitive language.
- Flag competitor references.
- Flag product statements that need owner approval.
This is not bureaucracy. It is the price of using probabilistic systems inside revenue workflows.
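A minimal sketch of that verification pass, in Python. The `Claim` structure, the sentence-level extraction, and the risk rules are illustrative assumptions, not a description of any particular tool.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                              # the sentence containing the claim
    risk: str = "low"                      # "low", "medium", or "high"
    flags: list[str] = field(default_factory=list)
    source: str | None = None              # approved proof, if any

# Illustrative rules; a real system would encode actual legal and brand policy.
LEGAL_SENSITIVE = ("guarantee", "best-in-class", "#1", "cure", "compliant")
COMPETITORS = ("acme", "rivalco")          # hypothetical competitor names

def verify(draft: str, proof_bank: dict[str, str]) -> list[Claim]:
    """Extract sentences as candidate claims, classify risk, attach proof."""
    claims: list[Claim] = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft.strip()):
        claim = Claim(text=sentence)
        lowered = sentence.lower()
        if re.search(r"\d", sentence):                 # unsupported numbers
            claim.flags.append("number-needs-source")
            claim.risk = "medium"
        if any(term in lowered for term in LEGAL_SENSITIVE):
            claim.flags.append("legal-review")
            claim.risk = "high"
        if any(name in lowered for name in COMPETITORS):
            claim.flags.append("competitor-reference")
            claim.risk = "high"
        claim.source = proof_bank.get(sentence)        # attach approved proof
        if claim.flags and claim.source is None:
            claim.flags.append("blocked-until-sourced")
        claims.append(claim)
    return claims
```

The output is a claim table, not a verdict. The point is that a reviewer audits flagged rows instead of re-reading polished prose with their guard down.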
Review Is Responsibility Transfer
Most leaders underestimate review because they think review means editing.
It does not.
Review is responsibility transfer. The moment a person approves an AI-generated asset, they inherit the risk. They are accountable for the claim, the tone, the timing, the promise, and the downstream consequence.
That is why the last 20 percent feels slow. It contains 100 percent of the accountability.
The reviewer is not asking, 'Can I improve this sentence?' The reviewer is asking, 'Can I stand behind this in front of the CEO, the customer, the regulator, the sales team, and the market?'
AI can reduce review burden only if it packages the evidence with the output. A draft without a source map is work pushed downstream. A draft with claim tables, rationale, risk flags, brand checks, and approval criteria is closer to usable inventory.
The Real Constraint Is Context
Most companies do not have a model problem. They have a context problem.
Their operating knowledge is trapped in Slack threads, meeting memory, sales calls, founder preferences, legal scars, campaign postmortems, and unwritten taste. The AI cannot retrieve what the company never structured.
So it defaults to category average.
That is why so much AI marketing sounds competent and dead. It has grammar but no memory. It has structure but no scar tissue. It knows the market's generic language, not the company's earned truth.
The fix is not a better prompt. It is a context layer, sketched in code below:
- Positioning documents.
- ICP profiles.
- Approved claims and proof.
- Banned phrases.
- Customer quotes.
- Sales objections.
- Competitor maps.
- Voice examples and anti-examples.
- Legal rules.
- Channel playbooks.
- Campaign learnings.
If your context is tribal, your AI will be generic.
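One way to make this concrete is to treat the context layer as structured data that every generation call must load, not prose living in someone's head. A minimal sketch; every field name here is an illustrative assumption:

```python
from dataclasses import dataclass, field

@dataclass
class ContextLayer:
    """Everything the model cannot know by default about this company."""
    positioning: str                         # the one-paragraph wedge statement
    icp_profiles: list[str]                  # who the work is for, in their words
    approved_claims: dict[str, str]          # claim -> proof or source
    banned_phrases: list[str]                # words legal or brand has rejected
    customer_quotes: list[str]               # permissioned quotes only
    sales_objections: list[str]              # what buyers actually say on calls
    competitor_notes: dict[str, str]         # competitor -> how we differ
    voice_examples: list[str]                # on-voice samples
    voice_anti_examples: list[str]           # polished-but-wrong samples
    legal_rules: list[str]                   # hard constraints by claim type
    channel_playbooks: dict[str, str]        # channel -> format and constraints
    campaign_learnings: list[str] = field(default_factory=list)

def assemble_context(ctx: ContextLayer, channel: str) -> str:
    """Build the retrieved context that precedes any drafting prompt."""
    claims = "\n".join(f"- {c} ({p})" for c, p in ctx.approved_claims.items())
    banned = "\n".join(f"- {p}" for p in ctx.banned_phrases)
    playbook = ctx.channel_playbooks.get(channel, "none on file")
    return "\n\n".join([
        f"POSITIONING:\n{ctx.positioning}",
        f"APPROVED CLAIMS:\n{claims}",
        f"NEVER USE:\n{banned}",
        f"CHANNEL RULES ({channel}):\n{playbook}",
    ])
```

The schema is not the point. The point is that every field maps to a line in the list above, and every empty field is a place where the model will default to category average.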
Copilots Help People. Systems Change Throughput.
The market bought copilots because they were easy to understand. A worker gets an assistant. Drafting gets faster. The productivity story is clean.
The data is more mixed.
GitHub Copilot's controlled experiment found developers completed a coding task 55.8 percent faster. In another setting, METR's 2025 randomized trial found experienced open-source developers working on familiar repositories were roughly 19 percent slower with frontier AI tools. The issue was not that AI could not code. It was that review, cleanup, prompting, and missing repository context consumed the gains.
Customer support shows a similar pattern. A generative AI assistant increased productivity by about 14 percent on average, with larger gains for less-experienced workers and little or negative effect for the most experienced workers.
This is the pattern investors should care about. AI is not a uniform labor replacement. It changes the shape of the work. It compresses some tasks, expands others, and shifts value toward people and systems that can define, verify, integrate, and learn.
Copilots improve individual throughput. Production systems improve organizational throughput.
A production system has structured inputs, reusable context, workflow routing, review gates, memory, analytics, and governance. It does not just help someone write. It helps the company ship approved work with less drag.
The Last Mile Is Where Brand Lives
AI raises the floor. That is good for weak teams and dangerous for average teams.
When everyone can produce clean copy, clean copy loses value. The middle gets compressed. The market gets more fluent and less distinctive.
BCG research on AI-assisted ideation found output quality improved but variation narrowed. That matches what buyers already feel. More content is being produced, but less of it carries a sharp reason to care.
Brand advantage comes from what the model does not know by default: proprietary point of view, customer intimacy, founder conviction, taboo opinions, timing, product reality, and the specific tradeoffs a company is willing to make.
The last mile is not cosmetic. It is where differentiation enters.
A generic AI draft says, 'Streamline your workflow and unlock growth.' A company with real positioning says, 'Cut review cycles from six to two by forcing every claim through a proof bank before copy reaches legal.' One is language. The other is a commercial mechanism.
The New Metrics
Most AI marketing dashboards measure the wrong thing.
Words generated is not a business metric. Prompts run is not a business metric. Assets created is not a business metric if the assets do not get approved, used, trusted, or tied to revenue.
The better metrics are closer to the bottleneck (a computation sketch follows):
- Approved assets per week.
- Review cycles per asset.
- Human edit minutes.
- Claim error rate.
- Legal rejection rate.
- Stakeholder approval time.
- Brand consistency score.
- Sales usage.
- Conversion lift.
- Learning velocity from campaign to next brief.
This changes the budget conversation. The question is not, 'Can AI reduce content costs?' It is, 'Can AI increase approved output per dollar while reducing risk and improving market feedback?'
That is a better question. It points to systems, not toys.
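Most of these fall out of a simple asset-event log rather than from the AI tool itself. A minimal sketch, assuming a hypothetical event schema:

```python
from collections import defaultdict

# Hypothetical event log: one record per asset, written at review time.
events = [
    {"asset": "lp-014", "week": "2025-W06", "cycles": 2, "edit_minutes": 35,
     "claim_errors": 1, "legal_rejected": False, "approved": True},
    {"asset": "case-07", "week": "2025-W06", "cycles": 5, "edit_minutes": 140,
     "claim_errors": 3, "legal_rejected": True, "approved": False},
]

def bottleneck_metrics(log: list[dict]) -> dict:
    approved_per_week: dict[str, int] = defaultdict(int)
    for e in log:
        if e["approved"]:
            approved_per_week[e["week"]] += 1
    n = len(log)
    return {
        "approved_per_week": dict(approved_per_week),
        "avg_review_cycles": sum(e["cycles"] for e in log) / n,
        "avg_edit_minutes": sum(e["edit_minutes"] for e in log) / n,
        "claim_error_rate": sum(e["claim_errors"] for e in log) / n,
        "legal_rejection_rate": sum(e["legal_rejected"] for e in log) / n,
    }

print(bottleneck_metrics(events))
```

None of this requires new tooling. It requires logging approvals and edits the way engineering teams log deploys.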
How the Workflow Should Actually Work
The wrong request is: 'Write a landing page.'
The right workflow is:
- Normalize the brief.
- Retrieve ICP, offer, proof, voice, and product context.
- Extract buyer pains and objections.
- Generate positioning angles.
- Score angles against a rubric.
- Draft claims with required proof.
- Create variants by segment and channel.
- Extract and verify every claim.
- Run brand, conversion, and compliance checks.
- Send only the right risk tier to human review.
- Publish through the correct system.
- Capture performance and edits back into memory.
This is slower than typing one prompt. It is faster than pretending one prompt is a workflow.
The goal is not human-in-the-loop everywhere. That is how companies create expensive review queues. The goal is human-at-the-right-loop. Low-risk work gets light review. Medium-risk work gets AI checks and human spot checks. High-risk work gets expert approval. Novel strategy stays human-led with AI support.
That is how AI becomes operating leverage instead of operational noise.
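A minimal sketch of that human-at-the-right-loop routing. The tiers, asset types, and reviewer assignments are illustrative assumptions, and the flags are the kind produced by the verification sketch earlier:

```python
HIGH_RISK_FLAGS = {"legal-review", "competitor-reference", "medical-claim"}
HIGH_RISK_ASSETS = {"case_study", "pricing_page", "comparison_page"}

def risk_tier(flags: list[str], asset_type: str) -> str:
    """Classify an asset by the riskiest thing it contains, not its format."""
    if asset_type in HIGH_RISK_ASSETS or set(flags) & HIGH_RISK_FLAGS:
        return "high"
    if flags:                      # sourced numbers, brand warnings, and so on
        return "medium"
    return "low"

def route(flags: list[str], asset_type: str) -> str:
    tier = risk_tier(flags, asset_type)
    if tier == "high":
        return "expert approval: legal + asset owner"
    if tier == "medium":
        return "AI checks pass, then human spot check"
    return "light review: ship on automated brand check"

# A clean social post ships with light review; a competitor comparison
# goes to experts no matter how polished the draft looks.
print(route([], "social_post"))
print(route(["competitor-reference"], "comparison_page"))
```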
The Market Expands After the Cleanup
The first phase of AI marketing was about cheaper content. That market gets crowded fast. If the product is more drafts, the buyer will push price down.
The larger market is not draft generation. It is approval-grade marketing infrastructure.
That includes context management, proof libraries, brand governance, claim verification, workflow routing, performance feedback, and memory. These are less flashy than a chatbot. They are also closer to the budget lines that matter: headcount leverage, agency spend, compliance cost, campaign velocity, sales enablement, and revenue conversion.
Enterprises do not buy AI because they want more text. They buy it because they want more trusted work moving through the system with fewer bottlenecks.
RAND has reported high AI project failure rates, with causes that include poor problem framing, bad data, infrastructure gaps, miscommunication, and overestimated model capability. MIT NANDA reporting on stalled GenAI pilots points to a similar issue: tools often fail to adapt to workflows, retain feedback, or integrate into operations.
The lesson is direct. AI projects fail when they automate artifacts instead of workflows.
The Strategic Takeaway
AI did not remove marketing work. It moved the work.
From drafting to briefing. From writing to verifying. From producing to selecting. From execution to orchestration. From individual skill to operating system design.
This is good news for serious teams. The cheap layer gets commoditized. The valuable layer becomes clearer.
The companies that win will not be the ones with the most prompts. They will be the ones with the best context, the tightest feedback loops, the clearest approval criteria, and the strongest taste layer.
AI makes the first draft instant. The market still pays for the last mile.
FAQ
What is the last mile of AI marketing?
It is the work required to turn AI-generated drafts into approved, trusted, on-brand, legally safe, and commercially useful marketing assets.
Why does AI make drafts faster but not always campaigns faster?
AI reduces drafting time, but campaigns still require briefs, proof, stakeholder approval, legal checks, publishing workflows, measurement, and performance feedback.
What should companies measure instead of AI content volume?
Measure approved assets per week, review cycles, claim error rate, legal rejection rate, stakeholder approval time, sales usage, conversion lift, and learning velocity.
How do teams reduce AI review burden?
They need structured context, claim extraction, source mapping, risk flags, approval rubrics, and feedback capture so reviewers are not forced to audit every output from scratch.