
May 19, 2025
The 80/20 Problem Nobody Talks About
AI gets you 80% of the way in 20% of the time. The other 20% is where the actual job lives.
Every AI demo you've ever seen is a lie of omission.
Not an intentional lie, necessarily. More like a magician's misdirection. Look at the speed. Look at the volume. Look at how fast I just generated fifty social media assets from a single prompt. Isn't that incredible?
It is incredible. And it is roughly 20% of the job.
I run AI production pipelines at Optix (Publicis Groupe) in Dubai for brands like e&, GMC, Saudia Airlines, and Philip Morris International. Every day, our team generates, reviews, refines, and delivers AI-assisted creative at scale. We've been doing this for months. Not experimenting. Not prototyping. Delivering. And the single most important thing I've learned is something every AI vendor carefully avoids discussing.
The 80/20 problem.
AI gets you 80% of the way to a finished deliverable in about 20% of the time. That first 80% feels like magic. You prompt, you generate, you iterate, and suddenly you have something that looks like a real asset. The remaining 20% takes 80% of the effort. And that remaining 20% is everything the client actually pays for.
The Demo vs. the Delivery
Here's what an AI tools demo looks like: someone types a prompt, hits generate, and thirty seconds later there's an image or a video clip on screen. The audience gasps. The presenter talks about efficiency gains. Someone in the back is already calculating headcount reductions.
Here's what actually delivering that same concept to a client looks like.
You generate the base asset. Good. Now someone with actual art direction skills needs to evaluate whether it matches the brand guidelines. Does the color palette align? Is the composition right for the intended placement? Does it read correctly at mobile sizes? Are there artifacts in the areas that matter?
Then the asset needs compositing. AI-generated images almost never come out of Midjourney V7 or Flux ready to place in a layout. There are elements to isolate, backgrounds to extend or replace, product shots to composite in (because the AI version of the product is never accurate enough), and text to add in a way that doesn't look like an afterthought.
Then color grading. Then format adaptation across every required size and placement. Then legal review, because when you're generating images for Philip Morris International or Saudia Airlines, "pretty close" is not a standard anyone accepts.
The generation itself? That was the easy part. The work that makes it a deliverable? That's the same work it's always been.
The Hidden Costs Nobody Puts on the Slide
When someone pitches AI production to a C-suite executive, the deck is full of efficiency metrics. "Generate 50 variations in minutes." "Reduce production timelines by 90%." "Create at the speed of thought."
What the deck never includes is the line-item reality.
Compute costs. Running serious AI production requires serious hardware. If you're running local generation through ComfyUI (and if you're handling client IP properly, you need local generation), you're looking at GPU infrastructure that costs $2,000 to $10,000 per month depending on your throughput needs. That's not a one-time investment. That's ongoing. GPUs degrade. Software updates break pipelines. Someone has to maintain the infrastructure.
Subscription stack. Runway Gen-3 Alpha for video generation. Midjourney for certain image generation tasks. HeyGen for avatar work. ElevenLabs for voice synthesis. Suno or Udio if audio is in scope. Each of these is a separate subscription, each with its own pricing tier, each with its own usage limits. For a production team, you're looking at $500 to $2,000 per seat per month across the stack. Multiply that by your team size.
New roles. Someone has to run these pipelines. Not a junior designer who watched a YouTube tutorial. Someone who understands prompt engineering, model selection, workflow architecture, and the specific failure modes of each tool. In our industry, these people are being called AI producers or AI pipeline leads. The salary range is $80K to $150K depending on market and experience. That's a role that didn't exist in your budget two years ago.
QA overhead. AI-specific quality control is a new cost center. You need people checking for hallucinated details, visual artifacts, brand safety violations, cultural insensitivity, and the dozen other ways generative AI can produce output that looks fine at a glance and catastrophic under scrutiny. This isn't optional. One wrong detail in a generated image for a major brand becomes a PR crisis. The QA process for AI-generated assets is, in many ways, more intensive than traditional QA because the failure modes are more unpredictable.
Legal review. Intellectual property questions around AI-generated content are still being litigated globally. Every major client we work with has their own legal stance on generative AI, and those stances are evolving. Some require disclosure. Some restrict certain tools. Some need approval workflows that add days to a timeline. This is a real cost in time and legal counsel that doesn't appear in the "AI will make everything faster" narrative.
Add all of this up and the per-asset cost of AI production is lower than that of traditional production. But it's not as low as the pitch deck suggests. Not even close.
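To make the addition concrete, here is a back-of-envelope sketch for a hypothetical five-person team. Every figure is an assumption taken from the ranges quoted above (midpoints, roughly), not a real budget, and it deliberately omits QA headcount and legal counsel, which vary too much to estimate honestly.

```python
# Back-of-envelope monthly overhead for a hypothetical 5-person AI production team.
# All figures are illustrative midpoints of the ranges discussed above.

TEAM_SIZE = 5

gpu_infrastructure = 6_000          # midpoint of the $2K-$10K/month range
subscriptions_per_seat = 1_250      # midpoint of the $500-$2K/seat/month range
ai_producer_salary = 115_000 / 12   # midpoint of the $80K-$150K annual range

monthly_cost = (
    gpu_infrastructure
    + subscriptions_per_seat * TEAM_SIZE
    + ai_producer_salary
)

print(f"Estimated monthly overhead: ${monthly_cost:,.0f}")
# → Estimated monthly overhead: $21,833
```

Call it roughly a quarter of a million dollars a year before a single asset ships, and before QA and legal review are counted.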
The Iteration Tax
Here's a counterintuitive truth that took me months to fully understand: more output options don't make decisions faster. They make decisions slower.
When a traditional production workflow put three concepts in front of a client, the client picked one. The decision space was bounded. Three options. Choose.
When an AI workflow puts fifty variations in front of a client, something strange happens. The client doesn't feel liberated by the abundance of choice. They feel overwhelmed. And then they start Frankensteining. "I like the composition of version 12, but the color palette of version 37, and can we try the sky from version 4 with the product placement from version 23?"
That's not faster. That's slower. And it's not the client's fault. You gave them fifty options. Of course they're going to try to optimize across all fifty.
I call this the iteration tax. The more variations you generate, the more time you spend in review cycles. The selection process expands to fill the available options. And every additional round of "can we try one more version that combines X and Y" costs time, compute, and team attention.
The discipline required to use AI effectively in production is, paradoxically, restraint. Generate a lot, but curate ruthlessly before anything reaches the client. Present three to five strong options, not fifty mediocre ones. Use the volume internally as a creative exploration tool, not as a client presentation.
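The iteration tax can be pictured with a toy model in the spirit of Hick's law: decision time grows roughly logarithmically with the number of options, and every "combine version X with version Y" round adds a full review cycle. The parameters below are invented for illustration, not production data.

```python
import math

def review_hours(options, base_hours=1.0, frankenstein_rounds_per_10=1):
    """Toy model: review time as a function of options presented.

    Decision time scales with log2(options + 1) (Hick's law flavor),
    and every ~10 options tends to trigger another combine-and-retry round.
    All parameters are invented for illustration.
    """
    decision = base_hours * math.log2(options + 1)
    extra_rounds = (options // 10) * frankenstein_rounds_per_10
    return decision + extra_rounds * base_hours

for n in (3, 5, 50):
    print(f"{n:>2} options -> ~{review_hours(n):.1f} review hours")
```

Under these made-up parameters, three options cost about 2 review hours while fifty cost over 10. The exact numbers don't matter; the shape does: presenting more options buys sublinear creative benefit at superlinear review cost.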
We learned this the hard way. More than once.
What the 80% Actually Looks Like
If you've never seen raw AI output in a production context, let me paint the picture.
An image comes out of Flux or Midjourney. It looks impressive at first glance. Scroll through social media and it might even get likes. But zoom in. Look at the details.
The product has the wrong number of buttons. Or the logo is slightly distorted. Or the skin texture on the model has that waxy, too-smooth quality that screams "generated." The background has a repeated pattern if you look carefully. The lighting is internally inconsistent: shadows falling in two directions. Text, if there is any, is gibberish or subtly misspelled.
For a social media post from a personal account? Nobody cares. For a billboard campaign for a luxury automotive brand? Every single one of those issues is a showstopper.
Video is worse. Runway Gen-3 Alpha and Kling 2.0 can produce impressive short clips. But "impressive for AI" and "ready for a client" are separated by a canyon of post-production work. Temporal consistency issues (objects that subtly shift between frames), motion artifacts, resolution limitations, and the general problem that AI video still can't reliably render human hands, text, or complex physical interactions.
That 80% is raw material. It's a starting point. Calling it a finished product is like calling lumber a house.
What the 20% Actually Requires
The remaining 20% is everything that separates a generated image from a brand deliverable. And it requires the same human skills it always required.
Art direction. Knowing whether a composition works for the intended context. Understanding visual hierarchy. Making the subjective judgment calls that turn "technically correct" into "actually good." No AI model has taste. None of them understand why a particular crop creates tension or why this shade of blue feels premium while that shade feels clinical. These are human decisions, and they're the decisions that determine whether work is mediocre or excellent.
Brand understanding. Not just the brand guidelines document (any AI can read a PDF). The unwritten stuff. The brand personality that exists in the collective memory of everyone who's worked on the account. The things the CMO hates but has never formally documented. The visual language that's evolved over campaigns. A model doesn't know any of this. A good creative does.
Cultural sensitivity. We produce work across the Middle East. The cultural considerations in this region are nuanced, contextual, and high-stakes. Color symbolism differs across markets. Gesture and posture carry different meanings. Religious and social sensitivities require deep understanding that can't be captured in a prompt. Getting this wrong isn't just embarrassing. In some markets, it's legally actionable.
Craft. The actual finishing work. Precise masking. Clean compositing. Proper color science. Typography that breathes. Layout that guides the eye. These are skills that take years to develop, and AI hasn't automated a single one of them. It's generated the raw material faster. The craft is the same.
The Honest Math
Here's what the actual efficiency gain looks like, based on months of real production data at Optix.
For certain asset types (social media statics, presentation visuals, concept exploration, early-stage moodboarding), AI reduces production time by roughly 50%. Not 90%. Not "instant." Fifty percent. That's a real number. That's meaningful. A social media campaign that used to take two weeks of production now takes one. That's valuable.
For other asset types (hero campaign visuals, video content, anything requiring photorealistic product accuracy, anything with complex brand compliance requirements), the efficiency gain is smaller. Maybe 20 to 30%. The generation is faster, but the refinement takes as long as it ever did because the standards haven't changed.
For some work, AI adds time. When teams chase the perfect generation instead of building from a good-enough base, or when the iteration tax spirals because everyone wants to see "just one more version," or when QA catches a subtle hallucination at the last stage and the asset needs to be regenerated and re-refined from scratch. This happens more often than anyone in the AI space wants to admit.
The aggregate, across all the work we produce? AI saves us roughly 30 to 40% on production time. That's real. I'm not dismissing it. That margin is the difference between hitting deadlines and missing them, between profitability and loss on certain projects.
But it's not the revolution the pitch decks describe. It's an efficiency tool. A powerful one. Used correctly, it transforms what a small team can produce. Used incorrectly, with unrealistic expectations and no understanding of the 80/20 split, it becomes a machine for generating beautiful rough drafts that nobody has time to finish.
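The aggregate figure can be sketched as a weighted average over a hypothetical asset mix. The per-category savings rates are the ones stated above; the mix weights are illustrative assumptions, not our actual project breakdown.

```python
# Weighted time-savings estimate over a hypothetical asset mix.
# Savings rates come from the figures above; category shares are assumed.
asset_mix = {
    # category: (share of total work, fraction of time saved with AI)
    "social statics / moodboards / concepts": (0.45, 0.50),
    "hero visuals / video / high compliance": (0.40, 0.25),
    "cases where AI adds time":               (0.15, -0.10),
}

aggregate = sum(share * saved for share, saved in asset_mix.values())
print(f"Aggregate time saved: {aggregate:.0%}")
# → Aggregate time saved: 31%
```

Shift the assumed mix toward hero and video work and the aggregate drops fast, which is exactly why a portfolio-level savings claim means nothing without knowing what kind of work the portfolio contains.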
Why This Matters Right Now
Here's why I'm writing this in May 2025 and not keeping it as internal knowledge.
The industry is at an inflection point. Agencies are pricing AI production work based on the vendor pitch, not the production reality. They're promising clients AI-speed delivery at AI-reduced rates, and then their teams are drowning in the 20% that takes 80% of the time. Timelines slip. Quality drops. People burn out. The client wonders why the AI-powered agency is slower and more expensive than promised.
Meanwhile, the agencies that understand the 80/20 ratio are pricing correctly. They're using AI to increase quality and throughput without making impossible promises. They're staffing appropriately for the post-production reality. They're setting client expectations based on what AI actually delivers, not what the demo suggested it could deliver.
The gap between these two types of agencies is going to widen fast.
If you're a creative director, know that AI doesn't replace your team. It changes where your team spends its time. Less time on initial asset generation. More time on refinement, QA, and the strategic decisions that make work effective. Plan for that.
If you're running an agency, do the real math. Factor in the infrastructure, the subscriptions, the new roles, the QA overhead, and the iteration tax. Then price accordingly. Your competitors who don't do this math will underbid you in the short term and collapse in the medium term.
If you're a client buying AI-powered creative services, ask hard questions. "How much of this is generated and how much is human-refined?" "What's your QA process for AI-specific issues?" "What happens when the generation doesn't match the brief?" The agencies that can answer these questions honestly are the ones that will deliver.
The 80/20 split isn't a problem to be solved. It's a reality to be managed. The tools will keep improving. The 80% will get better. But the 20% that requires human judgment, taste, cultural understanding, and craft? That's not going anywhere. And it's the part that actually matters.
Omar Kamel is AI Creative & Production Lead at Optix (Publicis Groupe), Dubai.