
November 19, 2025
The Tools That Earned Their Place
A year-end assessment of which AI creative tools survived the hype cycle, which didn't, and what the pattern tells us.
I've been building AI production pipelines for advertising clients since early 2023. In that time, I've watched tools launch to breathless coverage, raise absurd amounts of money, and then quietly disappear. I've also watched tools that nobody wrote breathless coverage about become load-bearing infrastructure in my daily work.
This is my 2025 year-end assessment. Not a listicle. Not a ranking. Just an honest accounting of what proved itself in actual production and what didn't.
The Survivors
These tools earned their place. Not because they had the best marketing or the biggest launch events, but because they solved real problems for people doing real work.
ComfyUI
If I had to point to one tool that defined my production workflow in 2025, it's ComfyUI.
ComfyUI is not easy. It's a node-based interface for building image generation workflows, and it has a learning curve that sends most casual users running back to simple text boxes. That's a feature, not a bug.
What ComfyUI does that nothing else does is treat image generation as a reproducible engineering process. You build a workflow once. You save it. You version it. You share it with your team. You run it again with different inputs and get predictable results. In advertising production, where a client might need 47 variations of a campaign asset across different markets, this isn't a nice-to-have. It's the whole game.
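To make that concrete, here's a minimal sketch of what "run it again with different inputs" looks like in practice. It assumes ComfyUI's API-format workflow JSON (a dict of node IDs mapping to class types and inputs) and its local HTTP endpoint; the node IDs, prompt text, and market list are hypothetical stand-ins, and a real exported graph has far more nodes.

```python
import copy

def patch_workflow(workflow: dict, node_id: str, field: str, value) -> dict:
    """Return a copy of an API-format ComfyUI graph with one node input changed."""
    wf = copy.deepcopy(workflow)
    wf[node_id]["inputs"][field] = value
    return wf

# Minimal stand-in for a saved graph; a real export has many more nodes.
base = {
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "master campaign asset, studio lighting",
                     "clip": ["4", 1]}},
}

markets = ["UAE", "KSA", "Egypt"]
variants = [patch_workflow(base, "6", "text",
                           f"campaign asset, localized for {m}")
            for m in markets]

# Each variant would then be submitted to a running ComfyUI instance, e.g.:
# requests.post("http://127.0.0.1:8188/prompt", json={"prompt": variant})
```

The point isn't the code; it's that the workflow is data. Version the JSON, diff it, and the 47th variation is as predictable as the first.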
The ecosystem matured significantly this year. ControlNet for structural guidance. IPAdapter for style and character consistency. The Impact Pack for detection and segmentation within workflows. The LoRA ecosystem for fine-tuned control over specific aesthetics. Each of these tools slots into ComfyUI's node graph like a LEGO brick, and the combinations are where the real production value lives.
ComfyUI won because it respected its users as operators, not consumers. It gave skilled people more control instead of less. That distinction matters more than any other factor on this list.
Midjourney
Midjourney had a rough stretch. Through most of 2024, it felt like the tool was coasting on its early reputation while competitors caught up. The consistency problem (getting the same character, style, or scene across multiple generations) was a real limitation for production work.
V7 changed that. The model improvements were significant, but more importantly, Midjourney finally started addressing the workflow issues that production users had been screaming about. Character references, style references, and better parameter control turned it from a tool you used for exploration into one you could use for art direction.
It's still not a pipeline tool. You're not going to build automated workflows around Midjourney the way you can with ComfyUI. But for the ideation and art direction phase of a project, nothing else matches it for speed and quality. I use it to explore directions before committing to a production pipeline, and it earns its place in that role every single day.
Flux (Black Forest Labs)
The Flux story is one of the best in AI this year, partly because it emerged from one of the worst.
When Stability AI imploded (more on that below), there was a real fear that the open-source image generation ecosystem would collapse with it. The core researchers who had built Stable Diffusion left and founded Black Forest Labs. Flux 1.0 was their answer.
Flux 1.0 didn't just replace SDXL. It surpassed it. Better coherence, better text rendering, better prompt following, and a licensing model that made commercial use viable. It became the default base model for local generation, for ComfyUI workflows, for fine-tuning, for the entire open-weight ecosystem that depends on having a strong foundation model.
The significance goes beyond image quality. Flux proved that open-weight models could be commercially sustainable. That a small, focused team could build a model that competes with anything from the big labs. That the community-driven ecosystem didn't need a single corporate patron to survive.
Runway
Runway deserves credit for something rare in this space: restraint.
While other companies were making wild claims about AI replacing filmmakers, Runway shipped products. Gen-2 was useful. Gen-3 Alpha was better. Each version delivered measurable improvements in quality, consistency, and control without pretending it was something it wasn't.
The motion brush tool, camera controls, and style references in Gen-3 Alpha turned AI video from a novelty into something you could actually cut into a production. Not for everything. Not for hero shots in a Super Bowl ad. But for secondary footage, for mood boards, for pre-visualization, for social content? Yes. Every day.
Runway earned trust by being honest about what it could and couldn't do. In a market full of vaporware demos, that honesty became a competitive advantage.
Kling 2.0
Kling from Kuaishou quietly became one of the most capable video generation tools available. Version 2.0 brought improved motion quality, better prompt adherence, and longer clip durations that made it genuinely competitive with Runway for certain use cases.
What's notable about Kling is the speed of iteration. While Western competitors were releasing carefully staged demos, Kling was shipping updates at a pace that kept changing the conversation about what was possible. The motion quality, in particular, leapfrogged some of the competition.
ElevenLabs
ElevenLabs solved a problem I deal with constantly: multilingual content for Gulf region campaigns.
When you're producing content for brands like e& or Saudia Airlines, you're working across Arabic, English, and sometimes French or Hindi. Traditional dubbing is expensive and slow. ElevenLabs made voice cloning and multilingual synthesis good enough to use in production. Not perfect. But good enough, and getting better with every update.
The consistency of their API, the quality of their voice cloning, and their multilingual capabilities turned them into genuine infrastructure. I don't think about whether to use ElevenLabs for voice work anymore. I just use it.
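What "genuine infrastructure" means in practice: one cloned brand voice, reused across every market, driven by the same API call. Here's a hedged sketch of that pattern; the endpoint shape and multilingual model name reflect ElevenLabs' public HTTP API as I understand it, but the voice ID, key, and script lines are placeholders, not real campaign assets.

```python
def build_tts_job(voice_id: str, text: str,
                  model_id: str = "eleven_multilingual_v2") -> dict:
    """Assemble one text-to-speech request for the ElevenLabs HTTP API."""
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": "YOUR_API_KEY",
                    "Content-Type": "application/json"},
        "body": {"text": text, "model_id": model_id},
    }

# Same cloned brand voice across markets (voice ID is a placeholder).
script = {
    "en": "Fly better, wherever you land.",
    "ar": "سافر بشكل أفضل، أينما حططت.",
}
jobs = [build_tts_job("BRAND_VOICE_ID", line) for line in script.values()]

# Each job would be sent with:
# requests.post(job["url"], headers=job["headers"], json=job["body"])
```

That's the whole localization loop: swap the text, keep the voice. Traditional dubbing can't compete with that turnaround.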
HeyGen
HeyGen followed a similar trajectory. What started as a novelty (talking avatar videos) matured into a production tool for video dubbing and localization.
The lip-sync quality improved dramatically through 2025. The avatar technology got more natural. And the use cases became clearer: corporate communications, training videos, market-specific content adaptation. Not everything. But the things it does, it does well enough that the alternative (reshooting everything) isn't worth the cost.
Adobe Generative Fill
This one doesn't get enough attention because it's boring. Adobe's Generative Fill in Photoshop is the quiet workhorse of my daily workflow. Need to extend a background? Fill in a removed object? Adjust a composition without reshooting?
It's not flashy. Nobody writes breathless articles about it. But it saves me hours every week, it's integrated into a tool I already use, and it works reliably. Sometimes the most important tools are the ones you stop thinking about because they just work.
The Casualties
Not everything survived. Some tools and companies that looked inevitable 18 months ago are now cautionary tales.
Stability AI
This one hurts, because Stability AI started the entire open-source image generation movement. Stable Diffusion changed everything. It democratized image generation in a way that no single tool before or since has matched.
But the company behind it couldn't sustain itself. Leadership issues, funding problems, talent departures, and a business model that never quite figured out how to monetize giving away your core technology. By mid-2025, Stability AI was a shadow of its former self.
The silver lining is real, though. The models live on. The community lives on. And the researchers who left founded Black Forest Labs and built Flux. The technology survived the company. That might be the most important lesson in this entire article.
Jasper AI
Jasper AI was once valued at $1.5 billion. Let that number sit for a moment.
Jasper positioned itself as the AI replacement for copywriters. It raised enormous funding. It marketed aggressively to enterprises. And then ChatGPT launched, followed by Claude, and suddenly the core value proposition (AI-generated text with a nice interface) was available from models that were better, cheaper, and integrated into everything.
Jasper's decline isn't really about Jasper. It's about what happens when your entire product is a wrapper around a capability that becomes commoditized. If the only thing standing between your users and a free alternative is your UI, you're not a product. You're a feature.
Sora
This one is complicated.
OpenAI's Sora launched as Sora Turbo in December 2024, and Sora 2 arrived in September 2025 with dialogue sync, sound effects, and improved visual quality. Technically, it's impressive. Some of the output is stunning.
But impressive output isn't the same as a sustainable product. The compute costs are enormous. User growth has been underwhelming relative to the hype. And the practical utility for production work remains limited compared to tools like Runway or Kling that have been iterating on usability and workflow integration while Sora focused on raw capability.
Sora isn't dead. But it's not winning either. And the gap between "technically impressive demo" and "tool that producers actually reach for" is wider than most people realize. I wrote about this exact pattern when Sora was first announced in early 2024, and the underlying dynamic hasn't changed. Capability without usability is a science project, not a product.
The Unnamed Casualties
There's a graveyard of startups I won't name individually because most of them don't deserve the attention. "AI-powered creative suites" that promised to replace entire creative departments. Tools that raised seed rounds on the strength of a demo reel and never shipped anything production-ready. Companies that described themselves as "the Canva of AI" or "the Figma of AI generation" and turned out to be neither.
The common thread: they all tried to sell a future that didn't exist yet as though it were the present. Their marketing was better than their product. That works for a fundraise. It doesn't work for retention.
The Pattern
Step back from the individual tools and a clear pattern emerges.
Specificity wins. The tools that survived solved specific, well-defined problems. ComfyUI solved reproducible image generation workflows. ElevenLabs solved multilingual voice. Runway solved AI video for production. The tools that promised to "revolutionize creativity" or "replace entire teams" didn't solve anything specific, and they didn't survive.
Honesty outlasts hype. Runway never claimed its video generation was indistinguishable from real footage. Adobe never claimed Generative Fill would replace photographers. These tools marketed what they actually did, and users trusted them for it. The companies that led with impossible promises created expectations they couldn't meet, and the backlash was predictable.
Control beats automation. This is the big one. The tools that put control in the hands of skilled operators (ComfyUI, Midjourney's parameter system, Runway's motion controls) outperformed the tools that tried to remove operators from the equation entirely. The fantasy of "just type what you want and get perfect output" remains a fantasy. The reality is that skilled people with powerful tools produce better work than unskilled people with "easy" tools. Every time.
Open-source proved more durable than any single company. Stability AI collapsed. The ecosystem it spawned is thriving. Flux emerged from those ashes and is now the foundation model for an entire community of developers, artists, and producers. No single company's failure can kill an open ecosystem. That resilience is structural, and it's the strongest argument for open-weight models.
Integration matters more than innovation. Adobe's Generative Fill isn't the most technically impressive AI tool. But it's inside Photoshop, where I already work, and it fits into my existing process. Tools that required me to completely change my workflow to accommodate them mostly didn't survive. Tools that fit into existing workflows did.
What This Means for 2026
I'm not going to make predictions. Predictions in AI are a fool's game, and there are enough fools playing it already.
What I will say is that the pattern is clear enough to be useful. If you're evaluating AI tools for production work, ask these questions:
Does it solve a specific problem you actually have? Or does it promise to solve all your problems at once?
Does it give you more control, or less?
Is the company behind it being honest about limitations, or is every demo reel suspiciously perfect?
Can you build reproducible processes around it, or is every output a surprise?
Does it fit into your existing workflow, or does it demand you rebuild everything around it?
The tools that score well on these questions are the ones that will still be here next November. The ones that don't, regardless of how impressive their demos look today, probably won't be.
The hype cycle isn't over. But the tools that matter have separated themselves from the tools that don't. Pay attention to which column yours are in.
Omar Kamel is AI Creative & Production Lead at Optix (Publicis Groupe), Dubai.