
May 27, 2024
Stability AI Is Imploding. Open Source Will Survive It.
The company that democratized AI image generation is falling apart. The thing it built is bigger than any company.
Emad Mostaque resigned as CEO of Stability AI on March 23, 2024. He framed it as stepping back to pursue "decentralized AI." The reality is less poetic. The company has been hemorrhaging money, talent, and credibility for months. Layoffs have gutted the team. Fundraising efforts have stalled or failed. The leadership vacuum is real, and nobody credible seems to be rushing to fill it.
If you've been paying attention, none of this is surprising. What might surprise you is how little it matters.
Not to Stability AI, obviously. It matters a great deal to them. But to the ecosystem that Stable Diffusion created? The one that millions of people use every day to generate, edit, remix, and build with AI images? That ecosystem doesn't need Stability AI anymore. It might never have needed them as much as they thought.
What Actually Happened
The short version: Stability AI spent money faster than it could raise it. Reports from Forbes and other outlets paint a picture of a company burning through cash with no clear path to profitability. The compute costs alone for training large-scale diffusion models are staggering. Add salaries, infrastructure, legal bills from the Getty Images lawsuit, and the general overhead of running a company that's trying to be both an open-source champion and a business, and the math stops working fast.
Mostaque's departure was the most visible crack, but not the first one. Key researchers had already left. Internal disagreements about the company's direction had become public. The gap between Stability AI's ambitions and its resources had been widening for over a year.
The company announced SD3 earlier this year, and the previews looked promising. Better text rendering, improved coherence, a new architecture. But promises from a company in freefall carry a different weight than promises from a company on solid ground. Even if SD3 ships and it's good, the question of whether Stability AI will exist in two years to support it is legitimate.
Why Stable Diffusion Mattered (And Still Matters)
Here's the thing people outside the practitioner community don't fully grasp: Stable Diffusion didn't just give people a tool. It gave people infrastructure.
When Stable Diffusion's weights were released in 2022, it detonated a bomb in the AI image generation space. Not because it was the best model (it wasn't, even then). But because it was open. Weights available. Run it locally. Fine-tune it. Break it apart. Rebuild it.
Compare that to Midjourney, which is a closed Discord bot you pay to use. Or DALL-E 3, locked inside ChatGPT and the OpenAI API. Or Adobe Firefly, which lives inside Adobe's walled garden. These are products. Stable Diffusion became a platform.
The ecosystem that grew around it is extraordinary. Automatic1111's Web UI made the model accessible to people who'd never opened a terminal. ComfyUI turned image generation into a node-based workflow engine that rivals professional compositing tools in its flexibility. ControlNet gave users precise spatial control over generation. Thousands of LoRA fine-tunes emerged for every conceivable style, subject, and use case.
I use this stack in production. At Saatchi & Saatchi, working on the e& account, Stable Diffusion and its ecosystem aren't toys or experiments. They're part of how we actually make things. SDXL with custom LoRAs, ComfyUI workflows for batch processing, ControlNet for maintaining brand consistency across generated assets. This is real production work, not a demo.
When I say the ecosystem matters more than the company, I'm not being philosophical. I'm being practical. My workflows don't call Stability AI's API. They run locally or on our own infrastructure. The models are already out there. The weights exist on thousands of hard drives and servers. The tools are open-source projects maintained by independent developers.
A Company Is Not an Ecosystem
This is the distinction that matters most, and the one that tech journalism keeps missing.
Stability AI, the company, is a legal entity with a bank account, employees, and investors who want returns. It can go bankrupt. It can shut down. It can be acquired by someone who guts it.
The Stable Diffusion ecosystem is a distributed network of models, tools, fine-tunes, research papers, community knowledge, and production workflows that exists across millions of machines worldwide. It can't go bankrupt because it's not a company. It can't shut down because no one entity controls it.
This isn't theoretical. We've seen this before.
Linux survived SCO's lawsuit campaign in the early 2000s. SCO tried to claim ownership of Linux code and sued IBM, Novell, and anyone else they could find. The company eventually went bankrupt. Linux is now the backbone of the internet, runs most of the world's servers, and powers every Android phone on the planet.
Firefox survived the death of Netscape. Netscape, the company, was crushed by Microsoft in the browser wars. But the code was open-sourced as Mozilla, and Firefox emerged from the wreckage. The company died. The project lived.
Blender survived NaN Technologies going bankrupt in 2002. The community literally bought the source code through a crowdfunding campaign, and Blender is now a world-class 3D tool used in major film and game productions.
The pattern repeats because the underlying logic is the same: once software is truly open and has a community of users and contributors, the original company becomes optional. Important, useful, sometimes critical for continued development. But not existential.
The Real Risk
I don't want to be glib about this. Stability AI's implosion does create real risks for the open-source AI image generation space.
The biggest one is research momentum. Training frontier diffusion models costs millions of dollars in compute. Community developers can fine-tune, optimize, and build tooling, but they can't train a new base model from scratch in someone's garage. That requires institutional resources: money, talent, and compute at scale.
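To make "millions of dollars" concrete, here's a back-of-envelope sketch. Every number in it is an illustrative assumption, not a figure from Stability AI or any published training report:

```python
# Rough compute-cost estimate for training a frontier-scale diffusion model.
# All values are assumed for illustration; real figures vary widely.
gpu_hours_per_run = 500_000   # assumed accelerator-hours for one large training run
cost_per_gpu_hour = 2.00      # assumed cloud rate in USD per GPU-hour
runs = 3                      # assumed full runs, counting ablations and restarts

total_cost = gpu_hours_per_run * cost_per_gpu_hour * runs
print(f"${total_cost:,.0f}")  # prints $3,000,000
```

Tweak any of the three inputs and the total swings by millions either way, which is exactly the point: this is institutional-scale spending, not something a community Discord can crowdfund.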
If Stability AI goes under completely and nobody picks up the baton, the open-source image generation community could find itself stuck on SDXL while closed competitors like Midjourney and DALL-E continue to advance. That's a real scenario, and it's worth taking seriously.
There are encouraging signs, though. Several key researchers who left Stability AI have formed Black Forest Labs, and while they haven't released anything yet, the fact that the talent is staying in the open-source space rather than disappearing into Google or OpenAI is significant. The research knowledge didn't evaporate when those people left the building.
Hugging Face continues to build infrastructure for open-source AI models broadly. The Civitai community keeps producing fine-tunes and tools. ComfyUI's development is accelerating, not slowing. The ecosystem has its own momentum now.
The Getty Problem Isn't Going Away
One piece of this that deserves separate attention: the Getty Images v. Stability AI lawsuit is still active, and it's not about one company. It's about the legal foundation of training AI models on copyrighted data.
If Stability AI loses or settles in a way that establishes bad precedent, every open-source AI model trained on internet data is potentially affected. If Stability AI goes bankrupt before the case resolves, the legal questions don't vanish. They just become someone else's problem. Or everyone's problem.
This is the kind of thing that makes me genuinely nervous. Not because I think AI training on publicly available data should be illegal (I don't), but because the legal system is slow, the stakes are enormous, and the people making the decisions often don't understand the technology they're ruling on.
The outcome of Getty v. Stability AI could shape the open-source AI landscape more than Stability AI's financial health ever did.
The Sustainability Question
Stability AI's collapse forces an uncomfortable question that the open-source AI community needs to confront honestly: how do you fund the development of frontier AI models without a sustainable business model?
Meta can release LLaMA because Meta's business is advertising, not language models. LLaMA is a strategic tool, not a product. Google can open-source things because Google's revenue comes from search and cloud. These companies can afford to give away AI models because their businesses don't depend on charging for them.
Stability AI tried to be an open-source AI company where the AI was the product. That's the hard version of this problem. You're giving away the thing people would pay for, then trying to find other ways to monetize. API access, enterprise features, consulting. It's the same challenge every open-source company faces, but with compute costs that make the economics even more brutal.
Red Hat made it work for Linux (until IBM bought them and started messing with it, but that's another story). Canonical makes it work for Ubuntu, mostly. But these companies sell support and enterprise services around software that's cheap to distribute. AI models require massive ongoing investment in training, and each new generation costs more than the last.
I don't have a clean answer for this. Nobody does. But pretending the problem doesn't exist, or that pure community goodwill can fund the training of billion-parameter models, is not honest.
What Happens Next
Here's what I think actually happens.
Stability AI continues to stumble. Maybe they ship SD3, maybe they don't. Maybe someone acquires the company, or what's left of it. Maybe the brand survives as a shell around a different business. The company's story is uncertain.
The ecosystem continues to grow. SDXL is already good enough for production work. ComfyUI is getting better every week. The LoRA ecosystem is mature and self-sustaining. People are building real businesses on this stack, including advertising agencies like ours.
New players emerge to fund frontier model development. Black Forest Labs is the most obvious candidate, but they won't be the only one. The demand for open-source image generation models is too large, and the talent pool is too deep, for nobody to step into the gap.
The legal landscape remains the wild card. Getty, regulation, potential new lawsuits. These could change everything, or they could resolve in ways that protect the open-source ecosystem. We don't know yet.
And the closed competitors keep advancing. Midjourney V6 is genuinely impressive. OpenAI has Sora coming (eventually) for video. The closed-source side of AI image generation isn't standing still.
But here's the thing about open source that people keep forgetting: it doesn't have to be the best to matter. It has to be good enough, available, and controllable. Stable Diffusion is all three. I can run it on my hardware. I can fine-tune it for my clients. I can build workflows that do exactly what I need. I can't do any of that with Midjourney.
The Code Is Out There
In 1999, when Napster was taking off, the music industry thought that if they killed Napster, they'd kill file sharing. They killed Napster. File sharing didn't even notice.
Stability AI is not Napster (they released their models legally and intentionally). But the principle applies: you can't un-release open-source software. The weights for Stable Diffusion 1.5, 2.1, and SDXL are on Hugging Face, on Civitai, on personal hard drives, on company servers, on cloud instances around the world. They're embedded in thousands of workflows. They're the foundation of tools that millions of people use.
A company can die. Infrastructure persists.
I'm worried about Stability AI. Not sentimentally, but practically. They funded important research. Their collapse leaves a real gap in who pays for the next generation of open models. That problem needs solving.
But I'm not worried about Stable Diffusion. The cat is not going back in the bag. The community is too large, the tools are too mature, and the use cases are too real. Open source has survived the death of its creators before. It will survive this too.
The work continues. It always does.
Omar Kamel is AI Team Leader for e& at Saatchi & Saatchi, Dubai.