AI-generated Everything
I got my Sora 2 invite last week, and I was excited to see how it worked. If you are not aware, Sora 2 is OpenAI's new video generation model and social media platform wrapped up into one. You don't have to guess whether the content you are viewing in Sora 2 is AI-generated because it is all 100% AI-generated.
The first hour was interesting. I watched beautifully crafted videos of people doing wild things like flying through clouds, interacting with historical figures, and taming dangerous animals in worlds that never existed, brought to life in stunning detail. But then something weird happened.
As I explored further, I realized you could not only create your own content from prompts; scrolling horizontally also revealed an entire world where people were remixing each other's videos, adding increasingly ridiculous details with prompts of only a few words (e.g., "now make it a bunny"). The result was pure confusion. A simple beach scene would evolve into a beach with purple elephants, then purple elephants with laser eyes, then laser-eyed elephants riding motorcycles through a donut. And as the prompt quality deteriorated, the video quality seemed to deteriorate with it. It was like watching creativity turn into chaos, imagination becoming brainrot.
The experience left me with an uncomfortable question: Are we moving toward a world where we outsource our imagination to LLMs?
I predict OpenAI's next move is even more ambitious. Soon, you'll be able to lease IP from existing studios and recreate scenes from Star Wars with you and your friends as the cast. Want to be Luke Skywalker in the cantina scene? Done. Want to recast The Office with your family? Why not?
This feels like a natural progression, but it's still just remixing existing creativity. We're not generating new stories. We're inserting ourselves into old ones.
The Future: Full-length content generated by prompts
The real vision extends much further. Imagine a world where platforms like Netflix can generate full movies or series on demand based on your prompt. You ask for a sci-fi movie about Mars with your family as the cast, and it creates it on the spot. No licensing deals, no production schedules, no waiting for renewal decisions. Just pure, on-the-fly generation tailored exactly to what you want to watch tonight.
It sounds incredible, and in many ways it is. But this shift from licensed content to prompted content raises a fundamental question about creativity itself. Can an LLM be creative enough to generate content we will enjoy?
Here's where things get uncomfortable: LLMs are terrible at actual creativity. If you've ever tried a brainstorming session with an AI, you've probably experienced this firsthand. Ask it for business ideas, and you'll get the same obvious suggestions you already thought of, with nothing that moves beyond what's on the market today. Ask for novel software solutions, and it gravitates toward the known and obvious, rarely venturing into genuinely new territory.
LLMs excel at giving you the most predictable five to ten ideas, but they struggle to put together something truly novel. They're incredible pattern-matching machines, but creativity often requires breaking patterns, not just following them. LLMs are optimized to predict the most likely continuation of human language, and creativity is rarely an optimized endeavor. Creative processes take time, and often many failures, to succeed.
Overcoming the Quality Concerns
This limitation shows up everywhere if you know where to look. Scroll through any social media feed today, and AI-generated content practically announces itself. The cartoon characters have that telltale smoothness, the landscapes share an uncanny similarity, the art styles converge on a kind of algorithmic average. It's not bad, exactly, but it's becoming homogeneous in a way that feels concerning.
What happens when this becomes the dominant form of entertainment? When our movies, our stories, our cultural touchstones all emerge from the same underlying patterns? We risk losing the beautiful messiness of human creativity: the weird angles, the unexpected connections, the genuine surprises that come from a multitude of minds rather than minor variations on the same training data.
If we're not careful, we'll find ourselves in a world where all our stories sound like they came from the same voice, just with different prompts.
I'm not anti-AI. I'm literally using AI as a tool to help craft this very blog post. But the key word there is tool.
I'm guiding it, developing the novel ideas, steering the direction. The AI helps me paint, but I'm still the one deciding what the picture should look like.
This is where I think the future should lead us. AI can be an incredible amplifier of human creativity—helping us realize visions we couldn't execute alone, generating slight variations we wouldn't have considered, building worlds at scales that would take lifetimes to create by hand.
But we can't just ask for "a beautiful picture" and expect something meaningful. We need to creatively craft the stories, bring the imagination, and provide the human perspective that makes art resonate with other humans.
The Balance We Need
I'm optimistic about a future where we can bring wild imaginations to life in ways we never thought possible. The technology is genuinely amazing. But I'm cautious about handing over the entire creative process to systems that fundamentally struggle with what makes us most human.
The magic isn't in the generation—it's in the imagination that guides it. The future of creativity isn't about better prompts; it's about better dreamers who know how to use these powerful tools to bring their unique visions to life.
So let's build that future together. Let's use AI to paint beautiful pictures, but let's make sure we're the ones deciding what beauty looks like.