Over the past year, I’ve been experimenting with how AI changes filmmaking and the nature of film itself. Initially I focused on the generative AI tools that let you create images and clips from prompts, but recently I’ve been experimenting with vibe coding (turning prompts into applications) too.
I think the combination of these two emerging technologies, AI video and vibe coding, will transform media in dramatic and unexpected ways.
In February 2024, I released my first AI-native short: Dr. Zhao and the Problem of Maximum Perception — a lo-fi speculative fictional piece that explored the ideas of Dr. Zhao, a scientist building a machine to determine what’s ‘true’. That was made using the early, impressive (but limited) versions of the tools that now dominate the AI creative landscape.

The Film Becomes the Product
Just over a year later, I’ve released an equally experimental follow-up project on the same subject but this time it’s not just a film. It’s also an app. I call it DECIDER.LAND.

DECIDER.LAND is a real functional product that uses AI to do what the fictional machine in the original video claimed it could do: to determine the ‘truth’.
It’s meant to be a parody, one that questions the nature of truth and technology’s ability to ascertain it. AI pioneers like Elon Musk believe that finding the truth is an engineering problem. I think there’s more to it than that, and any naive belief in technology’s ability to deliver epistemological certainty is very dangerous.
DECIDER.LAND can be helpful though. It listens to multiple sides of an argument and tells you who’s right – at least in theory.

DECIDER.LAND extracts the stated factual claims from each argument, assesses whether each one is true, and provides a confidence score for that assessment. It also looks at how well the argument is made. Is it logical, objective and coherent?
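DECIDER.LAND's internals aren't public, so the data model, names, and weights below are illustrative assumptions, but the flow described above might be sketched roughly like this: each extracted claim carries a true/false assessment and a confidence score, and those get blended with a rhetoric score into an overall verdict.

```python
from dataclasses import dataclass

# Hypothetical model of a single extracted claim -- the field names
# and the scoring formula are my assumptions, not the app's code.

@dataclass
class Claim:
    text: str          # the factual claim extracted from an argument
    is_true: bool      # the model's true/false assessment
    confidence: float  # 0.0-1.0 confidence in that assessment

def argument_score(claims: list[Claim], rhetoric_score: float) -> float:
    """Blend factual accuracy with argument quality (logic, coherence).

    A claim judged true contributes its confidence; a claim judged
    false contributes the inverse. The 70/30 weighting is illustrative.
    """
    if not claims:
        return rhetoric_score
    factual = sum(c.confidence if c.is_true else 1 - c.confidence
                  for c in claims) / len(claims)
    return 0.7 * factual + 0.3 * rhetoric_score

claims = [
    Claim("The Eiffel Tower is in Paris", True, 0.99),
    Claim("It was built in 1850", False, 0.95),
]
score = argument_score(claims, rhetoric_score=0.8)
```

A strong rhetorical performance can't fully rescue an argument built on false claims under this weighting, which is roughly the behavior the app aims for.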

Building the DECIDER.LAND App
As I said at the start, this project began as an extension of my work with AI filmmaking.
Over the last year I’ve made a few things in the AI film space. Some better than others. Some hold up. Some are just weird (in a good way, I think). In addition to the Dr. Zhao film project, I worked with Reid Hoffman, the co-founder of LinkedIn, and his team to create a video about his newest book on AI, called SuperAgency. I also did a series called Mr. Canada.
For these video projects I used tools like Midjourney, RunwayML, Krea, Final Cut Pro, Photoshop, and VEO3 to create cinematic sequences, characters, and performances. I wrote scripts with the help of ChatGPT, created voices with ElevenLabs and used Suno and Epidemic Sound to do the sound design. I even created a Mr. Canada Spotify channel. I also experimented with new model-based tools like Imagen 4, Wan, Flux, and Sora.
And then it was time to use AI to code, or more precisely, to vibe code. I’ve experimented with a number of the emerging vibe-coding platforms, like Replit, Cursor, and Bolt, but had the most success with Lovable.

Best practices in vibe coding are still evolving, but given my non-technical background in product development and building MVPs, I approached the build one step at a time, creating small components one by one.
I would ask Lovable and ChatGPT to help strategize which parts should be built first to minimize the likelihood of something breaking. I connected the app to APIs from Google for authentication and OpenAI for analysis, then began construction of the debating mechanism, admin screens and other features.
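For the OpenAI analysis step, the essential wiring is assembling a prompt that asks the model to judge both sides. This is a minimal sketch: the system prompt, model choice, and function name are my assumptions, not the app's actual configuration; only the request shape follows the official openai-python client.

```python
# Hypothetical payload builder for the debate-analysis call.
# Everything here except the request shape is an illustrative assumption.

def build_judge_request(side_a: str, side_b: str) -> dict:
    """Assemble a chat-completion payload asking the model to extract
    each side's factual claims and score the arguments."""
    system = ("You are an impartial judge. Extract each side's factual "
              "claims, rate each as true or false with a confidence "
              "score, and assess logic, objectivity, and coherence.")
    return {
        "model": "gpt-4o-mini",  # illustrative model choice
        "messages": [
            {"role": "system", "content": system},
            {"role": "user",
             "content": f"Side A: {side_a}\n\nSide B: {side_b}"},
        ],
    }

# With the official client, the payload would be sent like this:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   resp = client.chat.completions.create(**build_judge_request(a, b))
req = build_judge_request("Tariffs raise prices.", "Tariffs protect jobs.")
```

Keeping the payload construction separate from the network call is also what makes a vibe-coded app easier to debug: you can inspect exactly what the model is being asked before anything is sent.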
In fairly short order, I was using prompts to connect to APIs, assemble logic and publish micro-services in days instead of weeks or even months. The joke about vibe coding is that it takes days to build and weeks to debug, but I found Lovable was pretty good at identifying bugs and fixing them. That’s not to say the code is bug-free, but it seems bug-free enough.
Product and Story Are Coalescing

This is, as far as I know, one of the first times that AI video tools have been combined with a vibe-coded app (by the same person) to create an entirely new kind of media: both an AI film and an interactive experience powered by vibe coding. It’s early, and it’s definitely imperfect (buggier than I’d like), but I believe it points to something that could change media forever.
Modern media companies already operate more like software platforms than traditional production studios. Think Netflix rather than Universal Studios. YouTube and TikTok have become dominant in the space without producing a single piece of content.
So yes, DECIDER.LAND is a film that’s also a product—and a product that’s also a film. It’s early-stage and evolving, but it already hints at how creators could merge storytelling and software into a single, living medium. If streaming made media behave like software, this takes the next step: making the software itself part of the story.
Whether this becomes the future of media is still up for debate. But one thing is certain—creators now have the tools to make it happen. The only question is: who will pick them up first?