
DreamFlare AI Hires ex-MGM & Netflix Execs and Gets on A16Z's Next-Gen Pixar List







Aug. 30, 2024, Repost from DreamFlare AI, DEADLINE (by Andreas Wiseman) and A16Z (by Jonathan Lai) -- Recently launched DreamFlare AI has hired Cameron Purcell as Vice President of Development and Kelly Kanavas as Creative Executive. Purcell has held roles at Universal Pictures, Annapurna, and MGM, focusing on content strategy and distribution. He was most recently head of film and TV at Korean video game publisher Krafton, and he is also founder of creative label For The Story Entertainment. Kanavas joins from Netflix, where she helped develop animated film content, and she has also worked at Walt Disney Animation Studios, Paramount Animation, and CAA.


DreamFlare specialises in Gen-AI entertainment and interactive AI content across various genres. The company launched this summer with $1.65M in pre-seed funding from investors including FoundersX Ventures. Founders say the company has 20 hours of content so far and is “working with dozens of filmmakers creating in the space”.


According to its launch release, the company, co-founded by former Google employee Josh Liss and documentary filmmaker Rob Bralver, is envisioned as a sort of studio where creators work with storytellers to create video using third-party AI tools like Runway, Midjourney, ElevenLabs, and others. The videos will then be distributed through a subscription-based online service. Creators will earn money from revenue-sharing on subscriptions and advertising.


DreamFlare, the release said, will offer two types of animated content on the platform: Flips, which are comic book-style stories with AI-generated short clips and images that users can scroll through, and Spins, which are interactive choose-your-own-adventure short films where viewers can change certain outcomes of the story.
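To make the Spins format concrete, here is a minimal, hypothetical sketch of how a choose-your-own-adventure short film could be represented as a branching graph of clips. The class names, fields, and file paths below are illustrative assumptions, not DreamFlare's actual data model.

```python
# Hypothetical sketch of a "Spin"-style branching story; names and paths are invented.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class SpinNode:
    clip_url: str                                                  # video clip for this story beat
    choices: dict[str, SpinNode] = field(default_factory=dict)     # choice label -> next beat

def play(node: SpinNode, picks: list[str]) -> list[str]:
    """Walk the story graph following the viewer's picks; return the resulting clip playlist."""
    playlist = [node.clip_url]
    for pick in picks:
        if pick not in node.choices:
            break  # no such branch; the story ends here
        node = node.choices[pick]
        playlist.append(node.clip_url)
    return playlist

# Tiny two-beat example: one opening clip with two possible endings.
ending_a = SpinNode("clips/escape.mp4")
ending_b = SpinNode("clips/stay.mp4")
opening = SpinNode("clips/intro.mp4", {"escape": ending_a, "stay": ending_b})

print(play(opening, ["escape"]))  # ['clips/intro.mp4', 'clips/escape.mp4']
```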


Purcell said: “DreamFlare stands at the intersection of technology and artistry, providing a space where creators can harness AI to push boundaries. The company is deliberately crafted to enrich the entire entertainment ecosystem, expanding the creative possibilities to partner with industry talent and discover all new opportunities for creators and audiences alike. Our mission is to support and amplify the human element of storytelling, making every collaboration a testament to innovation and artistic integrity.”


Meanwhile, Kanavas added: “Many creatives have all but given up on a career in Hollywood. With DreamFlare, these talented individuals can bring their stories to life for an audience in a whole new medium – while developing expertise in an important emerging technology. New tools enable opportunities to tell stories in truly new ways, and we are excited to see what storytellers from around the world dream up.”



Excerpts from A16Z post: The Next Generation Pixar: How AI will Merge Film & Games

By Jonathan Lai


State of Play: the Interactive Video Landscape



Given the rate at which we’ve seen underlying hardware and model improvements, we estimate that we may be ~2 years out from commercially viable, fully generative interactive video.


Today, we’re seeing progress in research with players like Microsoft Research and OpenAI working toward end-to-end foundation models for interactive video. Microsoft’s model aims to generate fully “playable worlds” in 3D. OpenAI showed a Sora demo where the model was able to “zero-shot” simulate Minecraft: “Sora can simultaneously control the player in Minecraft with a basic policy while also rendering the world and its dynamics in high fidelity.”


In February 2024, Google DeepMind released its own foundation model for end-to-end interactive video, named Genie. The novel aspect of Genie is its latent action model, which infers a hidden action between a pair of consecutive video frames. Trained on 300,000 hours of platformer videos, Genie learned to distinguish character actions – e.g. how to jump over obstacles. The latent action model, combined with a video tokenizer, feeds a dynamics model that predicts the next frame, piecing together an interactive video.
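A toy sketch of that three-part pipeline is below: a latent action model that buckets the change between two frames into a discrete action, a video tokenizer that maps a frame to a grid of discrete tokens, and a dynamics model that predicts the next frame's tokens from the current tokens and the action. Every class, shape, and update rule here is a stand-in assumption for illustration only; Genie itself uses large spatiotemporal transformers trained end to end.

```python
# Minimal, illustrative sketch of a Genie-style pipeline; all components are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

class VideoTokenizer:
    """Maps a raw frame to a grid of discrete token ids (stand-in for a VQ-style encoder)."""
    def __init__(self, codebook_size=1024, grid=(16, 16)):
        self.codebook_size = codebook_size
        self.grid = grid

    def encode(self, frame: np.ndarray) -> np.ndarray:
        # Toy quantization: average pixel patches and map the means to codebook indices.
        h, w = self.grid
        patches = frame.reshape(h, frame.shape[0] // h, w, frame.shape[1] // w, -1)
        means = patches.mean(axis=(1, 3, 4))
        return (means * self.codebook_size).astype(int) % self.codebook_size

class LatentActionModel:
    """Infers a discrete latent action from a pair of consecutive frames."""
    def __init__(self, num_actions=8):
        self.num_actions = num_actions

    def infer(self, frame_t: np.ndarray, frame_t1: np.ndarray) -> int:
        # Toy rule: bucket the magnitude of frame-to-frame change into an action id.
        delta = float(np.abs(frame_t1 - frame_t).mean())
        return int(delta * 10) % self.num_actions

class DynamicsModel:
    """Predicts the next frame's tokens given the current tokens and a latent action."""
    def __init__(self, codebook_size=1024):
        self.codebook_size = codebook_size

    def predict(self, tokens: np.ndarray, action: int) -> np.ndarray:
        # Toy update: shift token ids by the action (a real model would be a transformer).
        return (tokens + action) % self.codebook_size

# Wiring the three pieces together, as described in the paragraph above:
tokenizer, lam, dynamics = VideoTokenizer(), LatentActionModel(), DynamicsModel()
frame_t = rng.random((64, 64, 3))
frame_t1 = rng.random((64, 64, 3))

action = lam.infer(frame_t, frame_t1)            # latent action inferred between frame pairs
tokens = tokenizer.encode(frame_t)               # video tokenizer output for the current frame
next_tokens = dynamics.predict(tokens, action)   # dynamics model predicts the next frame's tokens
print(action, tokens.shape, next_tokens.shape)
```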


On the application layer, we’re already seeing teams explore novel forms of interactive video experiences. Many companies are working on producing generative film or television, designing around the limitations of current models. We’ve also seen teams incorporate video elements inside AI-native game engines.


Who will Build the Interactive Pixar?


Pixar was able to take advantage of a foundational shift in computing and 3D graphics to build an iconic company. There is a similar wave happening today in generative AI. However, it’s also important to remember that Pixar owes much of its success to Toy Story and the original animated films created by a world-class team of storytellers led by John Lasseter. Human creativity, leveraging new technology, produced the best stories.


Similarly, we believe the next Pixar will need to be both a world-class interactive storytelling studio as well as a top technology company. Given how quickly AI research is progressing, the creative team will need to be able to work hand-in-hand with the AI team to blend narrative and game design with technical innovations. Pixar had a unique team that merged art and technology, and also partnered with Disney. The opportunity today is for a new team to bridge the disciplines of games, film, and AI together.


To be clear, this will be challenging, and not only for technical reasons: this team will need to find new ways for human storytellers to work alongside AI tools in a way that empowers rather than detracts from their imaginations. There are also many legal and ethical hurdles to solve – legal ownership and copyright protection of AI-generated creative works is unclear today unless a creator can prove ownership of all the data used to train a model. Compensation for the original writers, artists, and producers behind training data also still needs to be resolved.


Yet what’s also clear today is that there is immense demand for new interactive experiences. And long term, the next Pixar could create not just interactive stories but entire virtual worlds. We previously wrote about the potential of never-ending games – dynamic worlds that combine real-time level generation with personalized narratives and intelligent agents – similar to HBO’s Westworld vision. Interactive video addresses one of the greatest challenges of bringing Westworld to life: creating large amounts of personalized, high-quality, interactive content on the fly.


One day, with the help of AI, we might start the creative process by crafting a storyworld – an IP universe we envision fully formed, with characters, narrative arcs, visuals, and so on – and then generate the individual media products we want for a given audience or situation. This will be the final evolution of transmedia storytelling, fully blurring the lines between traditional forms of media.


Pixar, Disney, and Marvel were all able to create memorable worlds that became part of their fans’ core identity. The opportunity for the next Interactive Pixar is to leverage generative AI to do the same – to create new storyworlds that blur the lines between traditional storytelling formats, and in doing so, create universes unlike any we’ve seen before.




