
AI-generated images created with Runway
As with any investment, your capital is at risk.
We’re only starting to understand how generative AI can empower creativity. We can now produce images from simple prompts in ways we hardly thought possible just a few years ago. Most attention has focused on ‘do-it-all’ chatbots, such as ChatGPT, but specialised tools targeting those with deep knowledge of their craft present an attractive investment opportunity.
Among these, video-generating specialist Runway AI stands out. The New York-based startup develops both image-related AI models and the platform to use them. After getting to know the people running it, their trailblazing product and how they forge relationships in the creative industries, we believe the firm is ideally placed to ride this wave of technological change and achieve lasting growth.
Runway lets subscribers make video clips based on either an AI-generated still they create within its software or an imported reference picture or footage. Users describe what they would like to see in plain language and can adjust ‘camera’ settings, controlling zoom, tilt and other movements. This can involve:
- turning real-life performances into expressive, cartoon-like animations
- creating cinematic, photorealistic sequences featuring actual people or imagined protagonists
- generating AI voices that synchronise to on-screen lip movements
- adding special effects, such as flames or flooding, to a pre-filmed scene
- turning photos of a product taken from various angles into a single rotating 3D model
WATCH: How Runway put Kirsty Gibson in the movies. Read about the creative process behind the film at the end of this article
As Cristóbal Valenzuela, the firm’s co-founder and chief executive, has observed, generative AI flips the paradigm of computer graphics development. Historically, artists had total control over their tools but were limited by the technological building blocks at their disposal. These gradually improved over time – eg the maximum number of tiny triangles that could be rendered to build objects and the quality of lighting effects.
By contrast, today’s AI models can already create arrestingly lifelike images. It’s getting the models to do exactly what you want that’s the challenge.
“We can’t easily adjust geometry, fine-tune materials or manipulate lighting with the granularity that artists expect,” Valenzuela blogged last November. “Models can create stunning imagery but lack… the ability to make precise, intentional changes at any level of detail.”
Since he wrote that, Runway has made progress. For example, it now offers ‘coherence’ – the ability to produce consistent-looking characters, locations and objects across multiple clips, with elements that stay in place when viewed from different perspectives. This lets users build lengthy, flowing sequences without jarring discontinuities. And with $308m in new funding from Baillie Gifford US Growth Trust and others, the company is now training more advanced ‘general world models’ to further enhance its capabilities.
The goal is for its models to better understand the visual world, including why and how objects interact with each other and their environment. This should make the output more predictable, realistic and controllable. Our investment hypothesis is that by 2030, creators will be able to use Runway to make amazing sequences for movies, TV shows and adverts for a fraction of the cost and time required today.
Early applications
That’s not to say Runway doesn’t already have commercial uses. It is becoming popular in pre-production to create moving storyboards, and in post-production for visual effects that require quick turnarounds. Podcasters employ it to make animations to enhance video editions of their shows. And at least one major retailer has used it to visualise 3D models of furniture listed on its website, reporting an uptick in sales.
Looking to the future, Lionsgate – producer of the John Wick action movies – has signed a deal for Runway to create a customised ‘closed off-the-shelf’ AI model for the studio’s exclusive use. One executive told The Wall Street Journal he expected it to deliver “millions and millions of dollars” in savings. Meanwhile, a leading video streamer told us it aimed to use the technology to stretch the $250m budgets of its most expensive productions even further.

One advantage Runway’s technology has over other AI applications, such as self-driving cars, is that it can be far from perfect yet still functional. Lives aren’t at stake.
So when, as we experienced, it creates a dancer who spins around to reveal a third arm, there’s nothing to stop you from simply regenerating the clip to fix the glitch.
Runway turns its current limitations to its advantage by integrating with other software to fit into existing workflows for video effects, colour grading and audio. The platform makes it easy to import highly detailed 3D models that it can’t accurately create from scratch. And in cases when its own output needs further work, it’s easy to transfer the content into other apps for refinement.
Fusing art with technology
Valenzuela’s history influenced this approach. The entrepreneur studied economics and business before earning a master’s in design in his home country of Chile. Around 2016, in his own words, he “randomly fell into a rabbit hole of machine learning-generated art”, spurring him to secure a scholarship to study the topic at New York University. There, he met his two co-founders, and Runway grew out of a thesis project to let apps such as Photoshop make and manipulate AI images.
That multidisciplinary background shapes Runway’s focus on augmenting artists rather than trying to make them obsolete. For example, it offers ‘turbo’ versions of its AI models – faster but less powerful – to encourage users to iterate on ideas. Valenzuela acknowledges that at some point, video-makers will need less specialised knowledge for some sophisticated tasks but adds that imagination will remain the key component for success.
That insight could be critical. Our research indicates junior workers are anxious about artificial intelligence threatening their jobs. Fear of AI was a factor in 2023’s US screenwriters’ strike. Runway’s biggest challenge is probably not any competitor but suspicion of the technology leading to inertia.
A class-action lawsuit poses another risk. Artists allege Runway, among other companies, illegally stored their work on its systems. At some point, politicians must update existing copyright laws or create new ones to resolve issues surrounding AI and intellectual property. However, the fact that Runway pays to train its models on stills and footage from Getty Images, among other content owners, reassures us it takes the topic seriously.
Runway sits in our ‘earlier, less proven’ group of portfolio companies. To qualify, these businesses need both a competitive edge and the ability to capitalise on fundamental, enduring economic shifts.
The creative industries generate more than $2tn in annual revenue, and generative AI clearly represents a disruptive force. Moreover, Runway’s video-focused goals seem more attainable in the near term than the ‘superhuman intelligence’ that some other AI companies are pursuing.
Ultimately, though, this is a holding that requires patience and tolerance of ups and downs. We accept those caveats because the long-term potential is massive, and Runway has the qualities to seize it.
Creative intelligence: how Runway AI helped us turn Kirsty into a virtual star
Picture editor Aleksandra Kocela and I brainstormed ideas – a gladiatorial arena, a world made of kids’ coloured clay, a Regency ball – and set about creating them with the firm’s AI.

It was a multi-step process. First, we used Runway’s Frames tool to create a static image of the desired scene from scratch. Then, using its Reference capabilities on photos of Kirsty, we asked the software to place her in that setting. Next, we switched into Gen-4 video mode and gave animation instructions, for example, for a scene with her as a rock star: “Woman sings, guitarist strums and drummer drums as the camera pulls back to reveal huge stadium.”
Finally, we commanded the software to increase clip quality to 4K resolution and exported it for editing.
Runway’s output can be a bit random – the imagery didn’t always perfectly match our descriptions, and satisfactory results could require up to a dozen versions. But the degree of control should improve with more advanced models. And the ability to create all this at speed, and at a fraction of the traditional cost, was both liberating and fun.
Leo Kelion
Runway AI is also held by The Schiehallion Fund
Important information
Unlisted investments such as private companies, in which the US Growth Trust has a significant investment, can increase risk. These assets may be more difficult to sell, and changes in their price may be greater.
This article does not constitute, and is not subject to the protections afforded to, independent research. Baillie Gifford and its staff may have dealt in the investments concerned. The views expressed are not statements of fact and should not be considered as advice or a recommendation to buy, sell or hold a particular investment.
Some of the views expressed are not necessarily those of Baillie Gifford. Investment markets and conditions can change rapidly, therefore the views expressed should not be taken as statements of fact nor should reliance be placed on them when making investment decisions.
Baillie Gifford & Co and Baillie Gifford & Co Limited are authorised and regulated by the Financial Conduct Authority (FCA). The investment trusts managed by Baillie Gifford & Co Limited are listed on the London Stock Exchange and are not authorised or regulated by the FCA.
A Key Information Document is available by visiting bailliegifford.com
171700 10057508