Lightricks CTO: "We can generate HD video in 13 seconds"

Mind the Tech London 2025


"It’s not just about speed or cost," added Yaron Inger at the Mind the Tech conference in London. "Our technology allows broader applications, like enhancing old films to 4K resolution. It also helps 3D artists create scenes efficiently, saving labor and time."

Roee Bergman | 13:48, 16.09.25


Yaron Inger (Photo: Shalev Shalom; video filming: LONDON FILMED)

“Our company was founded 12 years ago in Jerusalem with the goal of bridging the gap between imagination and creation, and that’s what I came here to present today. Over the past decade, we have developed tools designed for creators and launched various applications to transform creators’ ideas into digital content,” said Yaron Inger, co-founder and CTO of Lightricks, speaking at the Mind the Tech London 2025 conference organized by Calcalist and Bank Leumi.

Inger continued: “We have had hundreds of millions of downloads and millions of users, which has allowed us to achieve an annual recurring revenue (ARR) of $250 million and employ over 500 people worldwide. Today, I’m going to talk about the LTXV model we developed, our video AI model, and why we decided to build it. It’s quite remarkable because we are currently the only Israeli company developing a video AI model. Apart from competitors from the U.S. and China, we are walking alone in this field.”

“When we approached this challenge, we asked ourselves: what makes a creative tool good? The first requirement is control. To create something effectively, users need degrees of freedom. If you’re a painter, for example, you need quality brushes and a versatile color palette.

“The second requirement is rapid iteration. If creating a video takes too long, it slows down creativity and productivity. We studied other generative AI video models and found that none offered both control and speed together.

“In AI video creation, control usually comes from the prompt you write, as in ChatGPT, and the first frame of the clip. But this approach is limited. Models are also slow, taking tens of seconds to generate a video, and often expensive, discouraging repeated attempts.

“That’s why we built LTXV. The first version, released six months ago, overcame both these obstacles: it gives creators precise control over content while remaining fast. We decided to release it as open source, allowing our community and academic researchers to use it freely and advance their own projects.


“Our results have been remarkable. The model translates everything written in a prompt directly into video. Unlike other systems, it also allows additional levels of control, such as defining opening and ending frames. This is particularly valuable for animation studios, enabling the placement of key frames throughout the video. The model handles them intelligently.”

Inger presented several clips during the conference, including one of a woman walking on puddle-covered ground reflecting sunlight. “The most amazing thing is that this video took only 13 seconds to produce. We can generate HD video quickly and at low cost.”

He added: “It’s not just about speed or cost. Our technology allows broader applications, like enhancing old films to 4K resolution. It also helps 3D artists create scenes efficiently, saving labor and time. One prompt can now replace hours of work. We are very excited about the opportunities this opens up, and many more innovations are in our pipeline.”

Watch his full remarks in the video above.

