Lightricks CEO: "It's hard for me to envision a barrier that AI can't breach"

Zeev Farbman shared insights on the balance between traditional filmmaking and AI innovation, the implications for creative professions, and the future of visual storytelling in an evolving digital world. 

Omer Kabir | 14:18, 29.09.24

As generative artificial intelligence (GenAI) transforms the landscape of content creation, Zeev Farbman, CEO of Lightricks, stands at the forefront of this revolution. With the launch of LTX Studio, a system that enables the production of commercials from simple text prompts, Farbman is redefining how advertisements are made. He shared insights on the balance between traditional filmmaking and AI innovation, the implications for creative professions, and the future of visual storytelling in an evolving digital world.

"Since 2022, when GenAI models reached a certain level of maturity, we are experiencing a new paradigm that increasingly encroaches on the old one," Farbman explained. "Those who shoot commercials can always use a camera, but they can also ask 3D artists to handle more challenging shots or turn to GenAI-based systems. In practice, we haven't reached the point where you can do everything that can be done with a camera. However, there are significant advantages to the new model. The old paradigm still holds value, especially if you want perfect control over how things move; in that case, it’s better to use classic graphics tools."

Zeev Farbman.

One concern about text-to-image and text-to-video models is their potential use in creating fake news and deepfakes, as seen in the U.S. election campaign. Does your system heighten this fear?

"This is a very justified concern. We are currently at a low point where everyone believes what they want. The challenge lies not in a person's ability to create content from imagination, but in understanding that this content is fictional. I don't know how to resolve this issue without education and without people recognizing the power of these systems and what they can do. The democratization of these systems is essential; they must be accessible to everyone. Only then will people understand the need to critically assess what they see. As for technological solutions to this problem, most I've encountered seem more like lip service.

"Right now, theoretically, OpenAI has a role to play: a few weeks ago, they blocked Iranian hackers who were using their platform to generate fake news. But Iranian hackers—what's next? They will simply take a GenAI model that can run on a personal computer and use it. There are models that can be operated on GPUs costing just a few thousand dollars. At that point, OpenAI won't be involved at all. I don't see how companies can stop these developments, which brings me back to my main point: I don't see a viable technological solution."

What about AI-generated content recognition systems?

"In my opinion, these won't work very effectively. Training AI often involves using a system that recognizes AI-generated content to train the next generation of models. Let's say you develop a model that can discern whether an image is from the real world or generated by AI. Ultimately, you'll end up in a constant, never-ending battle between the detection system and the generative system. It's hard for me to envision how to create a barrier that AI can't breach."

Since the advent of the first generative models, there's been talk that they will replace graphic designers, copywriters, actors, photographers, and more. How do these predictions align with reality?

"When DALL-E 2 was released in the summer of 2022, it was evident that a hype cycle would emerge, characterized by a massive spike in interest followed by a steep decline. Now, from the peak of the hype in 2022, we find ourselves almost at the opposite extreme. I read an article on The Information that suggested OpenAI's creation was essentially a system that enables children to cheat on their homework. This perspective starkly contrasts with the situation a year ago, when Sam Altman, the company's founder, predicted explosive growth. In terms of reality, not much has changed; it will take time for these technologies to be integrated into the real world, likely between five and ten years. It’s like being given a nuclear reactor without the surrounding electrical grid."

So, will there be professions that these systems will replace?

"At the very least, this will change the nature of these professions. We already see signs that AI adds great value in fields like medicine and finance. There are AI systems with diagnostic capabilities that surpass those of family doctors, as well as systems that can manage finances more effectively than human finance managers."

Will GenAI systems be able to replace graphic designers, advertisers, and others?

"Humanity has an infinite appetite for visual content. Consider virtual worlds; the biggest games today, like GTA, have seen a dramatic increase in the number of people working on them since they transitioned to 3D. Producing the next GTA now takes a decade, and everyone wants an infinite supply of games like it. GenAI systems will assist people in creating these virtual worlds, allowing even small gaming companies to develop games comparable to GTA. The potential for what larger companies can achieve with their budgets is hard to fathom. I'm not sure how this dynamic will play out—whether there will be fewer people creating the same amount of content, the same number of people producing even more, or something entirely different. So far, we've built increasingly powerful tools, yet the number of designers has been growing. Some jobs will also become more interesting; for example, I heard about a graphic designer at a gaming company whose job for two or three years was to draw grass. Such roles will likely disappear, and perhaps that’s a good thing because I doubt any designer dreams of painting grass for a living.

"Part of the problem in the content world right now is that enormous budgets stifle innovation. Take Hollywood, for instance. How many more Marvel movies can they make? I watched those films initially, but now they feel repetitive. I understand their position; they can't take risks when a half-billion-dollar budget hinges on a film’s success. The moment it becomes possible to create high-quality content on smaller budgets, we’ll see more innovation. The next GTA will likely be a version of 'Redemption 2' on steroids because that formula works. Most games that truly impress me come from small indie companies willing to take risks and innovate. My hope is that they will embrace more adventurous ideas."

So, by creating things cheaply, you can experiment without fear?

"Exactly. For example, while working on LTX Studio, we collaborated with Ari Folman, the director of 'Waltz with Bashir.' With his involvement, we only needed 10% of the budget. The rest of the production was handled by the studio. Over time, we will reach a point where we can achieve 100% of the budget for what we are doing. At that point, people will have a much greater appetite for such projects."

Let's discuss Lightricks. Last summer, you laid off 12% of the workforce—70 employees. What is the current situation within the company? Does it still have the financial capacity to continue producing innovations and new products in such a challenging climate?

"This is not as severe as it may sound. This year, we are experiencing growth in our existing business while managing costs effectively. That’s part of the story; we made the layoffs because we recognized that the nature of our work is evolving. Training AI systems is expensive, costing millions of dollars. Once we understood that our future is not limited to mobile and that we must invest in infrastructure, we realized that the type of personnel we need is changing. It’s not that we lack the funds to pursue our goals; we are simply shifting our focus to other areas. If you look at the vacancies on our website, you’ll see that there is still a vibrant atmosphere here."

If we compare the timeline of GenAI products to that of the iPhone, which was revealed in 2007, at what stage are we? Are we perhaps still in 2006?

"That's a good analogy, but the answer is a bit more complex. If we didn't have AI capable of developing code, I would say we have a capability that would take a year or two to master, another year or two to start building products, and yet another year or two to reach a certain level of maturity. In this view, if 2022 is the 2007 of AI, we should see interesting products starting to emerge around 2026. However, the wild card, which is difficult to quantify, is productivity gains, which could accelerate this timeline. Most products built around AI currently serve merely as a thin veneer over the models themselves. I believe that this year or next, we will see some significant products like LTX Studio. I know industry colleagues are working on similar-sized projects that will soon emerge."
