
20-Minute Leaders

“Dip your toes in the water of AI. There's no substitute for experience.”

Being literate with AI and NLP will be important for most people as AI becomes an integral part of our lives, says Yoav Shoham, co-founder of AI21 Labs.

CTech | 08:06, 04.08.22

Being literate with AI and NLP will be important for most people as AI becomes an integral part of our lives, says Yoav Shoham, co-founder of AI21 Labs. He advises people to get experience by starting with the simplest use case for their business and experimenting. While he shares that the advances in machine learning today are beyond what most people imagined was possible, Shoham says language models are not yet sufficient. AI21 Labs' mission is to change the way we read and write, and one way it is accomplishing that is by building applications on top of language models. Shoham explains that language is a window into the human mind, which makes it much harder for computers to make sense of. He says he is impressed every day by work being done at AI21 that he wouldn't have even thought of, and he's excited by the possibilities.



Yoav, what led you into the founding of AI21 Labs?

AI21 Labs is an unusual company in a number of respects. One of them is the reason we started the company: where we saw AI from a historical perspective. Back in the '80s, AI was very popular. Then, in the '90s, you weren't allowed to admit you were doing AI. In the '80s the field certainly overpromised, and therefore we entered what was called the "AI winter."

Now, the pendulum has swung back. Everybody is doing AI, but it's very different: now it's all about statistics, specifically under the heading of machine learning. What we can do now with machine learning is something that, honestly, I think few of us imagined we could do. You saw the impact in machine vision first. You didn't see that dramatic impact in language until about five years ago. When you think about it, vision is "easy": recognizing that this is a bottle is a local, simple task. There's nothing local and simple in language. Machine vision is a lens into the human eye, and language is a lens into the human mind. It's much harder for computers to make sense of language.

But then we saw the needle start to move because of a particular brand of neural nets called "transformers." On a lot of academic benchmarks that had puttered along for a while, computers suddenly approached, and sometimes exceeded, human-level performance. But the truth is that it's a little bit of an illusion. These models enable things that are amazing, but they're also limited.

If you go to a popular language model, for example GPT-3 or Jurassic-1, and ask it to complete sentences or answer questions, it will often do amazingly well. You'll ask it to add two-digit numbers, and it will give you the right answer. But that is something of an illusion, because these models really don't understand addition. The term "language model" is a little misleading: you might initially think it's a model that learns the rules of language. It does that, but it learns much, much more. It learns about the world as described by the text on which it was trained. And it doesn't have access to current information. We founded the company on the premise that language models are necessary but not sufficient. We need to augment them with symbolic reasoning, access to real-time data, and so on.
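To make the limitation concrete, here is a minimal sketch of probing a general-purpose language model with an arithmetic prompt. It uses the small open GPT-2 model through Hugging Face's transformers library as a stand-in for the much larger models Shoham names; the model choice and prompts are illustrative assumptions, not AI21's setup.

```python
# Minimal sketch: probing a general-purpose language model with arithmetic.
# GPT-2 stands in here for the larger models discussed (GPT-3, Jurassic-1);
# it only predicts plausible next tokens and has no notion of addition.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

for prompt in ["17 + 25 =", "The capital of France is"]:
    out = generator(prompt, max_new_tokens=5, do_sample=False)
    print(repr(out[0]["generated_text"]))

# A correct-looking sum, when it appears, is pattern completion,
# not computation: slightly different prompts can fail badly.
```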


Yoav Shoham, co-founder of AI21 Labs


You've positioned the company at the intersection of research and creating offerings for the world. Tell me about balancing those two.

We decided early on (I started with my partner, Ori Goshen, and very shortly after, Amnon Shashua joined as a co-founder) that we did not want to just create a research lab. We really wanted to create a large business. The question was what products or services we wanted to build around this deep technology we were contemplating. In an ironic way, that limited us to applications that required the deep technology.

Our mission is to change how we produce and consume information, specifically, how we write and read. When you think about it, the way we write today is the way a product manager at Microsoft decided in 1980; the fundamental experience hasn't changed. When you think about reading, we're still executing on Gutenberg's vision of the printing press. We think both of these experiences can be radically rethought if you take AI as a basic building block and turn the machine into a thought-partner in the creative process. That's our mission.

We started out by building our own applications. A year and a half ago, we released our first product called Wordtune, which focuses purely on the writing side. It's a first step of a much longer road. The tail end looks like total science fiction, except it will all be done in two years.

How do you experience or think about the way that teams around the world are going to experience NLP and enterprises are going to bring NLP into their work?

NLP is a very broad umbrella. If I limit the discussion of how it will impact how we write, for example, maybe I could best explain it by an analogy. If you're writing a book or an article, you used to have a copy editor. The editor is really a thought-partner. They look at it and say, "Listen, this part doesn't capture what you have in mind." Or, "This whole section isn't really adding to the story." This editor doesn't replace you, but it makes you a better version of yourself. That's what we'll see in a writing assistant, really a thought-partner in the writing process.

Talk to me about symbolic reasoning as an idea in hybrid with deep learning to advance the boundaries of foundational models.

First of all, Jurassic-1, the model we built more than a year ago, is not neuro-symbolic. It's a purely neural system, very much inspired by GPT-3. It didn't break the mold in fundamental ways; it's just a very good workhorse. There are certain things these neural models are not good for; they're not optimized for them. It seems obvious to us that we should have the best of both worlds: the statistical inference and statistical access to world knowledge, augmented with structured knowledge, real-time information, and reasoning optimized for certain islands of knowledge. That's what Jurassic-X, which we announced a couple of months ago, is.
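As an illustration of that hybrid idea, here is a toy sketch in which queries a neural model handles poorly, such as exact arithmetic, are routed to a symbolic module, while everything else falls through to the language model. This sketches the general principle only; it is not AI21's Jurassic-X architecture, and the language_model function is a hypothetical stub.

```python
# Toy neuro-symbolic router: exact arithmetic goes to a symbolic
# module; everything else falls through to a neural language model.
# An illustration of the hybrid principle, not Jurassic-X itself.
import re

ARITHMETIC = re.compile(r"^\s*(\d+)\s*([+\-*])\s*(\d+)\s*$")

def calculator(a: int, op: str, b: int) -> int:
    # Symbolic island: exact, verifiable computation.
    return {"+": a + b, "-": a - b, "*": a * b}[op]

def language_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to a large language model API.
    return f"<LM completion for: {prompt!r}>"

def answer(query: str) -> str:
    match = ARITHMETIC.match(query)
    if match:  # route to the symbolic module when it applies
        a, op, b = match.groups()
        return str(calculator(int(a), op, int(b)))
    return language_model(query)  # statistical world knowledge

print(answer("17 + 25"))                           # 42, computed exactly
print(answer("Who invented the printing press?"))  # deferred to the LM
```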

Do you expect that every small business will create their own applications, or use black-box applications from companies like AI21 Labs, in the next couple of years?

These language models, or foundation models if you enlarge the notion to models that are not purely linguistic, are becoming much more generally available. Increasingly, people, small businesses, and large businesses will incorporate them into their offerings. But people don't really know how to work with these models, what they're good for, and how best to use them. What you have right now is akin to Henry Ford announcing his great invention, an engine, showing it to people, and saying, "Go have fun." These models are quite removed from a value proposition that an actual user, developer, or organization can relate to. You'll see them get not only more sophisticated but also enveloped in layers that bring them much closer to the application, and you'll see the market get increasingly educated. I think we're probably on the order of two years away from more maturity in both the providers and the consumers of NLP technology.
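One hedged sketch of such an enveloping layer: wrap a raw completion endpoint in a task-level function that a product can call directly. The endpoint URL, request fields, and response shape below are illustrative assumptions in the style of hosted completion APIs, not a documented AI21 interface.

```python
# Sketch of an application "envelope" around a raw completion API.
# NOTE: the URL, request fields, and response shape are illustrative
# assumptions modeled on typical hosted completion APIs, not a
# documented AI21 interface.
import os
import requests

API_URL = "https://api.example.com/v1/complete"  # hypothetical endpoint
API_KEY = os.environ["LM_API_KEY"]

def complete(prompt: str, max_tokens: int = 64) -> str:
    """Thin wrapper over the raw model: prompt in, text out."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response field

def rewrite_formally(sentence: str) -> str:
    """Task-level layer: the raw 'engine' packaged as a writing aid."""
    prompt = (
        "Rewrite the sentence in a formal tone.\n"
        f"Sentence: {sentence}\n"
        "Formal:"
    )
    return complete(prompt, max_tokens=40).strip()

print(rewrite_formally("we cant make the meeting, push it to thurs?"))
```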

What should we be doing now so that in two years we can use these effectively and responsibly? How do you even think through these questions as a cutting-edge researcher?

You can't easily generate content with current models. If you go and prompt a model, you'll get surprisingly relevant things alongside a bunch of garbage. I think we have a way to go, and it's not only about creating better language models. A language model is a general facility on top of which you need to add a lot. In our applications, the language model was the basis, essential, but definitely less than half of the work it took to actually provide a writing assistant, a reading assistant, or a video skimmer.

"Foundation model" is a good term, because these models do provide foundations for many applications focused on language. When you build a house, the foundation is essential, but it's only part of the house. To take this foundation and turn it into a useful product or facility takes a lot more work. That's something I think perhaps some people don't realize.

What education do we need to provide people, and at what layers, so that organizations around the world can apply these models effectively to their own use cases in two years' time?

It will vary among organizations. But just as, when personal computers started to infiltrate industry, it became clear you needed to be literate in computing, you will need to be literate in AI. Typically you don't need researchers; a lot of what you need to know about a language model is at the level of common sense. My advice is, first of all, dip your toes in the water. There's no substitute for experience. Second, think about the simplest use case in your business or organization where you have an intuition, where the match with language is obvious. It may not be the highest-value one; that's not important right now. Don't necessarily rush to create a chatbot. Find a good problem, not the most complicated one, and start experimenting.
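As a concrete version of that advice, here is a minimal first experiment using an off-the-shelf Hugging Face pipeline. The support-ticket use case is an illustrative assumption, chosen only because its match with language is obvious.

```python
# "Dip your toes in the water": the simplest possible NLP experiment
# on a business-flavored use case, using an off-the-shelf pipeline.
# The support-ticket example is illustrative, not from the interview.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

tickets = [
    "The new update broke my login, this is the third time!",
    "Thanks, the billing issue was resolved quickly.",
]
for ticket in tickets:
    result = classifier(ticket)[0]  # {"label": ..., "score": ...}
    print(f"{result['label']:>8} ({result['score']:.2f})  {ticket}")
```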

What are you most excited about right now in natural language?

To be honest, every day I go to the office, I'm blown away. People are doing such creative things, things I would not have thought of myself. The possibilities are so exciting.

Do you consider yourself more of a researcher, mathematician, computer scientist, or creative entrepreneur?

It's not that they're mutually exclusive, right? So you try to do both. It's a lot of fun to create, and once you've done it, there's no going back. It's creation based on deep principles, deep mathematics. The combination appeals to me.

Michael Matias


Michael Matias, Forbes 30 Under 30, is the author of Age is Only an Int: Lessons I Learned as a Young Entrepreneur. He studies Artificial Intelligence at Stanford University, is a Venture Partner at J-Ventures and was an engineer at Hippo Insurance. Matias previously served as an officer in the 8200 unit. 20MinuteLeaders is a tech entrepreneurship interview series featuring one-on-one interviews with fascinating founders, innovators and thought leaders sharing their journeys and experiences.

Contributing editors: Michael Matias, Megan Ryan
