The Lanier sphere: "Artificial intelligence should be more like the Talmud"

Jaron Lanier is one of the most influential tech gurus in the world, a pioneer of the early internet and the godfather of virtual reality. He's also a sharp critic of the tech industry, even though he's a senior employee at Microsoft. In an interview with Calcalist, he explains why AI should become like the Talmud

Roni Dori | 14:39, 15.06.23

"There is no artificial intelligence entity. When you look at this technology in depth, you realize that it is nothing more than the combined efforts of flesh and blood beings. From my perspective, the danger isn’t that a new alien entity will speak through our technology and take over and destroy us - it's that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we drive ourselves mad ," says tech guru Jaron Lanier in an exclusive interview with Calcalist.


Lanier is the godfather of virtual reality, a computer scientist and philosopher, an artist, author and futurist, who serves as Microsoft's Prime Unifying Scientist. Yes, the same Microsoft that is tightening its grip on OpenAI, which is responsible for the main agents of chaos in the field, DALL-E and ChatGPT. For two decades, Lanier has been criticizing the tech industry harshly, and yet he did not join the trend of open letters warning of the potential harm of AI.

"Should we let machines flood our information channels with propaganda and untruth?" read a letter by the Future of Life Institute from March, which sparked the trend. "Should we automate all jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"


The signatories of the letter called on the industry to take a six-month pause "to jointly develop and implement a set of shared safety protocols." Thousands of prominent figures signed, including Yuval Noah Harari, Apple co-founder Steve Wozniak and Elon Musk. Geoffrey Hinton, often called the godfather of AI, and even OpenAI co-founder and CEO Sam Altman signed other letters in the same spirit. But Lanier did not sign any of them.

Why didn't you sign?

"Not because of a conflict of interest or anything like that - other Microsoft employees did sign the letter. I do think it's the wrong approach because it mystifies AI too much. It talks about AI as this mystical thing, and that we should figure out some way to control it. But we don't even have the language to make sense of that idea."

What do you mean by that?

"We don’t know what the word 'safety' means, we don’t know what the word 'fairness' means. We think we know, but we don’t. If you try to make AI safe by defining the term, it’s going to be hard because safety is actually a difficult concept, and therefore turning it into code, into something specific, is pretty hard. But if you keep track of those who contributed to artificial intelligence, all of a sudden you can do something, and it gives you specific things you can do rather than being lost in terminology that you can’t sufficiently define."

Give me an example.

"If you have an AI that’s very weird, and it says it’s falling in love with you, as one of our chatbots did with a journalist from the New York Times, you can say: 'Okay, where did it get this?' and he can see this came from fanfiction and a soap opera and a little bit of this and a little bit of that, and all of a sudden it makes sense. It's very specific and very concrete, and I think that concentrating on people is spiritually healthy because if we forget people, it will be hard to build a civilization that’s good for them."

So you're not afraid that the next generation of artificial intelligence will bring with it a whole new set of risks?

"I definitely fear, for example, that advanced GPT models will be used to manipulate people more effectively, make society even crazier, and empower bad actors even more. And that’s a really desperate scenario to consider which could be truly awful. That scenario has the potential to lead to terrible outcomes for humanity. I think that that is a scenario that merits the highest levels of anxiety.

"But the good news is that most people in tech have become aware of this danger, and it’s one of the reasons why there’s more openness to different ideas about AI. And it’s one of the reasons why tech companies that develop artificial intelligence are taking the initiative to approach regulators and say: 'We need your regulations, this is your time to act because this is different from just ordinary stuff and you really need to be involved.' The danger is recognized, but it is very real."

How do you ideally envision AI?

"Artificial intelligence should be more like the Talmud, where different generations from many places commented in a centralized document, and the commentary was organized spatially, which is amazing and very relevant to my thinking about virtual reality. The design of the Talmud does something really extraordinary: it proves that you can have a collective work, where people collaborate without forgetting about human beings.

"You don't need to create a fantasy in which there are no people, and there is only a new collective entity. This means that it's possible to make collaborations without a golem (which was a lump of clay in the form of a person from Jewish folklore, into which mystics breathed a spirit), and the Talmud proves it.

"The Talmud is actually anti-golem. And if we start to think of it like that, when we ask for a picture of a cat in a spacesuit on the moon, baking a pizza in a vacuum, or text for programming code, we will get a result that you could say was brought to you mostly by 14 people, and to a lesser extent another 1,000 people and even less another 10,000 people. But it will be possible to trace the traces of those 14 people online, and in some cases they will also receive payment and a new economy will be created, which is good.

"The people who contributed to the collective intelligence should at the very least receive recognition. They must not be forgotten because they are human beings. And the Talmud does all of this. Wikipedia is also structured like this, although it is a bit pseudo-Talmud, because you can take a look and go behind the scenes and see where people made changes and additions, but it’s all mushed up, as if people simulated the wrong way of doing AI. The Talmud is better than Wikipedia and the current way we do AI, and in my eyes it is very much the model we should use for the future of AI."


Lanier is undoubtedly one of the brightest and most creative minds in the tech industry. In the mid-1980s he founded VPL Research with Thomas Zimmerman, the first company to market a virtual reality kit that included glasses and gloves, and in 1987 he coined the term "virtual reality." In the 1990s he played a central role in upgrading the internet as the chief scientist of the Internet2 project (a non-profit computer network established by 207 American universities together with technology giants). He then founded a facial recognition startup that was sold to Google, and since 2006 he has been at Microsoft, an interdisciplinary genius who contributes his brilliant mind to a multitude of projects.

Among other things, he took part in the development of Kinect, the user interface that allows players to control the game console through body gestures and speech; designed the 'Together Mode' feature for Teams, the company's video calling software, in which all conversation participants are seen sitting in one room; and was involved in AI projects, especially those that interface with virtual reality.

In between, Lanier moonlighted as a composer, collaborating with legends such as Yoko Ono and Philip Glass, and wrote seven books, most of them bestsellers dedicated to criticizing the current model of the internet. Lanier's main claim is that this model, in which people give up their data for free in exchange for various services, such as web search or social platforms, creates a concentration of wealth, stifles creativity and makes humanity more polarized, nervous, and paranoid.

Against this model, he proposes the concept of data dignity: digital traces that identify the people who contributed to a work so that, among other things, they can be paid for it, even when it is filtered and recombined through large models. The other side of this coin is that people would be required to pay for online services, which, in his eyes, would only improve quality. Lanier points to the way Netflix - a service people have become accustomed to paying for - woke the television networks from the slumber of mediocre content they were captive to.
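Mechanically, the payment half of data dignity reduces to proportional arithmetic over attribution shares. A purely hypothetical sketch, continuing the invented shares from the earlier example and keeping the accounting in whole cents:

```python
# Hypothetical royalty split under "data dignity": once attribution
# shares exist, revenue from a generated work is divided in proportion
# to each contributor's share. Names and numbers are invented.

def split_royalties(revenue_cents: int, shares: dict[str, float]) -> dict[str, int]:
    """Divide revenue among contributors proportionally, in whole cents."""
    payouts = {name: int(revenue_cents * share) for name, share in shares.items()}
    # Hand any rounding remainder to the largest contributor.
    remainder = revenue_cents - sum(payouts.values())
    payouts[max(shares, key=shares.get)] += remainder
    return payouts

shares = {"fanfic_author_A": 0.40, "space_photographer_B": 0.35, "cat_illustrator_C": 0.25}
print(split_royalties(999, shares))
# {'fanfic_author_A': 401, 'space_photographer_B': 349, 'cat_illustrator_C': 249}
```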

"Artificial intelligence that creates new music is based on the works and voices of real people, and you have to recognize these people," he explains. "If musicians will always be forgotten and there's nothing but mash-ups (song hybrids), you'll end up with boring, average, gray music and depressed musicians. But as long as you recognize the musicians, you can do all the mash-ups and the weird AI stuff is fun, which is fantastic."

This solution, according to Lanier, is also relevant to the dangers of artificial intelligence because it addresses the increased risk of fake news - the digital traces will make it possible to know the source of every product that this technology produces. "If you know something is fake, then it won't be possible to trick you," he explains.

This approach led him to publish a 4,000-word article in The New Yorker in April under the provocative title "There Is No A.I.," in which he tried to dispel the mystification surrounding the technology and bring it back into proportion. "Seeing AI as a way of working together, rather than as a technology for creating independent, intelligent beings, may make it less mysterious—less like HAL 9000 or Commander Data. But that's good, because mystery only makes mismanagement more likely," he writes there.

“Anything engineered—cars, bridges, buildings—can cause harm to people, and yet we have built a civilization on engineering. It’s by increasing and broadening human awareness, responsibility, and participation that we can make automation safe; conversely, if we treat our inventions as occult objects, we can hardly be good engineers. Seeing A.I. as a form of social collaboration is more actionable: it gives us access to the engine room, which is made of people.”

The article concluded with the words: “Think of people. People are the answer to the problems of bits.”


Aren't you underestimating the dangers of artificial intelligence?

"Focusing on humans instead of machines doesn't reduce any of the potential threats or the potential benefits," he says now. "What it does is give us more practical ways to improve things down the road. It's a pragmatic argument. The innovation today is that we have algorithms that can recognize things, like whether a certain image is of a cat or a dog, or whether it's likely that Shakespeare actually wrote a certain text. But that is exactly what generative AI does—it doesn’t do anything more than that.

"In the last 20-something years, people have been experiencing things through the internet, which is calculated by an algorithm, and this has tempted many people to abuse this and manipulate others by modifying the algorithm. Sometimes these are official customers of companies like Google and Meta, who will pay for advertising to influence people. And very often it is not an advertisement itself but the placement algorithms and the recommendations that end up affecting people because they have a way of moving people into social zones that increase psychological and social difficulties. They tend to make people more paranoid or vain or irritable etc. There’s this whole world of search engine optimization that has a way of making the whole web a little more cranky, and then there’s just bad actors who want to go in and disrupt societies, such as Russia, which created armies of fake users."

Much of your argument is a matter of framing, as if calling the technology man-made makes its harms less alarming. Could it simply be self-indulgence rather than a sober view of the threat of artificial intelligence?

"No. It allows us to use ideas and bring in terms and actions that we can actually define, whereas otherwise we’re lost in words and abstractions that we can’t define. It's really hard to define the range of actions that malicious actors might take. A bit like in the fairy tale about the Genie who grants you three wishes, and you try to think of ones that won't be interpreted in a way that isn't what you wanted. That's impossible, isn't it? Because language is actually interpretable. However, if everything you receive has a detailed history, if you can say 'this is who asked for it,' 'these are the types of materials used in this result,' etc., then suddenly the problem will disappear. You will be able to say, 'Oh, the picture I received is the product of someone trying to scare me, I don't need to worry about it.'"

Yes, just two weeks ago, a verified Reuters account posted a fake photo of an explosion near the Pentagon, and the US stock market reacted with sharp declines.

"And it's a tiny thing, really tiny. Much more significant things can happen than that, but it's quite difficult to create an algorithm that will recognize that it's happening. Therefore, a history of digital traces can solve this problem."

But how do you create such a history? The internet just doesn't work like that, does it?

"How come this is possible with crypto? It's because algorithms allow us to do such things, and if we can do them for cryptocurrencies, why can't we do it for network communication when it's so important? The bottom line is that my proposal moves us away from terms that we don't understand to terms that we can understand, and that is the most important thing about it."

It is difficult to overestimate Lanier's influence on the technology industry: among other things, he appeared on Time magazine's 2010 list of the 100 most influential people in the world and on Wired magazine's 2018 list of the 25 most influential people in technology of the past 25 years.

Despite his high status, he does not hesitate to articulate painful criticisms of the industry, especially of his closest colleagues and friends. "About 20 years ago, I lost a lot of friends because of my criticism of the social network model, which basically gives people free things on the internet in exchange for being manipulated," he says sadly. "I thought it was very bad. At the time, my position was very controversial, and I lost friends who never returned. And it was quite painful."

This criticism also got him into trouble with Google founder Sergey Brin, who bought a startup from Lanier but did not recruit him to the company himself. “Sergey [Brin] told me, ‘We don’t want people writing all of these controversial essays,’” he told Business Insider in 2017. “Because I’ve been writing tech criticism for a long time. I’ve been worried about tech turning us into evil zombies for a long time, and Sergey said, ‘Well, Google people can’t be doing that.’ And I was like, really? And then I was talking to Bill Gates and he said, ‘You can’t possibly say anything else bad about us that you haven’t said. We don’t care. Why don’t you come look at our labs? They’re really cool.’ And I thought, well that sounds great. So I went and looked, and I was like, yeah, this is actually really great.”

What reactions did you receive from your colleagues to the article "There Is No A.I."?

"It's very strange. Right now, there is no prominent criticism of the article from anyone in the technical world, and I don't know how to comprehend it. Sam Altman liked the article and used some of its terms in his Senate hearing last month. Kevin Scott (Microsoft's chief technology officer to whom Lanier reports) recommends the article at the beginning of his lectures. And all of this makes me a little uncomfortable because I want to believe that I am radical with an edge. And if I write something and people around me like it, does that mean I've lost that edge? I'm not radical anymore?"

Some would argue that as a Microsoft employee, you are not only not radical, but you are also not really objective.

"I have a unique arrangement with Microsoft that allows me to express my opinions freely, without restrictions. It is important to clarify that I am not speaking on behalf of the company. For quite some time now, I have had a somewhat strange role in Silicon Valley - I am both a part of it and a critic of it. I've always felt that it's important to do both. You can't simply divide the world into those who do things and those who criticize them. It doesn't work."

How is it that the people who express the most serious concerns about artificial intelligence today are the ones who develop it? Altman said in an interview with The New Yorker in 2016, “I try not to think about it too much. But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

"Isn't it funny? How strange it is that people say, 'Hmm, we might destroy the world,' and then just go and do it," he says, breaking into a big laugh. "My explanation for this - and I say this with a lot of respect for these people because these are my friends and colleagues - is that many people in the tech culture, the culture of the rich geeks in Silicon Valley, like me, for example, grew up on science fiction. We grew up on Commander Data and movies like 'Terminator' and 'The Matrix,' and stories about Isaac Asimov's laws of robotics, HAL 9000, and many more. When you grow up with such stories, they become a bit like the Torah for you. These stories, deeply embedded within us, provide an external explanation for what is happening in the world, even if we are not fully aware of it.

"In these science fiction stories, the so-called machines become intelligent and we have to fear that they will become the new Pharaoh. When people live with this idea deeply embedded in their childhood, they don't feel that they have a choice but to involuntarily participate in a story from which there is no escape."


Is love of science fiction equivalent to religious belief?

"I believe that it is impossible to function without some kind of broad framework for thinking about what is happening in this world and in life. This framework can be religion or metaphysics. So, while you can always laugh at the religion or metaphysics of others, the truth is that people who oppose such frameworks are just fooling themselves, especially in circles of science and technology. Sometimes we have this idea that we're disciplined intellectuals and that we don't need all these beliefs, that we only work with evidence and that we're very rational. But I don't think that's possible. Thought needs a frame, and frames need bigger frames. Eventually, you need to have really big frames."

Lanier was born in New York to Jewish parents: Ellery, most of whose family was murdered in pogroms in Ukraine, and Lillian, who was born in Vienna, the daughter of a professor and rabbi who was a colleague of Martin Buber. Lillian talked her way out of a concentration camp by passing as Aryan. "She forged paperwork to get her father released just before he would have been murdered," he wrote in his book "Dawn of the New Everything." "Maneuvers like this were only possible in the earliest days of the Holocaust, before genocidal procedures were optimized."

"My parents' family history has made me grateful for every moment in which my life is safe and quiet," he says now. "And this was not always the case. I was in New York during 9/11, and I lost my house, which was near the towers, because the ceiling collapsed. You are never free from danger - Israelis know this well, but it is also true in America - but most of the time, we are under the illusion that we are free from it. I am grateful that I can hold onto that illusion, at least some of the time."

His parents moved to Mesilla, New Mexico, immediately after his birth, and his mother decided to send him to school across the border, in Mexico, on the grounds that "in Mexican schools, the education is as good as in Europe." When he wondered what was good about Europe that "wanted to kill us all," she replied that "there were beautiful things everywhere, even in Europe, and you have to learn how to not get shut down completely by the evil of the world."

Lanier and his mother were so close "that I barely ever perceived her as a separate person," he wrote. "I remember playing Beethoven sonatas on the piano for her and her friends, and it felt as if we were playing them together, from the same body."

Lillian was killed in a car accident when Lanier was nine years old, in a Volkswagen she had bought a few months earlier. "Why did my parents even buy a car from Volkswagen? It wasn't a ‘Beetle’, the model designed by Hitler, but still. The choice must have been part of my mother's plan to find the good in Europe, in everything." Lanier's 16-year-old daughter, Lena, was named for his mother.

The death of his mother brought on a series of illnesses for Jaron, including severe pneumonia, which left him hospitalized for a whole year. To make matters worse, the family was left with nothing: Lillian had been the breadwinner, and the house his father bought with the remaining money burned down.

When Jaron was finally released from the hospital, he moved into a tent with his father, who had meanwhile become a teacher. Then they moved into a house they built themselves, which resembled a pregnant belly and a pair of breasts. Luckily for the boy, one of his neighbors was Clyde Tombaugh, the astronomer who discovered the dwarf planet Pluto. Tombaugh worked at the nearby U.S. Army testing site, taught at the local university, and took Lanier under his wing.

When he graduated, he received a scholarship from the National Science Foundation for mathematics studies, which led him to study programming. Later, he moved to New York and studied art. To finance his studies, he worked as a midwife's assistant, worked in a donut shop, and herded and milked goats. In 1983, he started working for Atari, and the rest is history.

We've talked a lot about the Talmud and Judaism in general. Are you observant?

"Not really. I have mezuzahs at home, I go to the synagogue on Yom Kippur, and do all sorts of other things, but I don't really observe mitzvot, and I feel a certain guilt about it. But you can think of my hair as wigs."

Are you connected to Israel? Are you aware of what is happening here, like the judicial coup promoted by the right-wing government?

"I have a problem with today's populism, which is not the same as traditional populism. Today's Netanyahu is not the same as the Netanyahu of the past, and this is seen all over the world - also in Hungary, India, Brazil, Turkey, China, and Poland - in all these places the leaders are different from the leaders of the past, in that they are childish and whiny.

"Take Stalin, the man of steel, for example - he didn't care about your feelings. But today's leaders do care; they want you to like them. I knew Donald Trump a bit in the 1990s, and he's really changed since then. Social media turned him back into a child, similar to what happened for Elon Musk and other laggards. Social media increases social anxiety, superficiality, and irritability in them and ultimately turns them into children. Social networks make us regress to the playground, so you have leaders who want to be loved and masses of dissatisfied people - mostly men - who go out to vote because they identify with them.

"I don't want to underestimate the value of many other things happening in Israel, such as its very problematic theocracy, but this is something it and the other countries I mentioned have in common - a strange syndrome of the little boy in the playground, a leader who is a kind of awkward dictator. And we should all be embarrassed by the fact that these embarrassing leaders receive so much support."

Artificial intelligence plays a role in exacerbating this trend.

"True. Mainly because it arouses in many people the fear that they will become neglected because of it, that their voices will not be heard, that they will not be needed. This anxiety is mixed with the feeling of 'I need more attention,' and this is the reason why populist leaders receive so much support. So yes, what's happening in Israel is horrifying to me, but it's incredibly similar to the processes happening everywhere else in the world. It's a worldwide phenomenon, not just Israeli. It's just that we as Jews always do things intensely," he says, bursting into a big and contagious laugh.

