Don’t play it again, Sam: OpenAI CEO Altman seeking direction for his wayward chatbot

The CEO of OpenAI, the company behind ChatGPT, has launched a $1 million scholarship program to fund experiments in setting up a democratic process for deciding what rules AI systems should follow. But instead of focusing on the dangers that are already here, Altman and his counterparts are highlighting distant future threats

Viki Auslender | 20:57, 01.06.23

Last week an American lawyer with over 30 years of experience decided to use ChatGPT to write documents he submitted to the court. The chatbot, as it does, invented data and precedents, which the lawyer nonchalantly inserted into the text as if they were facts. To his sorrow and shame, the legal teams and the judge assigned to the case quickly realized that none of the court decisions the lawyer was citing could be located. When asked for an explanation, he admitted that he had used the chatbot to complete his research, but said he did not know its results could be false.

The internet was abuzz and mostly ridiculed the lawyer who was caught red-handed, whether for demonstrating extraordinary technological naivete or simply for being caught in an act of elementary cheating. Amusing as it is, the episode raises a very important question: why did a veteran lawyer believe the chatbot was a technology that could be trusted? Or, put differently, how is it that these technologies are marketed in a way that misleads users about their capabilities? The question is all the more pressing because it points to real damage caused by the technology and its deployment, damage driven mainly by the competitive nature of the market and by each company's effort to make its products more desirable than the others'. All this while the discourse currently surrounding these technologies, and the effort to oversee them, concerns only an imaginary, amorphous entity that may one day evolve from the same technology and cause great social harm.

Sam Altman.

1. “We will try to comply, but if we can’t comply we will cease operating”

OpenAI, the company behind ChatGPT, announced over the weekend that it is launching a grant program. It will award ten grants of $100,000 each to individuals, teams, or organizations to develop a "democratic process" for answering a series of questions (formulated by the company) about the type of regulation that should be imposed on artificial intelligence developments. "While these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision relevant questions and build novel democratic tools that can more directly inform decisions in the future," the company wrote in a blog post published at the end of the week. "This grant represents a step to establish democratic processes for overseeing AGI and, ultimately, superintelligence."

The program comes against the backdrop of a series of statements by OpenAI CEO Sam Altman and several other senior figures in the field. Not long before, during a hearing before the U.S. Congress, Altman explained that he and his colleagues were anxious about how the product they were developing could change the way we live, and that his company, and the artificial intelligence industry as a whole, could cause significant damage to the world. Altman, it is worth noting, made these statements even as he and other executives continue to compete fiercely to build what they call a "doomsday weapon."

As part of the grant program, the company poses specific questions such as: "Should joint vision-language models be permitted to identify people's gender, race, emotion, and identity/name from their images? Why or why not?" or "What principles should guide AI when handling topics that involve both human rights and local cultural or legal differences, like LGBTQ rights and women’s rights? Should AI responses change based on the location or culture in which it’s used?"

The company chose questions for which there is actually a broad scientific (perhaps not engineering) consensus — for example, that systems should not try to identify gender, race, emotions or sexual orientation; or rule on human rights issues.

But the questions are flawed for another reason: they do not touch on the main problems surrounding the technology, problems that are already causing actual social and public damage. They do not ask, for example, whether OpenAI and similar platforms should disclose the training data on which their models were trained, how much they should pay the workers who sort that data, whether the products should be designed so they do not appear "human-like," or whether they must compensate the creators whose work they use.

These are important questions that the company does not think are worth raising, even as it calls for the creation of "democratic" feedback. This should come as no surprise. At exactly the same time that the company is looking to create alternative regulatory procedures, Altman told European legislators exactly what is on his mind: the regulation you have been working on for a long time does not suit us. According to a report in the Financial Times, Altman said: “The details really matter. We will try to comply, but if we can’t comply we will cease operating.” Altman explained that they fear systems like ChatGPT will be classified as "high risk" under the legislation, which would impose a series of transparency and safety requirements on the product.

Among those requirements is an obligation to disclose various details about the design of the system, including information about the size and power of the model, as well as any copyrighted data that was used in training. The company's products were, by and large, trained on information from the internet, most of which is protected by copyright that the company simply ignored.

2. Deliberate humanization process

Altman has since walked the threat back in a tweet: "Very productive week of conversations in Europe about how to best regulate AI! We are excited to continue to operate here and of course have no plans to leave." But the truth is hard to deny, especially when it aligns with the way Altman and other developers want us to think about their products, and about the regulation that should follow.

In order for them to continue doing what they want to do, we must be urged to fear future threats rather than actual dangers. And these are not future threats for which there is clear scientific evidence that they will become reality, such as the rise in sea level due to global warming. They do this, among other things, through a deliberate humanization process, scattering hints in front of us that their systems are human-like. We are encouraged to think that their products "think" before giving an answer and are "hallucinating" if they make up facts. They give them names and make sure they "apologize" when they are wrong. For ChatGPT, the personification is built into the product's interface: the "three dots" animation while the chat composes an answer, and the word-by-word output, together give the impression that the system is "typing."
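To see how little machinery is needed to create that impression, here is a minimal sketch in Python (purely illustrative, and in no way OpenAI's actual code) of a word-by-word output loop that makes a piece of text appear as if it were being typed:

import sys
import time

def stream_reply(text: str, delay: float = 0.05) -> None:
    # Print an already-complete reply word by word, pausing briefly
    # between words so it looks as if the system is "typing".
    for word in text.split():
        sys.stdout.write(word + " ")
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

stream_reply("I apologize for the confusion. Here is the corrected answer.")

In this sketch the answer already exists in full and the delay serves only to shape the reader's perception; in the real product the text is streamed as it is generated, but the visual effect on the user, a machine patiently "typing" back, is much the same.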

But while they encourage a kind of emotional connection between people and these systems, and urge us to fear a future that may never arrive, they deliberately downplay their own responsibility and agency, so that we ignore the damage already being done in the real world today. Regulation, accordingly, becomes a relative matter: as the companies would like to outline it, it addresses the future and products that have not yet been developed, not the existing products with their current capabilities.

Suresh Venkatasubramanian, who served as a White House AI policy advisor to the Biden Administration from 2021 to 2022, aptly summed up the anxiety these executives produce around generative AI when he addressed comments by Senator Chris Murphy: "Overall, I think the senator’s comments are disappointing because they are perpetuating fear-mongering around generative AI systems that are not very constructive and are preventing us from actually engaging with the real issues with AI systems that are not generative."
