OpenAI is acting responsibly - and that’s suspicious

Analysis

The delay in the release of Voice Engine, OpenAI's voice cloning tool, is mainly intended to strengthen the company's public image as the responsible adult in the artificial intelligence arena. In many ways, however, it makes it easier for the company to develop even more destructive products.

Viki Auslender | 15:24, 03.04.24

OpenAI announced this week that it has developed a tool that can clone a person's voice with ease, but that it will not release it to the general public, a mark, it says, of its "cautious and informed" approach. According to the company's blog post, the tool, Voice Engine, was first developed at the end of 2022 and can generate synthetic speech from an original recording of just 15 seconds. The company said it had planned to launch a pilot for developers this past month but, given the ethical implications, decided to hold off for now. "We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities," OpenAI said in the post. "Based on these conversations and the results of these small-scale tests, we will make a more informed decision about whether and how to deploy this technology at scale."

The post casts OpenAI in a flattering light on two fronts. On the one hand, it presents an innovative, productive company at the cutting edge of artificial intelligence; on the other, a sober company that understands the responsibility placed on its shoulders. Accordingly, the announcement also drew a positive response in the media.

Sam Altman.

OpenAI is not the first company to build a convincing voice cloning tool. Such tools are already on the market, and their problematic nature is widely documented: they can be used to break into bank accounts, carry out telephone scams, and impersonate a loved one or a politician. These fears received special attention in May 2023, when the Senate Banking Committee called on the major banks in the United States to update their security measures to address the risks that voice cloning poses to voice-based authentication.

If all the risks are known, why would OpenAI, a company founded, by its own definition, "for the benefit of all humanity," put any effort into developing a tool of this type? In its blog post, the company lays out its reasons: a voice cloning tool can help people who cannot speak, help patients regain their voice after a speech injury, and even serve creators by translating content while preserving a local accent.

But as clear as these advantages are, so is the danger, and before granting access to such a tool, the company says it will first put in place a set of rules to prevent its abuse, including strict consent mechanisms and digital "watermarks" that would clearly identify a voice as synthetic rather than original. However honest and clear the words may sound, OpenAI's post should be read with a degree of cynicism, as a marketing ploy: a theater of responsibility.

For more than two years now, the company's products, be it ChatGPT or DALL-E 2, have been marketed to the general public while producing those very same problems of identification and consent. In fact, many of the dangers surrounding the company's products and models have already materialized: products based on ChatGPT and DALL-E 2 infringe copyrights wholesale and were built without any consent from the original creators; the products contain biases and stereotypes and entrench them further in society; the tools distort information, with a tendency to make things up; and the large amount of capital required to build and operate these models means the field suffers from a concentration of power.

What is OpenAI doing to address these claims? The exact opposite of the many proposals put before it. It is not transparent about the data sets on which it trains its artificial intelligence and machine learning models; it does not allow third-party developers access to the information it holds through its API; it has not developed effective digital "watermarks" for its synthetic output, despite promising to do so; and it is fighting the artists, writers, illustrators, and newspapers suing it for violating their property and creative rights.

So why would a cloned, synthetic voice be more dangerous than such text and images? The answer to this question is the key: OpenAI believes it is the frontrunner in artificial intelligence, and it also wants to be the one to define the field's red lines: to explain to us what is dangerous and what is not, what causes harm and what does not, which harms deserve attention and which do not. The red line now drawn around voice cloning is almost arbitrary, responding mainly to a general feeling among the public that the technology creates great risks, especially in a year of elections in the United States.

The timing of the post, in which the company presents itself as the responsible adult, about a product it developed almost two years ago, is no coincidence; it is an attempt to answer the mounting criticism that the field requires more regulation.

The post is also part of a broader effort by the company to call, on the one hand, for supervision and cooperation with regulators, while on the other demanding that everything be done on its own terms. As part of this policy, the company's CEO Sam Altman toured the United States, Great Britain, and Europe last year to promote his own vision of how the company's developments should be supervised. Altman did all this, of course, while OpenAI continued to develop and launch new products and update existing ones.

The PR campaigns are intended to demonstrate that the company cares about safety, but there is a hollow dimension to them, one that obscures the fact that the company works first to promote its own interests, not those of the general public as regulators and elected officials define them. For example, shortly after Altman testified in the U.S. Senate and emphasized the need for regulation, he told reporters that the EU's artificial intelligence regulations were too strict and that his company might therefore "stop operating" on the continent. The European legislation includes established, well-known ways to reduce harm, such as requirements for greater transparency about training data sets, consent and intervention mechanisms, and enforcement of existing laws on property rights.

At the same time, OpenAI's effort to promote supervision of its products is wrapped in a well-crafted narrative according to which these tools will one day become a "danger to the human race," and only a responsible company like OpenAI can ensure they are not misused. By continuing to develop the technology while conjuring the specter of humanity's end, Altman and OpenAI make their products appear immensely powerful, which in turn burnishes the company's public image.

It's not that a tool that clones voices isn't dangerous - but it doesn't mark the end of humanity.

This approach casts OpenAI's developers as possessing superpowers, a modern version of Prometheus giving fire to man. Is it any surprise, then, that Altman compared himself to Robert Oppenheimer, the father of the atomic bomb, and his company to the Manhattan Project?

The comparison to Oppenheimer and Prometheus allows Altman and OpenAI to draw a few more convenient conclusions: Prometheus gave man fire to be used for protection; Oppenheimer developed the atomic bomb out of a desire to end World War II; and both bore the punishment, physical (Prometheus) or mental (Oppenheimer), for their actions. By analogy, Altman and OpenAI bear the responsibility (to develop for the public good), have fulfilled their duty (by restraining themselves from launching one product of their choosing), and in the process have shielded themselves from the arrows of criticism.
