Opinion
Can we talk? OpenAI’s new chat bot opens a Pandora’s box
OpenAI, an AI research lab co-founded by Elon Musk, has released its latest natural language processing (NLP) creation, ChatGPT, whose legal consequences must be considered
With each new iteration of AI, the technology seems to get closer to sentience. Notably, an engineer working with Google’s LaMDA AI platform believes that AI has already achieved sentience. This revelation came to him after the AI made a joke about Israel. In a conversation with Bloomberg reporter Emily Chang, Blake Lemoine noted how, in testing LaMDA, he would ask the AI to guess the religion of an officiant in a particular country. When posed the question regarding Israel, LaMDA responded that the officiant would be a member of the one true religion, the Jedi Order. That joke, seemingly intended by the AI to reduce tension, helped convince the now-fired engineer that Google’s AI had achieved consciousness.
Now, another AI is putting the question of sentience further to the test. OpenAI, an AI research lab co-founded by Elon Musk, released its latest natural language processing (NLP) creation on the world last week: ChatGPT, an advanced chatbot that formulates text on demand, including poems and fiction, even in Hebrew and Yiddish, as directed by the user. Netizens around the world were quick to post the wondrous, silly and even scary output of this artificial intelligence machine.
There is little doubt that if the same outputs generated by ChatGPT were man-made, they would be granted copyright protection. Nevertheless, most jurisdictions have remained adamant that, for the time being, an AI, including ChatGPT, cannot be a legal author under copyright law. This creates a blatant bias and an untenable dichotomy in copyright law: the same text, when written by a human, is protectable under copyright law, but when created with an AI purportedly lacking that human je ne sais quoi of a modicum of creativity, it becomes an unprotected public-domain work.
In the past, we have argued that perhaps the work-for-hire doctrine could partially solve the issue of AI and copyright. Here we suggest that we can resolve this internal copyright bias by appropriating another area of copyright law, that of database and compilation copyright, as the basis for examining AI authorship.
Under this theory, works created by an AI would fall under the area of copyright law dealing with compilations and databases. Databases are typically collections of facts, whereas compilations can also consist of pre-existing non-factual information. Broadly, compilations and databases receive weak copyright protection, if any, even under the European Database Directive.
Practically, one could view the uncanny outputs of OpenAI as simply the compilation of pre-existing human creativity. As per ChatGPT itself, in response to a query posed by this human author: “When given an input, my algorithm generates a probability distribution over the possible responses, and selects the response with the highest probability as its output. […] I am not generating original content or creative works. Instead, my responses are based on the information that I have been trained on, which typically consists of publicly available text. As such, my responses are not subject to copyright protection.”
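To make the chatbot’s self-description concrete, the following is a minimal, hypothetical sketch in Python of the selection step it describes: given a probability distribution over candidate continuations, the most probable one is chosen (so-called greedy decoding). The toy vocabulary and probabilities here are invented for illustration and do not represent OpenAI’s actual model or API.

```python
# Minimal illustration of the selection step ChatGPT describes:
# given a probability distribution over candidate continuations,
# pick the most probable one (greedy decoding). The candidates and
# probabilities below are invented for illustration only.

def greedy_select(distribution: dict) -> str:
    """Return the candidate assigned the highest probability."""
    return max(distribution, key=distribution.get)

# A toy distribution a language model might assign to possible
# continuations of a prompt such as "The officiant would be a member of".
toy_distribution = {
    "the Jedi Order": 0.42,
    "the local congregation": 0.31,
    "an ancient order": 0.18,
    "no religion at all": 0.09,
}

if __name__ == "__main__":
    print(greedy_select(toy_distribution))  # prints: the Jedi Order
```

The point of the sketch is the argument that follows: the output is assembled from statistical patterns over pre-existing human text rather than from any independent creative spark.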
Compilations were not always weakly protected. For most of the last century, databases, including phonebooks, were protected under the sweat-of-the-brow doctrine. Under this principle, if an author put in a sufficient amount of effort, then even if the output did not rise to the level of an original work, it could still be considered a protected work of authorship. To wit, in the case of a compilation such as a phonebook, the facts (i.e., names and addresses) are not original works of authorship, yet the entire work was still fully copyrightable.
In 1991, the US Supreme Court, in Feist Publications v. Rural Telephone Service Co., effectively canceled the sweat-of-the-brow doctrine; telephone white pages could no longer be protected as copyrightable works, as they were not considered works of original authorship under copyright law.
Notably, the courts have maintained some minimal level of copyrightability for these databases and compilations, albeit only for the original organization of the information in the database, where it exists, and not for the factual information itself. Similarly, under Israeli copyright law, databases and compilations are protected for the originality of their selection and arrangement, but not for the underlying information. Perhaps this distinction could similarly apply to OpenAI’s chatbot, which admits that its impressive outputs are no more than the stringing together of previously man-made thoughts.
However, what of the presentation of those ideas by the AI? Should these compilations not receive at least a minimal amount of copyright protection? In the case of a phonebook, the simple presentation of factual information in alphabetical order is not protectable because “it lacks the modicum of [human] creativity necessary to transform mere selection into copyrightable expression.” Similarly, it could be argued that the presentation of facts and ideas by ChatGPT is also uncreative. If even the presentation of the information lacks the necessary originality (every attempt by this author to query ChatGPT returned the same rigid structure of information), then no copyright protection remains.
With this lack of copyright protection now justified under many copyright regimes, at least for the time being, the problematic bias against AI works of art falls away.
Prof. Dov Greenbaum is the director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Harry Radzyner Law School, Reichman University.