AI Ethics: Who will police the machines?

Opinion

China, the US, and private corporations all have different priorities and guidelines when implementing artificial intelligence, explains Dov Greenbaum.

Dov Greenbaum | 10:57, 28.10.21
Just weeks ago, China’s Ministry of Science and Technology released a set of AI ethics guidelines developed by the National Governance Committee for the New Generation Artificial Intelligence. The English-language media have rendered the title of the guidelines, originally published in Chinese, as either “New Generation Artificial Intelligence Ethics Specifications” or “Ethical Norms for the New Generation Artificial Intelligence.” China is already a world leader in AI technology, and this new effort could further cement its position as the frontrunner in the battle for AI dominance.

Broadly, the guidelines aim to integrate ethics into the entire AI lifecycle. They set forth six fundamental ethics rules for AI development (eight in the original). These include: (1) that AI should aim to enhance the well-being of humankind; (2) that AI should promote fairness and justice and protect the legitimate rights and interests of all relevant stakeholders; (3) that AI should protect the privacy and security of its users and their data; (4) that AI should be developed in such a way as to ensure human controllability, transparency, and trustworthiness; (5) that AI should be designed to be accountable; and (6) that the Chinese government should aim to generally improve AI ethics literacy.

Not all institutions seeking to implement AI ethics need to. Photo: Shutterstock

This last rule is intended to be implemented in part by helping relevant stakeholders learn and popularize knowledge related to AI ethics. The AI ethics literacy component further requires that stakeholders not only understand the relevant issues in AI ethics, but also, in their public communications, neither overstate nor downplay the risks associated with AI machines.

These fundamental guidelines also require that AI remain subservient to humans at all times. To this end, the guidelines suggest that AI developers build specific safeguards into their AI machines: safeguards that allow humans to override an AI machine when necessary, to withdraw from any or all interactions with an AI, and, when needed, to suspend the operations of an offending AI system entirely. In short, the guidelines insist that humans always retain meaningful control of the AI systems that are developed.
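What might such safeguards look like in practice? The guidelines themselves stop at principles, but a minimal sketch in Python can illustrate the three mechanisms they name: override, withdrawal, and suspension. Everything below (the `SupervisedAISystem` wrapper and its method names) is hypothetical and for illustration only, not drawn from the Chinese specifications or any real framework.

```python
# Hypothetical sketch of human-in-the-loop safeguards: a wrapper that keeps
# a human able to override, withdraw from, or suspend an AI system.
from enum import Enum, auto

class ControlState(Enum):
    RUNNING = auto()
    OVERRIDDEN = auto()   # a human has taken direct control of a decision
    WITHDRAWN = auto()    # the user has opted out of interacting with the AI
    SUSPENDED = auto()    # total operations halted by an operator

class SupervisedAISystem:
    """Wraps an AI decision function so a human always has the last word."""

    def __init__(self, decide):
        self.decide = decide              # the underlying AI decision function
        self.state = ControlState.RUNNING

    def act(self, observation):
        # The AI may only act while humans have left it in the RUNNING state.
        if self.state is not ControlState.RUNNING:
            raise RuntimeError(f"AI is inactive: {self.state.name}")
        return self.decide(observation)

    def human_override(self, decision):
        # Safeguard 1: a human substitutes their own decision for the AI's.
        self.state = ControlState.OVERRIDDEN
        return decision

    def withdraw(self):
        # Safeguard 2: a user withdraws from interaction with the AI.
        self.state = ControlState.WITHDRAWN

    def emergency_suspend(self):
        # Safeguard 3: an operator suspends the system's operations entirely.
        self.state = ControlState.SUSPENDED

if __name__ == "__main__":
    ai = SupervisedAISystem(decide=lambda obs: f"recommend: {obs}")
    print(ai.act("loan application #1"))   # AI acts normally
    ai.emergency_suspend()                 # human pulls the emergency brake
    try:
        ai.act("loan application #2")
    except RuntimeError as err:
        print(err)                         # AI is inactive: SUSPENDED
```

The design point is simply that the control state lives outside the AI's decision function, so no output of the model can unset it; that separation is one way to read the guidelines' demand for "meaningful human control."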

To promote these guidelines, the National Governance Committee reportedly set out 18 further specific requirements for AI governance, including emergency mechanisms to retain control of rogue AIs and a general Asimovian rule that AI can never be used to endanger public safety or national security.

Whether stakeholders outside of China will have confidence in these new ethics measures proposed by the Chinese government remains unclear. Consider, for example, the United States government's long-held distrust of the putative machinations of Chinese technology corporations. In fact, it has been speculated that precisely these concerns about Chinese legitimacy and trustworthiness are what initially spurred these efforts toward greater AI ethics.

Not to be outdone, the World Economic Forum (WEForum) concurrently published its own white paper: “A Holistic Guide to Approaching AI Fairness Education in Organizations.” The guide examines six aspects of organizations that “have a particular role in operationalizing AI fairness” and outlines recommendations, including continued education on AI fairness and bias. One of these recommendations is the designation of a corporate chief AI ethics officer (CAIO), who would be responsible for “designing and implementing AI education activities” and who would act as the organization's ultimate go-to for addressing and resolving concerns related to AI and ethics.

While both the WEForum and the Chinese have pushed the concept recently, corporate AI ethics oversight is not a new idea; it has been advocated for years. In its white paper, however, the WEForum notes that oversight of AI ethics also needs to move beyond the C-level: even middle management needs to be educated and trained to provide meaningful oversight of AI and ethics. The WEForum further encourages organizations to educate their relevant staff through mandatory education and AI ethics certification programs; the CAIO is even urged to track and report participation in these AI ethics education programs.

For all the effort put into corporate governance of AI ethics, there remain concerns as to the efficacy of AI ethics oversight teams as they are actually implemented, especially when AI ethics concerns clash with other corporate goals. For example, Google recently fired two of its own prominent AI ethics researchers, offering reasoning that many found suspect, and Facebook is under fire for failing to fully address, ignoring, or even compounding the ethical concerns surrounding the use of AI to promote misinformation on its social media site.

Fortunately, the focus on improving the ethics of AI has not been limited to fallible humans. Recent research aims to develop deep learning machines that can themselves demonstrate language-based common sense and moral reasoning. A recent paper from the University of Washington describes Delphi, a machine that appears to be 92% accurate in this area. Contrast that with OpenAI's fabled GPT-3 models, which, although they can read and write at near-human levels, have remained only around 50% accurate in assessing real-life ethical pitfalls.

Given this growing and timely interest in AI and ethics, the Zvi Meitar Institute, in conjunction with Microsoft (one of the first large companies to institute an AI ethics officer), has developed a short program that seeks to provide the education component central to both the WEForum's AI ethics efforts and China's.

Just keep in mind that not all institutions seeking to implement AI ethics actually need to. For example, data suggests that up to 40% of companies in Europe that claim to implement AI technologies actually don't.
