Ethical aspects in Artificial Intelligence


The adoption, use and applications of AI will only intensify in the near future, and we are obliged to consider its unique ethical aspects, argues Microsoft Regional Corporate Counsel, CELA Middle East and Africa HQ, Ben Haklai

Ben Haklai | 12:46  02.02.2021

Artificial Intelligence is, without a doubt, one of the Fourth Industrial Revolution's primary growth engines. The benefits and business potential inherent in this technology are immense: improving customer experience, automating business processes, analyzing information in real time, strengthening cyber protection capabilities, and implementing autonomous applications are just a few examples. However, as with other new and groundbreaking technologies, we must consider the latent risks of implementing Artificial Intelligence in an uncontrolled manner. Considering such risks is all the more urgent now that Artificial Intelligence is so widely used that it affects every aspect of our personal and professional lives and is deployed at scale across large sectors of the economy.

Responsible risk management is instrumental to achieving the delicate balance that allows sustainable use of Artificial Intelligence in a rapidly transforming landscape. As demand for Artificial Intelligence is only expected to grow in the coming years, so are the challenges ahead.

Ben Haklai, Adv., Microsoft Regional Corporate Counsel, CELA Middle East and Africa HQ. Photo: Nethanial Tobias

The first step in establishing a risk management framework is identifying the underlying risks. In this short article we cannot review all of the concerns associated with the use of Artificial Intelligence, but they include, among others: the transition to automated decision-making by complex Artificial Intelligence algorithms (often considered a 'black box', unable to describe or explain the decisions they make); reliance on information that contains biases of different types (e.g., data lacking proper representation of specific societal sectors or minorities); algorithmic biases (e.g., bias embedded in the algorithm's code); deep-fakes; and societal risks in areas such as human employment and economic inequality.
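To make the data-bias risk above a little more concrete, here is a minimal sketch of one simple fairness check, the demographic parity gap, which compares a model's approval rates across groups. All names, data, and thresholds below are illustrative, not drawn from any real system.

```python
# Minimal sketch: measuring one simple fairness metric (demographic parity)
# on a set of automated decisions. All data here is illustrative.

def demographic_parity_gap(decisions, groups):
    """Return the gap between the highest and lowest approval rates
    across groups. decisions: list of 0/1 outcomes; groups: list of
    group labels, aligned with decisions."""
    counts = {}
    for d, g in zip(decisions, groups):
        total, approved = counts.get(g, (0, 0))
        counts[g] = (total + 1, approved + d)
    rates = {g: a / t for g, (t, a) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions for two groups of applicants:
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
# Group A approval rate: 3/4; group B: 1/4 -> gap of 0.5,
# a disparity that a review committee would want to investigate.
```

A check like this does not prove or disprove unfairness on its own, but it is the kind of concrete, repeatable measurement a responsible-AI review process can demand before a system goes live.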

And yet, even when considering all the associated risks, it is essential not to 'throw the baby out with the bathwater'. Artificial Intelligence is a revolutionary technology whose implementation is continuously expanding while transforming entire industries. Artificial Intelligence applications help advance, among other things, climate research, new drug development, and the preservation of ancient cultures, and support wider efforts to protect the environment.

The significant advantages and challenges Artificial Intelligence presents require a coordinated response – both through sound and balanced regulation and through responsible AI corporate governance frameworks within organizations.

Because Artificial Intelligence regulation and legislative efforts around the world have not yet matured – given the inherently complex nature of Artificial Intelligence on the one hand and the slow pace of regulatory processes on the other – the corporate burden of promoting responsible Artificial Intelligence practices and ethical use is all the greater.

As recent industry trends and public sentiment have revealed, ignoring Artificial Intelligence ethics and responsible use is a dangerous gamble for any organization – both in terms of reputation, perception, and trust, and in terms of exposure to legal liability and increased regulatory scrutiny. To reduce these risks, organizations must act proactively and adopt efficient and robust responsible Artificial Intelligence corporate frameworks before diving into the 'deep waters' of Artificial Intelligence.

The first step in this journey is defining a clear organizational responsible Artificial Intelligence policy and guidelines, accompanied by an efficient corporate governance mechanism that helps the organization act on and enforce these pre-defined policies. The policy should clearly define the main risks involved in implementing Artificial Intelligence technologies in the different areas in which the organization operates, and set out a coherent set of principles, guidelines, and tools to mitigate those risks. It should also define practical, 'real-life' applications of the principles and associated tools, to ensure the policy is actually implemented rather than becoming mere lip service. Organizations need to be wary of falling into the trap of ethics-washing, where genuine action is replaced by superficial promises.

So, what does an effective organizational policy plan look like? It should include four key components. First, defining a responsible Artificial Intelligence policy and corporate governance model that matches the organization's structure and considers its business goals. Such a governance model should also include the establishment of a dedicated, cross-organizational committee to serve as a focal escalation resource for evaluating specific Artificial Intelligence implementations and their associated applications and risks. Second, the organization will need to define a clear set of corporate guidelines for implementing Artificial Intelligence, whose primary purpose is to determine the organization's 'red lines' in a clear and tangible manner. The third key component is the appointment of corporate ethics trustees. And the fourth is ongoing responsible-use training for employees working with Artificial Intelligence.

Artificial intelligence is already here, and there is no doubt that it is one of the main technological disruptors in the landscape. As we understand that the adoption, use and applications of Artificial Intelligence will only intensify in the near future – we are obliged to consider the unique ethical aspects of Artificial Intelligence and establish wide trust in this groundbreaking technology.

Recently, the Interdisciplinary Center Herzliya in partnership with Microsoft launched a unique Responsible AI certification course for training ethics trustees in AI - the author is one of the course's academic directors.

Ben Haklai, Adv., Microsoft Regional Corporate Counsel, CELA Middle East and Africa HQ
