
Overview of the AI High-Level Expert Group's Ethical Handbook

Updated: May 14, 2019


Artificial Intelligence is developing at breakneck speed and has already far outpaced our ethical and legal understanding of its implications and consequences. The European Union, through its High-Level Expert Group on Artificial Intelligence (AI HLEG), is attempting to develop ethical guidelines to help lawmakers deal with AI.


As of now, there is no established ethical standard that can readily be applied to AI, a gap that has produced widespread confusion and a race to develop such standards. This is no easy task: lawmakers must overcome two substantial learning curves, understanding how AI works and understanding how ethics can be applied to it. To ease that learning curve, the HLEG published an ethical handbook emphasizing key aspects of how the EU should approach AI from an ethical standpoint.


  • One area the HLEG emphasizes is that the goal of AI should be to complement human intelligence rather than replace it. Full automation of AI systems has so far proven too difficult to achieve, as humans are still required to correct mistakes and fix malfunctions. It is important to note that, in the push toward full automation, humans are increasingly at risk of being held liable for decisions taken by an AI. Because an AI cannot currently be held legally liable for its decisions, a human must be held liable when necessary, even if that human had no real say in the decision.

  • Another area of concern is bias, since AI must be fair, accountable, and transparent. Bias takes many forms: it can be inherent in the data, in the algorithm, or in the decisions an AI makes. To eliminate harmful bias, AI must align with human values. This, however, runs into a substantial obstacle known as the value alignment problem: there is currently no single ethical standard that can be applied to AI, because ethics is a complicated field with no clear answers.


Currently, there is a leadership gap in the global governance of AI. The EU is positioning itself to lead in ethical AI and plans to do so by following the ethical guidelines created by the AI HLEG. Based on these guidelines, the EU will consider the following when establishing regulations concerning AI:


  • Human dignity

  • Autonomy: implying that AI-enabled systems must not impair the “freedom of human beings to set their own standards and norms and be able to live according to them”

  • Responsibility: alignment with global social and environmental good

  • Justice, equity, and solidarity

  • Democracy: AI must not manipulate public opinion or political systems

  • Rule of law and accountability: clear attribution of liability

  • Security, safety, bodily and mental integrity: external safety for users and their environment, as well as emotional safety in human-machine interaction

  • Data protection and privacy

  • Sustainability: in terms of employment and carbon footprint


