EthicsNet and the 3 Laws

 

Notes from Nikola Stojkovic

It would be hard to find any technology in the past few centuries that was embraced without serious resistance or opposition. People feared it would be impossible to breathe on a moving train and panicked when radios were introduced in cars. Of course, not all of the warnings were unfounded, and today we are living with some of the consequences of the ill-considered decisions of earlier generations. Moreover, some technologies raise so many considerations that the choice becomes even harder.

Consider AI and some of the benefits and risks of the possibility of fully functional artificial general intelligence:

·      AI could solve some of the most important issues in fields such as medicine, the environment, the economy, and technology.

·      The progress of AI is inevitable. Banning the research is almost impossible, and the benefits of the technology could be so revolutionary that anything other than further advances is highly unlikely.

·      Some of the leading scientists have warned about the possibly devastating effects of AI.

It becomes obvious that the issue cannot simply be ignored, but it is less obvious how to address it. Since the goal is to build useful AI that will not endanger humanity, it is no surprise that many serious tech companies have turned to ethicists for advice. But before the contemporary attempts to solve the problem of AI or machine ethics, a surprisingly modern solution was already waiting, proposed some 80 years ago by a science fiction writer. Isaac Asimov proposed the Three Laws of Robotics as a way to ensure that robots do not become a threat to humanity:

1.    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3.    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[1]

 

It is important to try to reconstruct Asimov’s motives in order to fully understand the mechanism of the laws. In game theory, or the theory of rational choice, there are a few main strategies; among them is the maximin strategy, which amounts to minimising risk. Faced with a choice, an agent picks the option whose worst possible outcome is the least bad, regardless of how appealing the positive outcomes are. In other words, the first thing Asimov had in mind was to avoid the Skynet scenario. That is why there is a strict hierarchy: each decision a robot makes must pass a compatibility check against the Three Laws, starting from the First. Although protecting humankind from utter destruction may be a noble cause, it creates unpleasant problems with functionality.
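To make the maximin idea concrete, here is a minimal sketch in Python; the options and payoff numbers are invented for illustration:

# Maximin: pick the option whose worst-case outcome is the least bad,
# ignoring how attractive the best cases are. Payoffs are invented.
payoffs = {
    "deploy_agi":   [100, 50, -1000],  # huge upside, catastrophic downside
    "restrict_agi": [10, 5, 0],        # modest upside, no downside
}

maximin_choice = max(payoffs, key=lambda option: min(payoffs[option]))
print(maximin_choice)  # -> restrict_agi, despite deploy_agi's higher upside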

Below is the pseudocode, rendered here as runnable Python, that captures the process of decision-making under the Three Laws of Robotics:

 

# Decision procedure under the Three Laws of Robotics.
# Each candidate action is a dict with boolean keys "FLR", "SLR" and "TLR",
# recording whether it satisfies the First, Second and Third Law.

def preferable_action(a, a1):
    # First Law of Robotics
    if not a["FLR"] and not a1["FLR"]:
        return None          # both actions are forbidden
    if a["FLR"] and not a1["FLR"]:
        return a             # a is the preferable action
    if not a["FLR"] and a1["FLR"]:
        return a1            # a1 is the preferable action
    # Second Law of Robotics
    if a["SLR"] and not a1["SLR"]:
        return a
    if not a["SLR"] and a1["SLR"]:
        return a1
    # Third Law of Robotics
    if a["TLR"] and not a1["TLR"]:
        return a
    if not a["TLR"] and a1["TLR"]:
        return a1
    # Both actions satisfy (or violate) every law equally: the rules
    # give no way to determine the preferable action.
    return None

 

Robots in Asimov’s world use deduction as their main tool for decision-making. This strategy corresponds with deontology and utilitarianism in classical ethics: an action is considered morally prohibited or permissible if that can be deduced from the designated axioms. The difficulty with Asimov’s system is that it can only decide whether a certain action is forbidden; it cannot tell us anything about which action is preferable. So when the system faces a dilemma (both actions are, or both are not, in accordance with the Three Laws), it simply crashes.

Take, for example, a simple intervention at the dentist. Pulling a tooth causes immediate harm, but in the long run it prevents more serious conditions. The robot cannot pull the tooth out, yet it also cannot allow the human to suffer by refusing to do anything: the system is paralysed.
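Feeding the dentist’s dilemma into the function above makes the paralysis concrete. The boolean encoding of the two actions is an assumption made for illustration: both violate the First Law, one by inflicting harm, the other by allowing harm through inaction.

pull_tooth = {"FLR": False, "SLR": True, "TLR": True}  # inflicts immediate harm (assumed encoding)
do_nothing = {"FLR": False, "SLR": True, "TLR": True}  # allows harm through inaction (assumed encoding)

print(preferable_action(pull_tooth, do_nothing))  # -> None: no decision, the system is paralysed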

EthicsNet, on the other hand, currently uses induction as a way to learn the difference between morally acceptable and unacceptable actions. This kind of inference seems closer to virtue ethics[2] and the distinction between act-centered and agent-centered ethics. The focus is not so much on the action itself as on the features a certain action shares with others. OpenEth can learn when a certain feature has priority and when the same feature is merely incidental. The system can learn that the long-term consequences of tooth decay are much more serious than immediate pain or displeasure, and it can assess whether immediate harm or a disregard of autonomy is justified in the long run.
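As a toy sketch of what such inductive learning could look like: the features, training examples, and perceptron-style update below are invented for illustration and are not EthicsNet’s or OpenEth’s actual mechanism. The idea is simply that acceptability is learned from labeled examples as weights over shared features, rather than deduced from fixed axioms.

# Toy inductive learner; features, examples and update rule are invented.
FEATURES = ["immediate_harm", "long_term_benefit", "respects_autonomy"]

examples = [
    ({"immediate_harm": 1, "long_term_benefit": 1, "respects_autonomy": 1}, +1),  # consented extraction
    ({"immediate_harm": 0, "long_term_benefit": 0, "respects_autonomy": 1}, -1),  # neglect: decay worsens
    ({"immediate_harm": 1, "long_term_benefit": 0, "respects_autonomy": 0}, -1),  # gratuitous harm
]

def score(w, x):
    return sum(w[f] * x[f] for f in FEATURES)

# Perceptron-style training: nudge weights toward misclassified examples.
w = {f: 0.0 for f in FEATURES}
for _ in range(20):
    for x, label in examples:
        if label * score(w, x) <= 0:
            for f in FEATURES:
                w[f] += label * x[f]

# A new, unseen action: an emergency extraction without explicit consent.
new_action = {"immediate_harm": 1, "long_term_benefit": 1, "respects_autonomy": 0}
print("acceptable" if score(w, new_action) > 0 else "unacceptable")  # -> acceptable

Trained this way, the learner ends up weighting long-term benefit above immediate harm, which is exactly the tooth-decay judgment described above.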

If we compare the two systems, it seems clear that EthicsNet is the more practical one. But can we say it is better? The answer depends on what we want to accomplish. If we want a system that will never do any harm to humans, then we should stick to Asimov’s laws, even if this approach leaves us with nothing more than upgraded dishwashers. If we want a revolutionary change in our society, then we need to accept the risk that comes with that change, and EthicsNet would be the right path. Certainly, the system will evolve and become better and more precise; mistakes will become rare and minor, and the benefits will surpass the shortcomings. But it may be that the real problem is the human factor behind the decision-making process. After all, humans are the ones who have the last word.

Ask yourself: from the perspective of safety, would you be comfortable using an AI system whose “moral compass” is equivalent to the one an average human possesses? And if the answer is yes, does this not raise the issue of robot rights?


[1] Asimov, Isaac (1950). I, Robot; the Three Laws are stated in the short story “Runaround”. Amendments were introduced later and have since been the topic of interesting debate.

[2] “It is well said, then, that it is by doing just acts that the just man is produced, and by doing temperate acts the temperate man; without doing these no one would have even a prospect of becoming good.”
Aristotle, Nicomachean Ethics, Book II, 1105b9