Artificial Intelligence: how to regulate?

Artificial intelligence as a subject has fascinated humanity for ages, the idea of creating an artificial mind serving as a topic for writers, poets and philosophers alike. However, with the idea come new challenges. How can we assess artificial intelligence compared to, say, human or animal intelligence? How can we coexist with it, and which rules and laws will AI live by?

First, let me clarify one thing: by artificial intelligence I mean a robot, computer or program capable of reasoning, that is, of analyzing a situation and coming to a conclusion, and of learning and improving its own program, even if only through a Pavlovian type of reaction (also known as Pavlovian conditioning).
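This Pavlovian kind of learning can be illustrated with a minimal sketch: a toy agent that becomes more likely to repeat rewarded actions and less likely to repeat punished ones. The agent, its action names and the reward values here are all hypothetical illustrations, not a real AI system.

```python
import random

class PavlovianAgent:
    """Toy agent that learns purely by conditioning: actions followed by
    reward become more likely; punished actions become less likely."""

    def __init__(self, actions):
        # Start with an equal preference for every action.
        self.weights = {a: 1.0 for a in actions}

    def choose(self):
        # Pick an action with probability proportional to its weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for action, w in self.weights.items():
            r -= w
            if r <= 0:
                break
        return action

    def condition(self, action, reward):
        # Reinforce or suppress the action, never dropping below a small floor
        # so no action becomes strictly impossible.
        self.weights[action] = max(0.1, self.weights[action] + reward)

random.seed(0)
agent = PavlovianAgent(["cooperate", "defect"])
for _ in range(100):
    a = agent.choose()
    # Hypothetical environment: cooperation is rewarded, defection punished.
    agent.condition(a, 1.0 if a == "cooperate" else -0.5)
```

After enough trials the agent strongly prefers the rewarded action, even though it never "understands" why: that is conditioning rather than reasoning.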

Free will

First, we must consider the current system, in which a robot or virus is legally treated like a dog: if you program a robot and it damages a car, you are responsible for the damage. This framework breaks down as soon as a robot develops free will, so we will set it aside in this article.

A tempting idea would be to apply our own human law, with the principle of equality in mind. However, our law suffers from several issues and loopholes. With a human audience these flaws are not extremely risky, since the average person does not study the entire legal code before standing trial. Yet even now humans find and exploit legal loopholes, so imagine a robot capable of sifting through our entire body of law in hours. It would be nearly impossible to convict it in complex cases such as tax evasion.

Another possibility is to craft a new legal code for AI, designed to fit their specificities. This can be achieved in several ways. We could, for example, give judges more leeway when dealing with AI, allowing them to interpret the law so as to close any possible gaps. This approach suffers from several issues, including inconsistent clemency between judges, corruption and partiality. Another approach is to methodically close every possible loophole in the new law, keeping it clear and very simple, which is feasible because there are no legal precedents to accommodate. Humans, however, are quite deficient at this task, with limited memory and little capacity for exhaustiveness. Here we can use AI's skills against them: by applying a robot to the task, we could beat them at their own game.

The final possibility is even more interesting.

What if we programmed a conscience into them? What if we placed a piece of code, inaccessible to them, inside their programming, removing from their minds even the idea of harming, cheating or robbing a human being? What if we engineered taboos? An interesting and well-known approach to this is the Three Laws of Robotics, first proposed by Isaac Asimov:

  1. A robot shall not harm a human being, or through its inaction allow a human being to be harmed.
  2. A robot shall obey orders given by a human being, unless they infringe upon rule 1.
  3. A robot shall seek to preserve itself, unless doing so infringes upon rule 1 or rule 2.
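The priority ordering of the three rules can be sketched as an "engineered taboo": a check that lives in a layer the robot's own learning code cannot modify. This is a minimal illustration of the precedence logic only; the function name and the boolean inputs are hypothetical simplifications, since recognizing "harm" in the real world is the genuinely hard part.

```python
def permitted(harms_human: bool, ordered: bool, endangers_self: bool) -> bool:
    """Decide whether an action is allowed under the Three Laws,
    checked in priority order."""
    # Rule 1: never harm a human, regardless of any order received.
    if harms_human:
        return False
    # Rule 2: obey a human order, even one that endangers the robot itself.
    if ordered:
        return True
    # Rule 3: otherwise, refuse actions that endanger the robot.
    return not endangers_self
```

Note how precedence is encoded by the order of the checks: an order cannot override rule 1, but it does override the robot's self-preservation under rule 3.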

Now the fact is, as seen in the books where Asimov explores these laws, the punishment for disobedience, which is death, does not actually modify robotic behavior, just as it has little effect on human criminals. What really stops the robots from infringing upon the Three Laws is that they are programmed to love the laws like their own children and to abhor any robot who disobeys them. They love the laws so much that witnessing another robot break them essentially gives them PTSD (post-traumatic stress disorder). What makes this interesting is that a robot can evade the law, but it cannot evade guilt; it can find legal loopholes, but conscience has no usable loopholes.

Even if we manage to design or adapt a legal system for robots, two issues arise: first, setting rewards and punishments, and second, implementation. For rewards and punishments, the problem is the lack of a common frame of reference. For obvious reasons, robots do not share our history with pain, fear and other such experiences, which makes designing punishments extremely difficult; the same problem arises with rewards.

An idea similar in spirit to the one I advanced in the previous section is to design negative emotions into them, but aside from the obvious programming challenge, there are serious ethical issues involved. We have a long history of pain, fear and anger: are we ready to force that on another sentient species?

Another possibility is to program in only the desire for self-preservation. After all, pain and fear are, at their root, ways for our bodies to extend their lifespan and guard against irrational decisions; robots, being rational by nature, would require only the desire for self-preservation itself.

Implementation is also a tricky matter because of differing national laws, while AI, which can be nothing more than a program, is international by nature: how can a single state punish, for example, a computer virus? Ideally, the laws would be developed at a global level, with the United Nations acting as the agency regulating such matters. However, to do so it would need access to the robots' programs, and can you imagine states giving away crucial military secrets to an agency they have little control over? For this reason, cooperation will have to take place between strategic partners inside the framework of supranational organizations such as the EU.

As a result, I conclude that legislation would optimally take place at a supranational level and focus on tailoring a new legal code for coexistence with AI, since our own, even adapted, is not prepared to cover the full range of AI ethics and behavior.

Adrien S. / S4FR / EEB1 Uccle

Illustrations: https://cdn.pixabay.com/
