Tue. May 7th, 2024
Do the three laws of robotics need updating?
Photo by Alex Knight: https://www.pexels.com/photo/high-angle-photo-of-robot-2599244/

Do the three laws of robotics need updating? The three laws of robotics were invented by Isaac Asimov for a story; they were never meant to be a literal way of controlling the robots and AI in our lives today. We will need some kind of laws for robots to follow at some point, but that may still be a while away.

Here are Asimov’s three laws. He created them for a short story called “Runaround” in 1942.

The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov wrote the three laws because he was fed up with science fiction stories in which robots killed their creators; he thought that anybody smart enough to build a robot would be smart enough to work out how not to be killed by it. He saw robots as tools, and he saw his three laws as a logical way for a tool to behave. He also used them as plot devices, because many of his stories revolve around some situation involving the three laws.

Asimov said that he had written more than 20 million words in his career, across books and magazine stories. In an interview late in his life, he said that long after he had died, he would probably be remembered only for the three laws of robotics. He was not wrong. His three laws are known far and wide, and in 2007 the South Korean government based a robot ethics charter on them.

So, what are the problems with the three laws today? The first problem is that the rules were designed for complex machines such as the androids in Asimov’s stories. He may not have envisioned the way AI and computers have been placed in everything we use. We have smart vacuum cleaners, smart fridges, and even smart coffee cups. There is no reason why we need a rule such as “a robot may not injure a human being” for a fridge, because there is no way it could do so. It would perhaps be necessary to work out specific rules for different levels of computer sophistication. A vacuum’s only task is to clean, and it doesn’t really need any rules beyond making sure the room is clean.

We are also reaching the point where we are using AI and autonomous machines in wars to replace human soldiers. We already use remote-controlled drones and fighting units, but the time when they are completely autonomous and can select their own targets is probably not far away. How would the rules work in that situation? “A robot may not injure a human being” would completely stop them from being used as weapons. Would governments have to come up with a way around that, or should we accept that robots should not be allowed to harm humans under any circumstances?

The biggest problem with Asimov’s three laws of robotics in today’s world is that we don’t have any AI systems advanced enough to understand them or to employ them. The laws would have to be programmed into the software, and following them would take too much computational power. Modern AI, as advanced as it seems, is very task specific: these systems are created to do a single task, and they do it extremely well, but they cannot perform a different task. IBM’s Deep Blue supercomputer could beat the world chess champion, but it could not play poker. These systems cannot come close to harming us or acting independently, so a list of rules seems unnecessary. However, just because they are narrow programs now doesn’t mean they will stay that way. The technology is advancing so rapidly that it is worth thinking about a set of rules for AI and robots to follow, sooner rather than later.

Several sets of guidelines for AI and robotics have been suggested. The principle behind many of them is that the problem lies with the humans, not the machines. AI can only do what it is programmed to do, so if an AI is harming a human, it is because it has been programmed to do so; the manufacturers need to be responsible. This doesn’t fill many people with confidence, because manufacturers are notoriously bad at looking for problems and at self-governing when they are answerable to their shareholders. The tobacco and oil industries are good examples. A working group in the UK came up with a set of guidelines in 2011, and they have been adopted as the first national-level AI soft law. What do you think?

1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.

2. Humans, not robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.

3. Robots are products. They should be designed using processes which assure their safety and security.

4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.

5. The person with legal responsibility for a robot should be attributed.

And this is what I learned today.


Sources

https://theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

https://web.archive.org/web/20160215190449/http://researchnews.osu.edu/archive/roblaw.htm

https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)