Laws of Robotics Loopholes

The flaw is this: they assume that morality and moral decisions can be captured by an algorithm, and that discrete yes/no answers are sufficient to "solve" moral dilemmas. They are not. (Or, to be adequate, far more "laws" would be needed than the three stated, to cover the endless "what if" and "but" qualifications that always crop up.) Instead of laws restricting robot behavior, we believe robots should be able to maximize the possible ways they can act, so that they can pick the best solution for any given scenario. As we describe in a new Frontiers article, this principle could form the basis of a new set of universal guidelines for robots to keep humans as safe as possible. Woods said, "Our laws are a little more realistic and therefore a little more boring," and that "the philosophy was, 'Sure, humans make mistakes, but robots will be better, a perfect version of ourselves.' We wanted to write three new laws to get people to think more realistically and healthily about the human-robot relationship." [55]

In Asimov's fictional universe, these laws were incorporated into almost all of his "positronic" robots. They were not mere suggestions or guidelines; they were embedded in the software that governed the robots' behavior. Furthermore, the laws could not be circumvented, rescinded, or revised. The 2019 Netflix original series Better than Us includes the three laws in the opening of episode 1. The third law fails because it leads to permanent social stratification, with enormous potential for exploitation built into that legal order. And it is only a small step to assume that the ultimate military goal is to create armed robots that could be used on the battlefield. In this situation, the first law (do no harm to people) becomes extremely problematic.
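
To make that contrast concrete, here is a minimal, purely illustrative Python sketch; the action names, scores, and weighting below are invented for this example and are not taken from the Frontiers article. It shows how a hard yes/no rule can leave a robot with no permissible action at all, while scoring every candidate course of action lets it pick the least harmful option available.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Action:
    """A hypothetical candidate action with rough risk/benefit estimates."""
    name: str
    risk_to_humans: float  # 0.0 = no risk of harm, 1.0 = certain harm
    task_benefit: float    # 0.0 = useless, 1.0 = fully completes the task


def rule_based_choice(actions: list[Action]) -> Action | None:
    """Discrete yes/no filtering: any option with nonzero risk is forbidden.
    If every available option carries some risk, nothing is returned."""
    allowed = [a for a in actions if a.risk_to_humans == 0.0]
    return max(allowed, key=lambda a: a.task_benefit, default=None)


def maximizing_choice(actions: list[Action]) -> Action:
    """Score every course of action and pick the best trade-off,
    weighting human safety far more heavily than task completion."""
    return max(actions, key=lambda a: a.task_benefit - 10.0 * a.risk_to_humans)


if __name__ == "__main__":
    options = [
        Action("swerve into wall", risk_to_humans=0.05, task_benefit=0.2),
        Action("brake hard", risk_to_humans=0.10, task_benefit=0.6),
        Action("keep course", risk_to_humans=0.90, task_benefit=1.0),
    ]
    print(rule_based_choice(options))  # None: every option carries some risk
    print(maximizing_choice(options))  # picks the least-bad trade-off
```

In the first function, any nonzero risk disqualifies an option, mirroring a strict reading of the First Law; the second simply trades benefit against risk, which is closer in spirit to the "maximize possible courses of action" idea described above.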

The military's role is often to save the lives of soldiers and civilians, but often by harming its enemies on the battlefield. The laws may therefore need to be viewed from different angles or interpretations. It may be just as well, then, that the kind of general artificial intelligence Asimov tried to legislate for remains doubtful for various reasons. In the face of all these problems, Asimov's laws offer little more than founding principles for someone who wants to create robot code today. We would need to follow them with a much more comprehensive set of laws. Yet without significant developments in AI, implementing such laws will remain an impossible task. And that is before you even consider the potential for harm if humans fall in love with robots. For example, in one of Asimov's stories, robots are built to follow the laws, but they are given a particular definition of "human." Prefiguring what happens in real-world ethnic-cleansing campaigns, the robots recognize only people of a certain group as "human." They obey the laws, yet still commit genocide. The first problem is that the laws are fiction! They are a plot device Asimov invented to drive his stories. What's more, his stories almost always revolved around how robots might follow these logical-sounding ethical codes and still go astray, and the unintended consequences that result.

An advertisement for the 2004 film adaptation of Asimov's acclaimed book I, Robot (starring the Fresh Prince and Tom Brady's baby mama) put it well: "Rules were made to be broken." Robots and artificial intelligences do not inherently contain or obey the Three Laws; their human creators must choose to program them in and find a way to do so. Robots that already exist (like a Roomba) are too simple to understand when they are causing pain or injury and to know when to stop. Many are equipped with physical safeguards such as bumpers, audible alarms, safety cages, or access restrictions to prevent accidents. The 1960s German television series Raumpatrouille – Die phantastischen Abenteuer des Raumschiffes Orion drew on Asimov's Three Laws without naming the source. I wonder whether a logical consequence of the Three Laws is that robots would have to teach humans objective moral laws, for example in order to avoid "harming a human being through inaction." Robots would, for instance, stop all wars, abortions, and euthanasia in the world and mount a massive evangelistic effort to keep humans from inflicting infinite harm on themselves by going to hell. Randall Munroe has discussed the Three Laws on several occasions, perhaps most directly in his comic The Three Laws of Robotics, which imagines the consequences of every possible ordering of the existing three laws. The laws Asimov proposed were devised to protect humans from interactions with robots.

In March 2007, the South Korean government announced that it would publish a "Robot Ethics Charter" later that year, setting standards for users and manufacturers. According to Park Hye-Young of the Ministry of Information and Communication, the charter may reflect Asimov's Three Laws, attempting to set ground rules for the future development of robotics. [53] The same goes for building a robot that takes orders from any human being: do I really want Osama bin Laden to be able to give orders to my robot? And finally, the fact that robots can be sent on dangerous missions to be "killed" is often the very reason for using them. Giving them a sense of "existence" and a survival instinct would undercut that logic and open up plot lines from another science-fiction franchise, the Terminator films. The fact is that much of the funding for robotics research comes from the military, which pays for robots that follow the exact opposite of Asimov's laws.
