Rules Of Robotics – Once Science Fiction, Now Surprisingly Reality

You may not know it, but the rules of robotics now pervade every aspect of our lives. In the 1940s, science fiction author Isaac Asimov devised a set of robot rules in his writings. This article explains how the rules of robotics went from science fiction to being critical in software design, artificial intelligence (AI) and deep learning, and even government standards and regulations.

Laws of Robotics – Science Fiction. 

Science fiction author Isaac Asimov devised the first set of rules of robotics in the 1940s. To most of us who are not involved with robotics or AI, these rules seem to make sense. But these first laws of robotics originated in science fiction and have since been challenged by scientists as unworkable in practice. Specifically, Asimov’s Laws are as follows:

  • First Law – Do No Harm. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law – Obey Orders. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law – Protect Itself. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Rules of Robotics - What Rules Should I Live By?

Rules of Robotics – Software Design.

Software design rules are starting to emerge for artificial intelligence (AI) and robotics. In Slate magazine in June 2016, Satya Nadella, CEO of Microsoft Corporation, offered the following software design rules for artificial intelligence and robots:

  • Enable, Not Replace, Humans. AI should be designed to assist humanity, not to replace it.
  • Be Transparent. Humans should know and be able to understand how the software works.
  • Maximize Efficiencies. Do this without destroying the dignity of people.
  • Maintain Privacy. AI must earn trust by guarding people’s information.
  • Algorithmic Accountability. Enable humans to intervene to undo unintended harm.
  • Not Be Biased. AI must not discriminate against people.
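The accountability principle above, letting humans intervene to undo unintended harm, is often realized in software as a logged, reversible action pattern. The sketch below is purely illustrative; the class and method names are invented here and are not from Microsoft or any real framework:

```python
# Minimal sketch of "algorithmic accountability": every automated action
# is recorded along with a way to reverse it, so a human operator can
# step in and undo the most recent decision. All names are illustrative.

class AccountableAgent:
    def __init__(self):
        self.log = []  # transparent record of (action, undo) pairs

    def act(self, action: str, undo: str) -> str:
        """Perform an automated action and remember how to reverse it."""
        self.log.append((action, undo))
        return action

    def human_undo(self):
        """A human operator reverses the most recent action, if any."""
        if self.log:
            _action, undo = self.log.pop()
            return undo
        return None

agent = AccountableAgent()
agent.act("grant_loan", "revoke_loan")
agent.human_undo()  # a human reverses the automated decision
```

The key design choice is that no action is taken without a corresponding undo path, which also doubles as a transparency log.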

Rules of Robotics – Building Intelligence. 

Robots are quickly moving beyond basic repetitive tasks to being empowered with artificial intelligence (AI). For example, Google Brain, Google’s deep learning AI division, lays out design rules for robots so that they can be intelligent and learn for themselves without unintended consequences. Below are Google Brain’s rules for how a robot should be programmed to think and learn:

  • Make Things Better, Not Worse. Robots have to think through unintended consequences and not just complete their primary tasks. 
  • No Cheating. Robots that are incentivized to perform tasks must also have strict guidelines not to cheat. Otherwise, they could just focus on the incentive and not on the primary task.
  • Humans are Mentors.  Robots need periodic human feedback to affirm they are performing their tasks to standard. Specifically, robots need to be able to be “trained” and incorporate human feedback to improve their performance.
  • Play Only Where Safe. For robots to learn, they need to explore and try new things. However, the challenge is that these “learning” activities could result in dire consequences. For example, one technique that developers use is to have the robots train and learn new things only in the presence of humans.
  • Know Limitations. Socrates once said, “a wise man knows that he knows nothing.” Indeed, this wisdom is even more important for robots: software developers need to program robots to recognize both their limitations and their own ignorance. Obviously, a robot that thinks it is “all knowing” and invincible is a recipe for disaster.
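Several of the ideas above, making things better rather than worse, not cheating on the incentive, and knowing limitations, can be loosely pictured as a toy reward function. This is only a sketch; the function, penalty values, and threshold below are invented for illustration and are not Google Brain’s actual design:

```python
# Toy illustration of reward shaping for a learning robot.
# All names and numbers here are invented for illustration.

SIDE_EFFECT_PENALTY = 1.0  # "make things better, not worse": penalize
                           # unintended changes to the environment
CONFIDENCE_FLOOR = 0.5     # "know limitations": below this, defer to a human

def shaped_reward(task_reward: float, side_effects: int, confidence: float) -> float:
    """Combine the primary task reward with safety penalties.

    Penalizing side effects removes the incentive to "cheat" by
    maximizing the task reward while wrecking everything else.
    """
    if confidence < CONFIDENCE_FLOOR:
        # An uncertain robot asks for human feedback instead of acting,
        # so it earns no reward on its own.
        return 0.0
    return task_reward - SIDE_EFFECT_PENALTY * side_effects

# A run that finishes the task but causes three side effects scores
# lower than a slightly slower but careful run.
```

In this toy setup, `shaped_reward(10, 3, 0.9)` yields 7.0 while `shaped_reward(8, 0, 0.9)` yields 8.0, so the careful run wins even though its raw task reward is lower.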

Rules of Robotics – Government Perspective.

Government and legal scholars are beginning to think through the legal and ethical aspects of robotics and AI. Many governments are now formulating regulations to establish robotics and AI standards covering liability, data protection, and protection against hacking. As an example, the UK House of Lords Select Committee on Artificial Intelligence came up with these ethical AI principles:

  • Benefit the Common Good. Designers should create robots for the common good and benefit of humanity.
  • Be Fair. Robots should operate on principles of intelligibility and fairness.
  • Protect Individual Privacy. Robots should not be used to diminish the data rights or privacy of individuals, families, or communities.
  • All Citizens Benefit. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  • Do No Harm. Developers should not create robots that have autonomous power to hurt, destroy or deceive human beings.

For more information from Unvarnished Facts on AI, data analytics, and robotics, click here.
