In his 1942 short story “Runaround” (also included in the collection I, Robot eight years later), Isaac Asimov introduced his three laws of robotics, although he had foreshadowed them in a few earlier stories. Here are the three laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws presuppose that robots are explicitly programmed, and so could be programmed to obey them. With the rise of deep learning, however, it now looks as though sentient robots, if they ever do emerge, will not be explicitly programmed but will instead learn and modify their own behavior. Can anything resembling Asimov's laws be reasonably expected?
Join us tomorrow as Bob Winstead leads us in what promises to be a lively discussion.