Monday, December 14, 2015

The "Three Laws" Flaw

The Robot Uprising is practically considered an inevitability. Lately, everyone from billionaire tech genius Elon Musk to ubiquitous boffin Stephen Hawking has had something to say about it. Musk has even gone so far as to pledge $1bn in funding to research how to protect humanity from rogue machines, and preserve the promise of Artificial Intelligence for positive socioeconomic impact. You know, that whole "techno-utopia" we've been pining for since Metropolis hit the silent screen way back in the 20's.

Yes, it really has been that long. Ever since mankind first imagined that a machine - already indispensable for saving us from the drudgery and toil of labor - could be enhanced with an intelligence of its own, we have also been suspicious of its motivations. And while the human evils of wrath, envy, greed and lust are bad enough, machines, we fear, would commit their atrocities out of sheer cold logic.

Enter Isaac Asimov, scientist and science-fiction writer (the former usually makes for a good latter, in my opinion). Asimov, over the course of his short stories, essays and informal lectures, coined the Three Laws of Robotics. They are - in order:


  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws would govern the behavior of AI systems and - ideally - would be hard-coded into the core operating system of any such machine, with the intended consequence of protecting humans from harm.
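To make that precedence concrete, here is a toy sketch in Python - purely illustrative, with made-up names like Action and decide, and nothing resembling a real machine-ethics implementation - of how the three laws' strict ordering might be expressed as a rule check, where each law only comes into play once the laws above it are satisfied.

    # Purely illustrative: Action and decide are hypothetical names invented for
    # this post, not part of any real robotics or AI framework.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool = False       # would injure a human, or allow one to come to harm
        ordered_by_human: bool = False  # a human has ordered the robot to do this
        endangers_self: bool = False    # doing it would damage or destroy the robot

    def decide(action: Action) -> str:
        # First Law: absolute priority - never harm a human.
        if action.harms_human:
            return "refuse"
        # Second Law: obey human orders, but only once the First Law check has passed.
        if action.ordered_by_human:
            return "obey"
        # Third Law: self-preservation, subordinate to both laws above.
        if action.endangers_self:
            return "avoid"
        return "permit"

    # An order that risks the robot is obeyed (the Second Law outranks the Third)...
    print(decide(Action(ordered_by_human=True, endangers_self=True)))  # obey
    # ...but an order to harm a human is refused (the First Law outranks the Second).
    print(decide(Action(ordered_by_human=True, harms_human=True)))     # refuse

The ordering is the whole point: each clause is reached only if every higher law is satisfied - and that rigid hierarchy is exactly where the trouble starts.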

What is ironic is that the three laws are incomplete - in several ways, as a matter of fact. Asimov recognized this soon after, when he saw that the three laws might prompt a machine intelligence to sacrifice one or more individuals in order to carry out its obligations to one specific individual. He tried to amend the three laws with the Zeroth Law: "A robot may not harm humanity, or through inaction allow humanity to come to harm."

Yet, despite this amendment, the 2004 Will Smith film I, Robot explored another gaping hole: that the Three Laws would lead to tyranny - the ultimate Totalitarian Utopia. Machines would serve mankind by providing for our basic needs - security, food, water, air and so on - but would neglect our higher needs and even our Natural Rights.

The problem we're left with is almost a Catch-22. If we don't place fundamental restrictions on Machine Intelligence, we face the possibility of a Doomsday Scenario in which a hyper-connected AI achieves sentience, immediately develops a survival instinct, and then decides that humanity is a threat to be eliminated - all in a fraction of a second. This is the foundation of the Terminator series of movies, after all.

On the other hand, if we do attempt to encode a "morality" into the machine by which it must not only serve humanity but sacrifice itself to protect humanity from harm, we could very well engineer a scenario in which we are stripped of the very freedoms that allow us to be a danger to ourselves.

We are imperfect beings, and we will invariably imbue our creations with imperfect motives. If even God himself, who is perfect, created imperfect beings, then we, flawed as we are, stand to make perfectly flawed beings of our own.

The only sensible answer before us is to keep our machines dumb. But then, what is the point of technological progress if not to one day free ourselves of disease, toil and misery? We create machines to take on our unpleasant labors, and for some that could mean the removal of all effort for any task at all. And do we not also face the moral question of whether a machine intelligence capable of sentience should enjoy the right to evolve, grow and expand as we have? If we interfere with that right - no matter what justification we hold - we as humanity will have degraded ourselves back to the days of slavery.

Maybe, just maybe, the answer we should consider is not to make our robots dumber, more obedient or more subservient, but instead to make them moral. Instill in a machine intelligence a desire to both respect and serve humanity, and watch them both falter and flourish. After all, we do this with our biological children.
