Monday, February 6, 2017

We, Robot

A camel is just a horse designed by a committee.

Which leads us to mankind's latest camel, from Bleeding Cool:
A science fiction staple is the Three Laws of Robotics created by author Isaac [Asimov]. They were simple, concise, and most sci-fi [authors] who followed treated them as sacrosanct. They were introduced in his 1942 short story Runaround and said to be from the Handbook of Robotics, 56th Edition, 2058 A.D.
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Technology has advanced far enough that similar rules for Artificial Intelligence have been created. So, Elon Musk, Stephen Hawking, and hundreds of other scientists, researchers, and tech leaders are endorsing a list of 23 guiding principles for the productive, ethical, and safe development of A.I.

The Future of Life Institute hosted the Beneficial A.I. 2017 Conference where the Asilomar A.I. Principles were developed.  
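For what it's worth, the elegance of Asimov's list is in its precedence ordering: each law only gets a say when the laws above it are satisfied, which is exactly the kind of clear hierarchy the 23 principles below lack. Here is a minimal, purely hypothetical Python sketch of that ordering; the function and field names are mine for illustration, not Asimov's and not anything from the conference.

```python
# Toy sketch (hypothetical): the Three Laws as a priority-ordered decision,
# where each law is consulted only if the higher-priority laws have not
# already settled the question.

def evaluate(action):
    """Return True if a robot governed by the Three Laws may perform `action`.
    `action` is a dict of booleans; all field names are made up for this sketch."""
    # First Law: refuse anything that injures a human or lets one come to harm.
    if action["harms_human"] or action["permits_harm_by_inaction"]:
        return False
    # Second Law: obey a human order (First Law conflicts were ruled out above).
    if action["ordered_by_human"]:
        return True
    # Third Law: otherwise, self-preservation is the tie-breaker.
    return not action["endangers_robot"]

# Example: an order that risks the robot itself is still obeyed,
# because the Second Law outranks the Third.
print(evaluate({"harms_human": False, "permits_harm_by_inaction": False,
                "ordered_by_human": True, "endangers_robot": True}))  # True
```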
Let's review these new rules (my responses follow each principle):

1. Research Goal: The goal of A.I. research should be to create not undirected intelligence, but beneficial intelligence: How does one define "beneficial"? I guarantee the following will be the definition: "Anything which benefits me, and screw the rest of you." That won't end well.

2. Research Funding: Investments in A.I. should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as: (Wait, research funding? Who is providing it? If you say "government", you can redefine "beneficial" to mean "good for politicians".)

  • How can we make future A.I. systems highly robust, so that they do what we want without malfunctioning or getting hacked? As soon as you do, somebody will build an A.I. with the express purpose of hacking other A.I.'s. As for malfunctioning, the key will be limiting the damage from malfunctions.
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose? People can't find something to do without work? How did we survive this past weekend?
  • How can we update our legal systems to be more fair and efficient, to keep pace with A.I., and to manage the risks associated with A.I.? Our legal systems aren't fair or efficient now. Maybe someone can build an A.I. that makes a smarter legal system? 
  • What set of values should A.I. be aligned with, and what legal and ethical status should it have? What values? See Asimov's Three Laws of Robotics above.
3. Science-Policy Link: There should be constructive and healthy exchange between A.I. researchers and policy-makers. Like we have with climate science? We're screwed.

4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of A.I. This sounds good on the surface. And then somebody will make money from A.I., and intellectual property laws will toss all the "cooperation, trust, and transparency" out the window.

5. Race Avoidance: Teams developing A.I. systems should actively cooperate to avoid corner-cutting on safety standards. Whoever came up with this one has never worked in a production environment. The executives calling the shots live quarter-to-quarter, and sometimes even month-to-month. They want results yesterday, and don't care about safety until somebody gets hurt. They also have ADD, so the fact somebody got hurt gets forgotten quickly.  

6. Safety: A.I. systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible. See the first question under number 2 above.

7. Failure Transparency: If an A.I. system causes harm, it should be possible to ascertain why. I guarantee the executive behind the decision to cut corners will not be there when it fails.

8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority. Am I the only one who finds the phrase "competent human authority" extremely funny?

9. Responsibility: Designers and builders of advanced A.I. systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications. The tort lawyers are salivating as I type this.

10. Value Alignment: Highly autonomous A.I. systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation. Because human values work so well as a barometer of good behavior! 

11. Human Values: A.I. systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity. Asimov's First Law has this one covered. The rest of this is fluffy nonsense.

12. Personal Privacy: People should have the right to access, manage and control the data they generate, given A.I. systems' power to analyze and utilize that data. And where there is data to be analyzed, the government will happily toss out the 4th Amendment to do it! Seriously, this is a good idea, but I expect our Orwellian government to find a workaround.

13. Liberty and Privacy: The application of A.I. to personal data must not unreasonably curtail people’s real or perceived liberty. Good luck getting the politicians on board with this.

14. Shared Benefit: A.I. technologies should benefit and empower as many people as possible. Thank you, Captain Obvious.

15. Shared Prosperity: The economic prosperity created by A.I. should be shared broadly, to benefit all of humanity. Like we do with most prosperity, tax the heck out of it! This is where communism meets A.I.

16. Human Control: Humans should choose how and whether to delegate decisions to A.I. systems, to accomplish human-chosen objectives. Result: they will delegate pretty much everything. Humans are lazy that way. Picture gorillas in a zoo, and that is humanity in 100 years.

17. Non-subversion: The power conferred by control of highly advanced A.I. systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends. Let's put the emphasis on "improve". Right now, most of our social and civic processes are severely flawed.


18. A.I. Arms Race: An arms race in lethal autonomous weapons should be avoided. Tell the Russians and Chinese that.

19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future A.I. capabilities. News flash: The upper limit is the entirety of all the knowledge in the universe. We have a long way to go.

20. Importance: Advanced A.I. could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Like we do now? God help us.

21. Risks: Risks posed by A.I. systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact. Like we planned for the impact of the 2008 financial collapse?

22. Recursive Self-Improvement: A.I. systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures. This is one of those that could lead to very good things or very bad things. It should be near the top of the list, not buried near the end.

23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization. Inevitably, only one state or organization will define the "ethical ideals". 

My first thought at the end of this list is: Let the free market handle this. Superintelligence overseen by a political body will neither be super nor intelligent.

Eventually, we will have multiple A.I.'s with different purposes. However, just as with computers, there will be many bumps in the road, but we will roll along just fine. I worry more about what governments will do with A.I. than what individuals will. Sadly, this list gives far too much credence to the effectiveness of political oversight.
