Musk Signs FLI Open Letter: Priorities for AI Research

Musk is one of more than 160 publicly listed signatories of an open letter posted by the MIT-affiliated Future of Life Institute, which he announced with a retweeted comment: "First question asked of AI: 'Is there a god?' First AI answer: 'There is now.'"

The letter observes:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.

…yet continues with a caution:

We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.

The research priorities document attached to the letter points out several areas where more work is needed; among the more interesting are the legal and ethical questions:

1. Liability and law for autonomous vehicles: If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits. In what legal framework can the safety benefits of autonomous vehicles such as drone aircraft and self-driving cars best be realized [85]? Should legal questions about AI be handled by existing (software and internet-focused) "cyberlaw", or should they be treated separately? In both military and commercial applications, governments will need to decide how best to bring the relevant expertise to bear; for example, a panel or committee of professionals and academics could be created, and Calo has proposed the creation of a Federal Robotics Commission.

2. Machine ethics: How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost? How should lawyers, ethicists, and policymakers engage the public on these issues? Should such trade-offs be the subject of national standards?

3. Autonomous weapons: Can lethal autonomous weapons be made to comply with humanitarian law? If, as some organizations have suggested, autonomous weapons should be banned, is it possible to develop a precise definition of autonomy for this purpose, and can such a ban practically be enforced? If it is permissible or legal to use lethal autonomous weapons, how should these weapons be integrated into the existing command-and-control structure so that responsibility and liability can be distributed, what technical realities and forecasts should inform these questions, and how should "meaningful human control" over weapons be defined? Are autonomous weapons likely to reduce political aversion to conflict, or perhaps result in "accidental" battles or wars? Finally, how can transparency and public discourse best be encouraged on these issues?

4. Privacy: How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy? How will privacy risks interact with cyberwarfare? Our ability to take full advantage of the synergy between AI and big data will depend in part on our ability to manage and preserve privacy.

5. Professional ethics: What role should computer scientists play in the law and ethics of AI development and use? Past and current projects to explore these questions include the AAAI 2008–09 Presidential Panel on Long-Term AI Futures, the EPSRC Principles of Robotics, and recently announced programs such as Stanford's One-Hundred Year Study of AI and the AAAI committee on AI impact and ethical issues (chaired by Rossi and Chernova).
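The machine-ethics question in item 2 is, at its core, about how (and whether) to compare probability-weighted costs. As a purely hypothetical illustration, assuming made-up probabilities and monetized costs that are not drawn from the document, a naive expected-cost comparison might look like the sketch below; the point is that such a rule produces a definite answer, and the document asks who should decide whether that answer is acceptable.

```python
# Hypothetical illustration of the trade-off in item 2 (machine ethics).
# All figures are invented for the example; they do not come from the
# FLI research priorities document.

def expected_cost(probability: float, cost: float) -> float:
    """Return the probability-weighted cost of an outcome."""
    return probability * cost

# Option A: a maneuver with a small chance of injuring a human.
p_injury = 0.001            # assumed probability of injury
injury_cost = 5_000_000.0   # assumed monetized cost of a serious injury

# Option B: a maneuver with near-certain material damage.
p_damage = 0.99             # assumed probability of damage
damage_cost = 20_000.0      # assumed repair cost

cost_a = expected_cost(p_injury, injury_cost)   # 5,000
cost_b = expected_cost(p_damage, damage_cost)   # 19,800

print(f"Expected cost, risk-of-injury option: {cost_a:,.0f}")
print(f"Expected cost, material-damage option: {cost_b:,.0f}")
# A pure expected-cost rule would choose option A here -- exactly the kind
# of conclusion the document asks ethicists, policymakers, and the public
# to scrutinize rather than accept by default.
```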
