Thursday, September 9, 2010

Rest Day Read Wednesday 9-8-10

Rest Day Read (SR-46)
Moral Machines: Introduction
by Wendell Wallach and Colin Allen
"As noted, this book is not about the horrors of technology. Yes, the machines are coming. Yes, their existence will have unintended effects on human lives and welfare, not all of them good. But no, we do not believe that increasing reliance on autonomous systems will undermine people's basic humanity. Neither, in our view, will advanced robots enslave or exterminate humanity, as in the best traditions of science fiction. Humans have always adapted to their technological products, and the benefits to people of having autonomous machines around them will most likely outweigh the costs.
However, this optimism does not come for free. It is not possible to just sit back and hope that things will turn out for the best. If humanity is to avoid the consequences of bad autonomous artificial agents, people must be prepared to think hard about what it will take to make such agents good."

"Is it possible to build AMAs? Fully conscious artificial systems with complete human moral capacities may perhaps remain forever in the realm of science fiction. Nevertheless, we believe that more limited systems will soon be built. Such systems will have some capacity to evaluate the ethical ramifications of their actions—for example, whether they have no option but to violate a property right to protect a privacy right.
The task of designing AMAs requires a serious look at ethical theory, which originates from a human-centered perspective. The values and concerns expressed in the world’s religious and philosophical traditions are not easily applied to machines. Rule-based ethical systems, for example the Ten Commandments or Asimov’s Three Laws for Robots, might appear somewhat easier to embed in a computer, but as Asimov’s many robot stories show, even three simple rules (later four) can give rise to many ethical dilemmas. Aristotle’s ethics emphasized character over rules: good actions flowed from good character, and the aim of a flourishing human being was to develop a virtuous character. It is, of course, hard enough for humans to develop their own virtues, let alone developing appropriate virtues for computers or robots. Facing the engineering challenge entailed in going from Aristotle to Asimov and beyond will require looking at the origins of human morality as viewed in the fields of evolution, learning and development, neuropsychology, and philosophy."

Check out this related post from the authors. Now this is a positive impact of technology!

I definitely want to read this book. Their fictitious scenario of the automation-triggered disaster of Monday, July 23, 2012 is plain scary. I love their multi-disciplinary approach to establishing the parameters for automated systems. The interesting dilemmas and ideas they raise highlight the need to build robotics on an ethical and moral foundation from the start. We will all be better off in the long run.
