In his talk at the University of Zurich, Kaspar Etter (Stiftung für Effektiven Altruismus) discusses the relationship between artificial intelligence and ethics. The announcement reads: "We humans steer the future not because we're the strongest or the fastest but because we're the smartest animal on this planet. However, there are no reasons to assume that blind evolutionary processes have reached the physical limit of intelligence with us. Quite to the contrary, we have already seen how intelligent machines outperform the best of our kind on an increasing number of tasks, ranging from Chess over the quiz show Jeopardy to Go. What will happen when artificial intelligence surpasses human intelligence in a broader range and controls our destiny? Are we ready to make our final invention? And what is to be done from an ethical perspective?" (Website EA) The text continues: "In his talk, Kaspar Etter will cover the current state and trends in artificial intelligence, its present implications on our society and what is to be expected for the coming decades. You will also learn about a phenomenon called intelligence explosion and strategic considerations for approaching the challenges ahead of us." (Website EA) The event will take place in room KOL-F-117 on May 26, 2016, at 7 pm.
The "Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15)" take place in Austin (Texas, USA) from January 25 to 26, 2015. For "W01 – AI and Ethics", the website quotes media statements such as "Artificial Intelligence Is Changing the World, and Humankind Must Adapt" and "Physicist Louis Del Monte believes that by 2045 machines will threaten human survival". Against this background, it states: "The goal of this workshop is to provide a forum to discuss the ethical questions implicit in such headlines, which go to the centre of the quest to build AI systems with potentially super-human intelligence." The workshop covers topics from information and technology ethics as well as from machine ethics, e.g. "AI as a threat to or saviour for humanity", "Safeguards necessary within AI research", "Autonomous agents in the military", "Autonomous agents in commerce and other domains" and "Mechanisms to ensure moral behaviours in AI systems". Further information is available at www.aaai.org/Workshops/ws15workshops.php.
On March 15, 2013, Oliver Bendel speaks on "Towards a Machine Ethics" at the European TA Conference in Prague. The abstract, which will be published in the conference proceedings in spring, opens with the words: "There is an increasing use of autonomous machines such as agents, chatbots, algorithmic trading computers, robots of different stripes and unmanned ground or air vehicles. They populate the modern world like legendary figures and artificial creatures in Greek mythology – with the main difference being that they are real in the narrow sense of the word. Some are only partially autonomous (acting under human command) while others are completely autonomous within their area of action. A genuinely autonomous machine should be able to act in a moral way, able to make decisions that are good for humans, animals and the environment. But what does it mean for machines to behave morally? Should they learn moral rules? Should they evaluate the consequences of their acts? Or should they become a virtuous character, following Aristotle? How is it possible to implement the classical normative models of ethics and is there a need for new ones?" The first, second and third sections of the paper are briefly summarized; then the fourth is addressed: "Fourthly, the paper tries to answer the question if and how it is possible to implement the classical normative models of ethics and which models should be preferred. Seven important normative approaches are described and estimated relating to their suitability for machine processing. Then the focus shifts to duty-based ethics, ethics of responsibility and virtue ethics that seem to be serious candidates. With a short technical analysis it can be shown that they fit to machine processing, apart from some limitations. The most promising approach may be the combination of the selected normative models. It is not only similar in the 'normal' human ethics, but also an opportunity to balance out weaknesses of the autonomous machines and to allow them alternatives. In addition, other methods like orientation on reference persons and social media evaluation could be used." At the end, the importance and the difficulty of the young research field of machine ethics are emphasized in equal measure, and both personal and general assessments are offered.
As editors of the book "Machine Ethics", published by Cambridge University Press in 2011, Michael Anderson and Susan Leigh Anderson explore whether and how autonomous systems should and can "act" in a moral way. This question was already being considered in literature and science decades ago, but a corresponding discipline has emerged only in recent years. Evidently there is a perceived need to reflect on the "behavior" of machines in the context of morality and to have philosophers (above all ethicists) and representatives of artificial intelligence think it through. The introduction states: "The subject of this book is a new field of research: Developing ethics for machines, in contrast to developing ethics for human beings who use machines." We use machines, and machines use us. How should machines treat us, and how should they decide in situations in which our identity and our existence are threatened? The contributions contain remarkable proposals and deserve a careful reading.