22 Chatbots in 12 Years

Since 2013, Oliver Bendel has developed 22 chatbots and voice assistants together with his students or colleagues. They can be divided into three categories. The first comprises moral and immoral chatbots (i.e., forms of moral machines) and empathic voice assistants. The second covers chatbots (some with voice output) for dead, endangered, or extinct languages and idioms. The third includes pedagogical chatbots and chatbots that give recommendations and advice. Some of the projects lasted between four and six months. Most of the GPTs were created in just a few hours. Exceptions are Miss Tammy and Animal Whisperer, which took several months to build with the help of prompt engineering and retrieval-augmented generation (RAG). Articles and book chapters have been published on many of the projects; the names of the developers can be found in them. A few chatbots made it into the media, such as GOODBOT (for which the preparatory work began in 2012), LÜGENBOT aka LIEBOT, and @llegra.

Fig.: An overview of the projects

„Programming Machine Ethics“ in the Z-Library

The book „Programming Machine Ethics“ (2016) by Luís Moniz Pereira and Ari Saptawijaya is available for free download from Z-Library. Luís Moniz Pereira is among the best-known machine ethicists. „This book addresses the fundamentals of machine ethics. It discusses abilities required for ethical machine reasoning and the programming features that enable them. It connects ethics, psychological ethical processes, and machine implemented procedures. From a technical point of view, the book uses logic programming and evolutionary game theory to model and link the individual and collective moral realms. It also reports on the results of experiments performed using several model implementations. Opening specific and promising inroads into the terra incognita of machine ethics, the authors define here new tools and describe a variety of program-tested moral applications and implemented systems. In addition, they provide alternative readings paths, allowing readers to best focus on their specific interests and to explore the concepts at different levels of detail.“ (Information by Springer) The download link is eu1lib.vip/book/2677910/9fd009.

Fig.: The machine ethicists Oliver Bendel and Luís Moniz Pereira 2016 at Stanford University

A Markup Language for Moral Machines

In many cases, it is important that an autonomous system acts and reacts adequately from a moral point of view. There are some artifacts of machine ethics, e.g., GOODBOT or LADYBIRD by Oliver Bendel or Nao as a care robot by Susan Leigh Anderson and Michael Anderson. But there is no standardization in the field of moral machines yet. The MOML project, initiated by Oliver Bendel, is trying to work in this direction. In the management summary of his bachelor thesis, Simon Giller writes: „We present a literature review in the areas of machine ethics and markup languages which shaped the proposed morality markup language (MOML). To overcome the most substantial problem of varying moral concepts, MOML uses the idea of the morality menu. The menu lets humans define moral rules and transfer them to an autonomous system to create a proxy morality. Analysing MOML excerpts allowed us to develop an XML schema which we then tested in a test scenario. The outcome is an XML based morality markup language for autonomous agents. Future projects can use this language or extend it. Using the schema, anyone can write MOML documents and validate them. Finally, we discuss new opportunities, applications and concerns related to the use of MOML. Future work could develop a controlled vocabulary or an ontology defining terms and commands for MOML.“ The bachelor thesis will be publicly available in autumn 2020. It was supervised by Dr. Elzbieta Pustulka. There will also be a paper with the results next year.
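The thesis itself defines the actual XML schema. As a rough illustration of the underlying idea only — the element and attribute names below are invented for this sketch and are not taken from Giller's MOML schema — a morality-menu entry transferred to an autonomous agent might be parsed like this:

```python
# Minimal sketch of the MOML idea: a human defines moral rules via a
# menu, and an autonomous system reads them in as a proxy morality.
# Element and attribute names are hypothetical, not from the real schema.
import xml.etree.ElementTree as ET

MOML_DOC = """
<moml version="0.1">
  <agent id="vacuum-robot-1">
    <rule id="spare-ladybirds" active="true">
      <subject>ladybird</subject>
      <action>avoid</action>
    </rule>
    <rule id="spare-spiders" active="true">
      <subject>spider</subject>
      <action>avoid</action>
    </rule>
  </agent>
</moml>
"""

def load_proxy_morality(xml_text):
    """Parse a MOML-like document into a simple subject-to-action table."""
    root = ET.fromstring(xml_text)
    rules = {}
    for rule in root.iter("rule"):
        if rule.get("active") == "true":
            rules[rule.findtext("subject")] = rule.findtext("action")
    return rules

print(load_proxy_morality(MOML_DOC))
# → {'ladybird': 'avoid', 'spider': 'avoid'}
```

In a full MOML implementation, such documents would additionally be validated against the XML schema before the agent adopts the rules.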

Fig.: Test scenario (Illustration: Simon Giller)

13 Artifacts of Machine Ethics

Since 2012, Oliver Bendel has invented 13 artifacts of machine ethics. Nine of them have actually been implemented, including LADYBIRD, the animal-friendly vacuum cleaning robot, and LIEBOT, the chatbot that can systematically lie. Both of them have achieved a certain popularity. The information and machine ethicist is convinced that ethics does not necessarily have to produce the good. It should explore the good and the evil and, like any science, serve to gain knowledge. Accordingly, he builds both moral and immoral machines. But the immoral ones he keeps in his laboratory. In 2020, if the project is accepted, HUGGIE will see the light of day. The project idea is to create a social robot that contributes directly to a good life and economic success by touching and hugging people, especially customers. HUGGIE should be able to warm up in certain places, and it should be possible to change the materials it is covered with. One research question will be: What are the possibilities besides warmth and softness? Are visual stimuli (including on displays), vibrations, sounds, voices, etc. important for a successful hug? All moral and immoral machines created between 2012 and 2020 are compiled in a new illustration, which is shown here for the first time.

Fig.: 13 Artifacts of Machine Ethics

Towards a Morality Markup Language

At the request of Prof. Dr. Oliver Bendel, Alessandro Spadola, a student at the School of Business FHNW, investigated in the context of machine ethics whether markup languages such as HTML, SSML, and AIML can be used to transfer moral aspects to machines or websites, and whether there is room for a new language that could be called Morality Markup Language (MOML). He presented his results in January 2020. From the management summary: „However, the idea that owners should be able to transmit their own personal morality has been explored by Bendel, who has proposed an open way of transferring morality to machines using a markup language. This research paper analyses whether a new markup language could be used to imbue machines with their owners‘ sense of morality. This work begins with an analysis how a markup language is structured, describes the current well-known markup languages and analyses their differences. In doing so, it reveals that the main difference between the well-known markup languages lies in the different goals they pursue which at the same time forms the subject, which is marked up. This thesis then examines the possibility of transferring personal morality with the current languages available and discusses whether there is a need for a further language for this purpose. As is shown, morality can only be transmitted with increased effort and the knowledge of human perception because it is only possible to transmit them by interacting with the senses of the people. The answer to the question of whether there is room for another markup language is ‚yes‘, since none of the languages analysed offer a simple way to transmit morality, and simplicity is a key factor in markup languages. Markup languages all have clear goals, but none have the goal of transferring and displaying morality. The language that could assume this task is ‚Morality Markup‘, and the present work describes how such a language might look.“ (Management Summary) The promising results are to be continued in the course of the year by another student in a bachelor thesis.

Fig.: Is there room for a MOML?

A Moral Machine in an Immoral One

The book chapter „The BESTBOT Project“ by Oliver Bendel, David Studer and Bradley Richards was published on 31 December 2019. It is part of the 2nd edition of the „Handbuch Maschinenethik“, edited by Oliver Bendel. From the abstract: „The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that recognizes problems and conditions of the user with the help of text analysis and facial recognition and reacts morally to them. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both had room for improvement and advancement; thus, the BESTBOT project used their findings as a basis for its development and realization. Text analysis and facial recognition in combination with emotion recognition have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, with new solutions of this kind come new problems, especially with regard to privacy and informational autonomy, which information ethics must deal with.“ (Abstract) The BESTBOT is an immoral machine in a moral one – or a moral machine in an immoral one, depending on the perspective. The book chapter can be downloaded from link.springer.com/referenceworkentry/10.1007/978-3-658-17484-2_32-1.

Fig.: A moral machine in an immoral one

„Handbuch Maschinenethik“ erschienen

After three years, an ambitious project has come to its preliminary end: the „Handbuch Maschinenethik“ (ed. Oliver Bendel) was published by Springer in mid-October 2019. It brings together contributions by leading experts in the fields of machine ethics, robot ethics, ethics of technology, philosophy of technology, and robot law. At the moment it can be downloaded here: link.springer.com/book/10.1007/978-3-658-17483-5 … It has become a comprehensive, presentable, unique book. In a sense, it forms a counterpart to the American research that dominates the discipline: most of the authors come from Europe and Asia. The editor, who has been concerned with information, robot, and machine ethics for 20 years and has been researching machine ethics intensively for eight years, is hopeful that the book will find its place in the standard literature on machine ethics, alongside „Moral Machines“ (2009) by Wendell Wallach and Colin Allen, „Machine Ethics“ (2011) by Michael and Susan Leigh Anderson, „Programming Machine Ethics“ (2016) by Luís Moniz Pereira (with Ari Saptawijaya), and „Grundfragen der Maschinenethik“ (2018) by Catrin Misselhorn – the latter two contributed substantially to the „Handbuch Maschinenethik“. In the next few days, the book, with its 23 chapters and 469 pages, will be made available for sale via the Springer website and also offered as a print version.

Fig.: The „Handbuch Maschinenethik“

Animals and Machines

Semi-autonomous machines, autonomous machines and robots inhabit closed, semi-closed and open environments. There they encounter domestic animals, farm animals, working animals and/or wild animals. These animals could be disturbed, displaced, injured or killed. Within the context of machine ethics, the School of Business FHNW developed several design studies and prototypes for animal-friendly machines, which can be understood as moral machines in the spirit of this discipline. They were each linked with an annotated decision tree containing the ethical assumptions or justifications for interactions with animals. Annotated decision trees are seen as an important basis in developing moral machines. They are not without problems and contradictions, but they do guarantee well-founded, secure actions that are repeated at a certain level. The article „Towards animal-friendly machines“ by Oliver Bendel, published in August 2018 in Paladyn, Journal of Behavioral Robotics, documents completed and current projects, compares their relative risks and benefits, and makes proposals for future developments in machine ethics.
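As a rough sketch of how such an annotated decision tree might be represented in code — the node structure and the wording of the annotation are invented for illustration and are not taken from the original models — each decision node can carry the ethical assumption behind its branch, so that every action comes with a justification trail:

```python
# Minimal sketch of an annotated decision tree for an animal-friendly
# machine. Each internal node carries an annotation stating the ethical
# assumption behind the branch. Illustrative only, not the original model.

class Node:
    def __init__(self, test=None, annotation="", yes=None, no=None, action=None):
        self.test = test              # predicate on the observation; None for a leaf
        self.annotation = annotation  # ethical justification for this branch
        self.yes, self.no = yes, no   # subtrees for test == True / False
        self.action = action          # leaf action

def decide(node, observation, trail=None):
    """Walk the tree, collecting annotations as a justification trail."""
    trail = [] if trail is None else trail
    if node.action is not None:       # leaf reached
        return node.action, trail
    trail.append(node.annotation)
    branch = node.yes if node.test(observation) else node.no
    return decide(branch, observation, trail)

# Example tree: the machine halts when an animal is in its path.
tree = Node(
    test=lambda obs: obs["animal_detected"],
    annotation="Animals should not be injured or killed (assumption).",
    yes=Node(action="stop"),
    no=Node(action="continue"),
)

action, why = decide(tree, {"animal_detected": True})
print(action, why)
# → stop ['Animals should not be injured or killed (assumption).']
```

Because every branch is annotated, the machine's behaviour stays traceable to explicit ethical assumptions — which is the point of using annotated decision trees as a basis for moral machines.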

Fig.: In Australia

A Morality Menu for a Domestic Robot

LADYBIRD, the animal-friendly vacuum cleaning robot, was conceived by Oliver Bendel in 2014, introduced at Stanford University (AAAI Spring Symposia) in 2017, and then implemented as a prototype at the School of Business FHNW. In the context of the project, a menu was proposed with which the user can set the morality of the vacuum cleaning robot. As the name implies, it spares ladybirds. It should also let spiders live. But if users want certain insects to be sucked in, they could define this via the menu. It is important that LADYBIRD remains animal-friendly overall. The idea was to develop a proxy morality in detail via the menu. The vacuum cleaning robot as a proxy machine does what its owner would do. In 2018, the morality menu (MOME) for LADYBIRD was born as a design study. It can be combined with other approaches and technologies. In this way, users could learn how others have decided and how the personal morality they have transferred to the machine is assessed. They could also be warned and enlightened if they want to suck in not only vermin but also spiders.
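A morality menu of this kind can be thought of as a set of switchable rules that the owner transfers to the robot. The following sketch — with invented option names, not those of the actual MOME design study — illustrates the proxy-morality idea, including the constraint that the ladybird rule cannot be switched off:

```python
# Sketch of a morality menu (MOME) for a vacuum cleaning robot.
# The owner toggles options; the robot then acts as a proxy, doing
# what the owner would do. Option names are invented for illustration.

DEFAULT_MENU = {
    "spare_ladybirds": True,  # fixed: LADYBIRD stays animal-friendly overall
    "spare_spiders": True,    # owner-adjustable
    "spare_vermin": False,    # owner-adjustable
}

def configure_menu(**choices):
    """Return the owner's menu; the ladybird rule cannot be switched off."""
    menu = dict(DEFAULT_MENU, **choices)
    menu["spare_ladybirds"] = True  # enforce the fixed animal-friendly core
    return menu

def should_suck_in(menu, insect):
    """Proxy morality: decide as the owner would, according to the menu."""
    rule = "spare_" + ("vermin" if insect in ("cockroach", "moth") else insect + "s")
    return not menu.get(rule, True)  # unknown creatures are spared by default

# Even an owner who tries to disable the ladybird rule cannot:
menu = configure_menu(spare_spiders=False, spare_ladybirds=False)
print(should_suck_in(menu, "ladybird"))  # → False: always spared
print(should_suck_in(menu, "spider"))    # → True: owner chose not to spare spiders
```

The design choice here mirrors the text: most rules express the owner's personal morality, while one rule is hard-wired so the machine remains animal-friendly overall.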

Fig.: Morality menu for LADYBIRD

Machine Ethics in Florida

A special session „Formalising Robot Ethics“ takes place within the ISAIM conference in Fort Lauderdale (3 to 5 January 2018). The program is now available and can be viewed on isaim2018.cs.virginia.edu (link no longer valid). „Practical Challenges in Explicit Ethical Machine Reasoning“ is a talk by Louise Dennis and Michael Fisher, „Contextual Deontic Cognitive Event Calculi for Ethically Correct Robots“ a contribution by Selmer Bringsjord, Naveen Sundar G., Bertram Malle, and Matthias Scheutz. Oliver Bendel will present „Selected Prototypes of Moral Machines“. A few words from the summary: „The GOODBOT is a chatbot that responds morally adequate to problems of the users. It’s based on the Verbot engine. The LIEBOT can lie systematically, using seven different strategies. It was written in Java, whereby AIML was used. LADYBIRD is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. In this case, an annotated decision tree was translated into Java. The BESTBOT should be even better than the GOODBOT.“

Fig.: Machine Ethics in Fort Lauderdale

The LADYBIRD Project

The LADYBIRD project starts in March 2017. More and more autonomous and semi-autonomous machines make decisions that have moral implications. Machine ethics as a discipline examines the possibilities and limits of moral machines. In this context, Prof. Dr. Oliver Bendel developed various design studies and thus submitted proposals for their appearance and functions. He focused on animal-friendly machines which make morally sound decisions, and on chatbots with specific skills. The project is about a service robot that shall spare beneficial insects – a vacuum cleaner called LADYBIRD. An annotated decision tree modelled for this objective and a set of sensors will be used. Three students will work on the practice project at the School of Business FHNW from March to October. Since 2013, the principal, Oliver Bendel, has published several articles on LADYBIRD and other animal-friendly machines, e.g., „Ich bremse auch für Tiere“ (I also brake for animals) in the Swiss magazine inside-it.ch. The robot will be the third prototype in the context of machine ethics at the School of Business FHNW. The first one was the GOODBOT (2013), the second one the LIEBOT (2016). All these machines can be described as simple moral machines.

Fig.: The robot should spare the ladybird

Considerations in Non-Human Agents

The proceedings of the 2016 AAAI Spring Symposia were published in March 2016 („The 2016 AAAI Spring Symposium Series: Technical Reports“). The symposium „Ethical and Moral Considerations in Non-Human Agents“ was dedicated to the discipline of machine ethics. Ron Arkin (Georgia Institute of Technology), Luís Moniz Pereira (Universidade Nova de Lisboa), Peter Asaro (New School for Public Engagement, New York), and Oliver Bendel (School of Business FHNW) spoke about moral and immoral machines. The contribution „Annotated Decision Trees for Simple Moral Machines“ (Oliver Bendel) can be found on pages 195–201. The abstract states: „Autonomization often follows after the automization on which it is based. More and more machines have to make decisions with moral implications. Machine ethics, which can be seen as an equivalent of human ethics, analyses the chances and limits of moral machines. So far, decision trees have not been commonly used for modelling moral machines. This article proposes an approach for creating annotated decision trees, and specifies their central components. The focus is on simple moral machines. The chances of such models are illustrated with the example of a self-driving car that is friendly to humans and animals. Finally the advantages and disadvantages are discussed and conclusions are drawn.“ The proceedings can be downloaded via aaai.org/proceeding/04-spring-2016/.

Fig.: Oliver Bendel, Cindy Mason, Luís Moniz Pereira, and others in Stanford