Miss Tammy

Machine ethics, which was often dismissed as a curiosity ten years ago, is now everyday business. It is called upon, for example, when so-called guardrails are built into language models or chatbots, via alignment in the form of fine-tuning or via prompt engineering. When you create GPTs, the „Instructions“ field is available for prompt engineering. There, the prompt engineer can define specific requirements and restrictions for the chatbot and can refer to documents that have been uploaded. This is exactly what Myriam Rellstab is currently doing at the School of Business FHNW as part of her final thesis „Moral Prompt Engineering“, the results of which she presented on August 7, 2024. As a prompt engineer, she tames the GPT, which is based on GPT-4o, with the help of her instructions and of netiquettes that she has collected and made available to the chatbot, an approach suggested by the initiator of the project, Prof. Dr. Oliver Bendel. The chatbot is tamed: the tiger becomes a house cat, the lion a house lion that can be used without danger in the classroom, for example. With GPT-4o, however, guardrails were already in place; these were programmed in or obtained via reinforcement learning from human feedback. Strictly speaking, then, one turns a tamed tiger into a house cat, or a tamed lion into a house lion. The GPT is now available, christened Miss Tammy by Myriam Rellstab (from the English „to tame“). It scores above the standard model it was compared with on all relevant criteria, giving, for example, more pleasant, more helpful, and more precise answers to the questions of unsettled or despairing pupils. Twenty test persons from a school class were available for the evaluation. With Miss Tammy, moral prompt engineering is off to a promising start.
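
The mechanism described above can also be approximated outside the GPT editor. The following is a minimal sketch using the OpenAI Python library, in which the collected netiquette rules are pasted into the system prompt; the file name, the instruction text, and the sample question are invented for illustration and do not reproduce Miss Tammy's actual configuration:

```python
# Minimal sketch: approximating the "Instructions" field of a GPT with a
# system prompt via the OpenAI Chat Completions API. The file
# "netiquette.txt" and all texts are placeholders, not Miss Tammy's
# actual instructions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The collected netiquettes, stored as plain text (hypothetical file).
with open("netiquette.txt", encoding="utf-8") as f:
    netiquette = f.read()

instructions = (
    "You are a friendly assistant for a school class. "
    "Answer calmly, respectfully, and precisely. "
    "Strictly follow these netiquette rules:\n" + netiquette
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "Nobody wants to sit next to me. What should I do?"},
    ],
)
print(response.choices[0].message.content)
```

In the GPT editor itself, the same effect is achieved by pasting the instructions into the „Instructions“ field and attaching the netiquette documents as uploaded files.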

Fig.: Miss Tammy in action

Moral Prompt Engineering

Machine ethics, which was often dismissed as a curiosity ten years ago, is now everyday business. It is called upon, for example, when so-called guardrails are built into language models or chatbots, via alignment in the form of fine-tuning or via prompt engineering. When you create GPTs, that is, „custom versions of ChatGPT“, as OpenAI calls them, the „Instructions“ field is available for prompt engineering. There, the prompt engineer can define specific requirements and restrictions for the chatbot and can refer to documents that have been uploaded. This is exactly what Myriam Rellstab is currently doing at the School of Business FHNW as part of her final thesis „Moral Prompt Engineering“, the interim results of which she presented on May 28, 2024. As a prompt engineer, she tames GPT-4o with the help of her instructions and of netiquettes that she has collected and made available to the chatbot, an approach suggested by the initiator of the project, Prof. Dr. Oliver Bendel. The chatbot is tamed: the tiger becomes a house cat that can be used without danger in the classroom, for example. With GPT-4o, however, guardrails were already in place; these were programmed in or obtained via reinforcement learning from human feedback. Strictly speaking, then, one turns a tamed tiger into a house cat. With certain open-source language models, the situation is different: the wild animal must first be captured and then tamed, and even then it can seriously injure you. But GPTs have their pitfalls too, and as is well known, house cats can certainly hiss and scratch. The results of the project will be available in August. Moral prompt engineering had already been applied to Data, a chatbot for the Data Science degree program at the School of Engineering FHNW.

Fig.: The prompt engineer in front of the tiger (Image: Ideogram)

Virtues and Values in Requirements Engineering

The workshop „ViVa RE’24: Virtues and Values in Requirements Engineering 2024“ will take place on April 8, 2024 in Winterthur. It is part of the conference „Requirements Engineering: Foundation for Software Quality (REFSQ) 2024“. From the workshop website: “Including human values into the development of software-intensive systems has long been a topic of discussion. A satisfactory solution such as a collection of (software-supported) methods or a model gallery for ethical argumentation has not yet been developed. This workshop will work on collecting methods for incorporating values into system development. Two particular methods, the Square of Values (SoV) and the Software Sustainability Assessment (SoSA) approaches, will be examined in more detail. A specific focus of the workshop lies on how ethical reasoning can be supported with the help of software-tools.” (Website ViVa) The program committee consists of Prof. Dr. Oliver Bendel, Dr. Renata Guizzardi-Silva Souza, Prof. Dr. Kurt Schneider, and Alexander Bock. Organizers are Prof. Dr. Alexander Rachmann and Dr. Jens Gulden. More information is available at google.com/view/vivarerefsq24.

Fig.: Fog over Winterthur

GW Animal Law Program

The new YouTube channel „GW Animal Law Program“ went online at the end of November 2023. It collects lectures and recordings on animal law and ethics. Some of them are from the online event „Artificial Intelligence & Animals“, which took place on 16 September 2023. The speakers were Prof. Dr. Oliver Bendel (FHNW University of Applied Sciences Northwestern Switzerland), Yip Fai Tse (University Center for Human Values, Center for Information Technology Policy, Princeton University), and Sam Tucker (CEO VegCatalyst, AI-Powered Marketing, Melbourne). Other videos include „Tokitae, Reflections on a Life: Evolving Science & the Need for Better Laws“ by Kathy Hessler, „Alternative Pathways for Challenging Corporate Humanewashing“ by Brooke Dekolf, and „World Aquatic Animal Day 2023: Alternatives to the Use of Aquatic Animals“ by Amy P. Wilson. In his talk, Oliver Bendel presents the basics and prototypes of animal-computer interaction and animal-machine interaction, including his own projects in the field of machine ethics. The YouTube channel can be accessed at www.youtube.com/@GWAnimalLawProgram/featured.

Fig.: An illustration of HAPPY HEDGEHOG

Care Robots from an Ethical Perspective

On 5 June 2023, Prof. Dr. Oliver Bendel will give a talk on „Care robots from an ethical perspective“ at the Institute of Ethics, History and Humanities (iEH2) of the University of Geneva. The event will take place in room A04.2910 (CMU). Care and therapy robots can be understood as service robots and in many cases also as social robots. In the talk by Prof. Dr. Oliver Bendel from Zurich, the goals, tasks, and characteristics of these robots will be clarified and, based on this, considerations will be made from the perspective of ethics. In the end, it should become clear which types of robots and prototypes or products exist in the healthcare sector, what purposes they serve, what functions they assume, and what implications and consequences this has for individuals and society. Care robots may contribute to personal autonomy while weakening informational autonomy. Therapy robots may enhance personal performance and satisfaction, but in individual cases they may also violate human dignity. It is important to design service robots and social robots in the healthcare sector in such a way that they meet as many requirements and needs as possible and are useful tools for caregivers and those in need of care. Disciplines such as machine ethics can help in this regard. Prof. Dr. Oliver Bendel is the editor of several relevant standard works, including „Pflegeroboter“ („Care Robots“, 2018) and „Soziale Roboter“ („Social Robots“, 2021). He has also advised the German Bundestag on this topic. More information via www.unige.ch/medecine/ieh2/fr/la-une/prochain-colloque-ieh2/.

Fig.: Bendel as scientific director of the 23rd Berlin Colloquium (Photo: Daimler und Benz Stiftung)

New Proceedings Website of the AAAI Spring Symposia

The AAAI Spring Symposia are a legendary conference series that has been held since 1992. It usually takes place at Stanford University. Until 2018, the leading US artificial intelligence organization itself published the proceedings. Since 2019, each symposium has been responsible for its own proceedings. Following a restructuring of the AAAI website, the proceedings can be found in a section of the new „AAAI Conference and Symposium Proceedings“ page. In 2016, Stanford University hosted one of the most important gatherings on machine ethics and robot ethics ever, the symposium „Ethical and Moral Considerations in Non-Human Agents“ … Contributors included Peter M. Asaro, Oliver Bendel, Joanna J. Bryson, Lily Frank, The Anh Han, and Luís Moniz Pereira. Also present was Ronald C. Arkin, one of the most important and – because of his military research – controversial machine ethicists. The 2017 and 2018 symposia were also groundbreaking for machine ethics and attracted experts from around the world. The papers can be accessed at aaai.org/aaai-publications/aaai-conference-proceedings.

Fig.: The Stanford University campus

The Minding Animals Conference Returns

After a hiatus of six years, Minding Animals will return, with a conference tentatively scheduled to be held 11-17 July 2024. This was announced by the organizers in a message dated December 19, 2022. „Just like the postponed 2021 conference, Minding Animals 5 (MAC5) will be hosted by the Centre for Compassionate Conservation (CfCC) in the Transdisciplinary School at the University of Technology, Sydney (UTS).“ (Message from Minding Animals International) The organizers refer to the Minding Animals Bulletin 53 regarding the vision of the conference and the preparation of the authors. „We believe that the greatest threat posed to the planet, to all planetary life, human or nonhuman animal, plant or otherwise, including the current pandemic, is the intersection of animal agriculture and the climate crisis. Hence, the urgency and criticality of climate as the key theme.“ (Minding Animals Bulletin 53) Possible subtopics include „The biodiversity crisis and climate change“, „Rewilding and compassionate conservation“, „The animal industrial complex“, „Animals, the circular economy and sustainable food systems“, and „Animals in development and food sovereignty“.

Fig.: The conference will take place in Sydney

„Programming Machine Ethics“ in the Z-Library

The book „Programming Machine Ethics“ (2016) by Luís Moniz Pereira and Ari Saptawijaya is available for free download from Z-Library. Luís Moniz Pereira is among the best-known machine ethicists. „This book addresses the fundamentals of machine ethics. It discusses abilities required for ethical machine reasoning and the programming features that enable them. It connects ethics, psychological ethical processes, and machine implemented procedures. From a technical point of view, the book uses logic programming and evolutionary game theory to model and link the individual and collective moral realms. It also reports on the results of experiments performed using several model implementations. Opening specific and promising inroads into the terra incognita of machine ethics, the authors define here new tools and describe a variety of program-tested moral applications and implemented systems. In addition, they provide alternative readings paths, allowing readers to best focus on their specific interests and to explore the concepts at different levels of detail.“ (Information by Springer) The download link is eu1lib.vip/book/2677910/9fd009.

Fig.: The machine ethicists Oliver Bendel and Luís Moniz Pereira in 2016 at Stanford University

A Morality Module for a Care Robot

More and more robots are being used in health care. Most of them are prototypes, some – like Lio and P-CARE from F&P Robotics – are products that are manufactured in small series. Machine ethics researches and creates moral machines. These are often guided by certain values or meta-rules, they follow predetermined rules, or they learn from situations and adapt their behavior. Michael Anderson and Susan L. Anderson presented their value-driven eldercare robot at the 2019 Berlin Colloquium at the invitation of Oliver Bendel. The CARE-MOMO („MOMO“ stands for „morality module“) is a morality module for a robot in the manner of Lio. The idea is that the robot acquires clearly delimited moral abilities in addition to its usual abilities. The focus is to have it perform an act or speech act with high reliability based on a moral assumption or reasoning, with a clearly identifiable benefit to the caregiver or the care recipient. The initiators want to address a common problem in the nursing and care field. Marc Heimann was recruited for the project at the School of Business FHNW. The supervisor is Oliver Bendel, who has been working with robots in the healthcare sector for ten years and has built numerous moral machines together with his teams.
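
What such a clearly delimited moral ability could look like in code is sketched below. This is a minimal rule-based illustration in the spirit of the project description; the rules, conditions, and responses are invented and do not reproduce the actual CARE-MOMO:

```python
# Minimal rule-based sketch of a morality module ("MOMO"). The rules,
# conditions, and responses are invented for illustration and do not
# reproduce the actual CARE-MOMO.
def morality_module(situation: dict) -> str:
    # Rule 1: do not touch the care recipient without consent.
    if situation.get("action") == "lift_arm" and not situation.get("consent", False):
        return "I will only help you move if you agree. May I?"
    # Rule 2: be truthful about the machine's own status.
    if situation.get("question") == "Are you a human?":
        return "No, I am a care robot."
    # No moral constraint triggered: proceed with the normal behavior.
    return "proceed"

print(morality_module({"action": "lift_arm", "consent": False}))
print(morality_module({"question": "Are you a human?"}))
```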

Fig.: Another robot of the company (Photo: F&P Robotics)

Dagstuhl Report on Trustworthy Conversational Agents

On February 18, 2022, the Dagstuhl Report „Conversational Agent as Trustworthy Autonomous System (Trust-CA)“ was published. Editors are Effie Lai-Chong Law, Asbjørn Følstad, Jonathan Grudin, and Björn Schuller. From the abstract: „This report documents the program and the outcomes of Dagstuhl Seminar 21381 ‚Conversational Agent as Trustworthy Autonomous System (Trust-CA)‘. First, we present the abstracts of the talks delivered by the Seminar’s attendees. Then we report on the origin and process of our six breakout (working) groups. For each group, we describe its contributors, goals and key questions, key insights, and future research. The themes of the groups were derived from a pre-Seminar survey, which also led to a list of suggested readings for the topic of trust in conversational agents. The list is included in this report for references.“ (Abstract Dagstuhl Report) The seminar, attended by scientists and experts from around the world, was held at Schloss Dagstuhl from September 19 to 24, 2021. The report can be downloaded via drops.dagstuhl.de/opus/volltexte/2022/15770/.

Fig.: The on-site group (Photo: Schloss Dagstuhl – LZ GmbH)

SPACE THEA at Stanford University

The paper „The SPACE THEA Project“ by Martin Spathelf and Oliver Bendel was accepted at the AAAI 2022 Spring Symposia (Stanford University). The two authors will present it at the end of March 2022 at the symposium „How Fair is Fair? Achieving Wellbeing AI“. From the abstract: „In some situations, no professional human contact can be available. Accordingly, one remains alone with one’s problems and fears. A manned Mars flight is certainly such a situation. A voice assistant that shows empathy and assists the astronauts could be a solution. In the SPACE THEA project, a prototype with such capabilities was developed using Google Assistant and Dialogflow Essentials. The voice assistant has a personality based on characteristics such as functional intelligence, sincerity, creativity, and emotional intelligence. It proves itself in seven different scenarios designed to represent the daily lives of astronauts, addressing operational crises and human problems. The paper describes the seven scenarios in detail, and lists technical and conceptual foundations of the voice assistant. Finally, the most important results are stated and the chapters are summarized.“ More information about the AAAI 2022 Spring Symposia is available here.

Fig.: At Stanford University

Trustworthy Conversational Agents

The Dagstuhl seminar „Conversational Agent as Trustworthy Autonomous System (Trust-CA)“ will take place from September 19 to 24, 2021. According to the website, Schloss Dagstuhl – Leibniz-Zentrum für Informatik „pursues its mission of furthering world class research in computer science by facilitating communication and interaction between researchers“. Organizers of this event are Asbjørn Følstad (SINTEF – Oslo), Jonathan Grudin (Microsoft – Redmond), Effie Lai-Chong Law (University of Leicester) and Björn Schuller (University of Augsburg). They outline the background as follows: „CA, like many other AI/ML-infused autonomous systems, need to gain the trust of their users in order to be deployed effectively. Nevertheless, in the first place, we need to ensure that such systems are trustworthy. Persuading users to trust a non-trustworthy CA is grossly unethical. Conversely, failing to convince users to trust a trustworthy CA that is beneficial to their wellbeing can be detrimental, given that a lack of trust leads to low adoption or total rejection of a system. A deep understanding of how trust is initially built and evolved in human-human interaction (HHI) can shed light on the trust journey in human-automation interaction (HAI). This calls forth a multidisciplinary analytical framework, which is lacking but much needed for informing the design of trustworthy autonomous systems like CA.“ (Website Dagstuhl) Regarding the goal of the workshop, the organizers write: „The overall goal of this Dagstuhl Seminar is to bring together researchers and practitioners, who are currently engaged in diverse communities related to Conversational Agent (CA) to explore the three main challenges on maximising the trustworthiness of and trust in CA as AI/ML-driven autonomous systems – an issue deemed increasingly significant given the widespread uses of CA in every sector of life – and to chart a roadmap for the future research on CA.” (Website Dagstuhl) Oliver Bendel (School of Business FHNW) will talk about his chatbot and voice assistant projects, which have emerged from machine ethics and social robotics since 2013. Further information is available here.

Fig.: Bird’s eye view of Schloss Dagstuhl (Photo: Schloss Dagstuhl)

SPACE THEA Wants to Fly to Mars

SPACE THEA was developed by Martin Spathelf at the School of Business FHNW from April to August 2021. The client and supervisor was Prof. Dr. Oliver Bendel. The voice assistant is supposed to show empathy and emotions towards astronauts on a Mars flight. Technically, it is based on Google Assistant and Dialogflow. The programmer chose a female voice with Canadian English. SPACE THEA’s personality includes functional and emotional intelligence, honesty, and creativity. She follows a moral principle: to maximize the benefit of the passengers of the spacecraft. The prototype was implemented for the following scenarios: conduct general conversations; help the user find a light switch; assist the astronaut when a thruster fails; greet and cheer up in the morning; fend off an insult for no reason; stand by a lonely astronaut; learn about the voice assistant. A video on the latter scenario is available here. Oliver Bendel has been researching conversational agents for 20 years. With his teams, he has developed 20 concepts and artifacts of machine ethics and social robotics since 2012.
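
Since SPACE THEA is based on Google Assistant and Dialogflow, the scenario logic can be pictured as a fulfillment webhook. The following Flask sketch follows the general Dialogflow ES request/response format; the intent name and the empathetic replies are invented for illustration:

```python
# Hypothetical fulfillment webhook for a Dialogflow ES agent (Flask).
# The intent name "ThrusterFailure" and the replies are invented; only
# the request/response JSON structure follows Dialogflow ES conventions.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)
    intent = req.get("queryResult", {}).get("intent", {}).get("displayName", "")
    if intent == "ThrusterFailure":
        reply = ("I can see this is stressful. The backup thruster is online. "
                 "Let us go through the checklist together, step by step.")
    else:
        reply = "I am here for you. Tell me what is on your mind."
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```

In the actual project, the scenarios were built with Dialogflow Essentials itself; a webhook like this is just one conceivable way to attach custom logic.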

Fig.: SPACE THEA wants to fly to Mars

Paper about HAPPY HEDGEHOG

Semi-autonomous machines, autonomous machines and robots inhabit closed, semi-closed and open environments, more structured environments like the household or more unstructured environments like cultural landscapes or the wilderness. There they encounter domestic animals, farm animals, working animals, and wild animals. These creatures could be disturbed, displaced, injured, or killed by the machines. Within the context of machine ethics and social robotics, the School of Business FHNW developed several design studies and prototypes for animal-friendly machines, which can be understood as moral and social machines in the spirit of these disciplines. In 2019-20, a team led by Prof. Dr. Oliver Bendel developed a prototype robot lawnmower that can recognize hedgehogs, interrupt its work for them and thus protect them. Every year many of these animals die worldwide because of traditional service robots. HAPPY HEDGEHOG (HHH), as the invention is called, could be a solution to this problem. This article begins by providing an introduction to the background. Then it focuses on the navigation of the machine (where it comes across certain objects that need to be recognized) and on its thermal imaging and image recognition (with the help of machine learning). It also presents obvious weaknesses and possible improvements. The results could be relevant for an industry that wants to market their products as animal-friendly machines. The paper „The HAPPY HEDGEHOG Project“ is available here.

Fig.: A happy hedgehog

Extended Care Robots

The symposium „Applied AI in Healthcare: Safety, Community, and the Environment“ will be held within the AAAI Spring Symposia on March 22-23, 2021. One of the presentations is titled „Care Robots with Sexual Assistance Functions“. Author of the paper is Prof. Dr. Oliver Bendel. From the abstract: „Residents in retirement and nursing homes have sexual needs just like other people. However, the semi-public situation makes it difficult for them to satisfy these existential concerns. In addition, they may not be able to meet a suitable partner or find it difficult to have a relationship for mental or physical reasons. People who live or are cared for at home can also be affected by this problem. Perhaps they can host someone more easily and discreetly than the residents of a health facility, but some elderly and disabled people may be restricted in some ways. This article examines the opportunities and risks that arise with regard to care robots with sexual assistance functions. First of all, it deals with sexual well-being. Then it presents robotic systems ranging from sex robots to care robots. Finally, the focus is on care robots, with the author exploring technical and design issues. A brief ethical discussion completes the article. The result is that care robots with sexual assistance functions could be an enrichment of the everyday life of people in need of care, but that we also have to consider some technical, design and moral aspects.“ More information about the AAAI Spring Symposia is available at aaai.org/Symposia/Spring/sss21.php.

Fig.: The fingers of a robot

The HAPPY HEDGEHOG Project

The paper „The HAPPY HEDGEHOG Project“ by Prof. Dr. Oliver Bendel, Emanuel Graf and Kevin Bollier was accepted at the AAAI Spring Symposia 2021. The researchers will present it at the sub-conference „Machine Learning for Mobile Robot Navigation in the Wild“ at the end of March. The project was conducted at the School of Business FHNW between June 2019 and January 2020. Emanuel Graf, Kevin Bollier, Michel Beugger and Vay Lien Chang developed a prototype of a mowing robot in the context of machine ethics and social robotics, which stops its work as soon as it detects a hedgehog. HHH has a thermal imaging camera. When it encounters a warm object, it uses image recognition to investigate it further. At night, a lamp mounted on top helps. After training with hundreds of photos, HHH can quite accurately identify a hedgehog. With this artifact, the team provides a solution to a problem that frequently occurs in practice. Commercial robotic mowers repeatedly kill young hedgehogs in the dark. HAPPY HEDGEHOG could help to save them. The video in the corresponding section of this website shows it without disguise. The robot is in the tradition of LADYBIRD.
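
The two-stage detection described above (a thermal trigger followed by image recognition) can be pictured roughly as follows. This is a hypothetical sketch; the threshold, the placeholder sensor functions, and the control loop are invented and do not reproduce the actual HHH software:

```python
# Hypothetical sketch of the two-stage detection: a thermal camera flags
# warm objects, then an image classifier decides whether the object is a
# hedgehog. All functions below are placeholders with dummy data.
import random
import time

WARM_THRESHOLD_C = 25.0  # assumption: a hedgehog is warmer than a lawn at night

def read_thermal_max() -> float:
    """Hottest pixel (in °C) from the thermal camera; placeholder data."""
    return random.uniform(10.0, 35.0)

def hedgehog_in_frame() -> bool:
    """Trained image classifier applied to the current frame; placeholder."""
    return random.random() < 0.5

def control_step() -> str:
    """One step of the mower's control loop."""
    if read_thermal_max() > WARM_THRESHOLD_C and hedgehog_in_frame():
        return "pause"  # stop the blades and wait for the animal to move on
    return "mow"

for _ in range(5):
    print(control_step())
    time.sleep(0.1)
```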

Fig.: A happy hedgehog

International Workshop on Trustworthy Conversational Agents

In the fall of 2021, a five-day workshop on trustworthy conversational agents will be held at Schloss Dagstuhl. Prof. Dr. Oliver Bendel is among the invited participants. According to the website, Schloss Dagstuhl – Leibniz Center for Informatics pursues its mission of furthering world class research in computer science by facilitating communication and interaction between researchers. Oliver Bendel and his teams have developed several chatbots like GOODBOT, LIEBOT and BESTBOT in the context of machine ethics since 2013, which were presented at conferences at Stanford University and Jagiellonian University and received international attention. Since the beginning of 2020, he has been preparing to develop several voice assistants that can show empathy and emotion. „Schloss Dagstuhl was founded in 1990 and quickly became established as one of the world’s premier meeting centers for informatics research. Since the very first days of Schloss Dagstuhl, the seminar and workshop meeting program has always been the focus of its programmatic work. In recent years, Schloss Dagstuhl has expanded its operation and also has significant efforts underway in bibliographic services … and in open access publishing.“ (Website Schloss Dagstuhl)

Fig.: Is this voicebot trustworthy?

The Morality Menu Project

From 18 to 21 August 2020, the Robophilosophy conference took place. Due to the pandemic, participants could not meet in Aarhus as originally planned, but only in virtual space. Nevertheless, the conference was a complete success. At the end of the year, the conference proceedings were published by IOS Press, including the paper „The Morality Menu Project“ by Oliver Bendel. From the abstract: „The discipline of machine ethics examines, designs and produces moral machines. The artificial morality is usually pre-programmed by a manufacturer or developer. However, another approach is the more flexible morality menu (MOME). With this, owners or users replicate their own moral preferences onto a machine. A team at the FHNW implemented a MOME for MOBO (a chatbot) in 2019/2020. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project and discusses advantages and disadvantages of such an approach. It turns out that a morality menu could be a valuable extension for certain moral machines.“ The book can be ordered on the publisher’s website. An author’s copy is available here.
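
How a morality menu can map user preferences onto a machine's behavior can be pictured as follows. This is a minimal sketch; the menu items and the dialog logic are invented and do not reproduce the actual MOBO-MOME settings:

```python
# Hypothetical morality menu: the owner toggles moral preferences, and
# the chatbot adapts its behavior accordingly (a "proxy morality"). The
# menu items are invented and do not reproduce the MOBO-MOME.
from dataclasses import dataclass

@dataclass
class MoralityMenu:
    always_disclose_bot: bool = True   # reveal machine status when asked
    use_polite_address: bool = True    # formal vs. casual greeting

def answer(menu: MoralityMenu, question: str) -> str:
    if question == "Are you a machine?":
        if menu.always_disclose_bot:
            return "Yes, I am a chatbot."
        return "Why do you ask?"  # evasive, only if disclosure is switched off
    return "Good day to you." if menu.use_polite_address else "Hey!"

print(answer(MoralityMenu(always_disclose_bot=False), "Are you a machine?"))
```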

Fig.: The book cover (photo: IOS Press)

Proceedings of Robophilosophy 2020

The book „Culturally Sustainable Social Robotics“ (eds. Marco Nørskov, Johanna Seibt, and Oliver Santiago Quick) was published in December 2020 by IOS Press. From the publisher’s information: „Robophilosophy conferences have been the world’s largest venues for humanities research in and on social robotics. The book at hand presents the proceedings of Robophilosophy Conference 2020: Culturally Sustainable Social Robotics, the fourth event in the international, biennial Robophilosophy Conference Series, which brought together close to 400 participants from 29 countries. The speakers of the conference, whose contributions are collected in this volume, were invited to offer concrete proposals for how the Humanities can help to shape a future where social robotics is guided by the goals of enhancing socio-cultural values rather than by utility alone. The book is divided into 3 parts; Abstracts of Plenaries, which contains 6 plenary sessions; Session Papers, with 44 papers under 8 thematic categories; and Workshops, containing 25 items on 5 selected topics.“ (Website IOS Press) Contributors include Robert Sparrow, Alan Winfield, Aimee van Wynsberghe, John Danaher, Johanna Seibt, Marco Nørskov, Peter Remmers, John P. Sullins, and Oliver Bendel.

Fig.: A photo from the 2018 conference (with Hiroshi Ishiguro)

An Empathic Voicebot

Social robots are robots that come close to animals and humans, interact and communicate with them. They reproduce characteristics of animals and humans in their behavior and appearance. They can be implemented both as hardware robots and as software robots. The SPACE THEA project was originally supposed to start in March 2020. Because of COVID-19 it had to be postponed. Now Prof. Dr. Oliver Bendel (School of Business FHNW) is beginning the preparatory work. The voicebot will then be programmed in winter 2020/2021 and spring 2021. SPACE THEA is designed to accompany astronauts to Mars and to show them empathy and emotions. In the best case, she should also be able to provide psychological counseling, for example, based on cases from the literature. The project will use findings from social robotics, but also from machine ethics. The results will be available by summer 2021.

Fig.: An empathic voicebot

A Markup Language for Moral Machines

In many cases it is important that an autonomous system acts and reacts adequately from a moral point of view. There are some artifacts of machine ethics, e.g., GOODBOT or LADYBIRD by Oliver Bendel or Nao as a care robot by Susan Leigh Anderson and Michael Anderson. But there is no standardization in the field of moral machines yet. The MOML project, initiated by Oliver Bendel, is trying to work in this direction. In the management summary of his bachelor thesis Simon Giller writes: „We present a literature review in the areas of machine ethics and markup languages which shaped the proposed morality markup language (MOML). To overcome the most substantial problem of varying moral concepts, MOML uses the idea of the morality menu. The menu lets humans define moral rules and transfer them to an autonomous system to create a proxy morality. Analysing MOML excerpts allowed us to develop an XML schema which we then tested in a test scenario. The outcome is an XML based morality markup language for autonomous agents. Future projects can use this language or extend it. Using the schema, anyone can write MOML documents and validate them. Finally, we discuss new opportunities, applications and concerns related to the use of MOML. Future work could develop a controlled vocabulary or an ontology defining terms and commands for MOML.“ The bachelor thesis will be publicly available in autumn 2020. It was supervised by Dr. Elzbieta Pustulka. There will also be a paper with the results next year.
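
The management summary leaves the concrete syntax open. The following is a purely hypothetical illustration of what an XML-based morality markup document and a simple well-formedness check might look like; every element and attribute name is invented and is not taken from Giller's schema:

```python
# Purely hypothetical MOML-style document: every element and attribute
# name is invented for illustration and is not taken from Giller's schema.
import xml.etree.ElementTree as ET

moml_doc = """
<moml version="0.1">
  <agent id="household-robot-01">
    <rule id="r1" priority="1">
      <condition>animal_detected</condition>
      <action>pause_work</action>
    </rule>
    <rule id="r2" priority="2">
      <condition>user_asks_identity</condition>
      <action>disclose_machine_status</action>
    </rule>
  </agent>
</moml>
"""

# Parse to confirm the document is well-formed XML; validating against an
# XSD schema would additionally require a library such as lxml.
root = ET.fromstring(moml_doc)
for rule in root.iter("rule"):
    print(rule.get("id"), rule.findtext("condition"), "->", rule.findtext("action"))
```

A controlled vocabulary or ontology, as suggested in the summary, would then pin down which condition and action terms are permitted.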

Fig.: Test scenario (Illustration: Simon Giller)

Listen to SPACE THEA

Space travel includes travel and transport to, through and from space for civil or military purposes. The take-off on earth is usually done with a launch vehicle. The spaceship, like the lander, is manned or unmanned. The target can be the orbit of a celestial body, a satellite, planet or comet. Man has been to the moon several times; now man wants to go to Mars. The astronaut will not greet the robots that are already there as if he or she had been lonely for months. For on the spaceship he or she had been in the best of company. SPACE THEA spoke to him or her every day. When she noticed that he or she had problems, she changed her tone of voice, the voice became softer and happier, and what she said gave the astronaut hope again. How SPACE THEA really sounds and what she should say is the subject of a research project that will start in spring 2020 at the School of Business FHNW. Under the supervision of Prof. Dr. Oliver Bendel, students will design a voicebot that shows empathy towards an astronaut. The scenario is a proposal that can also be rejected. Maybe in these times it is more important to have a virtual assistant for crises and catastrophes in case one is in isolation or quarantine. However, the project in the fields of social robotics and machine ethics is entitled THE EMPATHIC ASSISTANT IN SPACE (SPACE THEA). First results will be available by the end of 2021.

Fig.: In space