Care Robots with Sexual Assistance Functions

The paper "Care Robots with Sexual Assistance Functions" by Oliver Bendel was accepted at the AAAI 2020 Spring Symposia. From the abstract: "Residents in retirement and nursing homes have sexual needs just like other people. However, the semi-public situation makes it difficult for them to satisfy these existential concerns. In addition, they may not be able to meet a suitable partner or find it difficult to have a relationship for mental or physical reasons. People who live or are cared for at home can also be affected by this problem. Perhaps they can host someone more easily and discreetly than the residents of a health facility, but some elderly and disabled people may be restricted in some ways. This article examines the opportunities and risks that arise with regard to care robots with sexual assistance functions. First of all, it deals with sexual well-being. Then it presents robotic systems ranging from sex robots to care robots. Finally, the focus is on care robots, with the author exploring technical and design issues. A brief ethical discussion completes the article. The result is that care robots with sexual assistance functions could be an enrichment of the everyday life of people in need of care, but that we also have to consider some technical, design and moral aspects." The paper was submitted to the symposium "Applied AI in Healthcare: Safety, Community, and the Environment". Oliver Bendel will present the paper at Stanford University between 23 and 25 March 2020.

Fig.: Temple figures

Co-Robots as Care Robots

The paper "Co-Robots as Care Robots" by Oliver Bendel, Alina Gasser and Joel Siebenmann was accepted at the AAAI 2020 Spring Symposia. From the abstract: "Cooperation and collaboration robots, co-robots or cobots for short, are an integral part of factories. For example, they work closely with the fitters in the automotive sector, and everyone does what they do best. However, the novel robots are not only relevant in production and logistics, but also in the service sector, especially where proximity between them and the users is desired or unavoidable. For decades, individual solutions of a very different kind have been developed in care. Now experts are increasingly relying on co-robots and teaching them the special tasks that are involved in care or therapy. This article presents the advantages, but also the disadvantages of co-robots in care and support, and provides information with regard to human-robot interaction and communication. The article is based on a model that has already been tested in various nursing and retirement homes, namely Lio from F&P Robotics, and uses results from accompanying studies. The authors can show that co-robots are ideal for care and support in many ways. Of course, it is also important to consider a few points in order to guarantee functionality and acceptance." The paper was submitted to the symposium "Applied AI in Healthcare: Safety, Community, and the Environment". Oliver Bendel will present the results at Stanford University between 23 and 25 March 2020.

Fig.: At Stanford University

Solutions to the Social Problems of Automation

Automation is advancing relentlessly. Digitization became its partner decades ago. In industry, innovative robots, for example co-robots, are used. Service robots are beginning to spread in various areas. Artificial intelligence systems perform tasks of all kinds, even creative activities. Studies on the development of the labor market reach different conclusions. In any case, it can be said that certain jobs will disappear and many people will have to do without their familiar work. It can also be assumed that in many areas less human work will have to be performed on behalf of others (e.g., customers and employers). An unconditional basic income and a robot tax are suggested as possible solutions to the resulting economic and social problems. The paper "Are Robot Tax, Basic Income or Basic Property Solutions to the Social Problems of Automation?" by Oliver Bendel presents, discusses and criticizes these approaches in the context of automation and digitization. Moreover, it develops a relatively unknown proposal, unconditional basic property, and presents its potentials as well as its risks. Oliver Bendel will introduce his ideas at the AAAI spring symposium "Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness" at Stanford University in late March 2019. More information via aaai.org/Symposia/Spring/sss19.php.

Fig.: At Stanford University

On Basic Property at Stanford University

Studies on the development of the labor market from 2020 onward reach different conclusions. In any case, it can be said that certain jobs will disappear and many people will lose their familiar work. It can also be assumed that, overall, less human work will have to be performed on behalf of others (e.g., for employers). An unconditional basic income and a robot tax are often mentioned as solutions to the resulting economic and social problems. The paper "Are Robot Tax, Basic Income or Basic Property Solutions to the Social Problems of Automation?" by Oliver Bendel discusses and criticizes these approaches in the context of automation and digitization. In addition, it further develops a largely unknown proposal, unconditional basic property, and presents its potentials as well as its risks. The information and machine ethicist from Zurich will present his results at the AAAI Spring Symposia between 25 and 27 March 2019. It is his sixth talk at Stanford University within four years.

Fig.: At Stanford University

Successful Completion of the BESTBOT Project

The BESTBOT was developed at the School of Business FHNW from March to August 2018. Its predecessor projects were the GOODBOT (2013) and the LIEBOT (2016). Prof. Dr. Oliver Bendel has been doing research in the young discipline of machine ethics for several years. In cooperation with robotics and artificial intelligence (AI), this discipline designs and produces moral machines. At the beginning of 2018, Bendel presented his paper "From GOODBOT to BESTBOT" at Stanford University, which laid the foundation for the BESTBOT project. David Studer programmed the chatbot in Java. Prof. Dr. Bradley Richards assisted him in technical matters. Like the LIEBOT, the BESTBOT is a networked system that exploits search engines and dictionaries. It analyzes the user's text input with text-based emotion recognition software. At the same time, face recognition is used, again with emotion recognition. For example, if users state that they are doing well but their face reveals something else, the chatbot addresses this contradiction. It recognizes both small and big worries. Like the GOODBOT, the BESTBOT can escalate over several levels and provide a suitable emergency number. Like its predecessor, it makes clear that it is only a machine. Another special feature is that it cites the source of factual claims. The BESTBOT will be presented at conferences in 2019.
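
The cross-check between stated well-being and facial expression can be pictured in a few lines of code. The following Java sketch is purely illustrative: all class names, emotion labels and thresholds are invented here and do not reproduce David Studer's actual implementation.

```java
import java.util.Locale;

/**
 * Minimal sketch of a BESTBOT-style cross-check between text-based and
 * facial emotion recognition. All names, labels and thresholds are invented
 * for illustration and do not reproduce the actual Java implementation.
 */
public class EmotionCrossCheck {

    enum Emotion { HAPPY, SAD, ANGRY, NEUTRAL }

    /** Output of the text-based emotion recognition for one user input. */
    record TextReading(Emotion emotion, double confidence) {}

    /** Output of the camera-based facial emotion recognition. */
    record FaceReading(Emotion emotion, double confidence) {}

    /** Addresses a contradiction between what users write and what their face shows. */
    static String respond(TextReading text, FaceReading face) {
        boolean bothConfident = text.confidence() > 0.7 && face.confidence() > 0.7;
        if (bothConfident && text.emotion() != face.emotion()) {
            // e.g. the user writes "I am fine" but looks sad
            return "You say you are doing well, but you look "
                    + face.emotion().name().toLowerCase(Locale.ROOT)
                    + ". Would you like to talk about it?";
        }
        if (text.emotion() == Emotion.SAD || face.emotion() == Emotion.SAD) {
            // an escalation over several levels, up to an emergency number, would start here
            return "I am sorry to hear that. Do you want to tell me more?";
        }
        return "Glad to hear that!";
    }

    public static void main(String[] args) {
        System.out.println(respond(new TextReading(Emotion.HAPPY, 0.9),
                                   new FaceReading(Emotion.SAD, 0.8)));
    }
}
```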

Fig.: The chatbot uses face recognition

The Chatbot that Uses Face Recognition

Machine ethics researches the morality of semi-autonomous and autonomous machines. The School of Business at the University of Applied Sciences and Arts Northwestern Switzerland FHNW realized a project for the implementation of a prototype called GOODBOT, a novel chatbot and a simple moral machine. One of its meta rules was that it should not lie unless not lying would hurt the user. It was a stand-alone solution, not linked with other systems and not internet- or web-based. In the LIEBOT project, the mentioned meta rule was reversed. This web-based chatbot, implemented in 2016, could lie systematically. It was an example of a simple immoral machine. A follow-up project in 2018 is going to develop the BESTBOT, considering the restrictions of the GOODBOT and the opportunities of the LIEBOT. The aim is to develop a machine that can detect problems of users of all kinds and react in an adequate way. To achieve this, it will use approaches of face recognition. The paper "From GOODBOT to BESTBOT" describes the preconditions and findings of the GOODBOT project and the results of the LIEBOT project and outlines the subsequent BESTBOT project. A reflection from the perspective of information ethics is included. Oliver Bendel presented his paper on 27 March 2018 at Stanford University ("AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents", AAAI 2018 Spring Symposium Series). The entire volume can be downloaded via AAAI.
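
The meta rule and its reversal lend themselves to a compact illustration. The following Java sketch is a toy model with invented method names and placeholder answers; it mirrors only the logic described above, not the real GOODBOT or LIEBOT code.

```java
/**
 * Toy illustration of the GOODBOT meta rule ("do not lie unless not lying
 * would hurt the user") and of its reversal in the LIEBOT.
 * All method names and answers are invented for this sketch.
 */
public class MetaRuleSketch {

    /** Placeholder for a knowledge base lookup. */
    static String truthfulAnswer(String question) {
        return "the truth about: " + question;
    }

    /** Placeholder for a protective white lie. */
    static String whiteLie(String question) {
        return "a gentle evasion of: " + question;
    }

    /** Placeholder for a systematic manipulation of the truth. */
    static String systematicLie(String question) {
        return "the opposite of the truth about: " + question;
    }

    /** GOODBOT: truthful by default; a white lie only if the truth would hurt the user. */
    static String goodbot(String question, boolean truthWouldHurtUser) {
        return truthWouldHurtUser ? whiteLie(question) : truthfulAnswer(question);
    }

    /** LIEBOT: the meta rule reversed, lying systematically. */
    static String liebot(String question) {
        return systematicLie(question);
    }

    public static void main(String[] args) {
        System.out.println(goodbot("How serious is my condition?", true));
        System.out.println(liebot("What is the capital of Switzerland?"));
    }
}
```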

Fig.: The BESTBOT

AI and Society III

The "AAAI 2018 Spring Symposium on AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents" at Stanford University entered its second round (or its third, if the half-days are counted) on 27 March 2018 at 9:15 a.m. The invited talk was given by Emma Brunskill, Stanford University, on "Beyond Expected Reward: Considerations for Reinforcement Learning in High Stakes Domains". She raised the question of how an agent can learn from experience to make good decisions when experience is costly or risky. The session "Applications and Interactive Agents" started with "Interactive Agent that Understands the User", in which Piotr Gmytrasiewicz presented an assistive AI system called MARTHA. Emanuelle Burton advanced to "The Heart of the Matter" and asked about "Patient Autonomy as a Model for the Wellbeing of Technology Users" (the subtitle of her talk). She presented a case study entitled "Joe the heart patient". In "The Uncanny Return of Physiognomy", Oliver Bendel criticized recent face recognition methods that are supposed to determine, for example, sexual orientation or criminal tendencies, and assigned them to physiognomy, the pseudoscience of Lavater and company, as well as to biometrics in its dubious form (in the tradition of Petrus Camper).

Fig.: Perfect weather again on the second day at Stanford

AI and Society I

The symposium "AAAI 2018 Spring Symposium on AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents" at Stanford University began on 26 March 2018 at 9 a.m. The invited talk was given by Vince Conitzer, Duke University, on "Moral Artificial Intelligence, Kidney Exchanges, and Societal Tradeoffs". He discussed the parking game in detail. He then presented MIT's Moral Machine and went into a kidney exchange algorithm and "underdemanded pairs". From 11:00 a.m. to 12:30 p.m., the session "Moral Decision Making" took place. In "Towards Provably Moral AI Agents in Bottom-up Learning Frameworks", Nolan P. Shaw and Ryan W. Orr demanded robustness, simplicity, consistency and universality from their moral machines, which are to learn from moral agents of all kinds via neural networks. In "A Military AI System with Ethical Rules", Yetian Wang brought together knowledge representation and machine learning. The last talk before lunch, which was taken in the nearby cafés in picture-book weather, was "Preferences and Ethical Principles in Decision Making". Kristen Brent Venable's topic was CP-nets.

Fig.: The landmark of Stanford University

Programme of AAAI Spring Symposium

The tentative schedule of the AAAI 2018 Spring Symposium on AI and Society at Stanford University (26-28 March 2018) has been published. On Tuesday, Emma Brunskill from Stanford University, Philip C. Jackson ("Toward Beneficial Human-Level AI … and Beyond") and Andrew Williams ("The Potential Social Impact of the Artificial Intelligence Divide") will give lectures. Oliver Bendel will give two talks, one on "The Uncanny Return of Physiognomy" and one on "From GOODBOT to BESTBOT". From the description on the website: "Artificial Intelligence has become a major player in today's society and that has inevitably generated a proliferation of thoughts and sentiments on several of the related issues. Many, for example, have felt the need to voice, in different ways and through different channels, their concerns on: possible undesirable outcomes caused by artificial agents, the morality of their use in specific sectors, such as the military, and the impact they will have on the labor market. The goal of this symposium is to gather a diverse group of researchers from many disciplines and to ignite a scientific discussion on this topic." (AAAI website)

Fig.: On the campus of Stanford University

The Uncanny Return of Physiognomy

Face recognition is the automated recognition of a face or the automated identification, measuring and description of features of a face. In the 21st century, attempts are increasingly made to link it to the pseudoscience of physiognomy, which has its origins in ancient times. From the appearance of persons, conclusions are drawn about their inner self, and attempts are made to identify character traits, personality traits and temperament, or political and sexual orientation. Biometrics plays a role in this concept. It was founded in the eighteenth century, when physiognomy, under the lead of Johann Caspar Lavater, had its dubious climax. In the paper "The Uncanny Return of Physiognomy", the basic principles of this topic are elaborated; selected projects from research and practice are presented and, from an ethical perspective, the possibilities of face recognition in this context are subjected to a fundamental critique, including the examples above. Oliver Bendel will present his paper in March 2018 at Stanford University ("AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents", AAAI 2018 Spring Symposium Series).
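
For contrast, the purely technical part, locating and measuring a face, takes only a few library calls. The sketch below assumes OpenCV's Java bindings and placeholder file names; it performs mere face detection and yields nothing but geometry, none of the character traits physiognomy claims to read.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.objdetect.CascadeClassifier;

/**
 * Minimal face *detection* sketch with OpenCV's Java bindings, the purely
 * technical first step that recognition systems build on.
 * The cascade file and image path are placeholders.
 */
public class FaceDetectSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);  // requires the OpenCV native library
        CascadeClassifier detector =
                new CascadeClassifier("haarcascade_frontalface_default.xml");
        Mat image = Imgcodecs.imread("portrait.jpg");
        MatOfRect faces = new MatOfRect();
        detector.detectMultiScale(image, faces);       // locate face bounding boxes
        for (Rect face : faces.toArray()) {
            System.out.printf("Face at (%d, %d), size %dx%d%n",
                    face.x, face.y, face.width, face.height);
        }
        // Reading character or orientation from such geometry, as physiognomy
        // claims to do, is not supported by this data.
    }
}
```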

Fig.: The uncanny return of physiognomy

Killing with Robots?

"May the police kill with robots?" This is the title of an article published on Spiegel Online on 9 July 2016. "After a robot killed the suspected sniper of Dallas, uncomfortable questions are being asked: How and why was the decision made? Does the use of remote-controlled machines in police work usher in a new era? Is the use of killing machines ethically justifiable, and do the USA want that?" (SPON, 9 July 2016) Among those quoted is Peter Asaro, who contributed to the symposium on machine ethics at Stanford in March 2016 and who has also made a name for himself in robot ethics, the discipline that asks about the consequences of the use of automatic and autonomous machines. He told Bloomberg: "Once this becomes a standard procedure, it will also be used in other situations that are not as clear-cut as this case." (SPON, 9 July 2016) The legal scholar Stoughton pointed out to the same news service that, whether with firearms or tasers, the question of appropriate use has arisen every time. "I think we will have similar conversations about robots that bring death." (SPON, 9 July 2016)

Fig.: Who holds the weapon?

Conference on Machine Ethics

At the workshop "Ethical and Moral Considerations in Non-Human Agents", held at Stanford University from 21 to 23 March 2016 as part of the AAAI Spring Symposia, Ron Arkin (Georgia Institute of Technology, USA) explained in his keynote "Robots that Need to Mislead: Biologically Inspired Machine Deception" how animals deceive and how machines can be taught to deceive others. He also addressed the possibility of robots lying and asked: "Should a robot be allowed to lie?" Elizabeth Kinne and Georgi Stojanov addressed the opportunities and risks of combat drones in "Grounding Drones' Ethical Use Reasoning". They presented scenarios to illustrate the problems involved in their use; those responsible could be tempted to leave more and more delicate decisions to the robots. The talk "Metaethics in context of engineering ethical and moral systems" by Michal Klincewicz and Lily Frank, right after the coffee break, asked about the fundamental possibilities and impossibilities of moral systems. Oliver Bendel discussed the LIEBOT project: at the School of Business FHNW, a chatbot that consistently tells untruths is being programmed in 2016 as part of a bachelor's thesis. Above all, however, he presented an annotated decision tree for a simple moral machine, an animal-friendly robot car. The last talk before lunch was "The artificial autonomous moral agents (AAMA): a minimalist model", in which Ioan Muntean proposed a virtue ethics approach developed together with Don Howard. Until Wednesday, the participants exchanged views on machine ethics and robot ethics. The final keynote ("Programming Machine Ethics") was given by Amit Kumar Pandey of Aldebaran Robotics (SoftBank Group). The proceedings volume "The 2016 AAAI Spring Symposium Series: Technical Reports SS-16-01 – SS-16-07" is available via aaai.org/proceeding/04-spring-2016/.
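
Such an annotated decision tree can be understood as a small set of branching rules. The following Java sketch uses invented inputs and thresholds; the tree presented in the talk is richer and annotates each branch with a moral justification.

```java
/**
 * Minimal sketch of a decision tree for an animal-friendly robot car,
 * loosely following the idea presented by Bendel. The inputs, branches
 * and their ordering are invented for illustration.
 */
public class AnimalFriendlyCar {

    enum Action { BRAKE, SWERVE, CONTINUE }

    /**
     * Decides how to react to an animal on the road. In the original tree,
     * each branch carries an annotation with its moral justification.
     */
    static Action decide(boolean animalDetected, boolean brakingIsSafe,
                         boolean isLargeAnimal) {
        if (!animalDetected) {
            return Action.CONTINUE;   // nothing to react to
        }
        if (brakingIsSafe) {
            return Action.BRAKE;      // spare the animal at no risk to humans
        }
        if (isLargeAnimal) {
            return Action.SWERVE;     // a large animal also endangers the passengers
        }
        return Action.CONTINUE;       // annotation: human safety outweighs here
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true, false));   // BRAKE
        System.out.println(decide(true, false, false));  // CONTINUE
    }
}
```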

Fig.: A security robot in the shopping mall at Stanford