Be My AI is a GPT-4-based extension of the Be My Eyes app. Blind users take a photo of their surroundings or an object and then receive detailed descriptions, which are spoken in a synthesized voice. They can also ask further questions about details and contexts. Be My AI can be used in a variety of situations, including reading labels, translating text, setting up appliances, organizing clothing, and understanding the beauty of a landscape. It also offers written responses in 29 languages, making it accessible to a wider audience. While the app has its advantages, it’s not a replacement for essential mobility aids such as white canes or guide dogs. Users are encouraged to provide feedback to help improve the app as it continues to evolve. The app will become even more powerful when it starts to analyze videos instead of photos. This will allow the blind person to move through his or her environment and receive constant descriptions and assessments of moving objects and changing situations. More information is available at www.bemyeyes.com/blog/announcing-be-my-ai.
The article „Image Synthesis from an Ethical Perspective“ by Prof. Dr. Oliver Bendel was submitted on 18 April and accepted on 8 September 2023. It was published on 27 September 2023. From the abstract: „Generative AI has gained a lot of attention in society, business, and science. This trend has increased since 2018, and the big breakthrough came in 2022. In particular, AI-based text and image generators are now widely used. This raises a variety of ethical issues. The present paper first gives an introduction to generative AI and then to applied ethics in this context. Three specific image generators are presented: DALL-E 2, Stable Diffusion, and Midjourney. The author goes into technical details and basic principles, and compares their similarities and differences. This is followed by an ethical discussion. The paper addresses not only risks, but opportunities for generative AI. A summary with an outlook rounds off the article.“ The article was published in the long-established and renowned journal AI & Society and can be downloaded here.
Fig.: Are there biases in image generators? (Image: Ideogram)
The article „Image Synthesis from an Ethical Perspective“ by Prof. Dr. Oliver Bendel from Zurich has gone into production at Springer and will be published in a few weeks. From the abstract: „Generative AI has gained a lot of attention in society, business, and science. This trend has increased since 2018, and the big breakthrough came in 2022. In particular, AI-based text and image generators are now widely used. This raises a variety of ethical issues. The present paper first gives an introduction to generative AI and then to applied ethics in this context. Three specific image generators are presented: DALL-E 2, Stable Diffusion, and Midjourney. The author goes into technical details and basic principles, and compares their similarities and differences. This is followed by an ethical discussion. The paper addresses not only risks, but opportunities for generative AI. A summary with an outlook rounds off the article.“ The article will be published in the long-established and renowned journal AI & Society.
Fig.: The image of a woman generated with Ideogram
Prof. Dr. Oliver Bendel has been working on generative AI since 2019, first with regard to the dialogue systems of social robots, then with regard to text and image generation. In the volume „Maschinenliebe“ („Machine Love“), which he edited, one of his authors, Kino Coursey of Realbotix, discusses the use of language models in social robots such as Harmony. Further articles in this context followed from Oliver Bendel himself, such as „Die Mächtigkeit von Sprachmodellen: Anwendungsmöglichkeiten für Service- und Industrieroboter“ („The Power of Language Models: Possible Applications for Service and Industrial Robots“), published in spring 2023 in messtec drives Automation. In 2023, the philosopher of technology was a guest on the TV show „Scobel“ on this topic, together with Doris Weßels among others, and gave a talk at TU Darmstadt. Further papers and book chapters on text and image generators are planned for late 2023 and early 2024, among others with Kohlhammer and Schäffer-Poeschel and in AI & Society. The article „Image Synthesis from an Ethical Perspective“ has now gone into production at Springer. It had already been submitted to AI & Society in April 2023 and contains one of the few systematic examinations of image generators from an ethical perspective. The abstract is already available via this platform.
Fig.: Image of a superheroine generated with Ideogram
The idea of a Babel Fish comes from the legendary novel or series of novels „The Hitchhiker’s Guide to the Galaxy“. Douglas Adams alluded to the Tower of Babel. In 1997, Yahoo launched a web service for the automatic translation of texts under this name. Various attempts to implement the Babel Fish in hardware and software followed. Meta’s SeamlessM4T software can handle almost a hundred languages. In a blog post, the American company refers to the work of Douglas Adams. „M4T“ stands for „Massively Multilingual and Multimodal Machine Translation“. Again, it is a language model that makes spectacular things possible. It has been trained on four million hours of raw audio. A demo is available at seamless.metademolab.com/demo. The first step is to record a sentence. The sentence is displayed as text. Then select the language you want to translate into, for example Japanese. The sentence is displayed again in text form and, if desired, in spoken language. A synthetic voice is used. You can also use your own voice, but this is not yet integrated into the application. A paper by Meta AI and UC Berkeley can be downloaded here.
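The demo flow described above (record a sentence, transcribe it, translate it, then optionally synthesize speech) can be sketched as a simple speech-to-speech pipeline. The function names and the tiny lookup "model" below are hypothetical placeholders for illustration only, not Meta's actual SeamlessM4T API:

```python
from dataclasses import dataclass

@dataclass
class TranslationResult:
    source_text: str
    target_text: str
    target_lang: str

def transcribe(audio: bytes) -> str:
    # Placeholder ASR step: a real system would run speech recognition here.
    return "Where is the station?"

def translate(text: str, tgt_lang: str) -> str:
    # Placeholder MT step backed by a toy lookup table instead of a model.
    toy = {("Where is the station?", "jpn"): "Eki wa doko desu ka?"}
    return toy.get((text, tgt_lang), text)

def speech_to_speech(audio: bytes, tgt_lang: str) -> TranslationResult:
    text = transcribe(audio)                 # 1. record and transcribe
    translated = translate(text, tgt_lang)   # 2. translate into the target language
    # 3. A real pipeline would now synthesize `translated` with a TTS voice.
    return TranslationResult(text, translated, tgt_lang)
```

The structure mirrors the three steps of the demo; in the real system, a single multimodal model handles these stages jointly rather than as separate components.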
On 5 June 2023, Prof. Dr. Oliver Bendel will give a talk on „Care robots from an ethical perspective“ at the Institute of Ethics, History and Humanities (iEH2) of the University of Geneva. The event will take place in room A04.2910 (CMU). Care and therapy robots can be understood as service robots and in many cases also as social robots. In the talk, the goals, tasks, and characteristics of these robots will be clarified and, based on this, considerations will be made from the perspective of ethics. In the end, it should become clear which types of robots and prototypes or products exist in the healthcare sector, what purposes they serve, what functions they assume, and what implications and consequences this has for individuals and society. Care robots may contribute to personal autonomy while weakening informational autonomy. Therapy robots may enhance personal performance and satisfaction, but in individual cases they may also violate human dignity. It is important to design service robots and social robots in the healthcare sector in such a way that they meet as many requirements and needs as possible and are useful tools for caregivers and those in need of care. Disciplines such as machine ethics can help in this regard. Prof. Dr. Oliver Bendel is the editor of several relevant standard works, including „Pflegeroboter“ („Care Robots“, 2018) and „Soziale Roboter“ („Social Robots“, 2021). He has also advised the German Bundestag on this topic. More information via www.unige.ch/medecine/ieh2/fr/la-une/prochain-colloque-ieh2/.
Fig.: Bendel as scientific director of the 23rd Berlin Colloquium (Photo: Daimler und Benz Stiftung)
The Korean company Samsung Electronics announced new updates to its voice assistant Bixby that are designed to improve user experience, performance, and capabilities of the intelligent assistant and platform. One of the most interesting innovations concerns the voice of the users. According to Samsung, they „can personalize their Bixby Text Call voice“. „Using the new Bixby Custom Voice Creator, users can record different sentences for Bixby to analyze and create an AI generated copy of their voice and tone. Currently available in Korean, this generated voice is planned to be compatible with other Samsung apps beyond phone calls“ (Samsung, 22 February 2023). As early as 2017, Oliver Bendel wrote with respect to Adobe VoCo: „Today, just a few minutes of samples are enough to be able to imitate a speaker convincingly in all kinds of statements.“ In his article „The synthetization of human voices“, published in AI & Society, he also made ethical considerations. Now there seems to be a recognized market for such applications and they are being rolled out more widely.
Why is your baby crying? And what if artificial intelligence (AI) could answer that question for you? „If there was a flat little orb the size of a dessert plate that could tell you exactly what your baby needs in that moment? That’s what Q-bear is trying to do.“ (Mashable, January 3, 2023) That’s what tech magazine Mashable wrote in a recent article. At CES 2023, the Taiwanese company qbaby.ai demonstrated its AI-powered tool, which aims to help parents respond to their babies’ needs in a more targeted way. „The soft silicone-covered device, which can be fitted in a crib or stroller, uses Q-bear’s patented tech to analyze a baby’s cries to determine one of four needs from its ‚discomfort index‘: hunger, a dirty diaper, sleepiness, and need for comfort. Q-bear’s translation comes within 10 seconds of a baby crying, and the company says it will become more accurate the more you use the device.“ (Mashable, January 3, 2023) Whether the tool really works remains to be seen – presumably, baby cries can be interpreted more easily than animal languages. Perhaps the use of the tool is ultimately counterproductive because parents forget to trust their own intuition. The article „CES 2023: The device that tells you why your baby is crying“ can be accessed via mashable.com/article/ces-2023-why-is-my-baby-crying.
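The four-class decision described above can be illustrated with a toy classifier. The acoustic features and thresholds below are invented for illustration; the real product relies on proprietary („patented“) analysis:

```python
# Toy sketch of a four-class cry classifier like the one Q-bear describes.
# The features (pitch, burst rate) and thresholds are hypothetical.
NEEDS = ["hunger", "dirty diaper", "sleepiness", "need for comfort"]

def classify_cry(pitch_hz: float, burst_rate: float) -> str:
    # Hypothetical decision rules over two made-up acoustic features.
    if pitch_hz > 500 and burst_rate > 2.0:
        return "hunger"
    if pitch_hz > 500:
        return "need for comfort"
    if burst_rate > 2.0:
        return "dirty diaper"
    return "sleepiness"
```

A real system would of course learn such a mapping from labeled recordings rather than use hand-set thresholds.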
The U.S. civil liberties organization Electronic Frontier Foundation has launched a petition titled „Don’t Scan Our Phones“. The background is Apple’s plan to search users’ phones for photos that show child abuse or are the result of child abuse. In doing so, the company is going even further than Microsoft, which scours the cloud for such material. On its website, the organization writes: „Apple has abandoned its once-famous commitment to security and privacy. The next version of iOS will contain software that scans users’ photos and messages. Under pressure from U.S. law enforcement, Apple has put a backdoor into their encryption system. Sign our petition and tell Apple to stop its plan to scan our phones. Users need to speak up say this violation of our privacy is wrong.“ (Website Electronic Frontier Foundation) More information via act.eff.org/action/tell-apple-don-t-scan-our-phones.
The symposium „Applied AI in Healthcare: Safety, Community, and the Environment“ will be held within the AAAI Spring Symposia on March 22-23, 2021. One of the presentations is titled „Care Robots with Sexual Assistance Functions“. Author of the paper is Prof. Dr. Oliver Bendel. From the abstract: „Residents in retirement and nursing homes have sexual needs just like other people. However, the semi-public situation makes it difficult for them to satisfy these existential concerns. In addition, they may not be able to meet a suitable partner or find it difficult to have a relationship for mental or physical reasons. People who live or are cared for at home can also be affected by this problem. Perhaps they can host someone more easily and discreetly than the residents of a health facility, but some elderly and disabled people may be restricted in some ways. This article examines the opportunities and risks that arise with regard to care robots with sexual assistance functions. First of all, it deals with sexual well-being. Then it presents robotic systems ranging from sex robots to care robots. Finally, the focus is on care robots, with the author exploring technical and design issues. A brief ethical discussion completes the article. The result is that care robots with sexual assistance functions could be an enrichment of the everyday life of people in need of care, but that we also have to consider some technical, design and moral aspects.“ More information about the AAAI Spring Symposia is available at aaai.org/Symposia/Spring/sss21.php.
Amazon’s Alexa can now perform actions on its own, based on previous instructions from the user, without asking beforehand. Until now, the voicebot always asked before it did anything. Amazon calls this new feature „Hunches“. On its website, the company writes: „Managing your home’s energy usage is easier than ever, with the Alexa energy dashboard. It works with a variety of smart lights, plugs, switches, water heaters, thermostats, TVs and Echo devices. Once you connect your devices to Alexa, you can start tracking the energy they use, right in the Alexa app. Plus, try an exciting new Hunches feature that can help you save energy without even thinking about it. Now, if Alexa has a hunch that you forgot to turn off a light and no one is home or everyone went to bed, Alexa can automatically turn it off for you. It’s a smart and convenient way to help your home be kinder to the world around it. Every device, every home, and every day counts. Let’s make a difference, together. Amazon is committed to building a sustainable business for our customers and the planet.“ (Website Amazon) It will be interesting to see how often Alexa is right with her hunches and how often she is wrong.
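The light-switching rule Amazon describes („if Alexa has a hunch that you forgot to turn off a light and no one is home or everyone went to bed“) boils down to a simple condition. The following one-function sketch mirrors that description; it is an illustration, not Amazon's actual logic:

```python
def hunch_should_turn_off(light_on: bool, anyone_home: bool, everyone_asleep: bool) -> bool:
    """Toy rule mirroring the described Hunches behavior: turn a forgotten
    light off when nobody is home or everyone has gone to bed."""
    return light_on and (not anyone_home or everyone_asleep)
```

The interesting engineering problem, of course, is not this rule but reliably inferring the inputs (is anyone home? is everyone asleep?) from sensor and usage data.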
The „Reclaim Your Face“ alliance, which calls for a ban on biometric facial recognition in public space, has been registered as an official European Citizens’ Initiative. One of the goals is to establish transparency: „Facial recognition is being used across Europe in secretive and discriminatory ways. What tools are being used? Is there evidence that it’s really needed? What is it motivated by?“ (Website RYF) Another one is to draw red lines: „Some uses of biometrics are just too harmful: unfair treatment based on how we look, no right to express ourselves freely, being treated as a potential criminal suspect.“ (Website RYF) Finally, the initiative demands respect for humans: „Biometric mass surveillance is designed to manipulate our behaviour and control what we do. The general public are being used as experimental test subjects. We demand respect for our free will and free choices.“ (Website RYF) In recent years, the use of facial recognition techniques has been the subject of critical reflection, such as in the paper „The Uncanny Return of Physiognomy“ presented at the 2018 AAAI Spring Symposia or in the chapter „Some Ethical and Legal Issues of FRT“ published in the book „Face Recognition Technology“ in 2020. More information at reclaimyourface.eu.
Springer launches a new journal entitled „AI and Ethics“. This topic has been researched for several years from various perspectives, including information ethics, robot ethics (or roboethics) and machine ethics. From the description: „AI and Ethics seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI. It will focus on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. The journal will provide opportunities for academics, scientists, practitioners, policy makers, and the public to consider how AI might affect our lives in the future, and what implications, benefits, and risks might emerge. Attention will be given to the potential intentional and unintentional misuses of the research and technology presented in articles we publish. Examples of harmful consequences include weaponization, bias in face recognition systems, and discrimination and unfairness with respect to race and gender.“ (Springer Website) More information via www.springer.com/journal/43681.
CONVERSATIONS 2019 is a full-day workshop on chatbot research. It will take place on November 19, 2019 at the University of Amsterdam. From the description: „Chatbots are conversational agents which allow the user access to information and services through natural language dialogue, through text or voice. … Research is crucial in helping realize the potential of chatbots as a means of help and support, information and entertainment, social interaction and relationships. The CONVERSATIONS workshop contributes to this endeavour by providing a cross-disciplinary arena for knowledge exchange by researchers with an interest in chatbots.“ The topics of interest that may be explored in the papers and at the workshop include humanlike chatbots, networks of users and chatbots, trustworthy chatbot design, and privacy and ethical issues in chatbot design and implementation. The submission deadline for CONVERSATIONS 2019 was extended to September 10. More information via conversations2019.wordpress.com/.
Robophilosophy or robot philosophy is a field of philosophy that deals with robots (hardware and software robots) as well as with enhancement options such as artificial intelligence. It is not only about the practice and history of development, but also the history of ideas, starting with the works of Homer and Ovid up to science fiction books and movies. Disciplines such as epistemology, ontology, aesthetics and ethics, including information and machine ethics, are involved. The new platform robophilosophy.com was founded in July 2019 by Oliver Bendel. He invited several authors to write with him about robophilosophy, robot law, information ethics, machine ethics, robotics, and artificial intelligence. All of them have a relevant background. Oliver Bendel studied philosophy as well as information science and wrote his doctoral thesis on anthropomorphic software agents. He has been researching in the fields of information ethics and machine ethics for years.
„AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook’s AI lab and Carnegie Mellon University, has bested some of the world’s top players …“ (The Verge, 11 July 2019) According to the magazine, Pluribus was remarkably good at bluffing its opponents. The Wall Street Journal reported: „A new artificial intelligence program is so advanced at a key human skill – deception – that it wiped out five human poker players with one lousy hand.“ (Wall Street Journal, 11 July 2019) Of course you don’t have to equate bluffing with cheating – but in this context interesting scientific questions arise. At the conference „Machine Ethics and Machine Law“ in 2016 in Krakow, Ronald C. Arkin, Oliver Bendel, Jaap Hage, and Mojca Plesnicar discussed on the panel the question: „Should we develop robots that deceive?“ Ron Arkin (who is in military research) and Oliver Bendel (who is not) came to the conclusion that we should – but they had very different arguments. The ethicist from Zurich, inventor of the LIEBOT, advocates free, independent research in which problematic and deceptive machines are also developed, in favour of an important gain in knowledge – but is committed to regulating the areas of application (for example dating portals or military operations). Further information about Pluribus can be found in the paper itself, entitled „Superhuman AI for multiplayer poker“.
„In Germany, around four million people will be dependent on care and nursing in 2030. Already today there is talk of a nursing crisis, which is likely to intensify further in view of demographic developments in the coming years. Fewer and fewer young people will be available to the labour market as potential carers for the elderly. Experts estimate that there will be a shortage of around half a million nursing staff in Germany by 2030. Given these dramatic forecasts, are nursing robots possibly the solution to the problem? Scientists from the disciplines of computer science, robotics, medicine, nursing science, social psychology, and philosophy explored this question at a Berlin conference of the Daimler and Benz Foundation. The machine ethicist and conference leader Professor Oliver Bendel first of all stated that many people had completely wrong ideas about care robots: ‚In the media there are often pictures or illustrations that do not correspond to reality‘.“ (Die Welt, 14 June 2019) With these words an article in the German newspaper Die Welt begins. Norbert Lossau describes the Berlin Colloquium, which took place on 22 May 2019, in detail. The article is available in English and German. So are robots a solution to the nursing crisis? Oliver Bendel denies this. They can be useful for the caregiver and the patient. But they don’t solve the big problems.
Fig.: The Pepper robot at the Berlin Colloquium (photo: Daimler and Benz Foundation)
The article „The Synthetization of Human Voices“ by Oliver Bendel, first published on 26 July 2017, is now available as a print version. The synthetization of voices, or speech synthesis, has been an object of interest for centuries. It is mostly realized with a text-to-speech system (TTS), an automaton that interprets and reads aloud. This system refers to text available for instance on a website or in a book, or entered via popup menu on the website. Today, just a few minutes of samples are enough in order to be able to imitate a speaker convincingly in all kinds of statements. The article abstracts from actual products and actual technological realization. Rather, after a short historical outline of the synthetization of voices, exemplary applications of this kind of technology are gathered for promoting the development, and potential applications are discussed critically in order to be able to limit them if necessary. The ethical and legal challenges should not be underestimated, in particular with regard to informational and personal autonomy and the trustworthiness of media. The article was published in AI & SOCIETY, 34(1), 83-89.
In 2019, the Faculty of Philosophy and Educational Research at Ruhr-Universität Bochum will be awarding a W1 (W2) tenure-track professorship for Ethics of digital methods and technologies at the Institute of Philosophy I. „The future tenure-track professor will be representing the field of ethics in union with philosophy of current technology excellently in education and research. She/he has performed in different areas of digital ethics by pertinent publications and features excellent knowledge of digital methods and technologies. Ethical problems of digitisation are treated in close relation with epistemological and methodological analysis of current IT-developments. Close cooperation with nearby professorships in the humanities and social sciences and the emerging ‚Center for Computer Science‘ at RUB is expected.“ (Job advertisement) Applications and all relevant documents are to be submitted by e-mail by 31 January, 2019 to the Dean at the Faculty of Philosophy and Educational Research at Ruhr-Universität Bochum, Prof. Dr. Norbert Ricken. More information is available here.
Robots in the health sector are important, valuable innovations and supplements. As therapy and nursing robots, they take care of us and come close to us. In addition, other service robots are widespread in nursing and retirement homes and hospitals. With the help of their sensors, all of them are able to recognize us, to examine and classify us, and to evaluate our behavior and appearance. Some of these robots will pass on our personal data to humans and machines. They invade our privacy and challenge our informational autonomy. This is a problem for institutions and people that needs to be solved. The article “The Spy who Loved and Nursed Me: Robots and AI Systems in Healthcare from the Perspective of Information Ethics” by Oliver Bendel presents robot types in the health sector, along with their technical possibilities, including their sensors and their artificial intelligence capabilities. Against this background, moral problems are discussed, especially from the perspective of information ethics and with respect to privacy and informational autonomy. One of the results shows that such robots can improve personal autonomy, but informational autonomy is endangered in an area where privacy has a special importance. At the end, solutions are proposed from various disciplines and perspectives. The article was published in Telepolis on December 17, 2018 and can be accessed via www.heise.de/tp/features/The-Spy-who-Loved-and-Nursed-Me-4251919.html.
Service robots are becoming ever more pervasive in society-at-large. They are present in our apartments and our streets. They are found in hotels, hospitals, and care homes, in shopping malls, and on company grounds. Their spread gives rise to various challenges. Service robots consume energy, they take up space in ever more crowded cities, sometimes leading us to collide with them and stumble over them. They monitor us, they communicate with us and retain our secrets on their data drives. Moreover, they can be hacked, kidnapped and abused. The first section of the article „Service Robots from the Perspectives of Information and Machine Ethics“ by Oliver Bendel presents different types of service robots – like security, transport, therapy, and care robots – and discusses the moral implications that arise from their existence. Information ethics and machine ethics will form the basis for interrogating these moral implications. The second section discusses the draft for a patient declaration, by which people can determine whether and how they want to be treated and cared for by a robot. The article is part of the new book „Envisioning Robots in Society – Power, Politics, and Public Space“ that reproduces the talks of the Robophilosophy 2018 conference in Vienna (IOS Press, Amsterdam 2018).
The University of Potsdam dedicates its current research to voices. The scientists – among them Dr. Yuefang Zhou and Katharina Kühne – are studying the first impression during communication. The survey website says: „The current study will last approximately 20 minutes. You will be asked some questions about the voice you hear. Please answer them honestly and spontaneously. There are no right or wrong answers; we are interested in your subjective perception. Just choose one out of the suggested alternatives.“ Prof. Dr. Oliver Bendel, FHNW School of Business, produced three samples and donated them to the project. „Your responses will be treated confidentially and your anonymity will be ensured. Your responses cannot be identified and related to you as an individual, if you choose to leave your e-mail address at the end of the study this cannot be linked back to your responses. All responses will be compiled together and analysed as a group.“ The questionnaire can be accessed via www.soscisurvey.de/impress/ (link no longer valid).
Fig.: What is the first impression during communication?