GPT-4 was launched by OpenAI on March 14, 2023. "GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks." (Website OpenAI) On its website, the company explains the multimodal options in more detail: "GPT-4 can accept a prompt of text and images, which – parallel to the text-only setting – lets the user specify any vision or language task. Specifically, it generates text outputs (natural language, code, etc.) given inputs consisting of interspersed text and images." (Website OpenAI) The example OpenAI gives is impressive. An image with multiple panels was uploaded, together with the prompt: "What is funny about this image? Describe it panel by panel". This is exactly what GPT-4 does, before coming to the conclusion: "The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port." (Website OpenAI) The technical report is available via cdn.openai.com/papers/gpt-4.pdf.
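The interspersed text-and-image prompting described above can be sketched as a request payload. This is a minimal sketch assuming the chat-style API OpenAI offers for GPT-4 with vision; the model name, image URL, and helper function are illustrative assumptions, not taken from the post.

```python
import json

def build_multimodal_prompt(question: str, image_url: str) -> dict:
    """Assemble a request body that mixes text and an image in one user turn.

    The payload shape follows the chat-style message format; the model
    name and image URL below are illustrative placeholders.
    """
    return {
        "model": "gpt-4-vision-preview",  # illustrative model name
        "messages": [
            {
                "role": "user",
                # One "content" list interleaves text parts and image parts.
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_multimodal_prompt(
    "What is funny about this image? Describe it panel by panel.",
    "https://example.com/vga-adapter.jpg",  # hypothetical image location
)
print(json.dumps(payload, indent=2))
```

Such a payload would then be sent to the provider's chat endpoint; the response would contain the generated text, e.g. the panel-by-panel description quoted above.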
The U.S. civil liberties organization Electronic Frontier Foundation has launched a petition titled "Don’t Scan Our Phones". The background is Apple’s plan to search users’ phones for photos that show child abuse or are the result of child abuse. In doing so, the company is going even further than Microsoft, which scours the cloud for such material. On its website, the organization writes: "Apple has abandoned its once-famous commitment to security and privacy. The next version of iOS will contain software that scans users’ photos and messages. Under pressure from U.S. law enforcement, Apple has put a backdoor into their encryption system. Sign our petition and tell Apple to stop its plan to scan our phones. Users need to speak up and say this violation of our privacy is wrong." (Website Electronic Frontier Foundation) More information via act.eff.org/action/tell-apple-don-t-scan-our-phones.
The symposium "Applied AI in Healthcare: Safety, Community, and the Environment" will be held within the AAAI Spring Symposia on March 22-23, 2021. One of the presentations is titled "Care Robots with Sexual Assistance Functions". The author of the paper is Prof. Dr. Oliver Bendel. From the abstract: "Residents in retirement and nursing homes have sexual needs just like other people. However, the semi-public situation makes it difficult for them to satisfy these existential concerns. In addition, they may not be able to meet a suitable partner or find it difficult to have a relationship for mental or physical reasons. People who live or are cared for at home can also be affected by this problem. Perhaps they can host someone more easily and discreetly than the residents of a health facility, but some elderly and disabled people may be restricted in some ways. This article examines the opportunities and risks that arise with regard to care robots with sexual assistance functions. First of all, it deals with sexual well-being. Then it presents robotic systems ranging from sex robots to care robots. Finally, the focus is on care robots, with the author exploring technical and design issues. A brief ethical discussion completes the article. The result is that care robots with sexual assistance functions could be an enrichment of the everyday life of people in need of care, but that we also have to consider some technical, design and moral aspects." More information about the AAAI Spring Symposia is available at aaai.org/Symposia/Spring/sss21.php.
Amazon’s Alexa can now perform actions on her own, based on previous instructions from the user, without asking beforehand. Until now, the voicebot always asked before she did anything. Now she has "hunches", as Amazon calls the function. On its website, the company writes: "Managing your home’s energy usage is easier than ever, with the Alexa energy dashboard. It works with a variety of smart lights, plugs, switches, water heaters, thermostats, TVs and Echo devices. Once you connect your devices to Alexa, you can start tracking the energy they use, right in the Alexa app. Plus, try an exciting new Hunches feature that can help you save energy without even thinking about it. Now, if Alexa has a hunch that you forgot to turn off a light and no one is home or everyone went to bed, Alexa can automatically turn it off for you. It’s a smart and convenient way to help your home be kinder to the world around it. Every device, every home, and every day counts. Let’s make a difference, together. Amazon is committed to building a sustainable business for our customers and the planet." (Website Amazon) It will be interesting to see how often Alexa is right with her hunches and how often she is wrong.
The "Reclaim Your Face" alliance, which calls for a ban on biometric facial recognition in public space, has been registered as an official European Citizens’ Initiative. One of the goals is to establish transparency: "Facial recognition is being used across Europe in secretive and discriminatory ways. What tools are being used? Is there evidence that it’s really needed? What is it motivated by?" (Website RYF) Another one is to draw red lines: "Some uses of biometrics are just too harmful: unfair treatment based on how we look, no right to express ourselves freely, being treated as a potential criminal suspect." (Website RYF) Finally, the initiative demands respect for humans: "Biometric mass surveillance is designed to manipulate our behaviour and control what we do. The general public are being used as experimental test subjects. We demand respect for our free will and free choices." (Website RYF) In recent years, the use of facial recognition techniques has been the subject of critical reflection, for example in the paper "The Uncanny Return of Physiognomy" presented at the 2018 AAAI Spring Symposia or in the chapter "Some Ethical and Legal Issues of FRT" published in the book "Face Recognition Technology" in 2020. More information at reclaimyourface.eu.
CONVERSATIONS 2019 is a full-day workshop on chatbot research. It will take place on November 19, 2019 at the University of Amsterdam. From the description: "Chatbots are conversational agents which allow the user access to information and services through natural language dialogue, through text or voice. … Research is crucial in helping realize the potential of chatbots as a means of help and support, information and entertainment, social interaction and relationships. The CONVERSATIONS workshop contributes to this endeavour by providing a cross-disciplinary arena for knowledge exchange by researchers with an interest in chatbots." The topics of interest that may be explored in the papers and at the workshop include humanlike chatbots, networks of users and chatbots, trustworthy chatbot design, and privacy and ethical issues in chatbot design and implementation. The submission deadline for CONVERSATIONS 2019 was extended to September 10. More information via conversations2019.wordpress.com/.
The article "Hologram Girl" by Oliver Bendel deals first of all with the current and future technical possibilities of projecting three-dimensional human shapes into space or into vessels. Then examples of holograms from literature and film are mentioned, drawn from the fictionality of past and present. Furthermore, the reality of present and future holograms is considered, i.e. what technicians and scientists all over the world are trying to achieve in their eager efforts to close the enormous gap between the imagined and the actual. A very specific aspect is of interest here, namely the idea that holograms could serve us as objects of desire, stepping alongside love dolls and sex robots and supporting us in some way. Different aspects of fictional and real holograms are analyzed, namely pictoriality, corporeality, motion, size, beauty and speech capacity. There are indications that three-dimensional human shapes could be considered as partners, albeit in a very specific sense. The genuine advantages and disadvantages need to be investigated further, and a theory of holograms in love could be developed. The article is part of the book "AI Love You" edited by Yuefang Zhou and Martin H. Fischer and was published on 18 July 2019. Further information can be found via link.springer.com/book/10.1007/978-3-030-19734-6.
"AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook’s AI lab and Carnegie Mellon University, has bested some of the world’s top players …" (The Verge, 11 July 2019) According to the magazine, Pluribus was remarkably good at bluffing its opponents. The Wall Street Journal reported: "A new artificial intelligence program is so advanced at a key human skill – deception – that it wiped out five human poker players with one lousy hand." (Wall Street Journal, 11 July 2019) Of course, you don’t have to equate bluffing with cheating – but interesting scientific questions arise in this context. At the conference "Machine Ethics and Machine Law" in 2016 in Krakow, Ronald C. Arkin, Oliver Bendel, Jaap Hage, and Mojca Plesnicar discussed the question "Should we develop robots that deceive?" on a panel. Ron Arkin (who is in military research) and Oliver Bendel (who is not) came to the conclusion that we should – but they had very different arguments. The ethicist from Zurich, inventor of the LIEBOT, advocates free, independent research in which problematic and deceptive machines are also developed, in favour of an important gain in knowledge – but he is committed to regulating the areas of application (for example dating portals or military operations). Further information about Pluribus can be found in the paper itself, entitled "Superhuman AI for multiplayer poker".