An AI Poisoning Tool to Deter Image Thieves

Researchers at the University of Chicago have released Nightshade 1.0, a tool designed to punish makers of machine learning models who train their systems on data without first obtaining permission. This was reported by The Register on 20 January 2024. According to the publication, Nightshade is an offensive data poisoning tool and a companion to a defensive protection tool called Glaze. “Nightshade poisons image files to give indigestion to models that ingest data without permission. It’s intended to make those training image-oriented models respect content creators’ wishes about the use of their work.” (The Register, 20 January 2024) “Nightshade is computed as a multi-objective optimization that minimizes visible changes to the original image,” said one team member. “For example, human eyes might see a shaded image of a cow in a green field largely unchanged, but an AI model might see a large leather purse lying in the grass.” (The Register, 20 January 2024) More information can be found in the paper the researchers published at the end of October 2023.
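To give a rough sense of what a perturbation of this kind looks like in code, the sketch below optimizes a small, bounded change to an image so that a feature extractor starts to see it as an unrelated “anchor” concept (the purse, say) while the pixels stay close to the original (the cow). This is only an illustrative sketch under assumptions of my own: a stand-in ResNet-18 encoder, an L-infinity pixel budget, and a mean-squared feature loss. It is not the authors’ actual method, which targets the encoders of generative text-to-image models and is described in their paper.

```python
# Illustrative sketch of concept-shifting data poisoning (NOT the Nightshade
# implementation): push an image's features toward an unrelated anchor image
# while keeping the pixel change within a small, barely visible budget.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in image encoder; the real tool works against generative models.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = torch.nn.Sequential(*list(backbone.children())[:-1]).to(device).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

def features(x: torch.Tensor) -> torch.Tensor:
    """Flattened feature vector for a batch of images in [0, 1]."""
    return encoder(x).flatten(1)

def poison(original: torch.Tensor, anchor: torch.Tensor,
           eps: float = 8 / 255, steps: int = 200, lr: float = 1e-2) -> torch.Tensor:
    """Return a copy of `original` whose features move toward `anchor`'s,
    with the perturbation clamped to an L-infinity budget of `eps`."""
    delta = torch.zeros_like(original, requires_grad=True)
    target = features(anchor).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        poisoned = (original + delta).clamp(0, 1)
        loss = torch.nn.functional.mse_loss(features(poisoned), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Project back into the budget so the change stays subtle to human eyes.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (original + delta).detach().clamp(0, 1)
```

In this toy setup a human viewer would still see the cow, since the per-pixel change never exceeds the budget, while the encoder’s features drift toward the anchor concept. A model trained on enough such images could then learn the wrong association, which is the effect the quoted description attributes to Nightshade.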

Fig.: A cow in a green field (Image: DALL-E 3)