Ryn Forquer Blogpost #4 - AI and Nightshade
Ben Zhao, a professor at the University of Chicago, has created a program called Nightshade, designed to "poison" AI image generators and deter infringement on artists' copyrighted work. Nightshade works by subtly shifting the pixels in an artwork, confusing the model about the work's subject, making it possible for a poisoned generator to produce, for example, a picture of a cat when asked for a dog. The hope behind the tool is that enough people will use Nightshade that models trained on scraped data will no longer produce accurate outputs.
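To make the pixel-shifting idea concrete, here is a minimal sketch in Python of what an imperceptible perturbation looks like. This is only an illustration of the concept, not the actual Nightshade algorithm (which computes a targeted perturbation against a text-to-image model rather than adding random noise); the `epsilon` bound and file names are assumptions.

```python
# Conceptual sketch only -- NOT the actual Nightshade algorithm.
# It illustrates how pixels can be shifted within a tiny bound so the
# change is invisible to people but still present in the raw data.
import numpy as np
from PIL import Image

def perturb_image(path: str, epsilon: int = 4) -> Image.Image:
    """Shift every channel value by at most +/- epsilon (out of 255)."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    # Random noise is a stand-in here; Nightshade optimizes its
    # perturbation so the image reads as a different subject to a model.
    noise = np.random.randint(-epsilon, epsilon + 1, size=pixels.shape)
    return Image.fromarray(np.clip(pixels + noise, 0, 255).astype(np.uint8))

# Usage (hypothetical file name):
# perturb_image("dog_painting.png").save("dog_painting_shifted.png")
```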
Nightshade's creators are Ben Zhao, Shawn Shan, Wenxin Ding, Josephine Passananti, and Haitao Zheng, all at the University of Chicago.
The creation of Nightshade comes in response to repeated claims from artists that AI generators such as Stable Diffusion, DALL-E, and Midjourney use copyrighted works in the training data they draw from. So far, very few laws have addressed this issue, hence the creation of this data-poisoning tool.
Figure 1. A graph shows how Nightshade "poisons" data sets to allegedly protect artists' works from being scraped by AI. Courtesy of the researchers.
Some remain unconvinced and urge artists to keep fighting on the legal front of this battle, such as Marian Mazzone, a professor at the College of Charleston. Mazzone's concern is that companies profiting from or using AI art will have the financial resources to counter Nightshade's poisoning and similar efforts, or that AI will simply advance faster than programs like Nightshade can keep up with.
However, Zhao believes Nightshade gives “artists something they can do, which is important. Feeling helpless is no good.”
In addition, Zhao and his team made a tool named "Glaze" that artists can use to "mask" their art style and prevent it from being copied by AI companies. Like Nightshade, it changes an artwork's pixels in ways unnoticeable to the human eye, yet those differences confuse AI models and disrupt their ability to reproduce the artist's style.
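As a rough sketch of that masking goal (hypothetical; Glaze's published method optimizes against a real style feature extractor, while `style_embed` below is a crude stand-in), the perturbation is chosen so the image's style features drift toward a decoy style while the pixel changes stay within an invisible bound:

```python
# Toy illustration of style cloaking -- not the real Glaze method.
import numpy as np

def style_embed(pixels: np.ndarray) -> np.ndarray:
    # Stand-in "style" features: per-channel means and standard deviations,
    # a crude proxy for the texture statistics a real extractor captures.
    return np.concatenate([pixels.mean(axis=(0, 1)), pixels.std(axis=(0, 1))])

def cloak(art: np.ndarray, decoy: np.ndarray, epsilon: float = 4.0,
          steps: int = 200, seed: int = 0) -> np.ndarray:
    """Random search for a tiny perturbation that pulls the artwork's
    style features toward a decoy image's features (the real tool uses
    gradient-based optimization rather than random search)."""
    rng = np.random.default_rng(seed)
    target = style_embed(decoy)
    best, best_dist = art, np.linalg.norm(style_embed(art) - target)
    for _ in range(steps):
        candidate = np.clip(art + rng.uniform(-epsilon, epsilon, art.shape), 0, 255)
        dist = np.linalg.norm(style_embed(candidate) - target)
        if dist < best_dist:  # keep the perturbation that best mimics the decoy
            best, best_dist = candidate, dist
    return best
```

To a human the cloaked image looks unchanged, but a model extracting style features sees something closer to the decoy.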
Zhao and his team intend to integrate Nightshade into Glaze and let artists choose whether to use the poisoning program or opt out. The team also wants to make Nightshade open source so others can experiment with it and build their own versions, because, according to Zhao, "The more people use it and make their own versions of it, the more powerful the tool becomes."
The MIT Technology Review article elaborates on how Nightshade works: it lets artists "add invisible changes to the pixels in their art before they upload it online." When AI developers scrape the internet for more training data, these "poisoned" samples enter the model's data set and cause the generator to malfunction. Poisoned data is painstaking to remove, since a tech company would have to find and delete each corrupted sample individually. Nightshade can also corrupt not just one word but several connected words and concepts: because AI models are skilled at making connections between related terms, poisoning one word, like "dog," subsequently infects associated terms like "puppy," "husky," and "wolf."
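To illustrate how such poisoned samples might flow into a scraped data set (a hypothetical sketch; `TrainingPair` and the `scrape` helper below are inventions for illustration, not Nightshade's interface), the key point is that the caption stays honest while the image's features have been shifted toward a decoy concept, so the model learns the wrong association:

```python
# Hypothetical sketch of poisoned (image, caption) pairs entering a
# scraped data set -- illustrative only, not Nightshade's interface.
from dataclasses import dataclass

@dataclass
class TrainingPair:
    image_path: str  # pixels subtly shifted toward a decoy concept (e.g., "cat")
    caption: str     # still honestly describes the artwork (e.g., "a dog")

def scrape(urls: list[str]) -> list[TrainingPair]:
    # A scraper cannot tell poisoned images apart from clean ones, so
    # poisoned pairs enter the data set as ordinary samples.
    return [TrainingPair(url, "a painting of a dog") for url in urls]

# During training the model repeatedly sees "dog" captions attached to
# cat-like features. Because related terms ("puppy", "husky", "wolf")
# share visual features and co-occur with "dog" in captions, the
# corruption bleeds into those concepts too -- and removing it later
# means hunting down each poisoned sample individually.
poisoned_set = scrape(["art_%03d.png" % i for i in range(300)])
```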
Figure 2. A graph shows how Nightshade "poisons" data sets to allegedly protect artists' works from being scraped by AI. (image courtesy Ben Zhao and authors)
With this tool, there comes a risk of abuse by malicious users. However, Zhao points out that anyone with malicious intent would need thousands of poisoned samples to inflict real damage on major AI generators, which are trained on billions of images.
Vitaly Shmatikov, a professor at Cornell University who has studied AI model security, says, "We don't yet know of robust defenses against these attacks. We haven't yet seen poisoning attacks on modern [machine learning] models in the wild, but it could be just a matter of time. The time to work on defenses is now."
Still, others have praised the team behind Nightshade for their work. Junfeng Yang, a computer science professor at Columbia University who has studied the security of deep-learning systems, thinks Nightshade could push AI companies to respect artists' rights more, possibly making them more willing to pay royalties.
Eva Toorenent, an illustrator who has used Glaze, hopes Nightshade will change the status quo: "It is going to make [AI companies] think twice, because they have the possibility of destroying their entire model by taking our work without our consent."
Heikkilä, Melissa. "This New Data Poisoning Tool Lets Artists Fight Back against Generative AI." MIT Technology Review, October 23, 2023. https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/
Velie, Elaine. "New Tool Helps Artists Protect Their Work From AI Scraping." Hyperallergic, October 30, 2023. https://hyperallergic.com/853520/nightshade-helps-artists-protect-their-work-

