
Artists Fight Back Against Generative AI with NEW Data Poisoning Tool

Source: Unknown author, Public domain, via Wikimedia Commons / VulcanSphere / CC BY 4.0 / Wikimedia Commons

If there’s one thing that will always be true about technology, it’s that it is always changing: always developing, always progressing. Some advances, like GPS in cars, have been revolutionary. Artificial intelligence could be just as revolutionary, but it could just as easily be disastrous.

Humble But Fascinating Beginnings

Artificial intelligence is not a new phenomenon. In fact, its roots go all the way back to the 1950s. In 1950, Alan Turing proposed what he called the Imitation Game, a test intended to measure whether a machine could convincingly imitate human intelligence.

Source: Unknown author, Public domain, via Wikimedia Commons

The Imitation Game – now called the Turing Test, after its creator – has a machine hold a typed conversation with a human partner, generating responses to whatever the person writes. Independent evaluators then read the exchange, and if they can’t reliably tell which text belongs to the computer, the machine passes the test.

The First AI Boom

In the forty years after the Turing Test was introduced, AI research attracted massive interest and funding. Various programs were created, including ones that taught computers to play chess against themselves and early chatbots that could hold conversations with people.

Source: User RTC on en.wikipedia, Public domain, via Wikimedia Commons

After the initial AI boom, though, interest – and therefore funding – waned. The early 1990s brought a so-called “AI winter” in which research was greatly reduced, but progress didn’t stop. Continued work eventually produced the first AI-driven consumer machines like the Roomba, as well as the first commercially available speech recognition software on Windows computers.

The Wave of Modern Technology

Into the 2000s, AI was used across a variety of industries. Rovers that could function without human intervention were launched into space, social media platforms launched and built AI into their algorithms, and Apple came out with the first mainstream artificial intelligence assistant, Siri.

Source: Shutterstock

All this progress is a testament to the ever-changing nature of technology and what it can accomplish. As the timeline reaches the present, though, the conversation turns not only to what AI technology can do for us, but to its potential dangers.

Not The First Time A Question Has Been Posed

The question of whether artificial intelligence could be dangerous is not a new one. The Terminator movie franchise asks whether AI could ever become sentient and turn against humanity. Steven Spielberg’s movie A.I.: Artificial Intelligence asks whether it’s truly possible for humans and robots to live together when we are so different.

Source: ThisisEngineering RAEng/Unsplash

The modern question about artificial intelligence is a bit more specific, though. Rather than grand scenarios such as robots and humans working together, or an artificial overlord, the concern is what AI means for artists and copyright infringement.

AI Art Is By No Means a New Phenomenon

Like the history of AI as a whole, the ability of AI to create art goes back decades, to the 1970s. The first notable AI art generation system, Aaron, was created in 1973 and used a symbolic AI approach to produce black-and-white drawings.

Source: VulcanSphere / CC BY 4.0 / Wikimedia Commons

In the years since Aaron was first released, generative AI has progressed by leaps and bounds. AI can now generate entire full-color images in an endless variety of styles, and with this development comes a new question of ethics and morality in technology.

Where Does All The Data Come From?

Contemporary artificial intelligence generators are able to work as well as they do because they’ve been trained on massive amounts of information and input. The ethical question arises because many AI programs, including Midjourney and ChatGPT, have been trained on existing artists’ and writers’ work without their permission.

Source: VulcanSphere / CC BY 4.0 / Wikimedia Commons

Understandably, once it came to light how Midjourney – and other AI tools just like it – produced such incredible images, artists were furious. They felt that their hard work and years of skill had been stolen by the software and its creators, and it wasn’t long before technology developers came up with a solution.

A Creative Solution For Artists

Earlier this year, Ben Zhao, a professor at the University of Chicago, led the team that created the first of these protective tools. The software, called “Glaze,” applies a subtle filter over an artist’s work so that it can’t be usefully scraped by companies training their AI without permission.

Source: Midjourney AI/Wikimedia Commons/Courtesy Ammaar Reshi

The newest tool, called “Nightshade,” promises to be even more effective at stopping AI programs from using artists’ work without their permission. It lets artists add invisible changes to the pixels of their finished digital artwork, so that if the image is scraped into an AI training set, the resulting model behaves in unpredictable and erratic ways.
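To make the idea concrete, here is a minimal sketch in Python of what an “invisible change” to an image looks like in general. This is not Nightshade’s actual algorithm – the real tool optimizes its perturbation against specific models – and the file names and the `epsilon` budget are hypothetical; the sketch only illustrates the principle that a change too small for a human to notice still alters the numbers a model trains on.

```python
import numpy as np
from PIL import Image

# Hypothetical example: "artwork.png" stands in for an artist's finished image.
image = np.asarray(Image.open("artwork.png").convert("RGB"), dtype=np.float32)

# Budget for how far each pixel value (0-255) may move. A few units is
# typically invisible to the eye but still changes the training data.
epsilon = 4.0

# A real poisoning tool would *optimize* this perturbation against a target
# model; here we draw a random, bounded one just to show the shape of the idea.
perturbation = np.random.uniform(-epsilon, epsilon, size=image.shape)

# Apply the perturbation, keep values in range, and save the protected copy.
poisoned = np.clip(image + perturbation, 0, 255).astype(np.uint8)
Image.fromarray(poisoned).save("artwork_protected.png")
```

Random noise like this would not actually poison a model; the point of tools like Nightshade is to compute perturbations that look just as harmless but are crafted to mislead training.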

Fight, Fight, Fight Back

Understandably, many artists were thrilled about this development. The backlash against generative AI has been so intense that there are several lawsuits pending against many of the parent companies regarding copyright infringement. Future punishment doesn’t help present problems, though, and Nightshade gives artists an opportunity to fight back.

Source: Midjourney AI/Wikimedia Commons/Courtesy Ammaar Reshi

The beauty of Nightshade is that changing the data in even a single piece of art can, if that art is scraped, disrupt the model trained on the whole dataset. The poisoned data can cause the model to associate unrelated concepts with each other, for instance, and stronger effects may become possible as the tool develops.
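As a toy illustration of what “associating unrelated concepts” means, the sketch below counts which visual features a training set pairs with the caption “dog.” Everything here – the data, the labels, and the frequency-count stand-in for a model – is made up for illustration. In this toy, the poisoned pairs have to outnumber the clean ones to flip the association; Nightshade’s claim is that because its perturbations are optimized, far fewer samples can distort a targeted concept in a real text-to-image model.

```python
from collections import Counter

# Hypothetical training pairs: (caption concept, visual features actually in the image).
clean_pairs = [("dog", "dog_features")] * 95
# Poisoned pairs: the caption still says "dog", but the perturbed image reads as something else.
poisoned_pairs = [("dog", "cat_features")] * 120

def learned_association(pairs):
    """Stand-in for training: return the visual features most often paired with 'dog'."""
    counts = Counter(features for caption, features in pairs if caption == "dog")
    return counts.most_common(1)[0][0]

print(learned_association(clean_pairs))                   # -> dog_features
print(learned_association(clean_pairs + poisoned_pairs))  # -> cat_features
```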

Worse Ahead For Generative AI

In an ideal world, companies would behave ethically and pay artists for their hard work. The current reality, however, means artists must come up with creative ways of protecting their intellectual property. “Glaze” and “Nightshade” are the first iterations of tools to help them in that endeavor, and they’re only the beginning.

Source: Tara Winstead / Pexels

As the lawsuits progress, and as artists and developers come together to create better ways to disrupt generative AI, one thing is certain: the future of this technology is still in flux. Artists will continue to fight back against their intellectual property being taken, and companies will continue to innovate and find new ways to create, change, and advance their own technological futures.


Written by James Cross

James Cross is an enigmatic writer from the historic city of Boston. James’ writing delves into mysteries, true crime, and the unexplained, crafting compelling narratives that keep readers and viewers on the edge of their seats. His viral articles, blog posts, and documentary-style videos explore real-life enigmas and unsolved cases, inviting audiences to join the quest for answers. James’ ability to turn real mysteries into shareable content has made him a sensation in the world of storytelling.
