
The "revenge of the artists" against artificial intelligence

Artificial intelligence that generates images is trained on artists' works without their consent, and imitates their styles. A group of researchers and artists has decided to fight the phenomenon with sophisticated digital tools

By Tal Sokolov, Davidson Institute website

The artists' revenge on artificial intelligence. Credit: the Hayadan website, using the DALL-E artificial intelligence software

The spectacular capabilities of content generators such as DALL-E and Midjourney, which create images from text, or ChatGPT and its kind, which generate texts in natural language, rely on huge amounts of information from which the generators learn. The design phase of such a generator involves training on a very large collection of data. For example, a generator that produces an image from a sentence supplied by the user goes through a training phase in which it sees many examples of real images paired with text that describes them, so that it learns the complex connections between the semantic context and the visual content.

There are authorized databases that contain images and texts for such training purposes, but in some cases the information for the training phase is collected by comprehensively scanning the Internet. Some developers have admitted that the information for training certain generators was collected without the consent of its owners. Artists have found themselves in a situation where a generator that learned from their works, famous on the net, imitates in seconds a style they developed over an entire career.

Artists have filed lawsuits alleging copyright infringement and raising ethical questions about the use artificial intelligence makes of the fruits of their labor. Recently, lists were published of thousands of artists, some very well known and some less so, whose works were used to train generators without their consent. The legal field, it seems, is still finding its footing, and the boundaries between legitimate inspiration and unfair use are not well defined.


Counterattack

Protecting works of art directly against use without consent forces artists to compromise how they publish their works. Watermarks stamped on the works may damage their aesthetics, and there are already tools that remove such stamps. The ultimate solution is not to publish the works on the open web at all, but that defeats the artists' goal of gaining exposure.

Some have decided not to stand idly by. A group of researchers from the University of Chicago in the United States is developing tools for attacks against image generators. The primary goal is to protect artists' original work, and it is achieved through an offensive process that also disrupts generators that try to use those works. The attacks are based on creating information that the generators will want to train on, which looks innocent and suitable but is actually "toxic" and destructive for the generator.

Generators sometimes go through update stages even after they are ready and offered to the public, which include additional training on new information. The researchers offer an offensive tool that not only launches a pre-emptive strike against future generators, but can also harm generators that strive to stay up to date and improve from time to time, and are therefore forced to use new information for learning. In a race where a new record-breaking generator comes to market every few months, toxic information circulating on the net can affect the results.

Generators can be damaged in several ways, some of which require access to the guts of the generator or to the training process. To enable an effective attack on generators without going behind the technical scenes, the researchers propose a procedure of spreading contaminated information on the network, which will "poison" the generator, that is, disrupt its ability to create coherent content.

The attack can be designed in a targeted way, for example to disrupt the generator's ability to produce images of dogs, so that a user who requests an image of a dog receives an image of a cat. In the context of copyright, an attack designed to protect the works of an artist named Israel Israeli can be carried out so that a user who requests "draw me a car in the style of Israel Israeli" will receive a picture in a different style, or not of a car at all but of something else entirely, for example a duck.

An amphibious bus in the shape of a duck (and so named), Seattle, 2018. Photo: Avi Blizovsky

What does a duck look like? 

The heart of the attack lies in the way generators interpret the content of an image. When we look at a duck, we recognize a beak, wings, feathers, maybe even the lake the duck is swimming in. Weighing all these together, we determine that it is a duck, and sometimes we even confuse it with a goose. Generators use structures called neural networks. Despite borrowing their name from the neurons in our brain, computerized neural networks perceive information differently from the brain.

For a neural network, an image is a collection of numbers that indicate the brightness of each pixel, that is, each point in the image. As a neural network learns from pairs of an image and the text that describes it, it learns which characteristics of groups of pixels represent the subjects in the text. These characteristics can be similar to the features that help a person decide: for example, groups of pixels that form a sharp corner can indicate a beak. Networks also use less trivial features, such as texture, the frequency at which repetitive structures appear in the brightness of the pixels, and more.
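To make this concrete, here is a minimal, purely illustrative Python sketch of an image as a grid of brightness numbers, with a crude "sharp corner" feature computed from it. The tiny 5x5 image and the gradient-based corner measure are inventions for this example, not part of any real generator.

    import numpy as np

    # A tiny grayscale "image": brightness values between 0 (black) and 1 (white)
    image = np.array([
        [0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 1.0, 1.0, 0.0],
        [0.0, 1.0, 1.0, 1.0, 0.0],
        [0.0, 1.0, 1.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0],
    ])

    # Brightness differences between neighboring pixels
    dx = np.diff(image, axis=1)  # changes along each row
    dy = np.diff(image, axis=0)  # changes along each column

    # Pixels where brightness changes sharply in both directions hint at corners
    corner_strength = np.abs(dx[:-1, :]) * np.abs(dy[:, :-1])
    print(corner_strength.round(1))  # the bright square's four corners light up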

Concepts such as beak and feathers are represented as weighted combinations of these features. When a neural network that has already seen many examples, and learned which features represent a duck, is asked to produce an image of a duck, it produces an image whose features match those that represent a duck.

One can play with the relationship between our perception of the object called a duck and the network's perception of the object labeled as a duck. You could cheat the generator by uploading to the web many pictures that are labeled "duck" but show, for example, a car. This way the network would learn to associate the features of a car, which are different from those of a duck, with the "duck" label. However, human or automated review of the training data can catch and correct such a crude switch.
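In its naive form, such label poisoning amounts to nothing more than publishing mismatched image-caption pairs, as in this hypothetical Python sketch (the file names and captions are made up for illustration):

    # The naive version of the attack: publish genuine car photos carrying
    # "duck" captions and hope a scraper feeds them into training.
    poisoned_pairs = [
        ("car_photo_001.jpg", "a duck swimming in a lake"),
        ("car_photo_002.jpg", "a white duck with a yellow beak"),
    ]
    # Easy to catch: anyone (or any classifier) reviewing the data sees at
    # once that the pictures do not match the captions.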

The attack the researchers propose is more insidious and harder to detect. It is a systematic mechanism in which the brightness values of the pixels in an image are changed so that the human eye still sees a duck, but the numerical content of the image displays features the network has learned to attribute to a car. A person looking at the image will not notice anything strange, because the change in the pixels is set to be as small as possible, yet large enough for the neural network to detect features of a car. If enough pictures of cars digitally "disguised" as ducks are presented to the network, along with a verbal description of a duck, the network will mix up the representations. And so, when a user asks the network to create a picture of a duck, the result will be a picture resembling a mix of a duck and a car.
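The following Python sketch illustrates the kind of optimization this involves, under stated assumptions: a pretrained ResNet stands in for the generator's (non-public) image feature extractor, and the perturbation budget epsilon, the step size and the iteration count are illustrative choices, not the researchers' actual parameters.

    import torch
    import torchvision.models as models

    # Stand-in feature extractor: a pretrained ResNet-18 with its final
    # classification layer removed. The real attack would target the image
    # encoder of a text-to-image generator, which is usually not public.
    resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    features = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()
    for p in features.parameters():
        p.requires_grad_(False)

    def poison(duck_img, car_img, epsilon=8 / 255, steps=100, lr=0.01):
        """Return duck_img plus a small perturbation that pulls its
        features toward those of car_img."""
        target = features(car_img).detach()  # the "car" features to imitate
        delta = torch.zeros_like(duck_img, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(features(duck_img + delta), target)
            loss.backward()
            opt.step()
            with torch.no_grad():
                # Keep the change invisible: every pixel moves by at most
                # epsilon, and the result stays a valid image.
                delta.clamp_(-epsilon, epsilon)
                delta.copy_((duck_img + delta).clamp(0, 1) - duck_img)
        return (duck_img + delta).detach()  # looks like a duck, "reads" like a car

    # Usage with random stand-in tensors shaped like 224x224 RGB images:
    duck, car = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
    poisoned_duck = poison(duck, car)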

The degree of similarity to cars depends on the ratio between the contaminated information and the clean information. The researchers showed that a relatively small number of "poisoned" samples is enough to disrupt the functioning of an image-generating artificial intelligence.

The difficulty with this kind of sabotage is that in order to divert the duck's features toward those of a car, the attackers would probably need to know which features represent a car in the eyes of the network they intend to attack. For that they would need access to the network's structure, which is usually not open to the public. The researchers showed that even if the feature shift is designed using a generator that is open to the public, an attack using those features is highly effective against other text-to-image generators whose specifications are not publicly available.


A never-ending race

Returning to copyright: an artist who wants to protect his artistic style can add to the image a change the researchers call a "style disguise". Israel Israeli will be able to easily alter his image in a way the human eye will not notice, but that directs the neural network to interpret his style as one very far from his own, for example that of Van Gogh. The next time a lazy user tries to cut corners and create an image in the style of Israel Israeli without paying him royalties, he will get an image in the style of Van Gogh, which of course is very different.
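A similar optimization can sketch the "style disguise": instead of matching the content features of a target object, the perturbation pushes the image's style statistics toward those of a decoy artist. Here style is summarized by Gram matrices of feature maps, the standard device from neural style transfer; the researchers' actual style representation is not public, so this is only a hedged illustration.

    import torch
    import torchvision.models as models

    # Early layers of a pretrained VGG-16 serve as the stand-in style encoder.
    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def gram(feat):
        # Correlations between feature channels: a rough "style fingerprint"
        b, c, h, w = feat.shape
        f = feat.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def cloak_style(artwork, decoy, epsilon=8 / 255, steps=100, lr=0.01):
        """Perturb artwork, within a small pixel budget, so that its style
        fingerprint approaches that of decoy (e.g. a Van Gogh painting)."""
        target = gram(vgg(decoy)).detach()
        delta = torch.zeros_like(artwork, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(gram(vgg(artwork + delta)), target)
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-epsilon, epsilon)  # stay visually unchanged
                delta.copy_((artwork + delta).clamp(0, 1) - artwork)
        return (artwork + delta).detach()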

The cat-and-mouse games between artificial intelligence that creates human-like content and the humans who want to preserve their unique creative abilities will not end soon. Sooner or later, it can be assumed, the developers of artificial intelligence will find a solution to this type of information poisoning. The discussion raises many questions about technological progress versus human values, but along the way we enjoy the technological developments on both sides: imitation and simulation, defense and attack.


One response

  1. Intellectual property laws are the feudalism of the twentieth century. They have no justification except the need to preserve the wealth of the rich.
    As for the matter in the article: what is the difference between a painter who learns the styles of others and paints in their style, and an AI that does the same thing?
