Artificial intelligence tools for converting text to images are taking the internet by storm. But is it art? Or the end of art?

Images created by AI programs, such as Stable Diffusion and DALL-E, are everywhere now, dazzling users with their ability to instantly create any image they can dream up.

The AI works by scraping billions of images from the internet, many of them created by artists who may be unhappy that their life’s work is helping to build technology that could threaten their livelihoods.

Steven Zapata, a designer, illustrator, and art educator in New York City, has misgivings about what this means. It doesn’t make sense, he told Marketplace’s Meghan McCarty Carino, for these machine learning systems to compete with the very creators whose work the models were trained on. He also believes that an ethical version of these art-making systems could be developed and would be valuable.

Below is an edited transcript of their conversation.

Steven Zapata: My main concern is the precedent we’re going to set here by allowing these systems to scrape the creative work of millions of people off the internet, training models that then go on to compete, in the same creative markets, with the very people whose work trained them. If we collectively decide that this is fine, legal, and ethical to some extent, that leaves a giant legal loophole that allows this to happen over and over again, in every market these systems enter. It doesn’t make sense to give every machine learning startup carte blanche to use all the creative work people have shared online to build models that compete directly with those same people.

Meghan McCarty Carino: Do artists really allow this? Is there any consent, or any option to opt out?

Zapata: For the current models, no. There was no consent: no approval was sought, no credit was given, and no compensation was paid. Going forward, the makers of products like Stable Diffusion have said they will allow opt-outs for future models. But on the initial launch, they slipped it under the wire; it wasn’t until after the products hit the market that we really understood what was going on. These systems really should be opt-in from the start.

McCarty Carino: Under the current circumstances, do artists have any legal recourse?

Zapata: Currently, there are no easy legal avenues for this, and that’s because these are cutting-edge issues. We have to admit that many of the questions raised here fall into a series of legal gray areas, and it’s very complicated. Almost every question you can ask: Is it copyright infringement? Do the systems replicate the training data? Is this kind of use permitted under fair use? And if it’s fair use, is that only in the US? Is it fair use in the UK? Things change from one jurisdiction to the next, so it’s quite complicated. But it will have its day in court. We’ve had recent developments, including a class-action lawsuit in the US on behalf of artists seeking some sort of legal recourse. And recently in the UK, Getty Images announced that it would be suing Stability AI, the maker of Stable Diffusion, for training its systems on Getty’s copyrighted images.

McCarty Carino: There is an argument that all art is referential, that all art is in dialogue with work that has come before. What’s different about how this AI works?

Zapata: It’s very different, in my estimation. When I share my work online, another artist could, at least in theory, take inspiration from it to make works like mine, or to reach my skill level, even as they go on to compete with me in the market. But the difference with that kind of distant interaction with another human being is that I know what awaits them on the journey. I know that learning those skills and trying to get to my level is worth it on its own, and that it will make their lives better. No matter what happens, they will value the attempt and experience the self-affirmation that comes with making art. And even if they do meet me at my level in the market, where they might take jobs from me, I’m glad to see them. I’m like, I know how hard this was for you, and I want you to succeed. That is not something that happens with these machines. These machines have no experience, they don’t feel anything, and there is no sentient being along for the ride. They simply produce output. We’re outsourcing one of life’s great pleasures, work we truly love to do. Art can really be an existential buoy, right? And we’re outsourcing it to something that feels none of the benefits. I think making this kind of false equivalence between what they do and what we do reduces what humans do with art to a very sad weirdness that doesn’t align with the personal experience of most artists.

McCarty Carino: Not all artists have the same kind of negative reaction to this. I’ve seen some artists say, maybe this is a tool we can incorporate into our creative process. Do you think there is any way artists themselves can harness some of the power of this technology for their own use?

Zapata: Any artist can choose to use these systems however they want. And many people with technical expertise are used to making sure they are in control of their processes and not the other way around, so it would be easy for them to spot where the machine is taking over. Artists are very ingenious, very creative, and will always find ways to bring their creativity to bear on the process. But the point I want to make is that everything good or noble we can imagine doing with these systems is possible with an ethical version of them. All the hopes of democratizing things, the hope that it will let people who wouldn’t otherwise be creative engage with creativity, the hopes of increasing the accessibility of art, everything. Every utopian and noble thing we might say or dream about these systems can be said equally of an ethical version of them rather than an unethical one.

McCarty Carino: What would an ethical version of these systems look like to you?

Zapata: It looks like a system built on public domain and Creative Commons works, as well as artwork voluntarily licensed to the companies that train these systems. If all of that can be honored and combined, and a system can be created that doesn’t infringe on the rights of rights holders, doesn’t undercut people in the marketplace, and doesn’t unfairly use their names to create derivative works, then I see no reason why artists shouldn’t use this technology. And the possibilities look very exciting.

If you’d like to hear more from Steven Zapata on the subject, you can watch his video essay, “The End of Art: An Argument Against Image AIs.” He explains in detail many of the arguments we discussed, and, as a bonus, does it while drawing.

As Zapata mentioned, there has been big legal news in the past couple of weeks. Three artists have filed a class-action lawsuit against Stability AI, the maker of Stable Diffusion, as well as another AI company, Midjourney, and the online art portfolio platform DeviantArt. In the lawsuit, they say the three companies violated the copyrights of “millions” of artists by scraping their images without consent.

Stability AI said of the lawsuit: “The allegations in this lawsuit represent a misunderstanding of how generative AI technology works and the law surrounding copyright. We intend to defend ourselves and the vast potential generative AI has to expand the creative power of humanity.”

And if you’re wondering whether your work, or even your photos, ended up in the training datasets of these AI models, there’s a website for that.
