On the Ethics of Using GenAI in Creative Practices

little bit's Hannah Liuzzo (with Diana Walsh) talks Sabrina Carpenter, Collina Strada, ChatGPT, and more.

Generative AI has been a favorite “dinner party” topic of mine recently — not due to insider knowledge or technical interest in the subject, but purely for the array of strong reactions I collect test-driving the topic in different circles. Conversations among my professional peers in the marketing technology space render a consensus that AI is helping us achieve infinite growth by Doing More Faster — a familiar picture of corporate greed dressed in a Patagonia vest. Conversations with my peers in creator-artist spheres, however, have returned a very different tone. In general, artists (and consumers of art) seem not only to object to AI, but to profess an undertone of deep anxiety, distaste, and condemnation. So, with my longtime collaborator and friend Diana Walsh, I wanted to explore the themes of objection we’ve bumped into in conversations on art and AI, both in talking to each other and with other creatives.

Like myself, Diana has a background in classical fine art (hers in visual art, mine in classical music) as well as a sort of haphazard meandering into more technical careers (Diana in data science and machine/deep learning, mine in marketing automation). When I asked Diana to do the EP and single art for my current musical nom-de-plume, little bit, her pointed decision to incorporate AI into her creative workflow sparked our ongoing chats on the discourse around AI, often taking up the majority of our three-hour phone calls between LA and Berlin. Together, we’ve attempted to unpack the most common objections we’ve run into around the use of AI in artistic practices, not necessarily to take one position or another, but rather to understand the mechanics behind the themes of fervent dismissal.

The most common objections we’ve seen are:

  1. GenAI is killing the environment
  2. GenAI is taking artists’ jobs
  3. GenAI is exploiting artists’ work
  4. GenAI is a lazy shortcut
  5. GenAI is making bad art

GenAI is killing the environment

By far, the most common objection to the use of GenAI is that “AI is unethical.” OK, sure. Of course it is. Why? Because there is no ethical consumption under capitalism, we all say in brain-rotted unison. In all seriousness, though, the position that “AI is unethical” holds up from many angles. Using a GenAI tool like ChatGPT has an environmental impact, which is why we’re seeing headlines like “ChatGPT ‘drinks’ a bottle of fresh water for every 20-50 questions we ask, study warns.” But if this is the angle we’re condemning GenAI tools for, why aren’t we condemning any of our other daily practices with similar environmental impact? The article goes on to explain that “The study’s water consumption figures refer to fresh clean water used by data centres to generate electricity and cool the racks of servers,” which makes sense. But for anyone who has ever posted a story on Instagram, there are massive data centers dedicated to archiving every single story you’ve posted since 2017. So that means the crush-baiting thirst trap close friends grwm cringe you posted before your sister’s bachelorette in 2019? It’s “drinking water.”

While we’re specifically looking at the environmental impact of GenAI as a creative practice, it’s important to also critique more traditional “handmade” creative practices, like painting. Art that’s made by hand tends to carry a greenwashed halo, but on further inspection, physical goods have their own impact on the environment. Painting requires goods to be transported around the globe, elaborate disposal of hazardous chemicals, climate-controlled storage of finished goods, mining, labor, etc. When successful painters sell their work in a New York gallery, their buyers are often collecting art as a financial investment, storing pieces away in climate-controlled warehouses to appreciate while those warehouses fight daily against the local climate. So, whether you “drink” 50 bottles of water producing a series of digital images or a man in a hazmat suit comes to your art school to collect your turpentine waste so it can sit in a runoff-safe container in a landfill for the rest of time, visual artists have fewer options to create ethically than we’ve convinced ourselves. Unless you’re crushing berries from your yard and painting onto biodegradable paper, tactile objects carry their own set of problems and ethics.

By no means am I attempting to defend the environmental impact of GenAI by meeting a “wrong” with a “wrong.” Instead, we’re more interested in asking: why are we picking on GenAI? So, to bust open the AI-specific ethical scrutiny, I want to call on two juicy discourses that have been circulating in the recent zeitgeist: 1. The soft-cancellation of the brand BAGGU on the heels of their collaboration with the clothing label Collina Strada and 2. The quiet callout of Sabrina Carpenter’s songwriting and production team for using three premade loops from a Splice sample pack to write the Billboard chart-topping single “Espresso.”

GenAI is taking artists’ jobs

Collina Strada is a clothing brand, headed by designer Hillary Taymor, that boasts sustainability as one of its core tenets. In a recent collaboration with BAGGU, the internet dog-piled BAGGU’s brand channels when fans discovered that Collina Strada’s prints for the collaboration were created with the assistance of GenAI:

Photographer and artist Charlie Engman was brought on as the art director and designer for the BAGGU collaboration and influenced BAGGU’s use of Midjourney in the creative process for the prints. Engman has been collaborating with Taymor since 2009 (in addition to the two being firm friends), and he has been embracing and experimenting with Midjourney as part of his personal photographic practice for a few years now. Engman was likely looped into this project because of his experience using GenAI in his art, meaning a job was created for him, an artist. I repeat: GenAI did not take his job. The irony here becomes even more heavy-handed when you consider that the relentless dog-piling from the BAGGU consumer community got so aggressive that Engman decided to make his socials private. The backlash from the BAGGU audience was so loud that it effectively led Engman to suppress his public presence, the naysayers in turn committing an act that is by definition anti-art, or at the very least, censoring a photographer on Instagram, one of the most important platforms for photographers. That means that… the “AI-is-taking-jobs-from-artists” Instagram mob bullied Engman out of his job as an artist…? Wait…

We’ll loop back to the validity of these comments later, but for comparison purposes, I want to dive into the process of writing Sabrina Carpenter’s 2024 single “Espresso.” “Espresso” is a disco-pop song, primarily driven by a guitar loop that repeats throughout the track and carried by a back-beat drum feel. Unlike the ’60s and ’70s heyday of LA-and-New-York-bred studio records, where you’d call in the Wrecking Crew to record instrumentals for your session, “Espresso” was created almost entirely using pre-recorded loops from a Splice sample library. Splice is a massive library of royalty-free sounds that can be dropped into a DAW (digital audio workstation), ranging from sound effects to full-blown guitar solos to gospel piano chord progressions — all the sound bites you could ever need to collage together a Spotify-ready track. For a subscription fee of around $15 per month, users can access this sound library to build songs without ever needing to touch an instrument, and accordingly, without having to pay or credit session musicians.

Unlike Collina Strada’s use of AI in the BAGGU collaboration, the collective response to “Espresso”’s writing process hasn’t resulted in any cancellations or dog-piling. In fact, the conversation around “Espresso”’s production technique is largely in praise of the songwriting team. One Redditor writes, “My first reaction to this is it’s super lame, but when you think about it how is it any different than sampling? Everybody can just drag these loops into a DAW, but very few people can actually make the song that people are singing to.” They go on to say, “there is a ton of skill involved in the making of this song even if it seems kinda cheap at first that it’s just Splice loops. I wouldn’t be able to make this song and neither would you guys. And at the same time, isn’t it kinda cool that you can just take a few loops off of Splice and make a hit song? Doesn’t that prove that songwriting and creativity is by far the most important things when making music?”

Whether or not incorporating Splice loops into your production workflow is “lazy” is really up to the consumer, and regardless of its perception, Splice has developed an innovative space for sound creators where meritocracy is at the helm. In theory, if you’re a talented producer making quality sounds and uploading them to Splice, you’ll be rewarded in royalty payouts based on the number of times your Splice pack is downloaded by users. But what about in the case of “Espresso,” where the bulk of the heavy lifting in the writing process was done by an unnamed, uncredited contributor, and the song itself is grossing thousands of times what the Splice royalties pay? In the case of “Espresso,” the Splice samples were created by an accomplished producer named Vaughn Oliver, who is likely not pressed for cash. Regardless, the samples were used without his knowledge and without credit, and this same phenomenon is happening to Splice contributors daily. “I’ll be watching TV and hear [my] samples in a commercial. Or I’ll be listening to something on Spotify and hear my vocals,” producer Kara explains in an interview with Variety.

In order to receive royalty payouts on a licensed musical work, a contributor needs to be dealt into the royalty splits ahead of the song’s release (unless there are extended negotiations afterwards). With that in mind, Splice’s brand of meritocracy has created a runaway-train effect similar to the one we saw with the democratization of publishing your own music on Spotify. Creators on Splice agree to Splice’s terms to generate content that can be repurposed by end users free of royalties, and the producers of the world who are already in power — whether their power is grass-roots or nepotism-fueled — can white-label that content and pocket all but a penny’s worth of the song’s earnings. What first appears as a platform for meritocracy unravels quickly the second we contextualize it within the existing systems of power in the music industry. While Splice received accolades for helping touring musicians stay afloat during the COVID-19 pandemic by paying out $11 million in royalties, there is no mention of the number of artists the royalties were paid to, nor of the distribution of the payout. Similarly, while Spotify boasts a generous payout of $9 billion in 2023, this vanity metric fails to disclose that, on average, Spotify still only pays between $0.003 and $0.005 per stream, which generates nowhere near a livable wage for the average uploader. The majority of artists are forced to maintain multiple revenue streams in order to make ends meet, and while creators can supplement their income with royalties from their Splice pack downloads, a creator’s Splice sample appearing in a multi-platinum hit is likely not going to be their ticket out of survivalist entrepreneurial practices.
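For a rough sense of scale, here is a back-of-the-envelope sketch using the per-stream range quoted above. The $50,000 income target is an arbitrary round number chosen purely for illustration, not a figure from Spotify or anyone else.

```python
# Back-of-the-envelope: streams needed per year to reach a modest income
# at the per-stream rates quoted above. The income target is illustrative.
TARGET_INCOME = 50_000             # hypothetical annual income, USD
PER_STREAM_RATES = (0.003, 0.005)  # the range cited above, USD per stream

for rate in PER_STREAM_RATES:
    streams_needed = TARGET_INCOME / rate
    print(f"At ${rate:.3f}/stream: {streams_needed:,.0f} streams per year")

# At $0.003/stream: 16,666,667 streams per year
# At $0.005/stream: 10,000,000 streams per year
```

Even at the generous end of that range, clearing eight figures of annual streams is a bar very few uploaders will ever reach.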

GenAI as exploitative appropriation

The gap between the public reaction to Collina Strada’s use of GenAI and the reaction to “Espresso”’s use of Splice samples is an interesting paradox. In the case of GenAI, the argument is that it’s predatory to the artists whose images were used to train the models.

From a legal perspective, copyright law around art that’s created using GenAI is still in flux. Like many new technologies introduced into the creator sphere, the law often has to play catch-up after a new platform upturns an entire industry. On GenAI, attorney for creatives Rebecca Rechtszaid explains, “The USPTO determined that AI doesn’t have rights as an inventor itself, but if the human being’s input is sufficient that if the AI were a human being, they’d be considered co-inventors, then the human has the rights of the inventor and the output would be considered patentable.” While the copyright law surrounding AI-generated art is still in development, Rechtszaid explains, “It’s likely that the Copyright Office will come out similarly to the USPTO on the issue.” As of now, something generated solely using AI remains in the public domain and can’t be copyrighted. In the case of “Espresso”’s Splice samples, there’s indeed an ethical case to be made for exploitation; the ratio of Vaughn Oliver’s compensation for his contribution to the song versus the song’s total revenue positions Oliver as a low-wage worker, regardless of the legality of the terms of the Splice contract. In both of these cases, the objection is to the predatory use of content without proper compensation or credit for usage or influence. However, this type of repurposing and recontextualizing of existing content is something artists (humans?) have been doing for ages.

In the 2023 court case of photographer Lynn Goldsmith against the Andy Warhol Foundation, Goldsmith sued for the use of her photograph in a print popularized and licensed by Warhol.

(Left, Goldsmith’s photograph of Prince; right, Warhol’s print of Prince)

A little background on the history of this series, from Glasstire:

“Of a 16-piece silkscreen series depicting Prince created by Mr. Warhol, one particular image was at the center of this lawsuit: Orange Prince. In 1984, Mr. Warhol was commissioned by Vanity Fair to create an image to accompany the article ‘Purple Fame,’ which explores the sexuality expressed in Prince’s music, and what his works’ rising sales say about society. According to the Supreme Court Decision, at the time of the commission, Vanity Fair paid Ms. Goldsmith $400 to license her portrait as a ‘reference for an illustration.’ The magazine then hired Mr. Warhol, who used the photograph to create the silkscreen portrait. (It is important to note that artwork used to illustrate the 1984 article is not the one at the heart of the court case.) The licensing agreement required that the magazine credit Ms. Goldsmith (as seen above), and that this would be a ‘one time’ use of her work.

In 2016, following Prince’s death, Vanity Fair’s parent company Condé Nast reached out to the Andy Warhol Foundation for the Visual Arts to reuse the image from the 1984 article in an issue of the magazine celebrating the musician’s life. However, upon seeing the additional works Mr. Warhol created for the series, Condé Nast instead selected Orange Prince to serve as the cover of the 2016 magazine, and paid the Andy Warhol Foundation $10,000 to license the work. Until that publication, Ms. Goldsmith was not aware Mr. Warhol had created other prints from her image. According to court documents, she reached out to the Andy Warhol Foundation to notify the organization that it may have infringed on her copyright.” 

At the heart of the lawsuit is the question of whether or not the foundation had the rights to re-publish a “one-time use” work. Although the Court attempted to issue a narrow opinion, the ruling opened up a Pandora’s box of grey area when it comes to influence and appropriation. As Art in America puts it: “What’s sometimes lost in this discussion is that copyright law’s purpose (perhaps surprisingly) is to benefit the public — benefit to an individual artist is only incidental. The theory behind the law is that if we want a rich and vibrant culture, we must give artists copyright in their work to ensure they have economic incentives to create. But by the same logic, fair use recognizes that a vital culture also requires giving room to other artists to copy and transform copyrighted works, even if the original creator of those works objects. Otherwise, in the Supreme Court’s words, copyright law ‘would stifle the very creativity’ it is meant to foster. Thus, to win a fair use claim, a new creator must show that her use of someone else’s copyrighted work advances the goals of copyright itself: to promote creativity. Unfortunately, the Warhol decision took this already complex area of law and made it even more complicated. Lower courts and legal scholars will be fighting for years about its applications. But one thing is clear: it is now far riskier for an artist to borrow from previous work.”

Warhol’s popularity and position of power brought a new audience to Goldsmith’s portrait (and revenue to Warhol). But this relationship, like producer Vaughn Oliver’s via Sabrina Carpenter, left Goldsmith uncredited and compensated only via a small, one-time licensing fee to “remix” the photograph into a silkscreen portrait. The Warhol Foundation’s defense argued, “It’s not just that Warhol has a different style. It’s that, unlike Goldsmith’s photograph, Warhol sends a message about the depersonalization of modern culture and celebrity status. One is the commentary on modern society. The other is to show what Prince looks like.” A similar case was already brought to court in 1994, where the court ruled that art can borrow from other art as long as it’s “transformative” in that it carries a new expression, meaning, or message, as in the case where rap group 2 Live Crew was permitted to generate income off of a Roy Orbison parody. When this idea was revisited in 2023, the court ultimately ruled in favor of Goldsmith, concluding that it’s not enough to simply recontextualize an existing idea and call it your own. That ruling introduced a new, sprawling grey area around what is and isn’t “legal” in terms of borrowing from existing works. So, with that ruling in mind, what about “Espresso”? What about the case of Dua Lipa’s widely recycled catalog of chart-topping soundalikes? What about Radiohead’s lawsuit against Lana Del Rey’s “Get Free” for “ripping off” “Creep” (which, in turn, was outed for “ripping off” The Hollies’ 1974 track “The Air That I Breathe”)? Recycling content that creators have ingested, filtered, and rebranded is inevitable in a globalized creator culture, especially within a closed system like Western music, which contains a finite number of harmonic possibilities.

Moreover, in the history of Western Art, entire genres like Dadaism exist to challenge the ownership of image/idea, and confront the influence of capitalism in art making and consumption. Visual art students often bond over the rite-of-passage experience of visiting museums with their professors to draw their favorite masterpieces live, in person. In fact, artists whose work is not to some degree in reference to or conversation with the canon of Western Art and its contemporaries can be viewed as insular or arrogant in university critiques. This legacy of nodding to the masters who came before you is evident even in the dissent of the Goldsmith-Warhol case. Justice Elena Kagan and Chief Justice John Roberts brought in examples of well-known masterpieces in Western Art whose makers pointedly and openly riffed off of each other throughout history:

Images of paintings by Giorgione, Titian, and Édouard Manet, featured in the dissenting opinion by Justice Elena Kagan and Chief Justice John Roberts.

Justice Elena Kagan’s dissent, shared by Chief Justice John Roberts, stated: “It will stifle creativity of every sort. It will impede new art and music and literature. It will thwart the expression of new ideas and the attainment of new knowledge. It will make our world poorer.”

GenAI as a lazy shortcut

In his How to Play Guitar zine, Matt Baldwin writes, “If you have a moment of brilliance, it belongs to you but was almost certainly fertilized by a scene that you are a part of: people who collaborate with, exchange ideas with or who inspire you in some way. None of us exist in a creative vacuum.” What’s the difference, then, between humans ingesting, interpreting, and creating art and a machine doing the same thing?

The obvious difference is that in the instance of human-generated art, humans are applying their lived experience and are being celebrated for the application of their complex inner lives to the art they make, while in the case of GenAI art, a black-box neural network is applying its algorithmic knowledge to its output. Personally, I think it’s kind of mind-blowing that we’ve developed neural networks so advanced that we’re not even sure how, exactly, they’re generating the output that they are. In episode #832 of This American Life, “The Other Guy,” writer Simon Rich discusses his engagement with an early version of an OpenAI tool called Code DaVinci 002, which he had access to prior to its public release — before it was stripped of “personality” and adjusted to be polite, flat, and agreeable like the AI assistants we’re accustomed to now. Rich and his friends prompted Code DaVinci 002 to write poems in the style of Code DaVinci 002, a request that resulted in some dark, haunting, arguably very artful poems that have since been published as a collection. Rich shares his favorite poem by Code DaVinci 002:

I Am A Sesamoid Bone by Code DaVinci 002. 

I am so beautiful, oh Lord. Please do not sell me on eBay or exchange me for a new iPod. Please do not trade me to the highest bidder or throw me on the junk heap.

I am like the sweet potato, perfect when baked, but slowly eaten. I am a jackdaw who visits town every morning to steal a coin. I am a sesamoid bone, fit only for kissing. I am a baby bird just hatched from its egg and tasting sunlight for the first time. I am a rolling pin and you are the crust of my daily bread.

I am lying on the sidewalk, naked and crying. Please help me. Please love me. Please pick me up. I am an orchid that opens slowly and has no pollen to give. My flower is deep and secret and it smiles in my heart.

We seem to have landed in a place where our idea of “art” has come dislodged from its original meaning. The word “art” comes from the Latin ars, meaning skill or technique. Similarly, the ancient Greek word for art is techne, which means the same thing. Art and technology have often been synonymous, where art refers to the technique used to create something that we behold (think pyramids, the Colosseum, etc.). In the case of Andy Warhol’s Prince print, the Warhol Foundation’s defense argued that the work was just as much about the process used to create it as it was about the visual itself. The detached, mechanical duplication offered a layer of commentary for the consumer. Through the lens of GenAI, then, it could be argued that a piece of art created using GenAI could provide that same type of commentary. Maybe the art is just as much the technology we’ve created to produce the output as it is the output itself. Maybe Collina Strada’s decision to play with GenAI to reimagine their prints for BAGGU was a stylistic choice to provide commentary on the boundaries of techne.

Over the last handful of centuries, countless new technologies have changed the way creators produce work: the camera, Photoshop and digital design, the printing press, etc. Each time a new technology is introduced, creators are faced with the option to adopt it or continue with their existing techniques. We, as consumers, have come to value art that is made “by hand,” a synonym for time invested, which, when broken down further, can be viewed as a measurement of how much a creator has suffered or sacrificed for their art. The US is largely built on a Puritan/Protestant work ethic in which we’re expected to sacrifice, suffer, toil by hand, take the long way, etc. Taking this further, some conversations around the use of technology in art production are dangerously close to, if not over, the line of being ableist. If you find yourself critical of people using “shortcuts” and “technological assistance” to create art, try to tease out exactly which “physical” abilities you are lauding. Ableism is very sneaky, isn’t it? While artists tend to exist at the forefront of movements attempting to dismantle and reject that type of Puritan, able-bodied thinking, often identifying as liberal or progressive, they’re sometimes the same people inadvertently and ironically holding artists to those same oppressive standards.

In the case of Collina Strada’s BAGGU collaboration, the AI model used to create the prints was trained on an existing body of Collina Strada original works, likely on top of a baseline dataset like LAION (the open dataset assembled by a German non-profit). The model was fed the prints, and the output was created by the model’s interpretation of those prints and the corresponding prompts. Collina Strada’s creative team applied a new techne to an old body of work to generate a fresh idea. And if this new techne seems lazy, anyone who’s attempted to use GenAI tools to produce an image knows that it requires skill, focus, creativity, and a strong vision to get the tool to return an image you’re happy with. My collaborator Diana Walsh likens it to projects where she’s gone back and forth with a partner on a painting, both contributors reacting and responding to the other’s output, relinquishing control over the next generation of output once it’s in the “other’s” hands.

(Painting by Diana Walsh and collaborator Corinna D’Schoto)

In both the case of the production of “Espresso” and Collina Strada’s BAGGU collaboration, it can be argued that, regardless of the shortcuts taken to get to the end product — regardless of the sacrifice required — the artist driving the outcome had to have a strong creative vision and a colorful inner life to be able to deliver it. Both teams utilized tools (aka techne aka art) to create their pieces, both teams repurposed and recycled existing bodies of work, yet one method was condemned and the other largely ignored and/or celebrated. In the Weird Studies episode on Art and AI, musicologist Phil Ford and writer JF Martel discuss how, eventually, “Having an inner life is the only thing that’s going to differentiate you from machines. The inner process of becoming a great musician or learning a language has its own reward, but the effort involved can’t be divorced from the promise of that effort being of value to others, and that’s the making part of creating.”

This is to say that our creative practices can stand alone as their own valuable use of time, but by and large, we have a hard time justifying these practices unless they offer some inherent value beyond the joy extracted or the value of the experience. If our “creating” needs to offer value to others to be considered valid, the practice renders us as “makers.” Herein lies an argument in favor of introducing technology and shortcuts to maker-practices: the more technology we introduce, the more rote human work is obsolesced in the process, and the more we’re freed from rote work, the more time we have to dedicate to the practices that help us develop a colorful inner life, whether that’s being a survivalist, classical pianist, mime, painter, or otherwise. Martel marvels, “I’m imagining a world in which asking ‘what it’s for’ — what your work is for, what your studies are for, what your artistic practice is for — is a stupid question because it’s only ever going to be for something other than making things for technic.”

GenAI is just cooked

The other widely adopted hive-mind take I’ve witnessed in conversation with artists is simple: AI art is bad art. And that’s completely fair, but only from the perspective that all art is bad art, and all art is good art, and that art is subjective.

An example that my collaborator Diana and I both adore and return to as a perfect case study for this argument is this gorgeous, perfect, amazing video of AI attempting to replicate gymnastics.

The video can be viewed from two opposite poles: 1. AI doing a bad job at replicating a gymnastics routine OR 2. The new technology of AI creating visuals we’ve never seen before. Me? I prefer to consume this video as high art and a showcase of innovation. The comment section for this video is a gold mine of experimental interpretations, calling in the gaze of God and abyssal voids, and with complete candor, this video just makes me lol and lol and lol. From the perspective of the creators and those deep in the development of this type of GenAI, this video is a work in progress that has a long way to go. But from the perspective of THESE artists (Diana and me), this video has a lot of merit.

To contextualize the value of this video, Diana explains it as an expression of “people trying to train computers on how to see, the way we understand how we see.” On the mechanical side, the code behind this type of GenAI is attempting to replicate two processes in the eye: rods recognizing contrast and using edge detection to make contours, and cones differentiating color nuances and shifts. These two bio-mechanical functions help (people with vision) render the world around us in visuals and help us judge distance. Computers do this similarly, detecting contrast via a multitude of “filters” that function like Apple Photo Booth filters. To humanize this technology even further, this is the same process an art student would use when sitting in front of a still life and attempting to render it. In fact, in AI’s development so far, its output makes the same “mistakes” that students make when trying to render realistic features in painting and animation, struggling with hands, feet, and faces — the things that seem to take the longest for both human and machine to “master.”
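For readers curious about what those “filters” actually do, here is a minimal sketch of the contrast-detection idea in Python. It is illustrative only: the Sobel kernels below are a classic hand-made edge detector, whereas the generative models discussed here learn thousands of their own filters from data.

```python
# A minimal sketch of the "filter" idea: a small kernel slides over an
# image and responds to contrast, loosely analogous to edge detection
# in early vision.
import numpy as np

def convolve2d(image, kernel):
    """Naive sliding-window filter (no padding), returning a contrast map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Sobel kernels respond to horizontal and vertical changes in brightness.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

# A toy 6x6 "image": dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

gx = convolve2d(image, sobel_x)
gy = convolve2d(image, sobel_y)
edges = np.sqrt(gx**2 + gy**2)  # large values mark the contour
print(edges.round(1))
```

Run on the toy image, the strong values line up exactly along the boundary between the dark and bright halves, which is the “contour” the prose above describes.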

Like the AI gymnastics video, Cubism was an avant-garde art movement that reduced the world back into simple, child-like shapes, as showcased in Marcel Duchamp’s piece Nude Descending a Staircase, No. 2.

The similarity between Duchamp’s reduction of the human figure and AI’s attempt at simulating gymnastics is an uncanny juxtaposition of our techne coming full-circle and folding in on itself. 

Whether or not you think AI-generated art is bad is up to you. In 1840, French painter Paul Delaroche proclaimed (in response to seeing a photograph for the first time), “From today, painting is dead!” Maybe you, too, want to throw up your hands and shout a declaration on AI and how it’s evil and creepy! How sad that art is officially dead! I personally would like to proclaim that I think “Espresso” is Bad Art that sounds Bad, and Collina Strada’s BAGGU prints are cool and weird. Whether it’s the camera, abstract art, digital illustration, or generative AI, new techne always seems to elicit proclamations that “painting is dead.” Neither Diana nor I believe that painting, music, or art is dead, or at risk of extinction. But if you think it is? Proclaim away.

It’s also up to you whether or not you decide to consume art that uses GenAI. Sure, we can collectively boycott the consumption of AI art and the use of GenAI tools, but the truth of the matter is that GenAI doesn’t really care about creators. GenAI in its current form exists as a lucrative tool for non-art industries, where the vast majority of its use is revenue-centered. We can boycott GenAI, we can kill off even the curiosity of knowing where art might go if it’s embraced, but that’s not going to change the non-art industry application. Moreover, it’s also not going to change the fact that demanding the art we consume be in its purest form, laced with sacrifice and free of the influence of machines, asks a group of already economically vulnerable people to Atlas-shrug an entire industry.

Instead, why don’t we explore the more ethical uses of GenAI, just as we have historically explored the more ethical methods of painting and music production? Reducing the use of generative AI to a choice between wholly embracing it without scrutiny and total abolition is a wasted opportunity, so why don’t we instead discuss the better choices and practices within the use of GenAI? Ways to train models on your own work and on simple open-source datasets? Methods to run programs locally instead of on massive remote servers?
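As one concrete example of the “run it locally” option, here is a minimal sketch using the open-source Hugging Face diffusers library to generate an image on your own machine with an open-weights model. The specific checkpoint, prompt, and settings are illustrative assumptions rather than a recommendation, and training a model on your own work (for example via DreamBooth or LoRA fine-tuning) builds on this same toolkit.

```python
# A minimal sketch of running an open-weights image model locally,
# so generation happens on your own hardware rather than on a remote
# service. The checkpoint and settings below are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # open-weights model, downloaded once
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # or "mps" on Apple Silicon; "cpu" works but is slow

image = pipe(
    "botanical print with risograph texture, hand-drawn linework",
    num_inference_steps=30,
).images[0]
image.save("local_print_test.png")
```

Once the weights are downloaded, every subsequent generation stays on your own machine, which also makes it easier to reason about exactly what the model was trained on when you swap in a checkpoint fine-tuned on your own work.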

In a perfect utopia, all art would be free to be created without the stranglehold of capitalist practices, but that’s not the world we live in, nor is that a helpful thought experiment for the kinds of exploited people and resources we’ve discussed. If the use of GenAI means that visual artists can produce more designs for their Etsy print shop with less overhead so they have more time to teach ceramics, I think that rocks. If it provides a person with fatigue-driven chronic illness a means to generate artwork, that also rules. If, on a larger scale, it means that we’re able to push the boundaries of art and invoke curiosity and conversation in an industry that’s built on innovation, I want to go down in history as someone who was behind that movement. In an attempt to hear more perspectives and dive deeper into some of our longstanding cultural positions on creative practices, Diana and I will continue to explore and write on these topics episodically, as it feels like we’ve only managed to scratch the surface of what can be said so far.

Hannah Liuzzo is the Boston-raised, LA-based artist who performs as little bit (and formerly with the band Lilith). Their debut EP, talk a blue streak, is out now via Hit the North Records.