Thousands of artists are urging the auction house Christie’s to cancel a sale of art created with artificial intelligence, claiming the technology behind the works is committing “mass theft”.

The Augmented Intelligence auction has been described by Christie’s as the first AI-dedicated sale by a major auctioneer and features 20 lots with prices ranging from $10,000 to $250,000 for works by artists including Refik Anadol and the late AI art pioneer Harold Cohen.

  • Zaleramancer@beehaw.org · 1 month ago

    The question of whether AI art is art often fixates on details that I either don’t care about or think rest on fallacious reasoning. I don’t like AI art as a concept, and I think it’s often going to be bad art (I’ll get into that later), but some of the arguments I see are rooted in a strangely essentialist idea: that AI art is worse because of an inherent lack of humanity, as though it’s missing some essential spark that makes it art. I’m a materialist; I think it’s entirely possible for a completely inhuman machine to make something deeply stirring and beautiful. The current systems are unlikely to do that reliably, but I don’t think there’s something magic about humans that gives them a monopoly on beauty, creativity or art.

    However, I think a lot of AI art is going to end up being bad. This is especially true of corporate art, and less so for individuals (especially those who already have an art background). Part of the problem is that AI art, as it’s currently constructed, will always lack the intense intentionality of human-made art. A probabilistic algorithm correlating words to shapes can’t have the kind of intention in small details that a human artist making the same piece has, because there’s no reason for the small details other than probabilistic weight or random chance. I can look at a painting someone made and ask them why they picked the colors they did. I can ask why they chose the lighting, the angle, the individual elements. I can ask why they used certain techniques and not others, which movements they were drawing inspiration from, or which emotions they were trying to communicate.

    The reasons are personal, and they build on the beauty of art as a tool for deep, emotional, intimate communication. A piece of AI art made with the current technology can’t have that - not because of some essential nature, but simply because of how it works. The lighting exists as it does because that’s the most common way to light things for that prompt. The colors are the most likely colors for the prompt. The facial expressions are the most common ones for the prompt. The prompt is the only thing that really derives from human intention, the only thing you can really ask about. Asking, “Hey, why did you make the shoes this shade of blue? Is it a comment on the modern trend toward dull, uninteresting colors in interior decoration, since they contrast so much with how the rest of the scene is set up?” will only ever get you the fact that the algorithm chose it.
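
    As a toy illustration of what I mean (purely hypothetical - this is not how any real image model is implemented), here’s a sketch where the prompt is the only human input and every small detail is picked from made-up “learned” probability weights plus a random seed, so “why blue?” has no answer beyond “that’s what the weights and the seed produced”:

    ```python
    import random

    # Hypothetical stand-in for learned weights: for each prompt token,
    # every visual detail is just a probability distribution over options.
    LEARNED_WEIGHTS = {
        "shoes": {"color": {"blue": 0.55, "brown": 0.30, "red": 0.15}},
        "portrait": {"lighting": {"soft": 0.60, "rim": 0.25, "harsh": 0.15}},
    }

    def generate(prompt: str, seed: int) -> dict:
        """Pick every small detail by weighted chance, given only the prompt."""
        rng = random.Random(seed)
        details = {}
        for token in prompt.split():
            for attribute, options in LEARNED_WEIGHTS.get(token, {}).items():
                choices, weights = zip(*options.items())
                details[f"{token} {attribute}"] = rng.choices(choices, weights)[0]
        return details

    # Same prompt, different seeds: the details change, and the only "reason"
    # for any of them is probabilistic weight plus randomness.
    print(generate("portrait shoes", seed=1))
    print(generate("portrait shoes", seed=2))
    ```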

    Sure, you can make the prompts more and more detailed to pack in more intention, but there are small, individual elements of visual art that you can’t dictate in writing, even to a human artist. That lost intentionality means a lost emotional connection. It means that instead of someone speaking to you, the only thing you can reliably read from AI art is a reflection of yourself. It’s only what you think.

    I’m not a visual artist, but I am a writer, and I have similar problems with LLMs as writing tools. When I do proper writing, I put enormous effort and focus into individual word choices. The way I phrase things transforms the meaning and impact of sentences; the same information can be conveyed in so many ways, each with a completely different focus and intended mood.

    An LLM prompt can’t convey that level of intentionality, because if it could, you would just be writing the piece directly.

    I don’t think this makes AI art (or AI writing) inherently immoral, but I do think it means it’s often going to be worse as an effective tool of deep, emotional connection.

    I think AI art/writing is bad because of capitalism, which isn’t an inherent factor. If we lived in fully-automated gay luxury space communism, I would already have spent years training an LLM as a next-generation oracle for the tabletop roleplaying games I like. They’re great for things like that - but alas, giving them money right now potentially funds the decline of art as a profession.

    • KeenFlame@feddit.nu · 1 month ago

      All right - I don’t want to dismiss how you feel, but so many people have made this claim that researchers ran experiments to test it, and it turns out that, overall, people actually rated the machine-made art as more human; the perceived difference comes from knowing who (or what) made the piece. All else being equal, emotional connections happen just as much (if not more) with generative art. Honestly, that doesn’t surprise me: it’s mimicking humans, and ratings of how human-like its output seems have guided it toward the end product, so in a sense the humanity is embedded in it. It doesn’t feel great to accept, as I’m an artist myself, but I’ll go with the science on this one.

      • Zaleramancer@beehaw.org · 1 month ago

        I’m not sure I understand your overall point here. It sounds like you’re saying that the perceived emotional connections in art are simply the result of the viewer projecting emotions onto the piece, is that correct?

  • lnxtx (xe/xem/xyr)@feddit.nl · 2 months ago

    Artists are inspired by each other.
    If I draw something inspired by, e.g., Banksy, and it’s not a direct copy, it’s legal.

    We don’t live in a vacuum.

    • peanuts4life@lemmy.blahaj.zone · 1 month ago (edited)

      Counterpoints:

      Artists also draw distinctions between inspiration and ripping off.

      The legality of an act has no bearing on its ethics or morality.

      The law does not protect machine-generated art.

      Machine-learning models almost universally rely on training data that was illegally scraped off the Internet (see Meta’s recent book-piracy incident).

      Uncritically conflating machine-generated art with actual human inspiration, while career artists generally lambast the idea, is not exactly a reasonable stance to state so matter-of-factly.

      It’s also a tacit admission that the machine is doing the inspiration, not the operator. The machine which is only made possible by the massive theft of intellectual property.

      The operator contributes no inspiration. They only provide their whims and fancy with which the machine creates art through mechanisms you almost assuredly don’t understand. The operator is no more an artist than a commissioner of a painting. Except their hired artist is a bastard intelligence made by theft.

      And here they are, selling it for thousands.

      • Pup Biru@aussie.zone · 1 month ago (edited)

        It’s also a tacit admission that the machine is doing the inspiration, not the operator. The machine which is only made possible by the massive theft of intellectual property.

        hard disagree on that one… the look of the image came from the machine, but the inspiration itself was derived from a prompt: the idea is the human’s; the expression of the idea in visual form is the computer’s. we have no problem saying a movie is art, and crediting much of that to the director, despite the fact that they were simply giving directions

        The legality of an act has no bearing on its ethics or morality.

        Except their hired artist is a bastard intelligence made by theft.

        you can’t say on one hand that legality is irrelevant and then invoke it whenever it suits you

        or argue that a human takes inputs from their environment and produces outputs in the same way. if you sat a human in an empty white room, exposed them only to copyrighted content, and told them to paint something, they’d also be basing what they paint entirely on those works - and we wouldn’t have an issue with that

        what’s the difference between a human and an artificial neural net? because i disagree that there’s something special or “other” about the human brain that makes it impossible to replicate. i’m also not suggesting that the two work in the same way, but we clearly haven’t defined what creativity is, and we certainly haven’t ruled out that a machine could express it

        in modern society we tend to agree that Duchamp changed the art world with his piece “Fountain” - simply a urinal signed “R. Mutt”… he didn’t sculpt it himself, he did barely anything to it. the idea is the art, not the piece itself. the idea was the debate that it sparked, the questions with no answer. if a urinal purchased from a hardware store can be art, then the idea expressed in a prompt can equally be art

        and to be clear, i’m not judging any of these particular works based on their merits - i haven’t seen them, and i don’t believe any of them should be worth $250k… but also, the first piece of art created by AI: perhaps its value is not in the image itself, but the idea behind using AI and its status as “first”. the creativity wasn’t the image; the creativity and artistic intent was the process

        • Pa_Kalsha@beehaw.org · 1 month ago

          in modern society we tend to agree that Duchamp changed the art world with his piece “Fountain” - simply a urinal signed “R. Mutt”… he didn’t sculpt it himself…

          He did (possibly). Sorry.

          Duchamp was a sculptor as well as a painter, and Fountain doesn’t match any of the urinals sold at the time, whether by his named source or by other plumbing suppliers. Every example in a gallery is a replica made from a photo of the original, which he claimed to have lost, and they’re all different (the placement and pattern of the drainage holes, the indented ring around the ‘foot’ of the piece).

          Same with In Advance of a Broken Arm and a bunch of his other Readymades: attempts to find an identical, commercially available object have failed.

          There’s an argument, outlined here: https://www.toutfait.com/issues/issue_3/Collections/rrs/shearer.htm, that Duchamp either made or extensively modified every object he claimed to have bought and displayed unchanged.

          Therein lies the problem for art students decades later: because his Readymades were, or were based on, everyday ephemera, few if any examples of comparable objects survive for us to compare against.

          I think he was pointing out how few of us look at the objects around us (especially those, like art critics, whose job it is to observe) - if we were paying attention, would we have noticed that his work wasn’t what he claimed? Or maybe it’s a case of not noticing the art in the world around us until we put it in the special “art room”.

          Either way, Duchamp is a fascinating artist and (IMO) a complete troll, and may not be the best example to use to defend generative AI.

          • Pup Biru@aussie.zone · 1 month ago

            i think it’s still a good example, and the point stands - it doesn’t really matter whether he sculpted them or not. either way, it’s the fact that he was a troll, the unknowns, the ideas, that makes the art; not the piece itself