• Terrasque@infosec.pub · 3 days ago (edited)

    No other model on the market can do anything like that. The closest is diffusion-based generation, where you could train a LoRA on a person’s look or a specific piece of clothing, then generate multiple times and/or use ControlNet to roughly steer the output. That’s easily hours or days of work, and it’s quite technical to set up and use.
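    For a sense of what that setup looks like in practice, here’s a minimal sketch using Hugging Face’s diffusers library. The checkpoint and LoRA filenames are placeholders, and training the LoRA itself is a separate multi-hour step:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a base checkpoint, then apply a LoRA trained on one person's look.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("person_likeness_lora.safetensors")  # placeholder file

    # Generate several candidates and pick the best one by hand.
    for i in range(4):
        image = pipe("portrait photo of the person, studio lighting").images[0]
        image.save(f"candidate_{i}.png")
    ```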

    OpenAI’s new model is a paradigm shift in both what the model can do and how you use it, and it can effortlessly produce things that were extremely difficult or impossible without complicated procedures and post-processing in Photoshop.

    Edit: Some examples. Try making any of these in any of the existing image generators.

  • FauxLiving@lemmy.world · 3 days ago

      All diffusion and language models are autoregressive. That just means that the output is fed back in as input until the task is complete.

      With diffusion models, this means the model is fed an image that is 100% noise; it removes some small percentage of the noise, then the denoised image is fed back in and another small percentage is removed. This is repeated until a defined stopping point (usually a set number of passes).
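      As a toy sketch of that feedback loop (not a real diffusion model; the blend-toward-target step below just stands in for the network’s noise prediction):

      ```python
      import torch

      target = torch.zeros(3, 64, 64)      # stand-in for the "true" image
      x = torch.randn_like(target)         # start from 100% noise

      def denoise_step(x, target, frac=0.1):
          """Remove a small percentage of the remaining noise."""
          return x + frac * (target - x)

      for _ in range(30):                  # a set number of passes
          x = denoise_step(x, target)      # output fed straight back in as input

      print(f"remaining noise: {x.abs().mean():.4f}")
      ```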

      Combining images and using one image to control the generation of another has been available for quite a while. ControlNet and IP-Adapters let you do exactly that: ‘Put this coat on this person’ or ‘Take this picture and do it in this style’. Here’s an 11-month-old YouTube video explaining how to do this using open source models and software: https://www.youtube.com/watch?v=gmwZGC8UVHE
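      The same workflow scripts up in a few lines with diffusers, for anyone who’d rather skip the video; the checkpoint names below are common public ones, chosen for illustration:

      ```python
      import torch
      from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
      from diffusers.utils import load_image

      # A ControlNet conditioned on canny edges constrains the layout of the
      # output while the text prompt controls content and style.
      controlnet = ControlNetModel.from_pretrained(
          "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
      )
      pipe = StableDiffusionControlNetPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          controlnet=controlnet,
          torch_dtype=torch.float16,
      ).to("cuda")

      control = load_image("person_edges.png")  # placeholder control image
      image = pipe(
          "person wearing a long red wool coat",
          image=control,
          num_inference_steps=30,
      ).images[0]
      image.save("coat.png")
      ```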

      It’s nice for non-technical people that OpenAI will sell you a subscription to an agent that offers these kinds of image-generation capabilities, but it’s not doing anything new in terms of image generation.

    • Terrasque@infosec.pub · 3 days ago

        I know them and have used them a bit. I even mentioned them in an earlier comment. The capabilities of OpenAI’s new model are on a different level, in my experience.

        https://www.reddit.com/r/StableDiffusion/comments/1jlj8me/4o_vs_flux/ - read the comments there. That’s a community dedicated to running local diffusion models. They’re familiar with all the tricks. They’re pretty damn impressed too.

        I can’t help but feel that people here either haven’t tried the new OpenAI image model, or have never actually used any of the existing AI image generators before.

        • ZeroOne@lemmy.world · 2 days ago

          I cannot take you seriously with all those Reddit comments.

          But then, why am I even surprised? You shill for proprietary AI.

          • Terrasque@infosec.pub · 2 days ago

            Ah yes, I forgot we live in a post-truth society where reality doesn’t matter and only your feelings are important. And since your feelings say AI is bad, proprietary is bad, and Reddit is bad, you don’t have to actually think or take reality into consideration.

            • ZeroOne@lemmy.world · 2 days ago

              Truth in this case simply means your ill-informed opinions.

              And FYI, I like AIs that are fully open source.

              • Terrasque@infosec.pub · 2 days ago (edited)

                I’m sorry, but what is ill-informed or mere opinion about it? The fact is it can do things no other image generator can, open source or not. It can also effortlessly do things that would require a lot of tinkering with ControlNet in ComfyUI, or even making custom LoRAs.

                It’s a multimodal model that handles both image and text as input and output, and does it well. All other useful image generators are diffusion-based, which don’t read a prompt the same way; they’re more about weighting patterns based on keywords than any real understanding of the prompt. That’s why they struggle with relatively simple things like “a full glass of wine” or “a horse riding an astronaut on the moon”.

                If I’m wrong about this, please prove me wrong. Nothing would make me happier than finding an open source model that can do what OpenAI’s new image model can do, really. I already run llama.cpp servers and ComfyUI locally; I have my own AI server in the basement with a P40 and a 3090. Please, please prove me wrong here.
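                If you want to test those failure prompts yourself, a quick check against OpenAI’s image API looks something like this (“gpt-image-1” is the image model name as I understand it; treat it as an assumption, and you’ll need an OPENAI_API_KEY set):

                ```python
                import base64
                from openai import OpenAI

                client = OpenAI()  # reads OPENAI_API_KEY from the environment
                result = client.images.generate(
                    model="gpt-image-1",  # assumed model name
                    prompt="a full glass of wine, filled to the brim",
                )
                # The API returns the image base64-encoded.
                with open("wine_test.png", "wb") as f:
                    f.write(base64.b64decode(result.data[0].b64_json))
                ```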

                I love open models and have been running them locally since the first Llama model, but that doesn’t mean I’ll willfully ignore what Claude, OpenAI, and Google develop and pretend it doesn’t exist. Rather, I want awareness that it does exist, and I want an open source version of it.