Upscaling and Frame Generation are disasters, meant to conceal GPU makers' unfulfilled promises about 4k gaming and to cover up the otherwise horrible performance some modern games have, even at 1080p/1440p.

Upscaling will never, no matter how much AI and overhead you throw at it, create an image that is as good as the same scene rendered at native res.

Frame Generation is a joke, and I am absolutely gobsmacked that people even take it seriously. It is nothing but extra AI frames shoved into your gameplay, worsening latency, response times, and image quality, all so you can artificially inflate a number. A 30FPS experience is, and will always be, infinitely better than the same 30FPS experience AI-frame-doubled to 60FPS.
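
For a rough sense of the latency cost, here's a sketch under a simple model where each generated frame is interpolated between two real frames (actual pipelines vary by vendor; the numbers are purely illustrative):

```python
# Sketch of why 2x frame interpolation hurts latency (simplified model:
# a generated frame sits between two real frames, so each real frame must
# be held back until the next one exists before anything can be shown).

def frame_time_ms(fps: float) -> float:
    """Milliseconds per frame at a given frame rate."""
    return 1000.0 / fps

base_fps = 30.0
real_frame_time = frame_time_ms(base_fps)   # ~33.3 ms per real frame

# Holding each real frame for one extra frame interval adds roughly one
# native frame time of latency, before counting the generation cost itself.
added_latency_ms = real_frame_time

print(f"Real frame time at {base_fps:.0f} fps: {real_frame_time:.1f} ms")
print(f"Extra latency from interpolation:  ~{added_latency_ms:.1f} ms")
print("Display shows 60 fps, but input is still sampled 30 times a second.")
```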

And because both of these technologies exist, game devs are pushing out less optimized to completely unoptimized games that run like absolute dogshit, requiring you to use upscaling and shit even at 1080p just to get reasonable frame rates on GPUs that should run them just fine if they were better optimized. And we know it's optimization, because some of these games do get that optimization pass long after launch, and wouldn't you know… 9fps suddenly became 60fps.

  • ShadowRam@fedia.io · 5 days ago

    You are just regurgitating nvidia’s marketing.

    No, this is not only the general consensus, it's measurably better when comparing SNR.

    You can personally hate it for any reason you want.

    But it doesn’t change the fact that AI up-scaling produces a more accurate result than native rendering.

    • sp3ctr4l@lemmy.dbzer0.com · 4 days ago

      No upscaling algo produces a ‘more accurate’ image than native rendering, that’s absolute nonsense.

      It produces a (slightly to significantly) lower quality image at the same output resolution, (significantly to slightly) faster than rendering at native res, but never at ‘better quality’.

      The native image is the reference signal: compared against itself, it has no loss.

      The output of any upscaler deviates from that reference, so its SNR is always some smaller number; there is always lost quality.
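
      A minimal sketch of that measurement, assuming nearest-neighbour upscaling as a stand-in for any upscaler and a random array as a stand-in for a native render (both purely illustrative):

      ```python
      import numpy as np

      # The native render is the reference signal; measure how far an upscaled
      # image deviates from it. Nearest-neighbour upscaling stands in for any
      # upscaler, and a random 8x8 array stands in for a native render.
      rng = np.random.default_rng(0)
      native = rng.random((8, 8))              # "native" render, the reference
      low_res = native[::2, ::2]               # same scene rendered at half res
      upscaled = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)

      mse = np.mean((native - upscaled) ** 2)
      psnr_db = 10 * np.log10(1.0 / mse)       # peak value is 1.0 here

      # Native vs itself: zero error, infinite PSNR. Upscaled vs native: finite
      # PSNR, because detail from the full-res render is simply not there.
      print(f"MSE vs native reference:  {mse:.4f}")
      print(f"PSNR vs native reference: {psnr_db:.1f} dB (native itself: infinite)")
      ```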

      If you mean to say that intelligent temporal upscaling produces only slightly lower quality images a good deal faster than native rendering, enabling a higher fps… then yes, I don’t think anyone disputes that…
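
      And a back-of-the-envelope for the speed side of that trade, assuming frame time scales roughly with shaded pixel count (a simplification; real workloads are not purely pixel-bound, and the fps figure here is hypothetical):

      ```python
      # Why internal-res rendering plus upscaling raises fps: fewer shaded pixels.
      # Assumes frame time scales linearly with pixel count (a simplification).

      native_4k    = 3840 * 2160    # 8,294,400 pixels
      internal_qhd = 2560 * 1440    # 3,686,400 pixels (1440p internal render)

      work_ratio = internal_qhd / native_4k     # ~0.44 of the pixel work
      native_fps = 40.0                          # hypothetical native 4k rate
      upscaled_fps = native_fps / work_ratio     # ~90 fps, before upscaler cost

      print(f"Pixel work ratio (1440p/4k): {work_ratio:.2f}")
      print(f"~{native_fps:.0f} fps native -> ~{upscaled_fps:.0f} fps, minus upscaler overhead")
      ```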

      But the cost, and the literal wattage of the power draw, of cards capable of doing that with modern high fidelity AAA games at 4k… such GPUs are essentially as expensive on their own as an entire well cost-performance-optimized 1440p PC.

      The whole point of this tech, as it was originally marketed, was to enable high quality, high speed gaming at 4k by essentially branching the proverbial tech tree… and it hasn’t worked. Such GPUs are still massively expensive and attainable only by a tiny % of significantly wealthy people.

    • gazter@aussie.zone · 4 days ago

      I don’t understand. This isn’t really a subject I care much about, so forgive my ignorance.

      Are you saying that an AI generated frame would be closer to the actual rendered image than the natively rendered image itself? Isn’t that an oxymoron? How can a guess at what the frame will be possibly be more ‘accurate’ than what the frame actually is?

      • sp3ctr4l@lemmy.dbzer0.com · edited · 4 days ago

        They did in fact say that, and that is in fact nonsense, verifiable in many ways.

        Perhaps they misspoke, perhaps they are misinformed, but uh…

        Yeah, it is fundamentally impossible to do what he actually described.

        Intelligent temporal frame upscaling is getting better and better at producing a frame that is almost as high quality as a natively rendered frame, in less rendering time, i.e. at higher fps… but it’s never going to be ‘better’ quality than an actual native render.