• mindbleach@sh.itjust.works
    10 months ago

    Less than you might think, considering the small range of perspectives involved. Rendering to a stack of layers or a grid of offsets technically counts. It is more information than simply transmitting a flat frame… but update rate isn’t do-or-die, if the headset itself handles perspective.
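
    A minimal sketch of what "the headset handles perspective" could mean, assuming the frame arrives as a small stack of RGBA planes at known depths (my own illustration, not anything from the linked work; all names are made up):

    ```python
    # Hypothetical headset-side step: re-composite a stack of RGBA layers
    # with a parallax shift whenever the head moves, so the source only
    # needs to send fresh layers at a much lower rate.
    import numpy as np

    def reproject_layers(layers, depths, head_offset_m, focal_px):
        """layers: HxWx4 float RGBA planes, ordered back-to-front.
        depths: matching layer distances in metres.
        head_offset_m: (dx, dy) head translation since the layers were made.
        focal_px: focal length in pixels, to turn parallax into a pixel shift."""
        out = np.zeros(layers[0].shape[:2] + (3,))
        for rgba, depth in zip(layers, depths):                # back-to-front
            px = focal_px * np.asarray(head_offset_m) / depth  # nearer layers move more
            # np.roll wraps at the edges -- a real compositor would pad instead
            shifted = np.roll(rgba, (int(round(px[1])), int(round(px[0]))), axis=(0, 1))
            alpha = shifted[..., 3:4]
            out = shifted[..., :3] * alpha + out * (1 - alpha)  # "over" compositing
        return out
    ```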

    Optimizing for bandwidth would probably look more like depth-peeled layers with very approximate depth values. Maybe rendering objects independently to lumpy reliefs. The illusion only has to work for a fraction of a second, from about where you’re standing.
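
    To make "very approximate depth values" concrete, here is a hedged sketch (again my own illustration, not the paper's method) that quantises a rendered frame's depth buffer into a handful of slabs and emits one RGBA layer per slab, instead of keeping per-pixel depth:

    ```python
    # Hypothetical source-side step: slice one rendered frame into a few
    # coarse depth slabs, each with a single representative depth.
    import numpy as np

    def slice_into_layers(rgb, depth, num_layers=4):
        """rgb: HxWx3, depth: HxW in metres.
        Returns (layers, slab_depths), back-to-front, each layer HxWx4 RGBA."""
        edges = np.quantile(depth, np.linspace(0, 1, num_layers + 1))
        layers, slab_depths = [], []
        for far, near in zip(edges[:0:-1], edges[-2::-1]):     # farthest slab first
            mask = ((depth >= near) & (depth <= far)).astype(float)[..., None]
            layers.append(np.concatenate([rgb, mask], axis=-1))
            slab_depths.append(float((near + far) / 2))        # one depth per slab
        return layers, slab_depths
    ```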

    • Natanael@slrpnk.net
      10 months ago

      How does it handle stuff like fog effects, by the way? Can it be made to work (efficiently) with reflections?

      • mindbleach@sh.itjust.works
        10 months ago

        The “deep view” link has video - and interactive online demos.

        Alpha-blending is easy because, again, it is a set of sorted layers. The only real geometry is some crinkly concentric spheres. I wouldn’t necessarily promise Silent Hill 2 levels of subtlety from one static moment, but even uniform fog would get sliced up along with everything else.
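
        As a hedged illustration of why uniform fog falls out of the same sorted-layer compositing (purely my sketch, reusing the layer format from above): interleave a constant-alpha fog plane in front of each slab and let the ordinary back-to-front blend do the rest.

        ```python
        # Hypothetical fog pass over a back-to-front layer stack
        # (depths descending, farthest slab first).
        import numpy as np

        def add_fog_layers(layers, slab_depths, fog_density=0.05, fog_rgb=(0.6, 0.6, 0.65)):
            """Insert a flat fog plane after each slab, covering the gap to the
            next (nearer) slab, or to the eye for the nearest one."""
            h, w, _ = layers[0].shape
            fogged = []
            for i, (rgba, depth) in enumerate(zip(layers, slab_depths)):
                fogged.append(rgba)
                nearer = slab_depths[i + 1] if i + 1 < len(slab_depths) else 0.0
                alpha = 1.0 - np.exp(-fog_density * (depth - nearer))  # thicker gap, denser fog
                fog = np.empty((h, w, 4))
                fog[..., :3] = fog_rgb
                fog[..., 3] = alpha
                fogged.append(fog)
            return fogged   # composite with the usual back-to-front "over" blend
        ```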

        Reflections are handled as cutouts with stuff behind them. That part is a natural consequence of their focus on lightfield photography, but it could be faked somewhat directly by rendering. Or you could transmit environment maps and blend between those. Just remember the idea is to be orders of magnitude more efficient than rendering everything normally.
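
        The environment-map idea could look something like this (an assumption-laden sketch, not how their pipeline actually works): render a couple of lat-long maps from nearby anchor points and blend between them by head position, instead of re-rendering reflections every frame.

        ```python
        # Hypothetical reflection lookup: blend two prerendered lat-long
        # environment maps by where the head sits between their anchors.
        import numpy as np

        def blended_env_lookup(env_a, env_b, anchor_a, anchor_b, head_pos, reflect_dir):
            """env_a/env_b: HxWx3 lat-long maps rendered at anchor_a/anchor_b."""
            axis = anchor_b - anchor_a
            t = np.clip(np.dot(head_pos - anchor_a, axis) / np.dot(axis, axis), 0.0, 1.0)
            h, w, _ = env_a.shape
            x, y, z = reflect_dir / np.linalg.norm(reflect_dir)
            u = int((np.arctan2(x, -z) / (2 * np.pi) + 0.5) * (w - 1))  # lat-long mapping
            v = int((np.arccos(np.clip(y, -1, 1)) / np.pi) * (h - 1))
            return (1 - t) * env_a[v, u] + t * env_b[v, u]
        ```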

        Admittedly you can kinda see the gaps if you go looking.