When I started angel investing in the late 1990s, a tech investment carried significant technology risk, with groundbreaking innovation as the potential upside. Being an investor back then meant betting on actual tech, such as nanotech, semiconductors or biotech.

E-commerce, albeit hyped and interesting, was not considered tech. It was “Business 2.0”, plain and straightforward, hype included.

  • REDACTED@infosec.pub · 2 days ago

    Seriously. Your opinions and hate aside, LLMs, deep learning, and reasoning models are among the most advanced software technologies available to consumers.

    This post is lame

    • JayleneSlide@lemmy.world · 1 day ago

      No, no they’re not. These are just repackaged and scaled-up neural nets. Anyone remember those? The concept and good chunks of the math are over 200 years old. Hell, there was two-layer neural net software in the early 90s that ran on my 386. Specifically, Neural Network PC Tools by Russell Eberhart. The DIY implementation of OCR in that book is a great example of roll-your-own neural net. What we have today, much like most modern technology, is just lots MORE of the same. Back in the DOS days, there was even an ML application that would offer contextual suggestions for mistyped command line entries.

      Typical of Silicon Valley, they are trying to rent out old garbage and use it to replace workers and creatives.
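For context on how small the kind of two-layer net described above really is: here is a minimal sketch in plain Python (my own illustration, not Eberhart's code) that trains a two-layer sigmoid network on XOR by backpropagation, the textbook task a single-layer net cannot solve. Every name and parameter here (hidden size, learning rate, epoch count) is an arbitrary choice for the demo.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR truth table: not linearly separable, so a hidden layer is required.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

N_HIDDEN = 4
# Hidden layer: N_HIDDEN neurons, each with 2 input weights + a bias.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N_HIDDEN)]
# Output neuron: one weight per hidden unit + a bias.
w_out = [random.uniform(-1, 1) for _ in range(N_HIDDEN + 1)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    o = sigmoid(sum(w_out[i] * h[i] for i in range(N_HIDDEN)) + w_out[-1])
    return h, o

def train_epoch(lr=0.5):
    """One pass of stochastic gradient descent; returns summed squared error."""
    total = 0.0
    for x, t in data:
        h, o = forward(x)
        total += (o - t) ** 2
        # Backpropagation: chain rule through the sigmoid derivatives.
        d_o = (o - t) * o * (1 - o)
        for i in range(N_HIDDEN):
            d_h = d_o * w_out[i] * h[i] * (1 - h[i])  # use w_out before updating it
            w_out[i] -= lr * d_o * h[i]
            w_hidden[i][0] -= lr * d_h * x[0]
            w_hidden[i][1] -= lr * d_h * x[1]
            w_hidden[i][2] -= lr * d_h
        w_out[-1] -= lr * d_o
    return total

initial_loss = train_epoch()
for _ in range(5000):
    final_loss = train_epoch()
```

The whole thing is a few dozen lines of arithmetic, which is the commenter's point: today's models are this same structure, scaled up by many orders of magnitude in parameters, data, and compute.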

      • REDACTED@infosec.pub · 21 hours ago

        I genuinely can’t tell if you’re being for real. By the same logic, raytracing is ancient tech that should be abandoned.

        The stuff we had back when people thought Hitler was still alive on some island and the stuff we have now are barely comparable, even though, yes, they use a similar underlying technology.

        Since I never had the chance to try it out myself, how was your neural network’s reasoning back in the day? Imo that’s the most impressive part, not that it can write.