• Echo Dot@feddit.uk · 8 hours ago

    I keep being told by experts that AGI is inevitable, yet all I ever see is people constantly going on about LLMs, so I don’t know what to think. Are they lying? Is it all just a bubble that’s going to burst, or is there actually some utility there that’s being hidden by the LLM hype? If so, can’t we just use the actual AI rather than these other things?

    • Opinionhaver@feddit.uk · 8 hours ago (edited)

      There’s no such thing as “actual AI.” AI is just a broad term that encompasses all artificial intelligence systems. A chess engine, ChatGPT, and HAL 9000 are all examples of AI - despite being fundamentally different. A chess engine is a narrow AI, ChatGPT is a large language model, and HAL 9000 would qualify as AGI.
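
      To make “narrow AI” concrete, here’s a toy sketch (illustrative only - a real chess engine is this same idea at a vastly larger scale): an exhaustive game-tree search that plays Nim perfectly and can do literally nothing else.

      ```python
      # A toy "narrow AI": exhaustive game-tree search for Nim.
      # Players alternate taking 1-3 stones; taking the last stone wins.
      # It solves this one game perfectly and has no other ability -
      # which is all "narrow" means here.

      from functools import lru_cache

      @lru_cache(maxsize=None)
      def best_move(stones: int) -> tuple[int, bool]:
          """Return (move, wins) for the player about to move."""
          if stones == 0:
              return (0, False)  # no stones left: the previous player won
          for take in (1, 2, 3):
              if take <= stones:
                  _, opponent_wins = best_move(stones - take)
                  if not opponent_wins:
                      return (take, True)  # leaves opponent a losing position
          return (1, False)  # every move loses; take one stone anyway

      move, wins = best_move(10)
      print(f"From 10 stones: take {move} ({'winning' if wins else 'losing'} position)")
      ```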

      It could be argued that AGI is inevitable - assuming general intelligence isn’t substrate-dependent (meaning it doesn’t require a biological brain) and that we don’t destroy ourselves before we get there. But the truth is, nobody knows how difficult it is to create AGI, or whether we’re anywhere close. There’s a lot of hype around generative AI right now because it remotely resembles what AGI might look like - but that doesn’t guarantee it’s taking us any closer. It could be a stepping stone - or a total dead end.

      So what I hear you asking is: “Can’t we just use task-specific narrow AI instead of creating AGI?” And yes, we could - but we’re never going to stop improving these systems. And every step of progress brings us closer to AGI, whether that’s the goal or not. The only things that might stop us are hitting a fundamental wall (like substrate dependence) or wiping ourselves out.

      There’s also the economic incentive. AGI would be the ultimate wealth generator. All the incentives point toward building it. It’s a winner-takes-all scenario: if you’re the first to create a true AGI, your competition will likely never catch up - because from that point on, the AGI can improve itself. And then the improved version can further improve itself, and so on. That’s how you get to the singularity: an intelligence explosion that leads to Artificial Superintelligence (ASI) - a level of intelligence far beyond human comprehension.
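
      Here’s a toy numerical sketch of that compounding argument - the numbers are made up and this is not a forecast, just a way to see why “it improves itself” differs from ordinary growth. If each generation’s improvement factor depends on its own capability, growth is hyperbolic rather than exponential: it blows up after finitely many generations.

      ```python
      # Toy model of the intelligence-explosion argument (made-up numbers).
      # Assume a system with capability c builds a successor with
      # capability c * (1 + k*c): the smarter it is, the bigger the jump.

      c = 1.0   # capability of the first AGI, normalised to human level
      k = 0.2   # hypothetical coupling between capability and self-improvement

      for gen in range(1, 11):
          c *= 1 + k * c
          print(f"generation {gen:2d}: capability {c:>12.1f}")
      ```

      With a constant improvement factor instead (c *= 1.2), you’d get merely exponential growth; it’s the self-reference that produces the explosion in the model.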

    • WanderingThoughts@europe.pub · 7 hours ago

      Every type of AI that was ever made had people saying it was the one that would bring us general intelligence - that it was just a matter of scaling it up further. Then the hype crashed and there was an AI winter. Now LLMs have their own problems scaling up, and nothing really indicates they’re anywhere near general intelligence. There isn’t much more data to train them on, and so far not enough people are willing to pay for it. Definitely bubble territory.