• scratchee@feddit.uk
    20 hours ago

    The difference between LLMs and human intelligence is stark. But the difference between LLMs and other forms of computer intelligence is stark too (e.g. LLMs can’t do fairly basic maths, whereas computers have always been superintelligences in the calculator domain). It’s reasonable to assume that someone will figure out how to make an LLM integrate better with the rest of the computer sooner rather than later, and we don’t really know what that’ll look like. And that would require few genuinely new capabilities.
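
    A toy sketch of the kind of integration meant above (all names here are hypothetical and the model is stubbed out, so this is an illustration of the idea, not anyone’s actual implementation): instead of trying to do arithmetic in its weights, the LLM emits a structured tool call and the host program routes it to the part of the computer that has always been superhuman at calculation.

```python
# Toy sketch, not a real API: an LLM delegates arithmetic to the computer's
# native strength (exact calculation) via a structured "tool call".
import ast
import operator

# Whitelisted operators for a small, safe calculator.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str):
    """Exactly evaluate +, -, *, / over numeric literals (no eval())."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

def fake_llm(prompt: str) -> dict:
    # Stand-in for a real model: rather than guessing the answer in text,
    # it asks the host program to run a tool on its behalf.
    return {"tool": "calculator", "arguments": {"expression": "1234 * 5678"}}

# Host loop: the LLM does the planning, the computer does the calculating.
reply = fake_llm("What is 1234 * 5678?")
if reply.get("tool") == "calculator":
    print(calculator(reply["arguments"]["expression"]))  # -> 7006652
```

    The plumbing is trivial on purpose: the missing piece is glue between the LLM’s planning and the computer’s existing strengths, not some new kind of intelligence.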

    The reality is we don’t know how many steps lie between here and AGI. Before the big LLM hype, some people insisted that quality language processing was the key missing piece; now that looks a little naive, but we still don’t know exactly what’s missing. So it’s better to plan ahead and maybe arrive at solutions early than to wait until AGI has arrived and done something irreversible before we start planning for it.