Clinicallydepressedpoochie@lemmy.world to Showerthoughts@lemmy.world · 2 months ago

If AI was going to advance exponentially I'd have expected it to take off by now.
justOnePersistentKbinPlease@fedia.io · 2 months ago
And the single biggest bottleneck is that none of the current AIs "think". They. Are. Statistical. Engines.
themurphy@lemmy.ml · 2 months ago
And it's pretty great at it. AI's greatest use case is not LLMs; people treat it like that because it's the only thing we can relate to. AI is so much better at many other tasks.
moonking@lemy.lol · 2 months ago
Humans don't actually think either; we're just electricity jumping to nearby neural connections that formed based on repeated association. Add to that there's no free will, and you start to see how "think" is an immeasurable metric.
YesButActuallyMaybe@lemmy.ca · 2 months ago
Markov chains with extra steps
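[For readers unfamiliar with the quip: a first-order Markov chain generates text purely from observed word-transition statistics, with no model of meaning. A minimal sketch — the corpus and function names here are illustrative, not from any real system:]

```python
import random
from collections import defaultdict

def build_chain(text):
    """Count bigram transitions: each word maps to the list of words seen after it."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Random-walk the chain, always sampling a successor that was actually observed."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no observed continuation
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the model predicts the next word the model repeats the data"
chain = build_chain(corpus)
print(generate(chain, "the", length=6, seed=0))
```

[Every generated sentence is just a walk over frequencies in the training text — the "extra steps" in an LLM being a learned, context-wide probability model instead of raw bigram counts.]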
Caveman@lemmy.world · 1 month ago
How closely do you need to model a thought before it becomes the real thing?
justOnePersistentKbinPlease@fedia.io · 1 month ago
Need it to not exponentially degrade when AI content is fed in. Need creativity to be more than random-chance deviations from the statistically average result in a mostly stolen dataset taken from actual humans.
Xaphanos@lemmy.world · 2 months ago
You're not going to get an argument from me.
daniskarma@lemmy.dbzer0.com · 1 month ago
Maybe we are statistical engines too. When I hear people talk, they're also just repeating the most common sentences that they heard elsewhere anyway.
Same