Clinicallydepressedpoochie@lemmy.world to Showerthoughts@lemmy.world · edited, 7 days ago · 205 comments
If AI was going to advance exponentially I'd have expected it to take off by now.
justOnePersistentKbinPlease@fedia.io · 7 days ago
And the single biggest bottleneck is that none of the current AIs "think". They. Are. Statistical. Engines.
Caveman@lemmy.world · 6 days ago
How closely do you need to model a thought before it becomes the real thing?
justOnePersistentKbinPlease@fedia.io · 6 days ago
It needs to not degrade exponentially when AI-generated content is fed back in. And creativity needs to be more than random chance deviations from the statistically average result in a mostly stolen dataset taken from actual humans.
themurphy@lemmy.ml · 7 days ago
And it's pretty great at it. AI's greatest use case is not LLMs; people treat it like that because it's the only thing we can relate to. AI is so much better at many other tasks.
YesButActuallyMaybe@lemmy.ca · 7 days ago
Markov chains with extra steps
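For readers unfamiliar with the reference: a Markov chain text generator picks each next word purely from the observed frequencies of what followed the current word in its training text. A minimal sketch (the corpus and function names here are illustrative, not from any particular library):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed following it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8):
    """Walk the chain, sampling each next word from the successors of the last."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no word ever followed this one
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The "extra steps" quip is that an LLM, very loosely, also predicts the next token from statistics over prior context, just with a learned neural model over long contexts rather than a lookup table over the previous word.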
moonking@lemy.lol · 7 days ago
Humans don't actually think either; we're just electricity jumping across nearby neural connections that formed through repeated association. Add to that that there's no free will, and you start to see how "think" is an immeasurable metric.
Xaphanos@lemmy.world · 7 days ago
You're not going to get an argument from me.
daniskarma@lemmy.dbzer0.com · 5 days ago
Maybe we are statistical engines too. When I hear people talk, they're mostly repeating the most common sentences they've heard elsewhere anyway.
Same