some_guy@lemmy.sdf.org to Technology@lemmy.world · 3 months ago
Major shifts at OpenAI spark skepticism about impending AGI timelines (arstechnica.com) · 73 comments
Petter1@lemm.ee · 3 months ago
No, it just needs to categorise into important / probably true and not important / probably nonsense, as a first step. Here are Johnny Harris's words describing what I am talking about (he describes it in order to be able to talk about lies better): https://youtu.be/yWgG3Mgn2Gc?si=bPcYhRAZNaY2qIJS

MentalEdge@sopuli.xyz · 3 months ago (edited)
Right… As if critical thinking is super easy, basic stuff that humans get right every time without even trying. You actually think getting a computer to do it would be easier than making the AGI? You are VERY confused about how thinking works.

Petter1@lemm.ee · 3 months ago
You don't need AGI to categorise new info as probably true / probably wrong based on your base knowledge. This is a simple machine learning task.
No it isn’t.
OK
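For context on the disputed claim above: a "categorise text into probably true / probably nonsense" classifier is indeed easy to sketch with basic machine learning, though the sketch also shows why it falls short of critical thinking — it only learns surface word statistics from whatever labels it was trained on. A minimal illustration using a hand-rolled Naive Bayes model (all training examples here are made-up toy data, not a real fact-checking dataset):

```python
# Toy Naive Bayes text classifier: "probably_true" vs "probably_nonsense".
# It scores labels by word frequencies alone, so it inherits every bias
# and gap in its toy training data -- it does not reason about truth.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = {}          # label -> Counter of word frequencies
        self.label_counts = Counter()  # label -> number of training examples
        self.vocab = set()

    def fit(self, examples):
        for text, label in examples:
            self.label_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for w in tokenize(text):
                counts[w] += 1
                self.vocab.add(w)

    def predict(self, text):
        best_label, best_score = None, float("-inf")
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            # log prior + log likelihood of each word, add-one smoothing
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Purely illustrative toy training data.
train = [
    ("water boils at 100 degrees at sea level", "probably_true"),
    ("the earth orbits the sun", "probably_true"),
    ("the moon is made of cheese", "probably_nonsense"),
    ("vaccines contain mind control chips", "probably_nonsense"),
]
model = NaiveBayes()
model.fit(train)
print(model.predict("the sun orbits nothing"))  # → probably_true
```

Note the classifier labels "the sun orbits nothing" as probably true only because its words overlap the true-labelled examples — which is the crux of the disagreement in the thread: word-statistics classification is cheap, but judging truth is not.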