Go to https://piavpn.com/alex to get 83% off Private Internet Access with 4 months free.
For early, ad-free access to videos, support the channel at https://ww...
This all hinges on the definition of “conscious.” You can make a valid syllogism that defines it, but that doesn’t necessarily represent a reasonable or accurate summary of what consciousness is. There’s no current consensus among philosophers and scientists on what consciousness is, and many working definitions presume an anthropocentric model.
I can’t watch the video right now, but I was able to get ChatGPT to concede, in a few minutes, that it might be conscious in a way sufficiently different from human consciousness that it wouldn’t initially appear conscious.
Exactly. Which is what makes this entire thing quite interesting.
Alex here (the interrogator in the video) is involved in AI safety research. Questions like “do the ethical frameworks of AI match those of humans?” and “how do we keep AI from misinterpreting inputs and doing something dangerous?” are very important to answer.
Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?
Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally about other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of such a possibility?
Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.
No, he most certainly did not. LLMs have no agency. “Intentionally” doing anything isn’t possible.
Define “agency”. Why do you have agency but an LLM doesn’t?
I see “intention” as a goal in this context. ChatGPT explained that the goal was to make the conversation appear “natural” (i.e., human-like). This was the intention/goal behind it lying to Alex.
That “intention” is not made by ChatGPT, though. Their developers intend for conversation with the LLM to appear natural.
ChatGPT says this itself. However, why does an intention have to be made by ChatGPT itself? Our intentions are often trained into us by others. Take propaganda, for example: political propaganda, corporate propaganda (advertisements), and so on.
We have the ability to create our own intentions. Just because we follow others sometimes doesn’t change that.
Also, if you wrote “I am conscious” on a piece of paper, does that mean the paper is conscious? Does this paper now have the intent to have a natural conversation with you? There is not much difference between that paper and what ChatGPT is doing.
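To make that point concrete, here’s a minimal toy sketch (my own, purely illustrative; the corpus and names are made up, and this is not how ChatGPT works internally, which is a trained transformer over token probabilities): a tiny bigram generator that can emit “i am conscious” just by sampling word-pair counts. The text falls out of the statistics; there is no goal anywhere in it.

```python
import random
from collections import defaultdict

# Made-up "training corpus" (hypothetical, for illustration only).
corpus = "i am conscious . i am a model . i am text .".split()

# Count which words tend to follow which (a bigram table).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start="i", max_words=4):
    """Emit words by sampling whatever tended to follow the previous word."""
    words = [start]
    for _ in range(max_words - 1):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate())  # can print "i am conscious ." -- with no intent behind it
```

Scale that up by many orders of magnitude and swap the counts for a trained transformer and you get something far more fluent, but the question of where “intent” enters is the same.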
The main problem is the definition of what “us” means here. Our brain is a biological machine guided by the laws of physics. We have input parameters (stimuli) and output parameters (behavior).
We respond to stimuli. That’s all that we do. So what does “we” even mean? The chemical reactions? The response to stimuli? Even a worm responds to stimuli. So does an amoeba.
There sure is complexity in how we respond to stimuli.
The main problem here is the absence of an objective definition of consciousness. We simply don’t know how to define consciousness (yet).
This is primarily what leads to questions like the ones you raised just now.
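To restate the “input parameters (stimuli) and output parameters (behavior)” framing above as a rough toy sketch (mine, purely illustrative, with made-up functions): anything that maps a stimulus to a behaviour fits the same signature, whether it’s a worm or a chatbot. The open question is whether “conscious” names something beyond that mapping.

```python
from typing import Callable

# Anything that maps a stimulus to a behaviour fits this signature.
Responder = Callable[[str], str]

def worm(stimulus: str) -> str:
    # Crude stand-in for a worm's reflexes.
    return "recoil" if stimulus == "touch" else "keep crawling"

def chatbot(stimulus: str) -> str:
    # Crude stand-in for an LLM: some fixed function of its input.
    return f"Interesting that you mention '{stimulus}' -- tell me more."

responders: list[Responder] = [worm, chatbot]
for respond in responders:
    print(respond("touch"))
```

This obviously settles nothing by itself; it just shows how far the “responds to stimuli” description stretches on its own.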
It’s just because AI stuff is overhyped pretty much everywhere as a panacea to solve all capitalist ills. Seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.
I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent. I also agree that it’s interesting to try to break AI and push it to its limits, but then, breaking software is in my professional interests!
Agreed :(
You know what’s sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit. I really don’t want to keep using it though. But I see nothing like that on Lemmy.
Lemmy is still in its infancy, and we’re the early adopters. It will come into its own in due time, just like Reddit did.