So, genuine question.
What do you propose should happen with the advances of AI?
I propose you eat less hype about the slop generators. AI doesn’t exist, and there is no reason to believe we’re any closer to understanding whether it’s even possible. Machine learning algorithms have their uses and are already widely used, and nobody is against that, but that’s not AI. LLMs are being pushed everywhere, and they’re rarely useful or particularly liked, and that’s not AI either. My hunch is that this bubble will pop, leaving an unpleasant odour behind that we’ll have to deal with for years afterwards, and then the tech bros will come up with some new bullshit that revolutionises the world and disrupts the universe, because there is no meritocracy and the world is stupid.
What constitutes AI by your definition?
Something that has actual intelligence, or at least a significant portion of the building blocks of intelligence.
The problem, of course, is that intelligence is a complicated, sprawling phenomenon, with no real way to measure it as a whole, at least not to any degree of reliability.
Learning, reasoning, critical thinking. Creativity, logic, problem solving, abstraction. Self-awareness, self-reflection, a general sense of self. You need most of the elements of most of those groups (and probably more that are also important but that I’m missing right now) to even begin to talk about the possibility of intelligence. Then somehow we will need to solve the philosophical zombie problem, and I don’t envy the researchers who will have to do that, but that’s much further down the line.
What we have right now demonstrably has almost none of those. What we call machine learning can be called learning only in a very specific and reductive way, and whatever emergent phenomenon we observe from it is its own beast, but intelligence it is not. All the other boxes are unticked, and some, like creativity or critical thinking, are the opposite of ticked. It might lead to something in the future (personally I doubt it, but I can’t rule it out); it might just as likely lead to something else, or to nothing at all.
I am very unsure in my speculations about it, but those with the most confident and optimistic answers right now are precisely the ones who want to sell you something, and most of them are salespeople with expertise in sales and nothing more.
Being used as a genuine tool, especially once it stops hallucinating, is fine up to a point. We’re already finding that over-reliance on LLMs and the like is causing issues for people, especially those with developing brains.
What I would use it for, eventually, is to help expand tool libraries in programs or take over tasks that are ultimately just pure labour. I work in architecture (as an architectural technologist), and being able to use it to help draw plans (not design them for me, just draw them) would be awesome, if it could understand proper layer usage, block usage and organization, and all those details. We’re nowhere near that right now, of course, and I never want to hear myself saying “computer, design a building for me”.
I want to liken it to synchros in a manual transmission, or to having ABS. I still have full control over my vehicle, and it will never get away from me just because my foot’s not on the brake pedal, but it’s also not a huge fucking pain in the ass to drive because of double clutching. I’m still rev matching to change gears and everything, but there is forgiveness. I still support automatic transmissions for people who physically cannot drive stick, but frankly at that point I have to ask why their neighbourhood is so underserved by public transportation.
Do you do anything creative, like play music or make art? Have you ever noticed a skill within that taking a hit because of access to a new tool? I sure have, and I’m justifiably worried that when that tool becomes too powerful, especially too quickly, too many of those skills will go unpracticed. Even with how people are using it right now, you can see very clearly that hardly anyone has the curiosity to understand the help they’re getting.
AI should be a tool that helps humanity. If all we end up doing is letting it take our humanity away, then what on earth is the fucking point? Congrats, our meat is alive and uncreative, never able to truly say “I did that”, just watching some computer make things for it.
It doesn’t matter what I (or anyone else) “propose”. You may as well be asking me what I propose to do about the orbit of Jupiter. Arguing about it on the Internet is especially pointless. It’s the new “old man shouts at clouds”, basically…
Why did you join a discussion space if you don’t like discussing?
The way I see it, my comments have prompted an enormous amount of discussion. The most interesting comment, I think, was from PixelProf, who said “Lots of students arbitrarily hating anything AI related”. It’s this blind herd mentality that I attempted (apparently successfully) to start a discussion about. In fact, a lot of what I wrote was driven by the point you were making, which I interpreted as a funny take on the way people react to AI-generated content.
I wasn’t asking you, because you know it is inevitable.
I was asking the other user because I don’t see what can be done about this genie that is out of the bottle. Just abandoning it is never going to happen, and regulation isn’t going to fix all the qualms they have with it.