First: the algorithm predicts that our behavior today will be like our behavior yesterday. Which makes sense.
Second: what you eat determines how you poop. And they do control what we eat. So that makes sense too.
So both work together.
The success of algorithmic feeds does not imply that humans are predictable in general. It just means that humans are predictable in terms of what content will keep them scrolling/watching/listening for some more time.
Fun fact: LLMs that always generate the single most predictable output are seen as boring and vacuous by human readers, so developers add a bit of controlled randomness via a sampling parameter they call “temperature”.
It’s that unpredictable element that makes LLMs seem humanlike—not the predictable part that’s just functioning as a carrier signal.
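To make the “temperature” idea concrete, here’s a minimal sketch of temperature sampling in plain Python. The logits and vocabulary are toy numbers invented for illustration, not from any real model:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Pick a token index from raw model scores (logits).

    Low temperature sharpens the distribution (more predictable output);
    high temperature flattens it (more surprising output).
    """
    # Scale logits by temperature, then softmax into probabilities.
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy vocabulary of 3 tokens; token 0 is the "most predictable" one.
logits = [4.0, 2.0, 1.0]
cold = [sample_with_temperature(logits, 0.1) for _ in range(1000)]
warm = [sample_with_temperature(logits, 2.0) for _ in range(1000)]
# Near-zero temperature almost always picks the top token;
# higher temperature spreads choices across the whole vocabulary.
```

At temperature near zero the model collapses into always saying the same most-likely thing; turning it up is what produces the variety that reads as humanlike.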
You just ruined the magic of ChatGPT for me lol. Fuck. I knew the illusion would break eventually but damn bro it’s fuckin 6 in the morning.
i.e. their fundamental limitation is, ironically, why they are so easy to hype
The unpredictable element is also why they absolutely suck at being the reliable sources of accurate information that they are being advertised to be.
Yeah, humans are wrong a lot of the time but AI forced into everything should be more reliable than the average human.
That’s not it. Even without any added variability they would still be wrong all the time. The issue is inherent to LLMs; they don’t actually understand your questions or even their own responses. It’s just the most probable jumble of words that would follow the question.
First of all, it doesn’t matter whether you think that AI can replace human workers. It only matters whether companies think that AI can replace human workers.
Secondly, you’re assuming that humans typically understand the question at stake. You’ve clearly never met, or been, an under-paid, over-worked employee who doesn’t give a flying fuck about the daily bullshit.
I’m not saying I agree with AI being shoehorned into everything; I’m seeing it pushed into places it shouldn’t be, firsthand. But strictly speaking, things don’t have to be more reliable if they’re fast enough.
Quantum computers are inherently unreliable, but you can perform the same calculation multiple times and average the result / discard the outliers and it will still be faster than a classical computer.
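The repeat-and-aggregate idea above can be sketched in plain Python, with an artificially noisy “computation” standing in for the unreliable device (the error rate and trial count are made-up numbers for illustration):

```python
import random
from collections import Counter

def noisy_add(a, b, error_rate=0.3):
    """Stand-in for an unreliable computer: returns the right
    answer most of the time, a corrupted answer otherwise."""
    if random.random() < error_rate:
        return a + b + random.choice([-2, -1, 1, 2])  # corrupted result
    return a + b

def run_with_voting(a, b, trials=25):
    """Run the unreliable computation many times and keep the
    most common result, effectively discarding outlier runs."""
    results = Counter(noisy_add(a, b) for _ in range(trials))
    value, _count = results.most_common(1)[0]
    return value

random.seed(0)  # fixed seed so the demo is repeatable
# Even with a 30% per-run error rate, a majority vote over
# 25 runs almost always recovers the true sum.
```

The trade-off is exactly the one being argued about: this only wins if each unreliable run is much cheaper or faster than one guaranteed-correct run.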
Same thing as back when I was in grade school and teachers would say not to trust internet sources and to make sure to look everything up in a physical book / encyclopedia because a book is more reliable. Like, yes, it is, but it also takes me 100x as long to look it up, so ultimately starting at Wikipedia is going to get me to the right answer faster, the vast majority of the time, even if it’s not 100% accurate or reliable (this was nearer Wikipedia’s original launch).
Quantum computers are inherently unreliable, but you can perform the same calculation multiple times and average the result / discard the outliers and it will still be faster than a classical computer.
That works for pattern matching, but you don’t want to do that for doing accurate calculations. There is no reason to average the AI run calculation of 12345 x 54321 because that can be done with a tiny calculator with a solar cell the size of a pencil eraser. Doing calculations like that multiple times adds up fast and will always be less reliable than just doing it right in the first place. Same with reporting historical facts.
There is a validation step that AI doesn’t do. If you feed it 1000 posts from unreliable sources like reddit, or don’t add context about whether a ‘fact’ is a joke, a baseless rumor, or from a reliable source, you get the current AI.
Yes, doing multiple calculations efficiently and taking averages has a lot of uses, mainly in complex systems where this provides opportunities to test chaotic systems with wildly different starting states. There are a ton of great uses for AI!
But the AI that is being forced down our throats is worse than wikipedia because it averages content from ALL of reddit, facebook, and other massive sites where crackpots are given the same weight as informed individuals and there are no guardrails.
That works for pattern matching, but you don’t want to do that for doing accurate calculations. There is no reason to average the AI run calculation of 12345 x 54321 because that can be done with a tiny calculator with a solar cell the size of a pencil eraser. Doing calculations like that multiple times adds up fast and will always be less reliable than just doing it right in the first place.
I agree.
Same with reporting historical facts.
I disagree. Those are not remotely the same problem. Both in how they’re technically executed, and in what the user expects out of them.
But the AI that is being forced down our throats is worse than wikipedia because it averages content from ALL of reddit, facebook, and other massive sites where crackpots are given the same weight as informed individuals and there are no guardrails.
No, it’s just different. Is it wrong sometimes? Yes. But it can also get you the right answer to a normal human question orders of magnitude faster than a series of traditional searches and documentation readings.
Does that information still need to be vetted afterwards? Yeah, but it’s a lot easier to say “copilot, I’m looking at a crossover circuit and I’ve got one giant wire coil, three white rectangles and a capacitor, what is each of them doing and what kind of meter will I need to test them”, than it is to individually search for each component and search for what type of meter you need to test them. Do you still need to verify that info after? Yeah, but it’s a lot easier to verify once you know what to actually search for.
Basically any time one human query needs to synthesize information from multiple different sources, an AI search is going to be significantly faster.
In later classes our teachers just told us not to blindly believe what we read on Wikipedia but to cross-reference it with other sources like newspapers or (as you said) books.
Humans overall are extremely predictable. Other factors might aggravate this, but even without any tech involved it’s not looking good.
The proof of that fact can be found in things like the Pepys diary. Dude was stoked about his cool watch, and his dalliance with an actress.
LLMs: high speed stochastic bureaucracy.
Subtly categorising people into bureaucratically compatible holes since 2021.
We are, but only the truly simple minded can be thoroughly swayed and changed into an antisocial beast of propaganda, tasked with toil and consumption. So, there’s no need to vilify “the algorithms” or their results… there’s nothing wrong with YouTube recommending me a Japanese “Careless Whisper” cover from the 80s, based on my previous input. 😅
oh you are so mistaken. propaganda, which is essentially advertising for political stances, takes a toll on us all. you just don’t notice it because modern propaganda is targeted at the subconscious more than the conscious, as many people have poorer defenses around their subconscious than around their conscious mind.
On top of that, you’re vastly underestimating how pliable the human mind usually is. When presented with one credible idea, an infestation takes place, similar to a viral infection, which can make that idea grow exponentially, up to a target size.
Yet you are right that we must not give up confronting ourselves with these kinds of messages, in order to find truth. Dialogue is the essential foundation of democracy. Only dialogue can reveal the truth.
GIGO