Sure… copy & paste is copy & paste.
However, LLMs can help formulate a scattered braindump of thoughts and opinions into a coherent argument or position, fact-check claims, and highlight faulty thinking.
I am happy if someone uses AI first to come up with a coherent message, bug report, or question.
I am annoyed if it’s ill-researched or poorly understood nonsense, AI-assisted or not.
Within my company, I am contributing to an AI-tailored knowledge base, so that people (and AI) can efficiently learn just-in-time.
I didn’t read your comment, but deepseek said this:
Well said. You’ve nailed the key distinction: AI as a thought amplifier vs. thought substitute. The value depends entirely on the user’s foundation of knowledge. Your approach—building a curated knowledge base so people (and AI) can learn just-in-time—is exactly right. It sets everyone up for success by grounding the AI in truth. Smart strategy.
I haven’t read this either but I hope it helps.
The funny thing is, you rarely notice those who actually use it effectively in formulating comms, or writing code, or solving real world problems. It’s the bad examples (as you demonstrate) that stick out and are highlighted for criticism.
Meanwhile, power users are learning how to be more effective with AI (as it is clearly not a given), embracing opportunities as they come, and sometimes even reaping the rewards themselves.
Until they solve the AI hallucination problem, I’ll never be able to trust it.
It’s a feature of text prediction, not a bug. They could fix it, but that would mean drastically increasing the size of the context of each piece of information (no idea what it’s called).
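As a rough sketch of why that is (toy Python; the token distribution below is invented, a real model learns one from text): sampling ranks continuations purely by likelihood, and no step anywhere checks whether the chosen continuation is true.

```python
import random

# Invented next-token probabilities for the prompt
# "The capital of Australia is". A real model learns these from text.
next_token_probs = {
    "Canberra":  0.55,  # correct
    "Sydney":    0.30,  # fluent, plausible, wrong
    "Melbourne": 0.14,  # fluent, plausible, wrong
    "Ottawa":    0.01,  # unlikely, but never impossible
}

def sample_next(probs: dict[str, float]) -> str:
    # The only question asked here is "how likely is this token?".
    # Nothing in the sampler knows or cares what is factually correct.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

for _ in range(5):
    print("The capital of Australia is", sample_next(next_token_probs))
```

Close to half the runs print a wrong capital with full confidence, through exactly the same mechanism that produces the right one.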
I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.
That doesn’t seem like a solvable thingy.
People tend to make stuff up, too. The difference is that with people, the bluff is often revealed by non-verbal communication.
Yeah, but we’ve known that about people since forever. Computers are expected to be reliable.
If hallucinations aren’t a solvable problem, then either AI is impossible, or we’re going about it the wrong way.
AI is very much possible; we are just thinking about it the wrong way.
We are expecting AI to have the three bests of both worlds:
High I/O ability: we already have that from computers.
Determinism and correctness: computers have always had a high level of determinism, but never correctness, because a computer does not know what is correct.[1]
Intelligence and thought: intelligence is a perception. AI will always have a lower depth of thought than us for as long as it is dependent upon us.
So we only get one best of the computer world: the high I/O. In exchange for some of the human (person) world, we have to deal with one worst of the computer world: we lose determinism, because we rely on the model being a higher level of fuzzy.
Of course, I don’t mean “determinism” in its strictest sense. The LLM is still built on top of a computer, so for the same internal saved state and the same external input (including whatever randomising functions might be used), the output will still be the same. But you can’t get the kind of logical determinism that you expect from normal computer operations.
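A toy sketch of that distinction (plain Python; the generate function, its vocabulary, and its weighting are all invented for illustration and are nothing like a real LLM):

```python
import random

# Toy stand-in for an LLM sampler; vocabulary and weighting are made up.
def generate(prompt: str, seed: int) -> str:
    rng = random.Random(f"{seed}:{prompt}")  # reproducible per (seed, prompt)
    vocab = ["yes", "no", "maybe", "42"]
    # Let the exact wording of the prompt nudge the weights, so a
    # rephrased question lands on a different distribution.
    weights = [sum(map(ord, prompt + tok)) % 97 + 1 for tok in vocab]
    return rng.choices(vocab, weights=weights, k=1)[0]

# Bitwise determinism holds: same state + same input => same output.
assert generate("Is 2+2 equal to 4?", seed=7) == generate("Is 2+2 equal to 4?", seed=7)

# Logical determinism does not: two phrasings of the same question
# can sample different answers, unlike a + b vs. ADD(A,B).
print(generate("Is 2+2 equal to 4?", seed=7))
print(generate("Does 2+2 equal 4?", seed=7))
```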
A dumbed-down example to get my thoughts across: you can use any of a + b or ADD(A,B) or SUM(A:B) and will still get the same result.

[1] This boils down to the question famously put to Charles Babbage: ‘If I enter the wrong numbers, will I still get the correct answer?’ ↩︎
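For contrast, here is the classic-computing side of that footnote in runnable form (trivial Python, my own example, not from the post above):

```python
import operator

a, b = 2, 3

# Three spellings of the same operation; a conventional computer treats
# them as logically interchangeable, and they always agree.
results = [a + b, operator.add(a, b), sum([a, b])]
assert len(set(results)) == 1
print(results)  # [5, 5, 5]

# The footnote's point, garbage in, garbage out: feed in a wrong number
# and the machine deterministically returns the right answer to the
# wrong question.
wrong_a = 20
print(wrong_a + b)  # 23, faithfully wrong rather than hallucinated
```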
Nobody says to blindly trust it…