Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit like an AI doomsday cult, and they take funding from the AI doomsday cult organisat…
LLMs are just sophisticated text-prediction engines. They don't know anything, so they can't produce an "I don't know": they can always generate a text prediction, and they can't think.
Tool use, reasoning, and chain of thought are the things that set LLM systems apart. While you're correct in the most basic sense, it's like saying a car is only a platform with wheels: it's reductive of the capabilities.
LLMs are prediction engines. They don't have knowledge; they just chain together words related to your topic.
They don't know they're wrong because they just don't know anything, period.
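The "prediction engine" point above can be sketched with a toy autoregressive model. This is purely illustrative and not how a real LLM works internally (real models use transformers over subword tokens, and the vocabulary here is made up), but it shows the shape of the argument: at every step the model has *some* next-word distribution to sample from, so it always produces a continuation, never an "I don't know", and it has no way to tell a wrong continuation from a right one.

```python
import random

# Hypothetical toy bigram "language model": maps the previous word to a
# list of plausible next words. Illustrative only.
BIGRAMS = {
    "<s>": ["the", "paris"],
    "the": ["capital", "answer"],
    "capital": ["of"],
    "of": ["france"],
    "france": ["is"],
    "is": ["paris", "london"],  # "london" is fluent but wrong
    "paris": ["</s>"],
    "answer": ["is"],
    "london": ["</s>"],
}

def generate(max_words=10, seed=0):
    """Autoregressive sampling loop: predict, append, repeat."""
    random.seed(seed)
    word, out = "<s>", []
    for _ in range(max_words):
        # There is always a next-word distribution to sample from;
        # the loop has no "I don't know" state, only more prediction.
        word = random.choice(BIGRAMS.get(word, ["</s>"]))
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate())
```

Note that nothing in the loop checks truth: "the capital of france is london" is just as reachable as the correct sentence, which is the point being made about the model not knowing it's wrong.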