This is part of why I’m so disappointed in the LLM craze: it has basically sucked up all the attention, including from some promising uses of machine learning in medicine.
Instead, now we put every last memory module towards generative AI…


It is a different substrate for reasoning, emergent, statistical, and language-based, and it can still yield coherent, goal-directed outcomes.
That’s some buzzword bingo there… A very long-winded way of saying it isn’t human-like reasoning, but you want to call it that anyway.
Even if you accept that the reasoning often fails to show continuity, well, then there’s also the lying.
I was examining a reasoning chain around generating code for an embedded control scenario. At one point it says the code may affect how a motor is controlled, and so it will test whether the motor operates.
Now the truth of the matter is that the model has no way to perform such a test, but the reasoning chain is just a fiction, so it describes a result anyway, asserting that it performed the test and that it passed, or failed. Not based on any test, but on text prediction. So sometimes it says the test failed, then carries on as if it passed; sometimes it decides to redo some code to address the error, but leaves it broken in real life. Of course it can claim the code works when it didn’t at all. It can show how “reasoning” can help, though: the code was generated based on one application, but people had issues applying it to a motor control scenario, and generating the extra text caused it to zero in on some Stack Overflow thread where someone had made a similar mistake.


Oh, phone trees are terrible; I refer exclusively to online self service. I suppose an LLM might be able to help a caller connect to the correct set of humans better than a phone tree…
If I’m resorting to the phone, it’s because I really, really need a human. I know there still exist some very old people stuck calling… But if they can’t work your online portal, they won’t be able to work a phone tree either…


The robo-bullshit is great if the thing has no nuance: self checkout, paying bills, buying stuff online.
The thing is, those things are great because they are so predictable. An LLM takes the predictability out. It’s also generally not allowed to do anything that the self service portal wasn’t allowed to do, so you get stuck with a more imprecise interface instead of the nice, precise interface of a traditional portal, and still no access to more nuanced help. It’s the worst of both worlds.


Yeah, we have a new executive who managed a vaguely segment-appropriate “hello world” with code gen and so regularly rants about why we should even be paying human developers.


The biggest improvement on the user side was when they stopped trying to weigh the bagging area to prevent loss.
The newer machine-vision-based systems are less likely to screw up. “Unexpected item in bagging area” was an almost universal experience; nowadays I have only been flagged for human review once.
Also, one store I was at just lets you put your items under a camera without finding barcodes, and you just confirm the identified products.


Think the issue is either a self service portal that works in a very predictable way (like the self checkout) or a human to deal with nuance.
To the extent an LLM might be useful, it’s likely blocked from doing so because the operator doesn’t trust it either.
The biggest annoyance is that the LLM support tends to more aggressively refuse to bring a human in.


And we didn’t have CFC deniers with huge social media platforms amplifying fringe conspiracy theories into big political platforms.
Of course, the dangerous CFCs weren’t as critical; if anything, new formulations were easy business opportunities for established players. There’s no easy pivot from being a big fossil fuel company to a replacement, and any attempt to do so comes with a huge risk of being disrupted by an unexpected competitor.


The “reasoning” models aren’t really reasoning; they are generating text that resembles a “train of thought”. If you examine some of the reasoning chains with errors, you can see the errors are often completely isolated, with no lead-up, and then the chain carries on as if the mistake never happened. Errors that, when they happen in an actual human reasoning chain, would propagate.
LLM reasoning chains are essentially fanfics of what reasoning would look like. It turns out that expending tokens to generate more text and then discarding it does make the retained text more likely to be consistent with the desired output, but “reasoning” is more a marketing term than a description of what is really happening.


The thing to remember is that these CEOs have made a whole living out of not knowing what they are doing, but being insufferably confident in whatever vomit of words they spew, whether they know anything or not, while ultimately just saying the most milquetoast, blatantly obvious stuff and pretending it’s very insightful. All this while they believe, and the money proves, that they are the most important people in the world.
So naturally it’s easy for them to believe LLM can take all the jobs, because it can easily take theirs.


I just don’t get how so many people just swear by it. Every time I set my expectations lower for what it can be useful at, it proceeds to fail at even that when I actually have a use case I think one of the LLMs could tackle. Every step of the way. I keep being told by people that the LLMs are amazing, and that I only had a bad experience because I hadn’t used the very specific model and version they love, and every time I try to verify their claims (my work is so die-hard they pay for access to every popular model and tool), it does roughly the same stuff, ever so slightly shuffling what it gets right and wrong.
I feel gaslit as it keeps on being uselessly unreliable for any task I would conceivably find it theoretically useful for.


I never said they were anonymous. Not sure what the consequence of a known false tip might be, it’s not a police report. Also not sure that the tip giver realized they weren’t anonymous.


It is worth noting, however, that the FBI document states the Trump/Epstein link was newly alleged as of 2020. So if you wanted an FBI agent to make a messy statement about opening an investigation into a Trump/Epstein event, the same way Comey talked about the Hillary email investigation in 2016, well, adding a Trump/Epstein angle to an old death from the news could do it.


Real documents, but describing tips submitted during the 2020 election without any apparent link to further investigative content.
Any random person can generate a tip, so a tip is a starting point for an investigation, but in and of itself should not be considered newsworthy.
I’m sure you’d have scary-sounding pizzagate-themed tips implicating Hillary Clinton during her run. I’m sure there were Hunter Biden laptop tips in 2020. I’m sure there were terrorist-themed tips about Obama during his runs.


Well yeah, I would assume Steam would be a big priority for this scenario…


I’m wondering to what extent some self-diagnosed neurodivergent people experience normal, but generally unacknowledged, mental experiences and think they must be weird, otherwise it would be talked about more.
As others point out in the thread, this is generally written up as a universal human experience.


There were also the Prolific serial-to-USB components. The market was flooded with perfectly functional clones, and Prolific deliberately broke support for clones, penalizing a ton of people who had no idea.
When people did too good a job cloning some of their chips, they made the driver break even their own chips.
Of course, in this case the vendor got their stuff into the standard Windows driver without even needing users to download anything…
The ultimate effect is that our datacenter just uses Linux laptops, because in practice serial adapters on Windows are just too unreliable unless we try to be supply chain detectives for the cheap little serial adapters we buy.


Had a relative with a toddler who almost died because his CGM was overreporting his levels.
My mom had one and learned immediately not to trust it.
I’m shocked that both of the people I know personally who had those devices found them to be uselessly inaccurate…


Though that one had the most critical information unredacted. Certainly worth highlighting that particular document regardless, but technically not an answer to the question about redactions.


So it is just as good as the typical CEO.