• 0 Posts
  • 1.31K Comments
Joined 3 years ago
cake
Cake day: June 16th, 2023

help-circle
  • Actually, the tip doesn’t allege that it was Trump’s child, it alleges that Trump was present when her uncle killed the baby. It makes no allegation about who the father is, and sadly the list of candidates would be very large in that scenario. He was there near the birth, no claim about who was there for the conception.

    I also hate to admit it, but that specific “credible evidence” was a tip submitted via an online form in October 2020. Certainly worth searching for/demanding more to see where, if anywhere, the tip went, but by itself it isn’t credible evidence. There’s so much more credible bad stuff about Trump that makes this quite believable, but until linked to more substantive stuff, probably best to stick to the more concrete stuff. I suspect there were quite a few crafted attempts at ‘October Surprises’ during the 2020 race and Epstein was a solid topic well known by the populace to try those sorts of things.



  • The people wanted actual reasoning AI, not generative AI. They didn’t expect us to devote most of our nominal economic activity toward a few big tech companies to get it. They didn’t expect them to assert that text generators are ‘reasoning’ and when called on it declare that it’s not reasoning as humanity has known it, but here’s some buzzwords to justify us claiming it’s a whole new sort of reasoning that’s just as valuable.





  • Inspired by your comment, I polled ChatGPT 5 direct and Copilot itself, and ChatGPT was smarter than the executive by saying it was a bad idea, while Copilot itself said it might be a bad idea, but it’s aligned with Microsoft’s vision, which may be more important, but ultimately seemed to have no idea if it was a good idea or bad idea…

    So I guess ChatGPT at least is smarter than the MS CEO. Of course Copilot seemed primed to try to favor and vindicate Microsoft’s decision. I tried a more aggressive statement that it was stupid to try to get that ‘I agree with you by default’ and it still tried to soften the perspective in favor of Microsoft.

    As a bonus, I asked if it would be a good idea to rename LibreOffice to LibreSidekick. It looked more like the ChatGPT 5 answer for Office to Copilot, saying it’s a dumb idea, until the end when it said unless it has an AI assistant like Microsoft Copilot, then it would be a good idea…


  • Why would Putin enjoy a heavy US naval presence in the contested Artic Circle waters?

    In theory, he might not like a heavy foreign naval presence, but Greenland is already NATO aligned. So he’d be trading a NATO aligned region for a US-only region in exchange for a fractured NATO. Sounds like a decent trade. Also keep in mind practically speaking they are probably equally unlikely to actually boost military presence much, and if they really wanted just that, NATO would probably mostly let them do it if USA paid for it, without USA having to take it.

    The Danish are not meaningfully contributing to the Ukraine conflict. And there is no reason to believe the big EU militaries would stop feeding supplies to Ukraine if the US invaded Greenland.

    This isn’t about just Denmark, which materially would barely be impacted right now since Greenland is doing practically nothing. It’s about the notion of one NATO member invading another, and the absolute clusterfuck that would bring. A USA versus European NATO scenario would be his dream scenario. See “Foundations of Geopolitics”, where it’s mostly about getting Russia’s opponents to fracture and ruin alliances:

    Russia should “introduce geopolitical disorder into internal American activity, encouraging all kinds of separatism and ethnic, social, and racial conflicts, actively supporting all dissident movements – extremist, racist, and sectarian groups, thus destabilizing internal political processes in the U.S. It would also make sense simultaneously to support isolationist tendencies in American politics”.

    This is also the book that makes annexation of Ukraine the top priority, that it must be secured before the broader Russian agenda can be executed. It also says they need to make the UK isolated from the broader EU. They know that above all else alliances must be broken for them to stand a chance to seize power.


  • Yeah, absolutely nothing in the writing explained why he stuck with the emperor. I feel like the last thing he would have done based on his motivation to that point is stick with palpatine. Beyond being a psychopath, it doesn’t even make sense by those standards.

    Empire strikes back set up a couple of plot twists that the series really couldn’t execute on. Nice in the movie since they got to leave it open ended, but bad for the series when they actually had to run with resolving those twists.




  • It is a different substrate for reasoning, emergent, statistical, and language-based, and it can still yield coherent, goal-directed outcomes.

    That’s some buzzword bingo there… A very long winded way of saying it isn’t human-like reasoning but you want to call it that anyway.

    If you went accept that reasoning often fails to show continuity, well then there’s also the lying.

    Examining a reasoning chain around generating code for an embedded control scenario. At one point it says the code may effect the behavior of how a motor is controlled, and so it will test if the motor operates.

    Now the truth of the matter is that the model has no access to perform such a test, but the reasoning chain is just a fiction, so it described a result, asserting that it performed the test and it passed, or failed. Not based on a test, but by text prediction. So sometimes it says it failed, then carries on as if it passed, sometimes it decides to redo some code to address the error, but leaves it broken in real life. Of course it can claim it works when it didn’t at all. It can show how “reasoning” can help though. If the code is generated based on one application, but when applied to a motor control scenario, people had issues and so generating the extra text caused it to zero in on some stack overflow thread where someone made a similar mistake.








  • The “reasoning” models aren’t really reasoning, they are generating text that resembles “train of thought”. If you examine some of the reasoning chains with errors, you can see some errors are often completely isolated, with no lead up and then the chain carries on as if the mistake never happened. Errors that when they happen in an actual human reasoning chain propagate.

    LLM reasoning chains are generating essentially fanfics of what reasoning would look like. It turns out that expending tokens to generate more text and discarding it does make the retained text more more likely to be consistent with desired output, but “reasoning” is more a marketing term than describing what is really happening.