• 0 Posts
  • 4.7K Comments
Joined 3 years ago
Cake day: July 26th, 2023




  • Well, it’s a major shipping corridor, isn’t it, and mines tend to be a detriment to that; that’s kind of the whole point, really.

    Add to that the fact that China isn’t all that industrialised and tends to import a lot of its food, and you’ve got a problem. The Chinese government is more competent than most (not really a shining endorsement of capitalism, is it?), so they might have pivoted to India, but I don’t know how much time they would require.

    The amazing thing about all of this is that it probably isn’t going to increase the price of RAM, so that’s a first for 2026.




  • OK, but walk it back a bit: why did they become homeless?

    If somebody is completely, 100% mentally healthy, I can’t see how an AI could convince them to kill themselves any more than another person could. Only vulnerable people join cults, because it’s difficult to prey on people who have proper defences.

    I’m still not convinced that the AI isn’t just triggering some underlying mental condition that other people in their lives are either not aware of or not willing to accept.


  • Some people think that LLMs are true AGI, or at least their thoughts run along those lines even if they can’t articulate it like that.

    They tend to be people who aren’t particularly tech savvy, so they see this thing that seems to be pretty much a miracle of technology and believe that it truly is a superintelligence.

    I’ve seen evolution simulators come up with some truly interesting behaviour, like finding shortcut glitches in Mario that no human has ever found. If I didn’t know how the program worked, I suppose I might believe there was some intelligence there.


  • “I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion.”

    Then he’s an idiot.

    Asimov’s laws of robotics aren’t some kind of model by which to control AI; they are plot devices. They’re literally not supposed to work; if they did work, it would be a very short book. So obviously we shouldn’t use them for controlling AI.

    I don’t know any serious IT professional who has ever, at any point, put forward the opinion that an AI (should we ever create one, because there is an argument that LLMs aren’t AI) should be ruled by a plot device from a book. Equally, if we ever invent warp drive and find aliens, I’m assuming we’re not going to be restricted to the Prime Directive.


  • I think the important point here is that just because the father is suing Google doesn’t necessarily mean that Google is at fault. People tend to feel that if an individual is suing a corporation for malfeasance, the corporation is necessarily guilty. But reality doesn’t always work like that.

    I can’t see any reason that Google would want to encourage suicide, so I have to assume it’s just an unfortunate interaction between a mentally unsound mind and a product that, frankly, even its own creators don’t understand. This is highly unfortunate, but I’m not certain where the crime was.


  • Yes, people can have mental delusions and psychotic episodes; I’m not convinced they become a separate, unique condition simply because they were triggered by an AI rather than by anything else.

    For one thing, I’ve yet to hear a decent (or indeed any) explanation of the mechanism by which AI triggers psychosis that is materially different from any other trigger. Most people who suffer from this condition can be triggered by almost anything, including mundane things such as seeing red cars slightly more often than they believe they should, after which they concoct a conspiracy about an evil cabal of red car owners.










  • Copy editing won’t be an executive’s job. But yeah, they didn’t do the bare minimum, which is concerning; it seems to indicate that they may not do the bare minimum on any of their articles. How much stuff went undiscovered?

    I’m not going to outright say that journalists shouldn’t use AI to write articles, because that’s basically an unenforceable rule, but there should be someone at some point whose ultimate responsibility is to make sure the articles are at least factual, whether they were written by a human or not. Determining whether a quote is legitimate is pretty easy: you just Google the quote, and if you can’t find any other sources, you start to ask questions. As I said, it’s the bare minimum they could have done.