• 0 Posts
  • 4.7K Comments
Joined 3 years ago
Cake day: July 26, 2023


  • Ok but walk it back a bit, why did they become homeless?

    If somebody is completely, 100% mentally healthy, I can’t see how an AI could convince them to kill themselves any more than another person could. Only vulnerable people join cults, because it’s difficult to prey on people who have proper defences.

    I’m still not convinced that the AI isn’t just triggering some underlying mental condition that other people in their lives are just not aware of or not willing to accept.


  • Some people think that LLMs are true AGI, or at least their thinking runs along those lines even if they can’t articulate it that way.

    They tend to be people who aren’t particularly tech savvy, so they see something that looks like a miracle of technology and believe it truly is a superintelligence.

    I’ve seen evolution simulators come up with some truly interesting behaviour, like finding shortcut glitches in Mario that no human had ever found. If I didn’t know how the program worked, I suppose I might believe there was some intelligence there.


  • I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion.

    Then he’s an idiot.

    Asimov’s laws of robotics aren’t some kind of model by which to control AI; they’re a plot device. They’re literally not supposed to work. If they did work, it would be a very short book, so obviously we shouldn’t use them for controlling AI.

    I don’t know any serious IT professional who has ever, at any point, put forward the opinion that an AI (should we ever create one, because there is an argument that LLMs aren’t AI) should be ruled by a plot device from a book. Equally, if we ever invent warp drive and find aliens, I’m assuming we won’t be restricted to the Prime Directive.


  • I think the important point here is that just because the father is suing Google doesn’t necessarily mean that Google is at fault. People tend to feel that if an individual sues a corporation for malfeasance, the corporation must be guilty. But reality doesn’t always work like that.

    I can’t see any reason Google would want to encourage suicide, so I have to assume it’s just an unfortunate interaction between a mentally unsound mind and a product that, frankly, even its own creators don’t fully understand. It’s highly unfortunate, but I’m not certain where the crime was.


  • Yes, people can have delusions and psychotic episodes; I’m not convinced those become a separate, unique condition simply because they were triggered by an AI rather than by anything else.

    For one thing, I’ve yet to hear a decent (or indeed any) explanation of the mechanism by which AI triggers psychosis that is materially different from any other trigger. Most people who suffer from this condition can be triggered by literally anything, including mundane things such as seeing red cars slightly more often than they believe they should, at which point they concoct a conspiracy about an evil cabal of red-car owners.


  • Copy editing won’t be an executive’s job. But yeah, they didn’t do the bare minimum, which is concerning; it seems to indicate that they may not do the bare minimum on any of their articles. How much stuff has gone undiscovered?

    I’m not going to outright say that journalists shouldn’t use AI to write articles, because that’s basically an unenforceable rule, but there should be someone at some point whose ultimate responsibility is to make sure the articles are at least factual, whether they were written by a human or not. Determining whether a quote is legitimate is pretty easy: you just Google the quote, and if you can’t find any other source, you start asking questions. As I said, it’s the bare minimum they could have done.


  • Hardly any of the moons in our solar system have an atmosphere (Titan is the notable exception, and it’s far too cold as well). Earth’s moon is too small to hold on to an atmosphere, and the Galilean moons of Jupiter are too cold for one; the gases just freeze.

    The best place would be either a space station in low Earth orbit or one at the L4 or L5 point. The data issue would still be a problem, though. I suppose you could use the orbital data centres for training but not for active processing, but then you would need to build data centres on Earth for that anyway.

    Given that you’re going to build the Earth data centres anyway, you might as well do all of the processing on Earth in the first place.
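    To put rough numbers on the data issue: the comment doesn’t say which L4/L5 it means, so here is a minimal sketch (assumed distances are approximate averages, and signal delay is taken as straight-line light travel time, ignoring relays and processing) comparing one-way latency from the candidate locations:

    ```python
    # One-way light delay to Earth from each candidate data-centre location.
    # Distances are rough averages; the two L4/L5 rows cover both plausible
    # readings of "the L4 or L5 point" (Earth-Moon vs Earth-Sun system).
    C = 299_792.458  # speed of light in vacuum, km/s

    locations_km = {
        "low Earth orbit (~500 km altitude)": 500,
        "Earth-Moon L4/L5 (~384,400 km, same as the Moon)": 384_400,
        "Earth-Sun L4/L5 (~1 AU, 149,597,871 km)": 149_597_871,
    }

    for name, dist in locations_km.items():
        delay_s = dist / C  # time = distance / speed
        print(f"{name}: one-way light delay of about {delay_s:.3f} s")
    ```

    Earth-Moon L4/L5 works out to roughly 1.3 seconds one way, and Earth-Sun L4/L5 to a little over 8 minutes, which is why anything latency-sensitive would indeed have to stay on Earth or in low orbit.
    
    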