• 1 Post
  • 3.25K Comments
Joined 3 years ago
Cake day: June 15th, 2023

  • I posted my response to this sentiment in another thread, about another man who killed himself because of his deep AI chatbot addiction, but it applies here too.

    It is sad that there are people who are so alone that they can no longer determine the difference between genuine human interaction and a facsimile.

    Do you believe you have never responded to a post by a bot on Reddit, Lemmy, or elsewhere while believing you were conversing with a human? I know we’re talking about different degrees between this man and the rest of us, but it should give us a small sense of what he was experiencing before we dismiss the idea that it could never happen to us too.


  • I mean good-ish in the lesser-evil type of thing. I don’t expect any of those to be 100% ethical but there are some that are a lot worse than others

    Ethics are subjective. “Good-ish” to you may mean you’re fine if it’s trained on copyrighted works, as long as it wasn’t done with electricity from diesel generators belching exhaust into the local Memphis atmosphere (I’m looking at you, Grok). Llama doesn’t do the diesel-generator thing, but it’s a product of the Facebook corporation. So is that “good-ish” to you or not? I don’t know. That’s up to you.

    It may not be fast, but your i3 laptop with 12GB of system RAM can absolutely run a local LLM. This is where the “performance/accuracy” question I raised comes in. You won’t be able to run the largest models like GPT-5, but if your needs are light, light models exist. Give this a read
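To get a feel for why a 12GB machine is enough, here’s a back-of-the-envelope sketch of how much RAM a quantized model needs. The formula (parameters × bits per weight, plus an overhead factor for the KV cache and buffers) is a common rule of thumb, and the 20% overhead and example model sizes are my own rough assumptions, not figures from the article:

```python
def model_ram_gb(n_params_billion, bits_per_weight, overhead=1.2):
    """Rough RAM estimate for running a quantized LLM:
    parameters * (bits / 8) bytes, plus ~20% overhead for the
    KV cache and runtime buffers. A heuristic, not an exact figure."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A 7B-parameter model quantized to 4 bits fits comfortably in 12 GB...
print(round(model_ram_gb(7, 4), 1))   # ≈ 4.2 GB
# ...while the same model at full 16-bit precision would not.
print(round(model_ram_gb(7, 16), 1))  # ≈ 16.8 GB
```

This is exactly the performance/accuracy trade: the 4-bit version fits and runs, at some cost in output quality and speed.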




  • OpenAI said it had found a way to put safeguards into its technologies that would somehow prevent the systems from being used in ways that it does not want them to be.

    When pressed for specifics on the nature of the safeguards, OpenAI’s Altman replied, “We’ve included the phrase ‘pretty please don’t use this for killing people or spying on Americans’ in our contract with Department of Defense. With this language in place we’re confident that our company values respecting human life and the privacy of all Americans is protected”. /s


  • It really doesn’t make sense to lump rent and mortgage together, and I feel like Gen Z is hit hardest because they’d have the lowest rates of homeownership.

    The real title is the title of the graph in the article: “Gen Zers Most Likely to Struggle with Housing Payments”.

    The article lumps rent and mortgage together because including both covers every way someone can pay for housing. The “hit hardest” part is there to communicate that, while Gen Z is getting its ass kicked the most on housing costs, it isn’t the only generation having trouble.


  • Guess what I’m saying is I’ve sort of dared AI to suck me in, and … I am unchanged.

    I’m not sure this tests the point I was raising. In all of those cases, you knew from the beginning that you were dealing with AI. Yes, the man in our article did too, but what if you didn’t know it was AI when you started interacting with it? How would your interactions change? What safeguards would you not have up if, for example, it appeared to you as a Lemmy poster instead of a dedicated AI interaction window?

    I don’t think for a second there is any sort of emotional or intelligent entity in the other end.

    Of course, because there isn’t when we are rational. I also assume you are a psychologically healthy person. There is a suggestion that the man in the article had an underlying condition he wasn’t aware of.

    I think if more people experimented with generation settings like temperature and watched AI go to incoherent acid trips, it would feel more like a machine to them.

    I completely agree. I’ve done some experiments of my own, training a small LLM from scratch (not fine-tuning an existing commercial model) using training data exclusively from a small set of public domain books I have read. I then had this LLM produce output. Since I had read the books, I could see where it got components of its responses. Cranking up the temperature would make it go off the rails, which was fun to see. Overfitting made it try to give me something close to what I asked for, but obviously fail. I really liked the whole exercise because it was a small enough set of data, with all of the levers and knobs exposed, for me to see how far it could go, and more importantly how far it couldn’t.
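For anyone curious what temperature actually does under the hood: it divides the model’s raw logits before the softmax, so high values flatten the probability distribution (the “acid trip”) and low values sharpen it toward the top token. A minimal sketch with made-up logit values for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to token probabilities, scaled by temperature.
    Higher temperature -> flatter distribution -> more random sampling.
    Lower temperature  -> sharper distribution -> top token dominates."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens
logits = [4.0, 2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, 0.5)  # sharp: top token ~98%
hot = softmax_with_temperature(logits, 5.0)   # flat: choices near-uniform
```

Sampling from `hot` picks low-ranked tokens often, which is exactly the incoherence you see when you crank the setting up.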