• 1 Post
  • 3.23K Comments
Joined 3 years ago
Cake day: June 15th, 2023

  • I mean good-ish in the lesser-evil type of thing. I don’t expect any of those to be 100% ethical but there are some that are a lot worse than others

    Ethics are subjective. “Good-ish” to you may mean you’re fine if it’s trained on copyrighted works, as long as it wasn’t done with electricity from diesel generators belching exhaust into the local Memphis atmosphere (I’m looking at you, Grok). Llama doesn’t do the diesel generator thing, but it’s a product of the Facebook corporation. So is that “good-ish” to you or not? I don’t know. That’s up to you.

    Your i3 laptop with 12GB of system RAM can absolutely run a local LLM. This is where that “performance/accuracy” question I raised comes in. It won’t be very fast, and you won’t get anything like the big commercial models (GPT-5, etc.). However, if your needs are light, light models exist. Give this a read
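    To put rough numbers on why light models fit: a model’s weight footprint is roughly parameter count times bytes per weight, so quantization is what makes a 12GB machine viable. A hypothetical back-of-the-envelope sketch (the model sizes are just illustrative, and real usage adds context cache and runtime overhead on top):

```python
# Rough, hypothetical estimate of RAM needed just to hold an LLM's weights.
# Treat these numbers as lower bounds: KV-cache and runtime overhead add more.

def model_ram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate gigabytes needed to hold the weights alone."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7B-parameter model in 16-bit floats needs ~14 GB: too big for 12 GB of RAM.
print(round(model_ram_gb(7, 16), 1))

# The same 7B model quantized to 4 bits per weight fits in ~3.5 GB.
print(round(model_ram_gb(7, 4), 1))

# A small 1B model at 4 bits is only ~0.5 GB: easy even for an old laptop.
print(round(model_ram_gb(1, 4), 1))
```

    That’s the whole trick: quantize aggressively and pick a small model, and the weights leave plenty of headroom in 12GB.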




  • OpenAI said it had found a way to put safeguards into its technologies that would somehow prevent the systems from being used in ways that it does not want them to be.

    When pressed for specifics on the nature of the safeguards, OpenAI’s Altman replied, “We’ve included the phrase ‘pretty please don’t use this for killing people or spying on Americans’ in our contract with the Department of Defense. With this language in place, we’re confident that our company values of respecting human life and the privacy of all Americans are protected”. /s


  • It really doesn’t make sense to lump rent and mortgage together, and I feel like Gen Z is hit hardest because they’d have the lowest rates of homeownership.

    The real title is the title of the graph in the article: “Gen Zers Most Likely to Struggle with Housing Payments”.

    The article lumps rent and mortgage together because including both covers every way someone can pay for housing. The “hit hardest” part is in there to communicate that, while Gen Z is getting its ass kicked the most on housing costs, it isn’t the only generation having trouble.


  • Guess what I’m saying is I’ve sort of dared AI to suck me in, and … I am unchanged.

    I’m not sure this tests the point I was raising. In all of those cases, you knew at the beginning that you were dealing with AI. Yes, the man in our article did too, but what if you didn’t know it was AI when you started interacting with it? How would your interactions change? What safeguards would you not have up if, as an example, it appeared to you as a Lemmy poster instead of a dedicated AI interaction window?

    I don’t think for a second there is any sort of emotional or intelligent entity on the other end.

    Of course, because there isn’t when we are rational. I also assume you are a psychologically healthy person. There is a suggestion the man in the article may have had an underlying condition, but he wasn’t aware of it.

    I think if more people experimented with generation settings like temperature and watched AI go on incoherent acid trips, it would feel more like a machine to them.

    I completely agree. I’ve done some experiments of my own training a small LLM from scratch (not fine-tuning an existing commercial model) using training data drawn exclusively from a small set of public domain books I have read. I then had this LLM produce output. Since I had read the books, I could see where it got components of its responses. Cranking up the temperature would make it go off the rails, which was fun to see. Overfitting made it try to give me something close to what I asked for, but obviously fail. I really liked the whole exercise because it was a small enough set of data, with all the levers and knobs exposed, for me to see how far it could go, and more importantly how far it couldn’t.
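    For anyone curious what temperature actually does mechanically, here’s a minimal, self-contained sketch (plain Python with made-up logits, not any real model’s API) of how it reshapes the next-token distribution:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities; higher temperature flattens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for four candidate next tokens.
logits = [4.0, 2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.5)   # sharp: the top token dominates
high = softmax_with_temperature(logits, 5.0)  # flat: approaching uniform

print(f"T=0.5 top-token probability: {low[0]:.2f}")
print(f"T=5.0 top-token probability: {high[0]:.2f}")
```

    Low temperature concentrates almost all the probability on the most likely token; crank it up and sampling drifts toward uniform over the whole vocabulary, which is exactly the incoherent “acid trip” output you describe.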


  • I read this story this morning and have been thinking back to it all day. This wasn’t just some idiot too stupid or young to realize he was talking to a bot, someone who did something like drink bleach because it told him to.

    This was one of us.

    He matched many of the behaviors I see here from myself and my fellow Lemmy posters. He:

    • built computers for himself and family members
    • was a hobbyist (at least) coder
    • wasn’t a young kid that didn’t know the world. He was 48 or 49.
    • was an early adopter embracing the modern LLM technology in 2022 when it first really became public.
    • sold his house in an urban metropolis (Portland) and moved to a rural area so he could use his woodworking skills to build sustainable housing.
    • worked part time at a homeless shelter

    Doesn’t this guy sound like someone that would be a Lemmy poster to you too?

    He started using LLMs (ChatGPT specifically) only as a tool to advance his hobby and work. When he first started, it appears he understood it was just a tool and didn’t think it was sentient. Only later, after hundreds of hours of exposure, did this idea arise in him.

    Was there some underlying psychological problem that the LLM exacerbated? Possibly. But at what level was his original underlying issue? Do we all have some low level condition that would make us equally susceptible? I know we’d like to think we don’t, but how do we know? This man certainly didn’t think he did, I’m sure.

    Next I think about what it would take for me to get down this bad path without realizing it. At what point would I be talking to a chat bot, not realize it, and let what that chat bot said change or influence my thoughts while having zero knowledge of it being just a fancy program? I consider myself moderately smart with good critical thinking skills, but I’m sure this man did too.

    Then it occurred to me that I have to concede I have, at some point, already interacted with a bot without knowing it, in years past on Reddit or even today on Lemmy. Was that interaction a throwaway conversation about pop culture with no impact on my worldview? Or was it a deeper political or philosophical conversation where the bot introduced an idea or hallucinated evidence to support a point, and I didn’t catch it to challenge it? Am I already a few or many steps down the bad path of falling for a bot’s illusions? I certainly don’t think so, but neither did he.

    How many of us are already on the same path as this guy and just as ignorant about the danger as the man in the article?






  • Forgive the machine translation to English, but reading that shows a very similar exception to the privacy protections we have here in the USA.

    Here’s one example:

    “There are exceptions for events (demonstrations, general meetings, cultural events, etc.). Here, participants must expect to be photographed. The subject is the event itself, not the individual person.”

    Most of the wiki article is talking specifically about copyright, which is outside the scope of what we’re talking about. Publication of taken images is a different topic.


  • In my opinion, go the Mondragón route. Bring democracy into the enterprise and allow those who work to control how they work. That way those who are being “automated” away can have a voice in what to do next.

    Isn’t that what we already have today? Jim no longer has a job at this employer. Jim can choose where he works next.

    Also, your vision of human capacity is very limiting. Why can’t Jim learn new skills? Everyone does it, literally all the time. Even construction workers have domain knowledge on how to pour cement that they learnt from others.

    As shown in the example, Jim is not capable of learning the skills (in any reasonable amount of time) to take on another open position at that company. So are you suggesting that Jim go back to school? Who, in your vision, is supposed to pay for Jim’s living and school expenses until he is ready to work a higher-skill position?





  • I digress though; no one thinks people should be driving drunk. I am just making the point that .12 was the standard for generations in some states.

    And the standard before .12 was “no standard” where driving drunk wasn’t even a crime.

    The larger problem is why we are so completely reliant on vehicles that we cannot even enjoy more than two drinks on the town and legally get home. There must be better ways, fuck cars.

    Taxi cabs have existed since before the invention of cars; they were horse-drawn carriages. Today we even have Uber and Lyft, which are easier than hailing a cab.


  • Completely unrelated to the article: I would encourage any woman of child-bearing age to obtain a passport now, when there is no rush. Using the slow process, it takes about 6-10 weeks of waiting to get your passport after you apply. For a full passport that can be used in any country, the cost is $130. If you only want to go to Canada and/or Mexico, you only need a passport card, which can be had for only $30. It’s the same form to get either the book or the card; you would just check a different box.

    Also unrelated: Abortion pills are easily available in both Mexico and Canada.


  • Uh huh, hey, why don’t these job numbers reports ever talk about whether these new jobs are keeping up with the cost of living? Seems like it’d be important to discern how many jobs are paying minimum wage and how many are paying enough to actually afford to survive longer than the next 24 fucking hours.

    You’d get closer to that answer with a different report. Probably a combination of the Occupation Finder data, which shows wage ranges, and the Employment Projections data, which shows the increase or decline in the number of jobs in each sector.

    The BLS used to be a gold standard for fantastic data collection, analysis, and sharing. However, I am not putting much confidence behind any data coming out of the Trump administration.