

GOP 2016: “fuck your feelings”
GOP 2026: “feelings are now national policy and justification for unprovoked war”


I agree, but we should also take it as a personal warning that, maybe not today, but as we age and our mental faculties decline, we too may fall victim to something like this.


I posted my response to this sentiment in another thread about another man who killed himself because of his deep AI chatbot addiction, but it applies here too.
It is sad that there are people so alone that they can no longer tell the difference between genuine human interaction and a facsimile.
Do you believe you have never responded to a post by a bot on Reddit, Lemmy, or elsewhere while believing you were conversing with a human? While I know we’re talking about different degrees between this man and the rest of us, it should give us a tiny glimpse of what he was experiencing before we dismiss the idea that it could never happen to us too.


I mean good-ish in the lesser-evil type of thing. I don’t expect any of those to be 100% ethical but there are some that are a lot worse than others
Ethics are subjective. “Good-ish” to you may mean you’re fine if it’s trained on copyrighted works as long as it wasn’t done with electricity from diesel generators belching exhaust into the local Memphis atmosphere (I’m looking at you, Grok). Llama doesn’t do the diesel generator thing, but it’s a product of the Facebook corporation. So is that “good-ish” to you or not? I don’t know. That’s up to you.
It may not be fast, but your i3 laptop with 12GB of system RAM can absolutely run a local LLM. This is where that “performance/accuracy” question I raised comes in. It won’t be very fast, and you won’t be able to run the most common large models, like GPT-5 etc. However, if your needs are light, light models exist. Give this a read
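If you want a back-of-envelope feel for why small quantized models fit in 12GB while big ones don’t, the arithmetic is just parameter count times bytes per weight. This is a rough sketch only; the 1.2× overhead factor (KV cache, runtime buffers) and the example model sizes are my own assumptions for illustration:

```python
def model_memory_gb(n_params_billion, bits_per_weight, overhead=1.2):
    """Rough RAM estimate for running a quantized LLM locally.

    overhead is an assumed fudge factor for KV cache, activations,
    and runtime buffers -- real usage varies by runtime and context length.
    """
    bytes_total = n_params_billion * 1e9 * (bits_per_weight / 8)
    return bytes_total * overhead / 1e9

# A hypothetical 7B-parameter model at 4-bit quantization vs. fp16:
print(round(model_memory_gb(7, 4), 1))   # ~4.2 GB -> fits in 12 GB of RAM
print(round(model_memory_gb(7, 16), 1))  # ~16.8 GB -> does not fit
```

Same model, same weights; the quantization alone is the difference between “runs on your i3 laptop” and “doesn’t load at all.”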


Depends on your definition of “good-ish”. Do you mean:
Running one locally on your own hardware would likely reach “good-ish” with some sacrifices in performance/accuracy (unless you’ve got a lot of expensive hardware to run very large models). As far as ethical origins go, there are a few small models trained on public domain/non-stolen content, but their functions are far more limited.


Someone see if “Micros|op” (with the pipe character) or “MicrosIop” (with a capital letter i) is also blocked.


OpenAI said it had found a way to put safeguards into its technologies that would somehow prevent the systems from being used in ways that it does not want them to be.
When pressed for specifics on the nature of the safeguards, OpenAI’s Altman replied, “We’ve included the phrase ‘pretty please don’t use this for killing people or spying on Americans’ in our contract with the Department of Defense. With this language in place we’re confident that our company’s values of respecting human life and the privacy of all Americans are protected.” /s


It really doesn’t make sense to lump rent and mortgage together, and I feel like Gen Z is hit hardest because they’d have the lowest rates of homeownership.
The real title is the title of the graph in the article: “Gen Zers Most Likely to Struggle with Housing Payments”.
The article is lumping rent and mortgage together because including both covers all the ways someone can pay for housing. The “hit hardest” part is in there to communicate that, while Gen Z is getting its ass kicked the most on housing costs, it isn’t the only generation having trouble.


Guess what I’m saying is I’ve sort of dared AI to suck me in, and … I am unchanged.
I’m not sure this tests the point I was raising. In all of those cases, you knew at the beginning that you were dealing with AI. Yes, the man in our article did too, but what if you didn’t know it was AI when you started interacting with it? How would your interactions change? What safeguards would you not have up if, for example, it appeared to you as a Lemmy poster instead of a dedicated AI interaction window?
I don’t think for a second there is any sort of emotional or intelligent entity on the other end.
Of course, because there isn’t when we are rational. I also assume you are a psychologically healthy person. There is a suggestion the man in the article may have had an underlying condition, but he wasn’t aware of it.
I think if more people experimented with generation settings like temperature and watched AI go on incoherent acid trips, it would feel more like a machine to them.
I completely agree. I’ve done some experiments of my own, training a small LLM from scratch (not fine-tuning an existing commercial model) using training data exclusively from a small set of public domain books I have read. I then had this LLM produce output. Since I had read the books, I could see where it got pieces of its responses. Cranking up the temperature would make it go off the rails, which was fun to see. Overfitting made it try to give me something close to what I asked for, but obviously fail. I really liked the whole exercise because it was a small enough set of data, with all of the levers and knobs exposed, for me to see how far it could go, and more importantly how far it couldn’t.
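For anyone who hasn’t played with it, the temperature knob is just dividing the model’s next-token scores before the softmax. A minimal sketch (the logit values here are made up for illustration) shows why high temperature produces the “acid trip”: the distribution flattens out and every token becomes nearly equally likely:

```python
import math

def sample_distribution(logits, temperature):
    """Softmax over logits divided by temperature.

    Low temperature sharpens the distribution toward the top-scoring
    token (near-greedy); high temperature flattens it toward uniform
    (near-random picks).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 0.5]  # hypothetical scores for three candidate tokens

cold = sample_distribution(logits, temperature=0.2)
hot = sample_distribution(logits, temperature=10.0)

print(cold[0] > 0.99)              # True: top token utterly dominates
print(max(hot) - min(hot) < 0.2)   # True: nearly uniform -> incoherence
```

Watching real generation flip from coherent to word salad as that one divisor grows is, I think, the fastest way to internalize that it’s a sampling machine, not a mind.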


I read this story this morning and have been thinking back to it all day. This wasn’t just some idiot who was too stupid or too young to realize he was talking to a bot and did something like drink bleach because it told him to.
This was one of us.
He exhibited many of the behaviors I see here from me and my fellow Lemmy posters. He:
Doesn’t this guy sound like someone that would be a Lemmy poster to you too?
He started using LLMs (ChatGPT specifically) as a tool only to advance his hobby and work. When he first started it appears he understood it was just a tool, and didn’t think it was something sentient. Only later after hundreds of hours of exposure did this idea arise in him.
Was there some underlying psychological problem that the LLM exacerbated? Possibly. But at what level was his original underlying issue? Do we all have some low level condition that would make us equally susceptible? I know we’d like to think we don’t, but how do we know? This man certainly didn’t think he did, I’m sure.
Next I think about what it would take for me to get down this bad path without realizing it. At what point would I be talking to a chat bot, not realize it, and let what that chat bot said change or influence my thoughts, with zero knowledge of it being just a fancy program? I consider myself moderately smart with good critical thinking skills, but I’m sure this man did too.
Then it occurred to me that I have to concede that I have, at some point, already interacted with a bot without knowing it, whether years ago on Reddit or even today on Lemmy. Was that interaction a throwaway conversation about pop culture with no impact on my worldview? Or was it a much deeper and more important political or philosophical conversation, where the bot introduced an idea or hallucinated evidence to support a point and I didn’t catch it to challenge it? Am I already a few or many steps down the bad path of falling for a bot’s illusions? I certainly don’t think so, but neither did he.
How many of us are already on the same path as this guy and just as ignorant about the danger as the man in the article?


while at the same time, ignoring Windows telemetry,
You’re posting this statement on Lemmy? There is a disproportionately high population of Linux and macOS users here. Most of those here ignoring Windows telemetry aren’t running Windows.


He said it was not yet clear how many gunmen were involved, adding that detectives and officers from the Taxi Violence Investigations Unit were investigating the attack.
Taxi Violence Investigations Unit


It also has a good use as the toilet of browsers. That is, if you’re ever required to temporarily install some pervasive plugin or extension to take a proctored exam or something, Edge is good to use, because you know you won’t use that browser for anything you care about, and you can protect good browsers from those garbage plugins.


With your comments I found additional German legal guidance that mostly matches what you said. It appears that Germany does indeed grant some privacy protection against someone intentionally walking up to you and taking your picture. I don’t think this invalidates my original point, because that expectation of privacy doesn’t appear to extend to installed surveillance cameras in public.
However, I appreciate having a better understanding of German law. Thank you.


Forgive the machine translation to English, but reading that shows a very similar exception to privacy protection to the one we have here in the USA.
Here’s one example:
“There are exceptions for events (demonstrations, general meetings, cultural events, etc.). Here, participants must expect to be photographed. This is about what is happening and not about the person itself.”
Most of the wiki article is talking specifically about copyright, which isn’t the scope of what we’re talking about. Publication of taken images is a different topic.


In my opinion, go the Mondragón route. Bring democracy into the enterprise and allow those who work to control how they work. That way those who are being “automated” away can have a voice in what to do next.
Isn’t that what we already have today? Jim no longer has a job at this employer. Jim can choose where he works next.
Also, your vision of human capacity is very limiting. Why can’t Jim learn new skills? Everyone does it, literally all the time. Even construction workers have domain knowledge on how to pour cement that they learnt from others.
As shown in the example, Jim is not capable of learning the skills (in any reasonable amount of time) to take on another open position at that company. So are you suggesting that Jim go back to school? Who, in your vision, is paying for Jim’s living and school expenses until he is ready to work a position with a higher skillset?


Apathy? Not at all. It’s simply a matter of established law, in the USA anyway. I can’t speak to the legal systems of the other 140+ countries on planet Earth.
Can you cite a law, in the USA or in your own country, where you have a right to privacy that makes photographing you simply standing in a public park an illegal act when perpetrated by another person or government entity?


Now if they can just notify you that some asshole is recording you on their cell phone instead of reading reddit.
If you’re out in public, always assume you’re on someone’s camera. That isn’t really new either.


before that it wasn’t always considered as big of a deal as you are referring to, idk pre 1970s or what.
We’re agreeing on the reality that it wasn’t considered a crime or a big deal in generations past. Where we have a huge gulf of disagreement is whether this was a problem or not. I am flabbergasted by the strong defense you’re putting up for being able to drink and drive.
May I ask if you or your family have ever been negatively affected by a drunk driver before?
Especially when they’re using it as a defense to use racial slurs in a Wal-Mart on a Saturday afternoon.