

Oops, my mistake
Apology accepted. Have a great day!


Read my prior post, I specifically SAID it was a model number.
You’re embarrassing yourself with your pedantry. You said 80486 didn’t exist. It did. Seriously, quit while you’re behind here.


Such a confident answer! And so incorrect too!



Honestly, we know where the root of this problem came from. Back in the 1990s, Intel broke with the convention of using ever-increasing numeric model numbers.
Intel didn’t like that other manufacturers of x86 CPUs (AMD, Cyrix, IBM) could use the same numbering scheme, and a bare number like 486 couldn’t be trademarked. So Intel created “Pentium,” a name it could trademark so other companies couldn’t use it.


That’s my bad for not remembering AMD’s fucking atrocious nonstandard mobile chip naming schemes.
Atrocious compared to Intel? The first CPU named Core i7 was released in 2008, and Intel was still releasing CPUs named Core i7 as recently as 2023. They both suck, but in different ways.


Especially when they’re using it as a defense to use racial slurs in a Wal-Mart on a Saturday afternoon.


GOP 2016: “fuck your feelings”
GOP 2026: “feelings are now national policy and justification for unprovoked war”


I agree, but we should also take it as a personal warning that, maybe not today, but as we age and our mental faculties decline, we too may fall victim to something like this.


I posted my response to this sentiment in another thread about another man who killed himself because of his deep AI chatbot addiction, but it applies here too.
It is sad that there are people who are so alone that they can no longer determine the difference between genuine human interaction and a facsimile.
Do you believe you have never responded to a post by a bot on Reddit, Lemmy, or elsewhere while believing you were conversing with a human? While I know we’re talking about different degrees between this man and the rest of us, it should give a tiny glimpse of what he was experiencing before we dismiss the idea that it could never happen to us too.


I mean good-ish in the lesser-evil sense. I don’t expect any of them to be 100% ethical, but some are a lot worse than others.
Ethics are subjective. “Good-ish” to you may mean you’re fine if it’s trained on copyrighted works as long as it wasn’t done with electricity from diesel generators belching exhaust into the local Memphis atmosphere (I’m looking at you, Grok). Llama doesn’t do the diesel generator thing, but it’s a product of the Facebook corporation. So is that “good-ish” to you or not? I don’t know. That’s up to you.
It may not be fast, but your i3 laptop with 12 GB of system RAM can absolutely run a local LLM. This is where that “performance/accuracy” question I raised comes in. It won’t be very fast, and you won’t be able to run the most popular large models like GPT-5, but if your needs are light, light models exist. Give this a read
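To give a concrete sense of what “light” can look like, here’s a minimal sketch (not a recommendation of any specific model) using the llama-cpp-python bindings to run a small quantized GGUF model entirely on CPU. The model path and settings below are placeholders you’d swap for whatever small model you actually download.

```python
# Minimal sketch: run a small quantized model on CPU with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a small GGUF file on disk;
# the model path below is a placeholder, not a real file.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-3b-q4.gguf",  # hypothetical ~2 GB quantized model
    n_ctx=2048,    # modest context window keeps memory use down
    n_threads=4,   # match your CPU's physical core count
)

response = llm(
    "Summarize the plot of Frankenstein in two sentences.",
    max_tokens=128,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```

On a 12 GB machine, a 3-4B parameter model quantized to 4 bits generally fits with room to spare; expect a few tokens per second on an older i3 rather than the instant answers you get from hosted services.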


Depends on your definition of “good-ish”. Do you mean:
Running one locally on your own hardware would likely reach “good-ish” with some sacrifices in performance/accuracy (unless you’ve got a lot of expensive hardware to run very large models). As far as ethical origins go, there are a few small models trained on public domain/non-stolen content, but their capabilities are far more limited.


Someone see if “Micros|op” (with the pipe character) or “MicrosIop” (with a capital letter i) is also blocked.


OpenAI said it had found a way to put safeguards into its technologies that would somehow prevent the systems from being used in ways that it does not want them to be.
When pressed for specifics on the nature of the safeguards, OpenAI’s Altman replied, “We’ve included the phrase ‘pretty please don’t use this for killing people or spying on Americans’ in our contract with the Department of Defense. With this language in place, we’re confident that our company values of respecting human life and the privacy of all Americans are protected.” /s


It really doesn’t make sense to lump rent and mortgage together, and I feel like Gen Z is hit hardest because they’d have the lowest rates of homeownership.
The real title is the title of the graph in the article: “Gen Zers Most Likely to Struggle with Housing Payments”.
The article lumps rent and mortgage together because including both covers all the ways someone can pay for housing. The “hit hardest” part is in there to communicate that, while Gen Z is getting its ass kicked the most on housing costs, it isn’t the only generation having trouble.


Guess what I’m saying is I’ve sort of dared AI to suck me in, and … I am unchanged.
I’m not sure this tests the point I was raising. In all of those cases, you knew from the beginning that you were dealing with AI. Yes, the man in our article did too, but what if you didn’t know it was AI when you started interacting with it? How would your interactions change? What “safeguards” would you not have up if, for example, it appeared to you as a Lemmy poster instead of a dedicated AI interaction window?
I don’t think for a second there is any sort of emotional or intelligent entity in the other end.
Of course, because there isn’t when we are rational. I also assume you are a psychologically healthy person. There is a suggestion the man in the article may have had an underlying condition, but he wasn’t aware of it.
I think if more people experimented with generation settings like temperature and watched AI go to incoherent acid trips, it would feel more like a machine to them.
I completely agree. I’ve done some experiments of my own training a small LLM from scratch (not fine-tuning an existing commercial model) using training data exclusively from a small set of public domain books I had read. I then had this LLM produce output. Since I had read the books, I could see where it got pieces of its responses. Cranking up the temperature would make it go off the rails, which was fun to see. Overfitting made it try to give me something close to what I asked for, but obviously fail. I really liked the whole exercise because it was a small enough set of data, with all of the levers and knobs exposed, for me to see how far it could go, and more importantly how far it couldn’t.
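For anyone who hasn’t poked at it, temperature is just a divisor applied to the model’s output logits before sampling. This little self-contained sketch (plain Python, made-up logit values, no actual model involved) shows why cranking it up produces the incoherent output: the probability distribution flattens, and tokens the model considers unlikely start getting picked.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, softmax them, then sample one index."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = random.choices(range(len(probs)), weights=probs)[0]
    return idx, probs

# Made-up logits for four candidate tokens: one strongly preferred, three unlikely.
logits = [8.0, 3.0, 2.5, 1.0]

for t in (0.2, 1.0, 2.5):
    _, probs = sample_with_temperature(logits, t)
    print(f"temperature={t}: {[round(p, 3) for p in probs]}")
# At 0.2 almost all the probability sits on the first token (coherent but repetitive);
# at 2.5 the distribution flattens out and the "wrong" tokens get sampled constantly.
```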


I read this story this morning and have been thinking back to it all day. This wasn’t just some idiot who was too stupid or too young to realize he was talking to a bot and did something like drink bleach because it told him to.
This was one of us.
He exhibited a lot of the behaviors I see here from myself and my fellow Lemmy posters.
Doesn’t this guy sound like someone that would be a Lemmy poster to you too?
He started using LLMs (ChatGPT specifically) only as a tool to advance his hobby and work. When he first started, it appears he understood it was just a tool and didn’t think it was something sentient. Only later, after hundreds of hours of exposure, did this idea arise in him.
Was there some underlying psychological problem that the LLM exacerbated? Possibly. But how serious was his underlying issue to begin with? Do we all have some low-level condition that would make us equally susceptible? I know we’d like to think we don’t, but how do we know? This man certainly didn’t think he did, I’m sure.
Next I think about what it would take for me to go down this bad path without realizing it. At what point would I be talking to a chatbot, not realize it, and let what that chatbot said change or influence my thoughts while having zero knowledge that it was just a fancy program? I consider myself moderately smart with good critical thinking skills, but I’m sure this man did too.
Then it occurred to me that I have to concede that I have, at some point, already interacted with a bot on Reddit in years past, or even on Lemmy today, and had no idea it was a bot. Was that interaction a throwaway conversation about pop culture with no impact on my world view, or was it a much deeper and more important political or philosophical conversation where the bot introduced an idea or hallucinated evidence to support a point and I didn’t catch it to challenge it? Am I already a few or many steps down the bad path of falling for the illusions of a bot? I certainly don’t think so, but neither did he.
How many of us are already on the same path as this guy and just as ignorant about the danger as the man in the article?


while at the same time, ignoring Windows telemetry,
You’re posting this statement on Lemmy? There is a disproportionately high population of Linux and OSX users here. Most of the people here ignoring Windows telemetry aren’t running Windows.


He said it was not yet clear how many gunmen were involved, adding that detectives and officers from the Taxi Violence Investigations Unit were investigating the attack.
Taxi Violence Investigations Unit


It also has one good use: being the toilet of browsers. As in, if you’re ever required to temporarily install some invasive plugin or extension to take a proctored exam or something, Edge is good to use because you know you won’t use that browser for anything you care about, and you can protect your good browsers from those garbage plugins.


With as toxic as Orbán (Hungary’s leadership) is toward Ukraine, it surprises me that Ukraine would take the risk of transporting this valuable cargo through Hungary or Slovakia when it’s bound for Austria. It might be a longer drive, but the route through Poland and Czechia may be preferred going forward.