

Also, Trump and his oligarchs. Actually, let’s destroy all oligarchs while we’re at it.


Hey Microsoft, unrelated question, how’s that “70% AI written code” directive working out?
Having a client that’s actually integrated with your account, with the ability to browse, purchase, and download games, is going to be a requirement to compete with Steam.


Sometimes the purpose of your acquisition is just to eliminate a competitor.


I’ve heard this game is kind of hard to play single-player. Any thoughts?


Legend of the Red Dragon!


Your comments have made it QUITE clear that you have no idea.
Odd, I can say the exact same thing about your comments on the subject.
We are clearly at an impasse that won’t be solved through this discussion.


Why are you so focused on just the training?
Because I work with LLMs daily. I understand how they work. No matter how much I type at an LLM, its behavior will never fundamentally change without regenerating the model. It never learns anything from the content of the context.
The model is the LLM. The context is like the document in a word processor.
A Jr developer will actually learn and grow into a Sr developer, and will retain that knowledge as they move from job to job. That is fundamentally different from how an LLM works.
I’m not anti-AI. I’m not “crying” about their issues. I’m just discussing them from a practical standpoint.
LLMs do not learn.


You do understand that the model weights and the context are not the same thing right? They operate completely differently and have different purposes.
Trying to change the model’s behavior using instructions in the context is going to fail. That’s like trying to change how a word processor works by typing into the document. Sure, you can kind of get the formatting you want if you manhandle the data, but you haven’t changed how the application works.
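The weights-vs-context distinction can be shown with a toy sketch (this is a made-up illustration, not a real LLM API): the “model” is a frozen set of numbers fixed at training time, and inference only *reads* them while the context grows.

```python
# Toy illustration: the "model" is a fixed set of weights set at training
# time; the "context" is just a growing list of tokens fed in at inference.
FROZEN_WEIGHTS = {"w1": 0.5, "w2": -1.2}  # fixed when the model was built

def generate(context):
    # Inference reads the weights and the context, but never writes
    # anything back to FROZEN_WEIGHTS.
    return sum(FROZEN_WEIGHTS["w1"] * len(tok) for tok in context)

before = dict(FROZEN_WEIGHTS)
context = []
for prompt in ["please", "stop", "making", "that", "mistake"]:
    context.append(prompt)  # "teaching" it only grows the context...
    generate(context)
after = dict(FROZEN_WEIGHTS)

assert before == after  # ...the weights never changed
```

Clear the context (start a new chat) and everything you “taught” it is gone, because none of it ever touched the model.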


What part about how LLMs actually work do you not understand?
“Customizing” is just dumping more data into its context. You can’t actually change the root behavior of an LLM without rebuilding its model.


Unless you are retraining the model locally at the 23-acre data center in your garage after every interaction, it’s still not learning anything. You are just dumping more data into its temporary context.


… That keeps making the same mistakes over and over again because it never actually learns from what you try to teach it.


My best guess is that they are probably referring to a degaussing hoop. You used to be able to rent / borrow those to try to fix your TV if your kid played with magnets near it. I’d never describe it as a tube, though.


Correction: rich humans


We are losing words due to declining literacy and trying to push back on that shouldn’t be seen as a worthless effort.
But we are inventing new extremely awesome and unimaginably, um, awesome words, like 67, too!


When someone gives you a compliment, just accept it.


The IRS won’t care. All they’ll see is that a person hasn’t paid their federal income tax. They’ll make an example out of them, with huge fines and probably jail time, because cruelty is the point and they would want to send a message.


Federal taxes don’t pass through the state; there is no way they could withhold them. Citizens pay federal taxes directly to the federal government.
As an American, I say that any soldiers who willingly take part in this invasion, even in a support role, are NOT “good honest people”. They are traitors following illegal orders. I fully support you defending your country with lethal force if it comes to that.
I wish that I had more power to stop this, but I don’t. Do what you have to do, and don’t feel guilty about it. Give us the hell we deserve.