It’s certainly lower than the 20-30% game distribution platforms take.
I can pretty much guarantee the server & staff costs are more than 1% of sticker price, especially since BC includes streaming services.


They also have dev/art teams orders of magnitude larger than Team Cherry’s. (I do agree with your sentiment about useless managers and execs, of course.)


I think that’s a bad idea, both legally and ethically. Vehicles cause tens of thousands of deaths - not to mention injuries - per year in North America. You’re proposing that a company that can merely meet that standard is absolved of liability? Meet, not improve on.
In that case, you’ve given these companies license to literally make money off of removing responsibility for those deaths. The driver’s not responsible, and neither is the company. That seems pretty terrible to me, and I’m sure it seems worse to the loved ones of anyone who has been killed in a vehicle collision.


Part of this is a debate on what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say “discuss” instead of “answer” because there is not an agreed upon answer to either of those.)
That said, one of the main purposes of AGI would be the ability to learn novel subject matter and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that, on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically most likely human-like response based on all of the human-generated content they’ve consumed.
In short, they cannot understand a concept that humans haven’t yet understood, and can only echo solutions that humans have already tried.
Every credit card company charges the service provider large fees for chargebacks. It’s standard practice. This also leads to service providers straight up perma-banning customers who initiate chargebacks instead of resolving a dispute with the provider.


Exactly, it’s just regular old enshittification.
While I’m sure the obvious systemic issues contribute to not looking for alternatives, that does sound largely like an issue inherent to optical pulse oximeters. Engineers aren’t miracle workers; they can’t change physics to their liking.
I’m sure pulse oximeters today are more accurate than they were 20 years ago. We’re still using them because no alternative has been found that is as easy to use, reliable, and non-invasive as a pulse oximeter, even with the known downsides.


Repository: a collection of computer code for a software program (or app if you insist).
Fork: a copy of a repository so you can edit it without affecting the original.
Pull request: a request to the owner of a repo to bring in some changes you made in a fork.
I think I even got the word count down.


Yes, you’re anthropomorphizing far too much. An LLM can’t understand, can’t recall (in the common sense of the word, i.e. have a memory), and is not aware.
Those are all things that intelligent, thinking beings do. LLMs do none of that. They are a giant black box of math that predicts text. An LLM doesn’t even understand what a word is, or the meaning of anything it vomits out. All it knows is which text is statistically most likely to come next, with a little randomization to add “creativity”.
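To make “statistically most likely text plus a little randomization” concrete, here’s a toy sketch of the final sampling step. The three-word vocabulary and all the numbers are made up for illustration; a real model scores tens of thousands of tokens, but the temperature-controlled softmax-and-sample idea is the same.

```typescript
// Hypothetical model output ("logits") for some context.
const vocab = ["the", "cat", "sat"];
const logits = [2.0, 1.0, 0.5];

// Turn raw scores into probabilities. Temperature rescales the scores:
// low temperature sharpens the distribution (nearly deterministic),
// high temperature flattens it (more varied, "creative" output).
function softmax(scores: number[], temperature: number): number[] {
  const scaled = scores.map((s) => s / temperature);
  const max = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Draw one index according to the probability distribution.
function sample(probs: number[]): number {
  let r = Math.random();
  for (let i = 0; i < probs.length; i++) {
    r -= probs[i];
    if (r <= 0) return i;
  }
  return probs.length - 1;
}

const probs = softmax(logits, 0.8);
console.log(vocab[sample(probs)]); // usually "the", occasionally the others
```

That loop is the whole “decision”: no meaning attached to the words, just a weighted dice roll over scores.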


You’re completely right, if the goal is good customer support and decent working conditions for the operators.
It’s not. The goal is as 1rre said - make people get so fed up that they stop trying to get their stuff fixed and just buy a new one. Oh, and they could fire half the operators too, since fewer people would be willing to wade through the pile of shit to talk to them.
Money and profit, screw the rest.


And an excuse to fire half of the support staff.
I didn’t test other values but they’re probably OK.
Excellent work, thanks for the laugh.
This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers that require multiple dedicated grid-scale nuclear reactors.
I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.
Don’t look for statistical precision in analogies. That’s why it’s called an analogy, not a calculation.
No, this is the equivalent of writing off calculators if they required as much power as a city block. There are some applications for LLMs, but if they cost this much power, they’re doing far more harm than good.


Exactly this, and rightly so. The school’s administration has a moral and legal obligation to do what it can for the safety of its students, and allowing this to continue unchecked violates both of those obligations.
I find it difficult to lay the blame with VSCode when the terminology belongs to git, which (even 7 years ago) was an industry standard technology.
People using tools they don’t understand and plowing ahead through scary warnings will always encounter problems.


That may be part of it, but Saudi Arabia also has a long track record of being incredibly abusive and generally just not giving a shit about workers’ rights.
== vs === is easy: == is wrong and bad, don’t use it. :D
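For anyone wondering why: loose equality (`==`) coerces its operands before comparing, while strict equality (`===`) compares type and value with no coercion. A quick sketch of the difference (the `compare` helper is just for illustration):

```typescript
// loose: a == b (with type coercion); strict: a === b (no coercion).
function compare(a: unknown, b: unknown): { loose: boolean; strict: boolean } {
  return { loose: a == b, strict: a === b };
}

console.log(compare(0, ""));           // loose true: "" coerces to 0
console.log(compare(0, "0"));          // loose true: "0" coerces to 0
console.log(compare("", "0"));         // loose false — so == isn't even transitive
console.log(compare(null, undefined)); // loose true, strict false
```

The non-transitivity alone (`0 == ""` and `0 == "0"`, but `"" != "0"`) is reason enough to stick with `===`.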