
I know the author personally. We went to the same university for IT security. His skills are undeniable. Dismissing a working tool that finds legitimate security problems is just asking for trouble. For all its flaws, there are some legitimate uses of LLMs, and this is one of them.
Maintainers of critical software can’t afford to be that ignorant.
4.6 Opus was a huge jump from earlier models and, in my experience, the first that was actually useful for things like this (and 4.7 is significantly worse for some reason).
I have made many anti-LLM posts here and I remain pretty negative on them, but they have absolutely become useful. Part of the problem is that the truth lies somewhere between the insane promises and the blanket dismissals.
My problems with them are manifold, though: from being propped up by insane subsidies, to the massive power usage, to the thing I most care about: taking more power from the masses. The more useful they get, the more power gets concentrated in those able to afford the data centers.
Computers used to be at least somewhat democratizing. Sure, there were some things, like weather modeling, that an ordinary person couldn’t do, but a random person on their computer could put something together to change the world.
What happens when the breakthroughs are available only for the wealthiest? Regular folks can buy tokens at a reasonable price today, but running cutting edge models on consumer hardware isn’t really feasible. We’ve ceded too much control.
I prefer Gemma 4. Does what I need. Obviously there are quite a few problems. But democratization of technology is starting to catch up.
What democratization? The AI companies you prefer are creating a worse oligarchy, in both an economic and a warmongering sense. This should disturb anyone who doesn’t have their head buried deep in the sand, or in the orifices of Satya Nadella or Dario Amodei.
taking more power from the masses.
That’s promoted by AI haters. The copyright people want to privatize human knowledge and charge rent for it. The latest lawsuit against Meta even includes Elsevier, ffs.
Then there are all the busybodies who want everything surveilled “for the children”, because people might be chatting about self-harm, or generating nudes, or some other “harmful” content.
a random person on their computer could put something together to change the world.
Yes. For example the random people who founded these AI start-ups.
Right now, the world of technology is uniquely malleable in a way it has not been since the dotcom crash. That’s what motivates most of the hate. It’s people who feel they will lose out; e.g., the news media, which already suffered from the rise of the internet.
I think the people who will “lose out” includes all but maybe 50 people, and then those 50 later. Look, it is awesome that this thing can find bugs and help complete code, but the way it is being made, it is actively trying to destroy society, and those making it are marketing that as a feature while they burn down the forests and evaporate all the fresh water.
We need a better rollout plan or we’ll have bug-free browsers as consolation for most of us dying. There are too many of us now to just have no solution or mitigation plans for runaway resource consumption and carbon release.
it is actively trying to destroy society,
How so?
The owners of the companies that own the largest AI models go on TV as often as they can get a reporter to point a camera at them, and almost every time they claim that these tools will be the end of work for most humans, while offering no solutions for that dramatic change.
So, in other words, they claim the AI tools will rapidly destroy one of the bigger underpinnings of western society, and offer no solutions for what to put in their place other than some half-assed UBI suggestions. If you take millions of people’s jobs away in a short time, that’s called a depression, and if they’re never coming back, that’s the end of that society.
If that is where we are destined to go, doing so without a plan for what to do about the masses of unemployed working-age people will lead to global suffering, death, riots, and warfare. Rather than gleefully floor it over a cliff, perhaps we can take the reins from the sociopathic tech bros and try to gracefully migrate to a post-work society without most of us having to die.
Note that the previous paragraph is for the sake of the debate; I do not actually believe that LLMs will meaningfully disrupt global economics over the long term, once the vast should-be-illegal money-duplicating scam the AI companies and Nvidia are engaged in is put to a halt.
Those issues are something that societies and democratically elected representatives should work out, not some billionaires. The problem seems to be that some people with a good thing going only want change that benefits them personally. They prefer no change to something that benefits everyone (but them relatively less).
Indeed, but, and I’m going to throw a wild hypothetical out there: What if the billionaires use their billions to pay that society’s elected officials not to fix these issues, and then, when those officials are voted out, they pay the new ones?
Is it the billionaire’s job to undermine the society they’re in? You asked how they’re undermining us, and I provided some of the many valid answers. If it’s society’s job to fend off constant attacks from billionaires, I think it is stupid to keep doing that rather than take away the thing they use to repeatedly violate it: their excess money.
General_Effort, it’s pretty clear that you are an AI evangelist, so you should know this already. But for people who are genuinely unfamiliar, they should look at the creepy words of Anthropic ally, Palantir.
Palantir CEO Says a Surveillance State Is Preferable to China Winning the AI Race
That’s 1 company, not AI as a whole, and its services are obviously much in demand by our democratically elected leaders.
Not only are they the company you decided to post about here, they are also the company people will often claim is the ethical one. OpenAI and X are worse.
You can’t just stick your head in the sand when you see inconvenient truths.
Yes. I hesitated to post this because I understand that many here would prefer not to know. But, at least, people need a chance to learn the facts and make their own decisions. The amount of anti-AI disinformation is crazy.
I agree. People turn a blind eye to a breakthrough, only to be left behind when it’s used against them. Inform yourself so you can be an informed member of society.
The amount of Claude Mythos disinformation pushed by MSM is crazy.
That hasn’t aged well.
Why do you think that?
*gestures at OP*
Were you thinking of this instead?
No. Why would I?
What makes you think your article makes Anthropic look any less meager or perverse (respectively) than in the two articles I provided?
I don’t actually know what you mean.
Would he have any moral or ethical concerns about promoting a tool that was used to legitimize the killing of children in Iran?
You seem like the kind of person who owns a salt lamp.
@Nomad@infosec.pub thoughts, now that you read this? I hope you and your friend don’t turn a blind eye. Mozilla’s ethical stance is important.
It’s not the tool that is evil, but the intent it’s used with. That’s all I’m gonna say on that topic.
I’m learning a lot about Mozilla’s ethics today.
It sounds like you and your friend believe they are outmoded. Anthropic is not just a toolmaker: they host the thing that was used to decide to kill children, and their CEO Dario Amodei has expressed a desire to continue building war weapons.
In retrospect, do you and your friend believe that the product made by a homophobe such as Brendan Eich is just a tool too?
@Nomad@infosec.pub if you’re going to interact, please remember that you were accepted by the Firefox fan community as a spokesperson for your Mozilla employee friend. How deeply are you and your friend burying your heads? Surely your ethics haven’t degenerated into no longer believing the thing you said a few hours ago, I hope.
It would be interesting to know the token cost of all this. I think they are getting lots of tokens free as advertising for Anthropic; would it be feasible otherwise?
They are supporting some projects with free tokens. Their own people are also helping to find and patch bugs. The latter is probably more about improving the model and harness than PR. I don’t think they have to worry about advertising anymore. Mind that there would certainly be a major outcry if something important got hacked with help from their service. They might even have to pay damages. But I wouldn’t put it past them to be acting responsibly as a matter of principle.


