

Lots of people do that. It’s how they end up in crippling debt.


Silence Unknown Callers and Filter Unknown Senders on iPhone.
Doesn’t iPhone have an “auto silence spam callers” function? The one on Android is amazingly good.
Are there? I find unlocking my phone takes more time than shaking.
Since I have a shake-for-light feature, I’ve started using it numerous times every day.


This entire thread is a gigantic Yo Momma joke, only real


my body isn’t what it used to be. Office chair is about to be on that list as well, though I’ve been putting it off for ages
I found a (used) standing desk was roughly the same cost as a good office chair, and I’m very happy with my choice for the desk. I stand up about 6 hours out of the day, and it’s been great for my back and feet.


They spend 8 hours a day inconveniencing themselves for money.


Seeing much younger versions of myself wearing the goth stuff I kept for nostalgia reasons made me try it on again.
Turns out my parents were right, it was a phase. Damnit.


Exactly. I’d rather see Russia building Dachas than bombs


Also, Russia technically isn’t at war right now either.


This is great news for Ukraine


Yo dawg, I heard you like ads, so I put some ads in your ads so I can sell toys while I sell toys


Workplace safety is quickly turning from a factual and risk-based field into a vibes-based field, and that’s a bad thing for 95% of real-world risks.
To elaborate a bit: the current trend in safety is “Safety Culture”, meaning “Getting Betty to tell Alex that they should actually wear that helmet and not just carry it around”. And at that level, that’s a great thing. On-the-ground compliance is one of the hardest things to actually implement.
But that training is taking the place of actual, risk-based training. It’s all well and good that you feel comfortable talking about safety, but if you don’t know what you’re talking about, you’re not actually making things safer. This is also a form of training that’s completely useless at any level above the worksite. You can’t make management-level choices based on feeling comfortable; you need to actually know some stuff.
I’ve run into numerous issues where people feel safe when they’re not, and feel at risk when they’re safe. Safety Culture is absolutely important, and feeling safe to talk about your problems is a good thing. But that should come AFTER actually being able to spot problems.


I do (workplace) safety, compliance and hazardous waste handling.


I’m a bit more pessimistic. I fear that LLM-pushers calling their bullshit-generators “AI” is going to drag other applications down with it. Because I’m pretty sure that when LLMs all collapse in a heap of unprofitable e-waste and take most of the stock market with them, the funding and capital for the rest of AI is going to die right along with LLMs.
And there are lots of useful AI applications in every scientific field; data interpretation with AI is extremely useful, and I’m very afraid it’s going to suffer from OpenAI’s death.


you will never be able to eliminate your attack surface, and employees with good will can be your eyes and ears on the ground.
All the good will in the world won’t make up for ignorance. Most people know next to nothing about IT security, and will just randomly click shit to make the annoying box go away and/or get to where they think they want to go. And if that involves installing a random virus, they’ll happily do it, and be annoyed that it requires their password.


I wonder if this is what it felt like for the first people who thought “you know what, instead of this horse carrying my spare food and spears, I bet I could sit on it DURING the fight!”


That’s impressive!
Even Russia would probably have netting and sonar buoys in place around their own major naval base. It seems extremely unlikely that Russia wouldn’t take those super basic measures, which means this drone was stealthy enough to avoid sonar systems AND able to dodge submarine/torpedo netting while inside a shallow port.


The column is Ionic, but don’t let that detract from the joke!
Doric is straight, Ionic has scrolls, Corinthian has frills and leaves.
Really? Cool! What’s your take on people deliberately muddying the waters by conflating LLMs with other forms of AI like interpretative models?
And especially keeping in mind that the latter are a roaring success, while the former is probably the single most expensive waste of time, money and effort known to humanity.