
I kinda prefer her without


Sometimes you gotta nut.

She sure is hot, probably could chug a few thousand gallons of water.
I like a girl who stays hydrated
She looks sexy without makeup
“you wouldn’t fuck a server would you?”
I read this to the tone of “you wouldn’t download a car.”
What connotation of fuck are we using?
This day and age, it feels like there's a 50:50 chance it's either a rack or a 48-year-old Asian man.
So that means there’s a small chance it’s an Asian man with a giant rack?
Fuck yeah, sign me up!
I like where your head is at.
Deep inside some asian man’s rack probably.
I’m not seeing the downside. Have you seen that cable management?!
Looks like cable ties. As a former network admin: fuck cable ties.
Good call.
She was a vast machine.
She keeps her uptime green.
She was a mem'ry hungry woman
That I, surely deem
She'll lead to our demise:
Hallucinations, lies.
Selling us out, as she's a corporate spy.

Poisoning all our air.
Pullin' out all my hair.
They told me "comply"
But it's a slop-filled affair.

'Cause the walls start shakin',
The Earth was quakin',
My mind was achin',
Society's breakin'

And you
Simply don't belong.
Yeah, you
Simply don't belong.

I don't love the chorus but I got nothin'. 🤷‍♂️
I mean, you can run an LLM locally; it's not that hard (sketch below).
And you can run such a local machine off of solar power, if you have an energy-efficient setup.
It is possible to use this tech in a way that is not horrendously evil, and instead merely somewhat questionable, lol.
Hell, I guess you could arguably literally warm a room of your home with your conversations.
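To back up the "not that hard" point, here's roughly what it looks like in Python with llama-cpp-python. This is just a minimal sketch, not anyone's actual setup: the model path is a placeholder, and any quantized GGUF chat model you've downloaded would do.

```python
# Minimal local-LLM sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path below is a placeholder -- point it at any GGUF chat model.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if you have one
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hi in five words."}],
    max_tokens=32,
)
print(out["choices"][0]["message"]["content"])
```

Everything stays on your own hardware, so the power draw is just whatever your GPU pulls, which is the point the solar comment is making.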
As far as energy goes, it's a matter of degree. LLMs are mainly bad emissions-wise because of the sheer volume of calls being made. If you're running one on your own GPU, the emissions are comparable to playing a game or doing something similarly demanding on the same hardware.
The bigger issue is image generation models, which are about 1000 times worse: https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/
Original Paper: https://arxiv.org/pdf/2311.16863
A moderately sized text-to-text model that you'd run locally emits about 10 g of carbon per 1000 inferences, which is like driving a car about 1/40th of a mile. Even assuming your model runs in some kind of agentic loop, at maybe 5 inferences per actual response that gets to you (though it could be dozens depending on the architecture), that's 10 g of carbon per 200 messages, which is at least 2-3 sessions on the heavy end, I would think. Do that every day for a year and it's equivalent to driving about 9 miles.
Image generation, however, is 1000-1500x that, so just chatting with your GF isn't that bad. Generating images is where it really adds up.
I wouldn't trust these numbers exactly; they're ballpark. There are optimizations they don't include, and a million other variables could make it more expensive. I doubt it would be more than 10-20 miles in a car per year even for really heavy usage, though.
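The arithmetic above is easy to sanity-check. Here's a back-of-envelope sketch in Python using the thread's own figures: ~10 g of CO2 per 1000 text inferences (from the paper), the EPA's ~400 g of CO2 per mile for an average gas car, and the 1000-1500x image multiplier. All the constants are ballpark assumptions, per the caveats above.

```python
# Back-of-envelope CO2 math for local LLM chatting, using the thread's figures.
# All constants are ballpark assumptions, not measurements.

G_PER_1000_TEXT_INFER = 10.0  # ~10 g CO2 per 1000 text inferences (from the paper)
G_PER_MILE_CAR = 400.0        # EPA average gas car: ~400 g CO2 per mile
INFER_PER_MESSAGE = 5         # assumed agentic-loop overhead per visible response
IMAGE_MULTIPLIER = 1000       # image gen is ~1000-1500x a text inference

def grams_for_messages(n_messages: int) -> float:
    """CO2 grams for n chat messages at INFER_PER_MESSAGE inferences each."""
    inferences = n_messages * INFER_PER_MESSAGE
    return inferences / 1000 * G_PER_1000_TEXT_INFER

daily = grams_for_messages(200)  # the "heavy day": 200 messages
yearly = daily * 365
print(f"200 messages/day: {daily:.0f} g/day, {yearly / 1000:.1f} kg/year")
print(f"= driving about {yearly / G_PER_MILE_CAR:.1f} miles/year")

# One generated image vs one text inference, same units:
img = IMAGE_MULTIPLIER * (G_PER_1000_TEXT_INFER / 1000)
print(f"one image: ~{img:.0f} g CO2 (vs ~0.01 g for one text inference)")
```

Which lands right where the comment does: 200 messages a day works out to roughly 9 miles of driving per year, well inside the 10-20 mile upper bound, and each generated image costs about as much as a thousand text inferences.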

Feeling cute, might cause RAM shortages later.
I like the bottom image
Hell! I’d still tap that!