

Fully Automated Luxury Gay Space Communism
Just a guy shilling for gun ownership, tech privacy, and trans rights.
I’m open for chats on mastodon https://hachyderm.io/
my blog: thinkstoomuch.net
My email: nags@thinkstoomuch.net
Always looking for penpals!
I have helped my neighbor across the hall’s kid with his gaming PC.
Couldn’t tell ya her name, but the dog below me with a heart problem is named Sophie, the neighbors down the hall have cats named Mink and Stink, and a few buildings down there’s a lady with two huskies, one named Pogs and the other Skips (Skips has 3 legs).
Oh the rossman video.
I hate how obsessed with dumb shit he gets. The man is legitimately doing great work usually, and then he takes something minor that an otherwise ally says or does and blows it out of proportion.
This man would have made a great tankie. Unfortunately he made a whole 20-minute video on why AOC is stupid for saying unskilled labor doesn’t exist, and then spent it explaining exactly the points she was making.
I legitimately love this man’s work and I wanna support him, but man is he petty.
I’m mad you started your citation at 0.
I’m telling Chicago style!
I’ve been trying to talk my fiance into moving because I found out that one of the guys my high school sweetheart ex cheated on me with frequents my favorite coffee shop.
I’m considering a coastal swap.
Life hack: Move to the other side of the state.
True, but I have an addiction and that’s buying stuff to cope with all the drawbacks of late stage capitalism.
I am but a consumer who must be given reasons to consume.
The Lenovo ThinkCentre M715qs were $400 total after upgrades. I fortunately had 3 32GB kits of RAM from my work’s e-waste bin, but if I had to add those it would probably be $550 ish.
The rack was $120 from 52Pi, and I bought 2 extra 10in shelves for $25 each. The Pi cluster rack was also $50 (shit, I thought it was $20. Not worth). The patch panel was $20, there’s a UPS that was $80, and the switch was $80.
So in total I spent $800 on this set up
To fully replicate from scratch you would need to spend $160 on raspberry pis and probably $20 on cables
So $1000 theoretically.
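For anyone checking my math, the breakdown above sums like this (prices copied from the post; the from-scratch figure actually comes to $980, which rounds to the $1000 I quoted):

```python
# Sanity check on the cost breakdown above (all numbers from the post).
parts = {
    "ThinkCentre M715q nodes (after upgrades)": 400,
    "52Pi 10in rack": 120,
    "2 extra shelves @ $25": 50,
    "Pi cluster rack": 50,
    "patch panel": 20,
    "UPS": 80,
    "switch": 80,
}

total = sum(parts.values())       # what was actually spent
from_scratch = total + 160 + 20   # add Raspberry Pis and cables if starting fresh

print(total, from_scratch)        # 800 980
```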
I’ve only seen the episode with Toby Turner in it and it has made me a worse person.
Put some damn fan service into Law and Order: SVU and I might start watching!
Am I right fellas!?
I was thinking about that now that I have Mac Minis on the mind. I might even just set a mac mini on top next to the modem.
Ollama + Gemma/Deepseek is a great start. I have only run AI on my AMD 6600 XT and that wasn’t great, and everything I know says AMD is fine for gaming-type AI tasks these days, but not really LLM or gen AI tasks.
An RTX 3060 12GB is the easiest and best self-hosted option in my opinion. New for <$300, and used even less. However, I was running a GeForce 1660 Ti for a while and that’s <$100.
A Mac is a very funny and objectively correct option.
I think I’m going to have a harder time fitting a threadripper in my 10 inch rack than I am getting any GPU in there.
With an RTX 3060 12GB, I have been perfectly happy with the quality and speed of the responses. It’s much slower than my 5060 Ti, which I think is the sweet spot for text-based LLM tasks. A larger context window, provided by more VRAM or a web-based AI, is cool and useful, but I haven’t found the need for that yet in my use case.
As you may have guessed, I can’t fit a 3060 in this rack. That’s in a different server that houses my NAS. I have done AI on my 2018 Epyc server CPU and it’s just not usable. Even with 109GB of RAM, not usable. Even clustered, I wouldn’t try running anything on these machines. They are for Docker containers and Minecraft servers. Jeff Geerling probably has a video on trying to run an AI on a bunch of Raspberry Pis. I just saw his video using Ryzen AI Strix boards and that was ass compared to my 3060.
But for my use case, I am just asking AI to generate simple scripts based on manuals I feed it, or some sort of writing task. I either get it to take my notes on a topic and make an outline that makes sense and I fill it in, or I feed it finished writing and ask for grammatical or tone fixes. That’s fucking it, and it boggles my mind that anyone is doing anything more intensive than that. I am not training anything, and 12GB of VRAM is plenty if I wanna feed like 10-100 pages of context. Would it be better with a 4090? Probably, but for my uses I haven’t noticed a difference in quality between my local LLM and the web-based stuff.
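The manual-in, script-out workflow above can be sketched against Ollama’s local HTTP API. To be clear, this is an illustration, not my actual setup: the model name, endpoint, and prompt template below are all placeholder assumptions.

```python
import json

# Assumed default endpoint for a local Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(manual_text: str, question: str, model: str = "gemma2") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint,
    stuffing a manual excerpt into the prompt as context."""
    prompt = (
        "Using only the following manual excerpt, answer the question.\n\n"
        f"--- MANUAL ---\n{manual_text}\n--- END MANUAL ---\n\n"
        f"Question: {question}\n"
    )
    return {"model": model, "prompt": prompt, "stream": False}


# To actually send it (requires a running Ollama server with the model pulled):
# import urllib.request
# body = json.dumps(build_payload(open("manual.txt").read(),
#                                 "Write a script that resets the device")).encode()
# req = urllib.request.Request(OLLAMA_URL, data=body,
#                              headers={"Content-Type": "application/json"})
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

With 12GB of VRAM the limiting factor is how much manual text you paste in, so in practice you’d trim the excerpt to the sections that matter before building the prompt.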
That’s fair and justified. I have the label maker right now in my hands. I can fix this at any moment and yet I choose not to.
I’m the man feeding orphans to the orphan crushing machine. I can stop this at any moment.
Oh, and my home office setup uses Tiny-in-One monitors, so I configured these by plugging them into my monitor, which was sick.
I’m a huge fan of this all in one idea that is upgradable.
These are M715q ThinkCentres with a Ryzen 5 Pro 2400GE.
Not a lot of thought went into the rack choice, really. I wanted something smaller and more powerful than the several OptiPlexes I had.
I also decided I didn’t want storage to happen here anymore, because I am stupid and only knew how to pass through disks for TrueNAS. So I had 4 TrueNAS servers on my network and I hated it.
This was just what I wanted at a price I was good with, at like $120. There’s a 3D-printable version but I wasn’t interested in that. I do want to 3D print racks, and I want to make my own custom ones for the Pis to save space.
But this set up is way cheaper if you have a printer and some patience.
I learned something interesting from my AI researcher friend.
ChatGPT is actually pretty good at giving mundane medical advice.
Like “I’m pretty sure I have the flu, what should I do?” Kinda advice
His group was generating a bunch of these sorta low stakes urgent care/free clinic type questions and in nearly every scenario, ChatGPT 4 gave good advice that surveyed medical professionals agreed they would have given.
There were some issues though.
For instance it responded to
“Help my toddler has the flu. How do I keep it from spreading to the rest of my family?”
And it said
“You should completely isolate the child. Absolutely no contact with him.”
Which you obviously can’t do, but it is technically a correct answer.
Better still, it was also good at knowing its limits; anything that needed more than OTC meds and bedrest was seemingly recognized, and it would suggest going to an urgent care or ER.
So they switched to Claude and Deepseek because they wanted to research how to mitigate failures and GPT wasn’t failing often enough.