

There’s a few options now, but you can get Intel N100 ITX boards, like this one from ASUS with a soldered-on N100 CPU, for the cost of a normal motherboard.
If it’s just a NAS, and I do recommend having a separate “just a NAS,” that CPU fucks hard.
Just a guy shilling for gun ownership, tech privacy, and trans rights.
I’m open for chats on Mastodon: https://hachyderm.io/
my blog: thinkstoomuch.net
My email: nags@thinkstoomuch.net
Always looking for penpals!


Oh man, Teams + Outlook + Office 365 + OneDrive + Copilot?
So good for office shit. So bad for good practices.
“Hey copilot I’m pretty sure I got an email asking if I had an SOP on X. Can you find that email and the SOP?”
“Copilot, using the recording of the teams meeting ‘Training from Vendor X’ and my notes on ‘Tool Y’ can you compile that into a FAQ sheet for us?”
Sure it misses stuff and is only so good because none of the data is private, but man, that’s 90% of my workload for SOP making. Worth the $400 a year corporate pays for it.
And if you just want a NAS? It’s really hard to go wrong with a 4-bay NAS from one of the reputable vendors (which may just be Ugreen at this point?), as those tend to still come out cheaper than building it yourself, and 4 disks means you can either play with fire with RAID5 or not be stupid and do RAID1.
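For a sense of the tradeoff with 4 equal disks (drive sizes here are just an example, and “RAID1” on 4 disks is assumed to mean two mirrored pairs):

```python
# Usable capacity for 4 drives of 8 TB each (example sizes)
drives, size_tb = 4, 8

raid5_usable = (drives - 1) * size_tb      # one disk's worth lost to parity
mirrored_usable = (drives // 2) * size_tb  # two mirrored pairs

print(raid5_usable, mirrored_usable)  # 24 16
```

So RAID5 buys you an extra disk of capacity in exchange for the scarier rebuild.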
Actually, ASUS started selling N100 motherboards with the CPU soldered on for $120.
That plus a Jonsbo N2 or N3 and a few extra pieces, and it’s a few hundred dollars cheaper than the Ugreen options. Sure, it will probably run TrueNAS instead of Ugreen’s custom TrueNAS-based OS or whatever it’s built on, but that extra $300 is another 24TB hard drive or a HexOS lifetime subscription.
There’s also always the classic: buy an old mid-size tower for $100 and slap two massive hard drives in it.
Imagine less of a proper BOD and more of a 3D-printable holder for 3.5-inch drives with no actual connections.
I was considering a mini-ITX PC with just an external SAS-to-SATA PCIe card. But at the rate of just building that, I might end up building a better tiny NAS box with maybe a Jonsbo case like the N3.
That’s basically what I’m going for.
How are you connecting the mini PC via SATA, and how are you powering the BOD?


Do you fear technology?
Oh yes! Greatly!
Windows
Ah, a different kind of fear was meant…


A 4060 Ti has been out long enough that you’re fine with basically any mainstream distro.
I think even the 50 series is fine now with most mainstream distros as well.
I still prefer Arch-based distros for Nvidia cards, and honestly, Fedora is great!


It was really simple to do in Proxmox.
You will find no-name HBAs in IT mode on eBay for half the price of Intel-, Supermicro-, Dell-, etc.-branded ones. Do not buy the no-names. I spent a week flashing and reflashing a cheap one, cycling through cables, etc. Nothing.
My Supermicro-branded one worked with absolutely no issue. And I think it was like $40.
It probably took a total of 30 minutes to pass it through, build the VM, and everything. It took a couple of days to rebuild my data from my previous TrueNAS server, but I had 10TB of data on 4 drives.
The only issues I’ve had have been my own reading comprehension in setting up TrueNAS accounts.
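For anyone curious, the passthrough itself in Proxmox is basically two commands once IOMMU is enabled in the BIOS and kernel. A sketch — the PCI address and VM ID here are examples, find your own with lspci:

```shell
# Find the HBA's PCI address (look for the SAS/LSI controller in the output)
lspci -nn | grep -iE 'sas|lsi'

# Pass the whole device through to the TrueNAS VM
# (VM ID 100 is an example; pcie=1 needs the q35 machine type)
qm set 100 --hostpci0 0000:01:00.0,pcie=1
```

The rest is just installing TrueNAS in the VM and letting it see the disks directly.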


Are you using TrueNAS as the entire homelab?
I also love messing with stuff until it breaks and I learn something, but I’ve decided I just want my files to be accessible.
So I actually have TrueNAS virtualized with a passed-through HBA, so I can run Proxmox to host all my breakable VMs while leaving TrueNAS alone.


Dell Optiplex 3050
Lenovo m720
HP whatever with a 7th-gen Intel
All can be had for $50-ish
A Microsoft-glazing botnet leveraging Copilot and all of r/linuxsucks as training data to shitpost on Lemmy, made by a developer who took a janitorial job at Microsoft to “get his foot in the door” during an internal hackathon he was accidentally invited to.
From what I understand it’s not as fast as a consumer Nvidia card, but close.
And you can have much more “VRAM” because they use unified memory. I think the max is 75% of total system memory going to the GPU. So a top-spec Mac mini M4 Pro with 48GB of RAM would have 36GB dedicated to GPU/NPU tasks for $2000.
Compare that to JUST a 5090 with 32GB for $2000 MSRP, and it’s pretty compelling.
Add $200 and it’s the 64GB model, with two 4090s’ worth of VRAM.
It’s certainly better than the AMD AI experience, and it’s the best price for getting into AI stuff, or so say nerds with more money and experience than me.
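The unified-memory math, roughly (this assumes the ~75% cap mentioned above holds; the exact limit varies by model and macOS version):

```python
gpu_cap = 0.75  # rough share of unified memory usable by the GPU

print(48 * gpu_cap)  # 36.0 GB on the 48GB M4 Pro
print(64 * gpu_cap)  # 48.0 GB on the 64GB model, i.e. two 24GB 4090s' worth
```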
True, but I have an addiction and that’s buying stuff to cope with all the drawbacks of late stage capitalism.
I am but a consumer who must be given reasons to consume.
The Lenovo ThinkCentre M715q units were $400 total after upgrades. I fortunately had 3 32GB kits of RAM from my work’s e-waste bin, but if I had to add those it would probably be $550-ish.
The rack was $120 from 52Pi.
I bought 2 extra 10-inch shelves for $25 each.
The Pi cluster rack was also $50 (shit, I thought it was $20. Not worth).
The patch panel was $20.
There’s a UPS that was $80.
And the switch was $80.
So in total I spent $800 on this setup.
To fully replicate from scratch you would need to spend $160 on Raspberry Pis and probably $20 on cables.
So $1000, theoretically.
The PIs were honestly because I had them.
I think I’d rather use them for something else like robotics or a Birdnet pi.
But the pi rack was like $20 and hilarious.
The objectively correct answer for more compute is more mini PCs, though. And I’m really thinking about the Mac mini option for AI.
Ollama and all that runs on it; it’s just the firewall rules and opening it up to my network that are the issue.
I cannot get ufw, iptables, or anything like that working on it. So I usually just SSH into the PC and do a CLI-only interaction. Which is mostly fine.
I want to use OpenWebUI so I can feed it notes and books as context, but I need the API, which isn’t open on my network.
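In case it helps: by default Ollama only binds to 127.0.0.1:11434, so the firewall isn’t the only thing in the way. A sketch, assuming the standard systemd install (the subnet is an example — use your LAN’s):

```shell
# Make Ollama listen on all interfaces instead of loopback only
sudo systemctl edit ollama.service
# In the override that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
sudo systemctl restart ollama

# Then open the port to the LAN only (ufw shown; firewalld is similar)
sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp
```

After that, OpenWebUI on another box can point at http://that-machine:11434.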
With an RTX 3060 12GB, I have been perfectly happy with the quality and speed of the responses. It’s much slower than my 5060 Ti, which I think is the sweet spot for text-based LLM tasks. A larger context window, provided by more VRAM or a web-based AI, is cool and useful, but I haven’t found the need for that yet in my use case.
As you may have guessed, I can’t fit a 3060 in this rack. That’s in a different server that houses my NAS. I have done AI on my 2018 Epyc server CPU, and it’s just not usable. Even with 109GB of RAM, not usable. Even clustered, I wouldn’t try running anything on these machines. They are for Docker containers and Minecraft servers. Jeff Geerling probably has a video on trying to run an AI on a bunch of Raspberry Pis. I just saw his video using Ryzen AI Strix boards, and that was ass compared to my 3060.
But for my use case, I am just asking AI to generate simple scripts based on manuals I feed it, or some sort of writing task. I either get it to take my notes on a topic and make an outline that makes sense and I fill it in, or I feed it finished writing and ask for grammatical or tone fixes. That’s fucking it, and it boggles my mind that anyone is doing anything more intensive than that. I am not training anything, and 12GB of VRAM is plenty if I wanna feed it like 10-100 pages of context. Would it be better with a 4090? Probably, but for my uses I haven’t noticed a difference in quality between my local LLM and the web-based stuff.
That’s fair and justified. I have the label maker right now in my hands. I can fix this at any moment and yet I choose not to.
I’m the man feeding orphans to the orphan-crushing machine. I can stop this at any moment.
Oh, and my home office setup uses Tiny-in-One monitors, so I configured these by plugging them into my monitor, which was sick.
I’m a huge fan of this all in one idea that is upgradable.
FreeCAD isn’t terrible if you haven’t already learned F360.
I had to watch a bunch of videos on FreeCAD to sorta unlearn the workflow of F360 stuff, but it’s not bad.