What’s up, what’s down and what are you not sure about?
Let us know what you set up lately, what kind of problems you currently think about or are running into, what new device you added to your homelab or what interesting service or article you found.
Got my JetKVM in the mail yesterday. Really sleek build and software. Liking it a lot so far.
Migrated my network to a router running OpenWrt this past week as well. Having issues with avahi-daemon crash looping, so I haven’t been able to get mDNS working between networks 🤷
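For future reference once the crash loop is sorted: Avahi’s reflector mode is what repeats mDNS between networks. A minimal sketch of /etc/avahi/avahi-daemon.conf (the interface names are placeholders for your LAN bridges):

```
[server]
# limit avahi to the interfaces that should share mDNS
allow-interfaces=br-lan,br-guest

[reflector]
# repeat mDNS queries/answers between those interfaces
enable-reflector=yes
```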
Finally starting my self-hosted journey. I have everything I need: I’m setting up a 6 TB NAS for Linux ISOs, photos, and files. And I recently got a “broken” laptop that works perfectly fine, which I’ll use for running all my applications in Proxmox, such as Immich, Jellyfin, and Nextcloud. And probably many others in the near future.
More incus:
- mounting persistent storage into containers (cheating by exporting NFS from my Proxmox ZFS into the Incus host)
- wrote a pruning backup script for containers, runs daily
- passed through hardware (QuickSync) into the Jellyfin container (it works!)
- launched an OCI container (Docker Home Assistant) natively in Incus (this is a game-changer! rough commands sketched below)
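For anyone curious, the rough shape of those steps (instance, path, and backup names are just examples from my setup):

```
# mount persistent storage (the NFS export mounted on the incus host) into a container
incus config device add jellyfin media disk source=/mnt/nfs/media path=/media

# pass the iGPU (QuickSync) through to the container (add gid= for the video group if needed)
incus config device add jellyfin igpu gpu

# add Docker Hub as an OCI remote and launch Home Assistant from it
incus remote add docker https://docker.io --protocol=oci
incus launch docker:homeassistant/home-assistant ha

# the daily backup script boils down to export + prune
incus export ha /backups/ha-$(date +%F).tar.gz
find /backups -name '*.tar.gz' -mtime +7 -delete
```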
Next:
- build 2nd incus node
- move all containers from proxmox to incus
- decom proxmox
- set up Debian with NFS export
I hear about Incus being the next best thing. I’ve never played around with it. Is it all that and a bag o’ chips?
I think so.
It is LXD + KVM, so you get way more and finer-grained control over LXC instances. It can run OCI images as well, so for Docker instances with only a few configs and no persistent storage, it is actually quite handy. For Docker instances that need pretty complicated compose files, I just run Docker inside an LXC for now, until I figure that out.
Does Incus allow you to use a VM with a GUI? One thing that’s nice about Proxmox is I have one VM with a very basic lxqt setup for when I need that, and I can either use remote-viewer + the spice protocol to access it or access it through the Proxmox web ui. That’s been very handy.
Side question, but where are you hearing this about incus?
I’m wrapping up 9 years of using Proxmox and I have very specific reasons for switching to Incus, but this is the third time I’m fielding questions about Incus in the last month.
I read a lot. LOL I might not understand it all, but I read TBs of articles and stuff.
I have a self-hosted AI system that works pretty well. I can interact with it via my phone, the shell, my IRC server, and I can verbally talk to it.
But I want to get it to remember things, so I need to start working on RAG or something. Eventually I’d like to be able to have it draft emails for me, and schedule appointments.
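The core loop I’m sketching out is basically: embed notes, retrieve the nearest ones, stuff them into the prompt. Something like this, assuming sentence-transformers is installed and a local OpenAI-compatible endpoint is running (the URL and model name are placeholders, e.g. for Ollama):

```python
# Minimal RAG sketch: embed notes, retrieve the closest ones, stuff them into the prompt.
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

notes = [
    "Dentist appointment is on the 14th at 3pm.",
    "Mum's new phone number is saved under 'Mum (new)'.",
    "The backup NAS lives at 192.168.1.40.",
]
note_vecs = embedder.encode(notes, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k notes most similar to the query (cosine, since vectors are normalized)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = note_vecs @ q
    return [notes[i] for i in np.argsort(scores)[::-1][:k]]

def ask(query: str) -> str:
    context = "\n".join(retrieve(query))
    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",  # placeholder endpoint
        json={
            "model": "llama3",  # placeholder model name
            "messages": [
                {"role": "system", "content": f"Use these notes if relevant:\n{context}"},
                {"role": "user", "content": query},
            ],
        },
        timeout=120,
    )
    return resp.json()["choices"][0]["message"]["content"]

print(ask("When is my dentist appointment?"))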
Same, except for the IRC; I have a Python thing to interface with it.
Stealing your idea, that sounds awesome.
I run Coolify and I have to make my own solutions, so I’m learning a lot about Docker.
I tried to update my Lemmy instance and it all went so horribly wrong. The DB never came up, errors everywhere, and searching implied I had updated to a dev branch sometime in the past (not a dev, don’t think I did), and it’ll be console and DB queries for a fix.
Out of time and overwhelmed, I restored backups and buried my head in the sand. Nope, not now. Future, yes, but oh not now.
Sometimes we get so engrossed in what we’re doing that we can’t see the problem(s). I do that a lot, so I have to take a break. Same with creating music. You get so deaf to what you are trying to write that nothing sounds good no matter what you do. In the words of Snoop Dogg, ‘I had to back up off of it and sit my cup down. Tanqueray and chronic, yeah, I’m fucked up now.’
Take a break.
Firing up my NAS and Arrs. My Aoostar WTR Pro and all the components arrived, it’s all set up, and I swapped out the fan for a larger one to get more airflow into the NVMe drive area since I live in a hot climate.
Spending the day configuring a VPN, SABnzbd, and qBittorrent. Already learning a lot!
Scrubbing a little demo project I made featuring a web app behind oauth2-proxy, leveraging Keycloak as a local IdP with social login. It also uses a devcontainer config for development. The demo app uses the Litestar framework (formerly Starlite, in Python) because I was interested, but it’s hardly the focus. Still gotta put Caddy in front of it all for easy SSL. Oh, and clean up all the default secrets I’ve strewn about with appropriate secret management.
All of it is via rootless podman and declarative configuration.
Think I might have to create my own Litestar RBAC plugin that leverages the OAuth headers provided by the proxy.
It has been a minute since I worked daily in this space, so it has been good to dust off the cobwebs.
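If it helps anyone, the Caddy part should end up tiny. A sketch with placeholder hostnames, assuming oauth2-proxy is on its default port 4180 and Keycloak on 8080 in the same podman network:

```
app.example.com {
	reverse_proxy oauth2-proxy:4180
}

auth.example.com {
	reverse_proxy keycloak:8080
}
```

Caddy fetches and renews the certs on its own, which is the whole appeal.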
I’d appreciate some feedback on what I’m looking to do.
I’m wanting to follow the FUTO guide, but I don’t want to build a router, to save on some money for now.
So I’m planning on buying a Mikrotik MT RB750Gr3 and putting OpenWrt on it, then using my current TP-Link Archer C6 as a wireless access point. (will buy a dedicated AP in the future).
One thing I wonder is if there is a MikroTik model that would be better?
I’m using the RB5009, but I’m running RouterOS, not OpenWrt. Any reason why you’d want to do that?
I personally think if you’re buying purpose-built hardware and then putting your own software on it, you should move to a mini computer with OPNsense.
Besides adding a UPS, how do you deal with power failures? Are you somewhere where they’re not much of a problem?
In my experience mini computers don’t handle power failures nearly as well as purpose-built hardware.
After several power failures the SSD on my Raspberry Pi became so corrupted it wouldn’t boot, and I was 250 miles away at the time and lost access to my home network for weeks. Overlay file systems work but are a PITA to maintain. By contrast my routers have never had a problem even with repeated power failures, so instead of relying on the Pi I’ve moved my DNS and Wireguard servers to my router.
All of my remote routers are running RouterOS without anything on top of it. RouterOS is powerful enough for anything I throw at it. But I am using much beefier routers: I have 2 x RB5009 and a hAP ax3, which have plenty of flash and RAM to run the additional packages I need.
As for normal computers, I have them on a UPS and I back up core files to off-site locations. Additionally, I buy SSDs that have a little bit of power-loss protection.
I’ve never had issues with mini PCs, but I’ve had issues with Pis. I’ve since switched to high-endurance SD cards for my Pis and they’ve been rock solid. One has actually been semi-exposed to the elements for about a year now without a hiccup.
With RouterOS you can still use DoH with either a self-hosted blocklist or a curated ad list. If you want to self-host a DNS server, I’d just host an AdGuard Home instance on a VPS for all of your devices.
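For reference, the DoH bit is nearly a one-liner on RouterOS (the resolver URL is whichever provider you pick, and you may need to import that provider’s root CA first for cert verification):

```
/ip dns set use-doh-server="https://dns.adguard-dns.com/dns-query" verify-doh-cert=yes
/ip dns set allow-remote-requests=yes
```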
I also have 2 VPN systems for my remote management, on 2 separate systems. I learned that the hard way when one of my clients was 8 time zones away.
Power loss protection on SSDs is an interesting addition I hadn’t come across before.
We live in a very windy area and power blinks are common. A high endurance MicroSD was in use the first time the Pi wouldn’t boot, but I was in town and it was just annoying. It was a big issue when the Pi wouldn’t boot from the SSD while I was out of the country.
We don’t have high bandwidth demands so any decent OpenWRT router works fine and supports both Adguard Home and Wireguard. What I really like about putting WG in particular on the router is that if the router is up, WG is working, and the routers come back up without fail after every power outage. A 2nd Wireguard instance still runs on my Pi but since switching to WG on the router a year ago there hasn’t been a reason to even connect to it.
My problems with the Pi had me looking for other solutions, and I ended up with a mini Dell laptop running Debian. (Can’t easily run WG on it due to some software conflicts.) It alleviates the need for a UPS and runs for 6+ hours if the power goes out, rather than the minutes provided by my small UPS.
One of these days I’ll find a bogus reason to talk myself into upgrading the router with more powerful hardware. Mikrotik looks like a great option and I’ll take a look at RouterOS. Thanks for the info.
RouterOS has WG built in as well as ZeroTier. RouterOS has become quite powerful lately, but make sure you have at least an ARM/ARM64 CPU for it.
It looks like the hEX refresh is the same price from that vendor.
RB5009 is better but more expensive. There’s a PoE version that can power your WiFi APs in the future.
I also question the decision to put OpenWrt on it. RouterOS is solid. There’s a learning curve, but it’s worth it if you’re a nerd.
My Radarr instances won’t download anything. They will search and find compatible torrents, but then they just spin and spin; nothing ever moves to the queue. If I refresh, it’s like nothing happened at all. I confirmed that qbt is running properly, and my Sonarr instances seem to be running OK.
I recently reorganized the root folders to separate HD/UHD content so that I can run 2 instances for Overseerr requests, and then this issue started. I had to reset the root folders, and now there’s also a root folder error about collections that I can’t resolve either… got me thinking about doing a full reinstall.
The root folder error for collections: I think I know this one. You need to go into every movie and update the file path to use the new root folder. Radarr isn’t smart enough to do that automatically for you. Though you’d think they’d have $rootfolder as a var, but no.
What’s in the Radarr log? You have your downloader configured, enabled, and tested, I assume?
I really need to figure out how to get Jellyfin to use SSL certs and assign a domain to the instance.
I have my instance running in my k3s cluster. I have its node affinity set so it only runs on my Minisforum i9. That way, I can use cert-manager to manage the certs.
Caddy is the way.
Caddy! I am embarrassed to think about how long it took me to figure out Caddy. I kept cracking away at it tho, and one day it was like the clouds rolled back, the sun shone on my face, an alien ship came down, and this green little dude gave me the secrets, and it was all so simple. Now I can have Caddy up and dishing out certs in about 5 minutes. When I look back, I cringe.
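To spare the next person some waiting for the clouds to part: a Jellyfin Caddyfile really is about this small (the domain is a placeholder; 8096 is Jellyfin’s default HTTP port):

```
jellyfin.example.com {
	reverse_proxy localhost:8096
}
```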
Do you have a reverse proxy set up?
When in doubt, put it behind nginx
Email… My wife really wants to further de-Google; this means moving custom domains off G Suite.
Do I move to Proton/Tuta, or go back to self-hosting email again like I did for years until about 2010?
If I self-host, do I do it at home or on the server that runs my Lemmy instance?
Cool that your wife is into de-Googling! My wife thinks I’m a conspiracy nut. I have custom domains on Proton and it’s been great, but with their moves toward AI and crypto, who knows. I would probably try Tuta if I were setting it up now, but who knows if they will eventually go wonky too; then you will wish you had self-hosted anyway 🤝
I self-host my email using Mailcow, and use a VPS for it. I don’t trust my home server to be reliable enough, and the VPS providers have nicer equipment (modern AMD EPYC CPUs, enterprise SSDs, datacenter-grade 10Gbps or 40Gbps connections, etc). I use a separate VPS just for my emails - it’s the one thing I want to ensure is secure, so I didn’t want any other random software (that could potentially have security issues) running on it…
I also use an outbound SMTP relay to avoid having to deal with IP reputation. SMTP2Go has a free plan for sending <1000 emails per month.
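Mailcow exposes the relay settings in its admin UI, but under the hood it boils down to standard Postfix relay config, roughly like this (host, port, and the credentials file are placeholders; check your relay provider’s docs):

```
# main.cf sketch: relay all outbound mail through the provider
relayhost = [mail.smtp2go.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```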
It kind of amazes me that, in this day and age, email has turned out to be the lynchpin of security. Email as a 2FA endpoint. Email password reset systems. If email is compromised, everything else falls. They used to tell us not to put anything in email that you wouldn’t put on a postcard…how did this happen?
That, and email protocols are outdated and aren’t too secure. For example:
- Neither SMTP nor IMAP has any way to use two-factor authentication.
- Spam blocking is so hard because SMTP was not designed with it in mind.
- SMTP has no way to do end-to-end encryption, which is why you need to layer things like GPG on top (example below).
IMAP has a modern replacement in JMAP, but it’s not widespread. SMTP is practically impossible to replace since it’s how email servers communicate with each other.
The “solution” has been for companies to make their own proprietary protocols and apps, for example the Gmail and Outlook apps combined with a Gmail or Microsoft 365 account respectively.
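The GPG layering mentioned above is just encrypting the body before it ever touches SMTP, e.g.:

```
# encrypt to the recipient's public key; servers in between only see metadata
gpg --encrypt --armor --recipient alice@example.com message.txt
# produces message.txt.asc, which becomes the mail body or an attachment
```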
I’ve been testing out immutable distros, in this case openSUSE Aeon (laptop) and openSUSE MicroOS (server).
I set up Forgejo and runners are working, all in podman. I’m about to take the plunge and convert everything on my NAS to podman, which is in preparation for installing MicroOS on it (upgrade from Leap).
I also installed MicroOS on a VPS, which was a pain because my VPS provider doesn’t have images for it, and I’d have to go through support to get it added. Instead, I found a workaround, which is pretty amazing that it works:
- Install Alpine Linux (in my case I needed to provision something else first and mount an ISO to install Alpine, which was annoying)
- Download the MicroOS image on the VPS (not the ISO, the qcow2 image)
- Write the image to the disk, overwriting the current OS (qemu-img; rough commands after this list)
- Reboot (first boot takes longer since it’s expanding the disk and whatnot)
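The overwrite step looked roughly like this (the disk device and image URL are placeholders for whatever your VPS and get.opensuse.org give you):

```
# from the running Alpine system; everything below is destructive to /dev/vda
apk add qemu-img
wget https://download.opensuse.org/.../openSUSE-MicroOS.qcow2
qemu-img convert -f qcow2 -O raw openSUSE-MicroOS.qcow2 /dev/vda
# the running system is now gone from disk, so force a reboot straight away
reboot -f
```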
The nice thing is that cloud-init works, so my keys set up in step 1 still work with the new OS. It’s not the most convenient way to set things up, but it’s about the same amount of time as asking them for an ISO.
Anyway, now it’s the relatively time-consuming task of moving everything from my other VPS over, but I’ll do it properly this time with podman containers. I had an ulterior motive here as well: I’m moving from x86 to ARM, which reduces cost somewhat, and it can also function as a test bed of sorts for ARM versions of things I’m working on.
So far I’m liking it, especially since it forces me to use containers for everything. We’ll see in a month or two how I like maintaining it. It’s supposed to be super low effort, since updates are installed in the background and applied on reboot.
As we received new network hardware from our ISP, and are inevitably getting a new IP address again with that, I’m looking into setting up DDNS. I’ve wanted to check out DuckDNS.
They run their (free) service on AWS EC2 instances, though, and as I am currently also trying to end my reliance on Google and Amazon, I’ve got some more digging to do. If anyone has a good, European (or heck, federated?) solution, hmu!
I’m using the Hetzner nameservers. It’s not exactly DynDNS, but they have a DNS API, and I just have a cronjob set up that checks every five minutes whether the IP is still correct and updates it otherwise.
Using this in the cronjob: https://github.com/FarrowStrange/hetzner-api-dyndns
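The crontab entry itself is just this (the path is a placeholder for wherever you cloned that script):

```
*/5 * * * * /opt/hetzner-api-dyndns/dyndns.sh >/dev/null 2>&1
```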
I have been very happy with desec.io, they are a nonprofit based in Berlin.
I added a cheap 4-slot PCIe NVMe expansion card and a couple of SSDs for a new pool, and then migrated all the database-heavy stuff over to it. It required some use of local ZFS send/receive, which I didn’t know was possible, but it has gone smoothly so far. Very happy with it! It no longer sounds like my HDD pool is trying to escape from hell, and some of the services are much snappier, especially Bitmagnet. I’d highly recommend it as an upgrade for anyone still running purely HDDs. I thought I could get away with it, but ZFS speeds are no faster than single drives, and the amount of stuff I had was hammering the pool non-stop.
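For anyone else who didn’t know it was possible: a local migration is just send piped into receive on the same box (pool and dataset names here are examples):

```
# snapshot the dataset tree, then replicate it onto the new SSD pool
zfs snapshot -r tank/databases@migrate
zfs send -R tank/databases@migrate | zfs receive -F fastpool/databases
```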
I also finally bought my own domain to escape the free-tier dynamic DNS woes, and I can finally feel good about sharing links with other people. I slapped a file share container with disabled registrations on a subdomain. I put it all behind free-tier Cloudflare to hide my server’s IP; it took a little bit of learning what the different records are, but so far it’s been much easier than I thought. Although I have yet to do the hardest part: setting up dynamic IP updates for my DNS records. I see a bunch of scripts floating around, but none seem that easy or well-maintained…
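In case it saves someone a search, a DIY updater against the Cloudflare API is pretty small. A sketch with placeholder zone ID, record ID, token, and hostname (all of these come from the Cloudflare dashboard):

```sh
#!/bin/sh
# update a Cloudflare A record with the current public IP; run from cron
ZONE="your_zone_id"; RECORD="your_record_id"; TOKEN="your_api_token"
IP=$(curl -s https://api.ipify.org)
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE/dns_records/$RECORD" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"share.example.com\",\"content\":\"$IP\",\"ttl\":300}"
```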
Oh, and the Pi I’ve had running Pi-hole v5 for god knows how long with no maintenance couldn’t run Tailscale, so I wiped the entire thing to start fresh and got it up and running with Pi-hole v6, Tailscale, and Unbound. I like having these separated from my other services since they are more critical to have at all times, and I have had 100% uptime with my Pi so far. Although I chose DietPi for my OS on a whim because it looked interesting, and I’m not sold on it. I like that it has easy software installs with sane defaults, so I probably saved time overall, but the amount of time I spent debugging the weird choices DietPi made for basic shit like networking options really threw me off.