With the recent discussions around replacing Spotify with self-hosted services, and the possibilities for obtaining the music itself, I’ve finally been setting up Navidrome. I had to do quite a bit of reorganization of my existing collection (beets helped a ton), but now it’s in a neatly organized structure and I’m enjoying it everywhere. I get most of my stuff from Bandcamp, but I have a big catalog from when I still had a large physical collection.
I’m also still working on my Docker quasi-GitOps stack. I’ve cleaned up my compose files and put the secrets in env files where I hadn’t already, checked everything into my new Forgejo instance, and (mostly) configured Renovate. Komodo is about to go productive, but I couldn’t find the time yet. I also need to figure out how to check in secrets in a secure way; I know some approaches, but I haven’t tried them with Komodo yet. This close to my fully automated update-on-merge compose stacks!
I’ve also been doing these threads for quite a while and decided to sometimes post them in !selfhosting@slrpnk.net, to possibly help move things a bit away from the biggest Lemmy instance, even though this community seems perfectly fine as it is.
What’s going on on your servers? Anything you are trying to pursue at the moment?
I’m setting up a YunoHost machine for my brother as a birthday present. I got him a domain good for 10 years, and installed Nextcloud and Jellyfin with some home videos digitized from our parents’ VHS tapes.
Just got a domain and started exposing my local Jellyfin through Cloudflare, mostly wanting to listen to my music on my phone when I’m outside too.
I followed some guides that should make it fine with Cloudflare’s policy. Video didn’t work when I tried it, but otherwise it’s been fun, despite me feeling like I’m walking on eggshells all the time. I guess time will tell if it holds up.
Some things which have caused issues for me:
File permissions
Video/audio format (H.264/AAC stereo is best for compatibility)
Oh, file permissions are a nightmare for me. I thought I had managed to get things sorted, but after I installed Lidarr, it alone suddenly can’t move files out of the download location anymore. I even tried to chmod 777 the data folders, and nothing. I don’t think I quite have a grasp on how those work with Docker on Linux yet; it seems like those *arr services also have some internal users, which I don’t get the point of.
Wdym with the formats, is this referring to transcoding? I kept those on defaults afaik
In Linux, user and group names don’t matter; only the UID and GID do. Think of user and group names as human-readable labels, the way domain names are for IPs.
In Docker, when you use mounts, all the containers that want to share data must agree on the UIDs and GIDs.
In rootless Docker and Podman, things like subuids and subgids make it a little more complicated, since IDs get mapped between host and container, but it’s still the IDs that matter.
Could be that Lidarr is setting its own permissions for downloaded stuff (look for something like dmask or fmask in the Docker config). You might also need to chmod -R so it hits all subfolders. If you have a file or directory mask option, remember that masks are inverse: instead of 777, you’d use 000 to get rwxrwxrwx.
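To make that inverse relationship concrete, here’s a tiny sketch (plain Python, nothing *arr-specific) of how a mask turns a base mode into effective permissions:

```python
def effective_mode(base: int, mask: int) -> int:
    """Effective permissions are the base mode with the mask's bits cleared."""
    return base & ~mask

# A umask of 002 applied to a default file mode of 666 yields 664 (rw-rw-r--):
print(oct(effective_mode(0o666, 0o002)))  # 0o664
# Mask 000 clears nothing (full base permissions); mask 777 clears everything:
print(oct(effective_mode(0o777, 0o000)))  # 0o777
print(oct(effective_mode(0o777, 0o777)))  # 0o0
```

So a mask of 000 is the permissive end of the scale, which is why it maps to rwxrwxrwx.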
You might be onto something; Lidarr does have a UMASK=002 setting in the .env file. I think the issue is that SABnzbd puts the files there and then Lidarr can’t read them. So what exactly is the expected permission setting in this case? If I set it to 000 for Lidarr, won’t other services then be unable to add files there?
I always feel so dumb when it comes to these things since in my head it’s something that should be pretty straightforward and simple, why can’t they all just use the same user and share the same permissions within this folder hierarchy…
H A R V E S T E R
Lol
But honestly, I got all of my nodes (some new hardware, some mini PCs, some old laptops, some e-waste servers, some Raspberry Pis, a VM off my MacBook) into my Harvester cluster. I got Rancher running as a vcluster as well, so I messed around some with Rancher-provisioned RKE2 clusters too.
Played some with Nutanix as a VM in that cluster (what a f’ing nightmare, and that’s not the virtual hardware, just Nutanix…). Playing with ESXi now (it’s not happy about my AMD chips so far…). And also my virtual Harvester cluster. Easy so far, but I want to get more ambitious and create a mock deployment, network and all, so I can test crazier configs without losing a day to rebuilding a cluster via thumb drive again…
Also managed some risk and got my ISP to let me run dual modems on the same bus, and configured OpenWRT to load-balance between them and, via USB, my Wi-Fi hotspot. Still working with them to try to get more IPs so I can use all 4 ports on my modem stack to attach to both of my routers.
I like tinkering with junk, so the other half of my hobby is just risk mitigation (which i also enjoy).
Just got my new NAS drives, so about to make the transition.
It’s actually new drives and a new host. I’ve been running my old Synology NAS for years, but decided I ought to switch to a “real” NAS through Proxmox.
Just set up a simple samba container with Cockpit as a web manager, so far working really well. But I want to validate backups before I start moving all the irreplaceable data.
Something I’m excited about is using my old Synology NAS as an automatic, off-site backup once I transition. Heard about Duplicati from a friend; it sounds like a great backup solution.
Other than that I’ve been looking into using Apple HomeKit features with my Home Assistant devices. And also planning to move my hardware from the cheap Amazon floor shelf to a real 19” rack.
I have a couple pis that run docker containers including pihole. The containers have their storage on a centralized share drive.
I had a power outage and realized they can’t start if they happen to come up before the share drive PC is back up.
How do people normally do their docker binds? Optimally I guess they would be local but sync/backup to the share drive regularly.
Sort of related question: in docker compose I have restart always, and yet if a container exits successfully, or seemingly early in its process (like pihole), it doesn’t restart. Is there an easy way to still have them restart?
You should be able to modify the docker service to wait until a mount is ready before starting. That would be the standard way to deal with that kind of thing.
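On a systemd host, one way to do that is a drop-in for the Docker service that orders it after the mount. A sketch (the mount path is a placeholder for your share’s fstab mount point):

```ini
# /etc/systemd/system/docker.service.d/wait-for-mount.conf
# (hypothetical path; match it to the mount unit systemd generates from fstab)
[Unit]
RequiresMountsFor=/mnt/share
After=remote-fs.target
```

After `systemctl daemon-reload`, Docker won’t start until that mount is up, so containers binding into it come up with their data present. Marking the share with `_netdev` in fstab also keeps systemd from trying to mount it before the network is ready.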
What if it’s a network mount inside the container? Doesn’t the mount not happen till the container starts?
Considering switching my Forgejo to a Tangled.sh knot. Their easy self hosted CI option is appealing
but mostly it’d be easier to collaborate than opening sign-ups on my instance of Forgejo
That looks super interesting. Just yesterday I was reading on https://radicle.xyz/ for similar reasons.
Forgejo is developing ActivityPub support, so eventually we’ll be able to collaborate across forgejo instances :) I’ll have a look at tangled as well
oh that’s cool, thanks for sharing!
Nothing new, just peacefully chugging along hosting my blog, Jellyfin, Radicale for calendar and contacts. Still long-term searching for a photo storing & sharing (gallery) solution, as well as a better music server. Maybe Navidrome is what I’m looking for.
Oh, and I need to renew my SSL certificate soon. I don’t like Let’s Encrypt. I want everything EU-based; I’m not going to start making US-based contracts.
Did you have a look at Immich yet for photos?
Can recommend Immich for the Photo gallery and sharing option.
Can recommend Navidrome for music.
I’m a newbie to the whole selfhosting thing. Been doing NAS + mini PC for the past 6 months with a few services running. 2 days ago I embarrassed myself.
So, I’ve been running 5 services behind Nginx Proxy Manager. But I heard that NPMplus is slightly better and can renew certs automatically. I transferred the settings from NPM to NPMplus by hand from a photo, and for some reason NPMplus couldn’t work with the services running on the NAS. I went back to NPM and didn’t touch the issue until last Sunday.
During troubleshooting I found out that my dumb ass didn’t pay attention and put ‘:’ instead of ‘.’, so 192.168.xxx.xx became 192:168:xxx:xx, and that was the reason I spent a whole day troubleshooting the issue.
Next goal: go back to my homeland and set up a Pi 3 at my parents’ place to be my VPN, so I can set up an arr stack and automate media downloads in a way that the govt. of my current residence can’t put a deep hole in my wallet.
Do your parents live in a place where that won’t happen?
For auto-renewing certs: you can do that easily with normal nginx, using certbot alongside it. Just tell certbot to handle the domain once and it will renew it forever.
-
Yes. Kind of. It is in EU but afaik piracy is not an illegal thing there yet or is not enforced as much compared to where I live now.
-
Too late. NPMplus is up and running. Also, I like dark mode.
-
What’s the best ACME server?
Just discovered TinyAuth and it is fantastic. I am replacing Authentik with it because it has what I want but is much faster, smaller, and simpler. Also, the license is FOSS.
What is beets, hobbits?
It’s a tool that checks and corrects metadata for your music collection. You can also import music with it to your collection (it will put everything in the right folders etc).
It does require some manual intervention now and then, though (“do you really want to apply this despite some discrepancies?”, “choose which of these albums it really is”, etc.).
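For anyone curious what the setup looks like, beets is driven by a single YAML config. A minimal sketch (paths and plugin choices here are just examples, not a recommendation):

```yaml
# ~/.config/beets/config.yaml (hypothetical paths)
directory: /music/library      # where the organized collection lives
import:
  move: yes                    # move files into the library instead of copying
  write: yes                   # write corrected tags back into the files
plugins: fetchart embedart     # optional plugins, e.g. for album art
```

With that in place, `beet import /music/incoming` does the matching, tagging, and filing.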
Would you say it’s better than musicbrainz?
I think it uses musicbrainz
Very cool!
I’ve been on full maintenance mode for spring/summer; those are the times to be going places and doing things. In autumn I’m going to write my winter goals for the server.
I have another n100 box that I’m going to dedicate to Immich. I have 7 users now, so when they all upload in one night, my current n100 has a little bit of a cry.
Security is always a big one. I’m currently relying on tailscale (limited to necessary lxcs), reverse proxies, Https, and app ‘sign ins’. Not bad (it’s bad) but not good either.
For new projects, I want to integrate Audiobookshelf with Hardcover. I’ve got a project installed but it didn’t work on my first attempt so I gave it up for winter.
I’d like to set up a virtual DosBox, accessible by a browser, for my 1000s of DOS games. Again, I’ve found a few projects, but none worked out of the box, so they have been given up for winter.
Other than that, all my front-end services are working well. *arrs are becoming a pain with all the malware named as good files confusing Radarr/Sonarr. Qbit knows not to download .exes and the like, but Sonarr doesn’t know to delete them and look again. LazyLibrarian accepts no shit though; if things aren’t going as expected, LL very quickly deletes and goes again. I might try vibe-coding a script for that.
I’d like to break out my storage into a dedicated box. Probably get some e-waste to fill with drives. Currently I have an n100 running network, storage and virtualization; it’s a little cramped.
It’s probably smarter to break out networking first and build a little router/firewall box (the above n100 mini would be perfect). But I don’t get along with networking; I find it challenging in an unsatisfying way. When I’m done banging my head against the wall and things work, I’m just relieved I don’t have to do it again, instead of feeling accomplished. New projects are fun, and with storage I get the feeling of accomplishment from doing the thing. Networking is a dark art full of black boxes I don’t understand that sometimes play nice together and mostly fuck my shit up.
I want to move over to IPv6, not for any reason other than it’s probably a good idea to progress into the 2000s. If I could move everything over to hostnames, however, that’d be the dream.
Moving from Docker to Podman is probably smart.
Lots to do over winter… I’m probably gonna build a fish tank instead
I’d like to set up a virtual DosBox, accessible by a browser, for my 1000s of DOS games.
Maybe something with the new Webtop images from LinuxServerIO? The new Desktop streaming protocol they have is seriously speedy, you can totally game on it!
Thanks, I’ll check it out. Speedy could be interesting for games that used the CPU speed as a clock speed.
The LinuxServerIO peeps make fantastic images. When my server was docker only they pretty much built my homelab. I’m sure my docker hosts still have a bunch of their stuff, *arrs are probably all them.
*arrs are becoming a pain with all the malware named as good files confusing Radarr/Sonarr. Qbit knows not to download .exes and the like, but Sonarr doesn’t know to delete them and look again.
And this is exactly why I run Cleanuparr alongside my *arrs. It integrates extension blocking, blocked/failed/stalled retries, and even has crowdsourced blocklists for malware. Between that and Huntarr (which automates background searches, because Sonarr/Radarr don’t continuously search for missing media), my *arr stack is running better than ever.
Cleanuparr/Huntarr just got upgraded to a late-summer project.
Edit: they’ve been running a couple days now. They’re good.
Glad to hear you like them! I’ve been using Cleanuparr since before it had a GUI to configure things. And Huntarr is the only reason my library backlog has any chance of filling out; I have lots of old movies and shows that never really get new posts, so Sonarr/Radarr would never have an opportunity to grab them from an RSS feed. Fair warning: Huntarr may accelerate your plans for expanded storage as your backlog begins to fill in.
Trying to figure out how to drop my energy requirements and still keep ~100TB running.
Right now it’s 12x 10TB drives in RAID 6 with ~8TB still available; it might be time to bite the bullet and upgrade to 20TB drives. Problem is, if my calculations are correct, I’d still need 7 drives: 5x 20TB = 100TB, plus two more drives for parity.
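A quick sanity check on the drive math, assuming RAID 6 always reserves two drives’ worth of capacity for parity:

```python
def raid6_usable_tb(drives: int, drive_tb: int) -> int:
    # RAID 6 stores two drives' worth of parity, regardless of array size.
    return (drives - 2) * drive_tb

print(raid6_usable_tb(12, 10))  # current array: 100 TB usable
print(raid6_usable_tb(7, 20))   # proposed array: 100 TB usable
```

So the 7-drive figure checks out: same usable capacity with 5 fewer spindles drawing power.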
The server I have lined up already has a PERC in it.
Do you actually need 100TB instantly available? Could a portion of that be cold storage that boots quickly from a WOL packet sent by the always-on machine when needed? With some tweaking, you could probably set up an Alpine-based NAS to boot in <10 seconds, especially if you picked something that supports coreboot and could avoid that long BIOS POST time.
Don’t need the 100TB instantly. Most of the Linux ISOs are more for archival reasons.
Talk to me more about this NAS with WOL. :-)
Most motherboards support wake packets sent over Ethernet. They only work on your LAN, but they will start a machine or wake it from sleep. Sending a packet from another machine is fairly simple; it’s old tech. I’ve seen simple web servers that have a “send wake” button, but you could probably trigger it from a variety of things.
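The packet itself is trivial to build: 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast. A sketch (the MAC and port are placeholders; port 9 is the common convention):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WOL magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

Something like `send_wol("aa:bb:cc:dd:ee:ff")` from the always-on box (or a cron job, or a web button) is all there is to it.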
The WoL part I’ve got.
It’s the NAS with a ~10 second boot time that can house enough drives for 80TB of data, triggered and accessible to a Plex server when needed, that I’m more interested in.
Alpine Linux can boot in a few seconds. Stick to something extremely simple like nfs or samba and nothing else in the boot. Or use suspend to ram with your regular OS.
Last question; how would I get it to wake when someone’s trying to access a file on it?
You would have to script something based on whatever service is actually being used, or maybe Node-RED? In the past, way back, I used something like this: just a simple web page where the user clicks a button to start the machine - there are a bunch of these: https://github.com/Trugamr/wol - the web server is on the LAN with the NAS so it can send the magic packet, but the page can obviously be served over the internet.
Maybe you could just spin down/turn off the disks? That will reduce power consumption a lot and they’ll get up once requested.
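Spin-down can even be made persistent without any scripting. A sketch of an hdparm config entry (device name is an example; data drives only, not the OS disk):

```
# /etc/hdparm.conf (hypothetical device)
/dev/sdb {
    # spindown_time values 241-251 mean 30 min steps: 241 = 30 minutes idle
    spindown_time = 241
}
```

The drive then spins itself down after the idle period and spins back up on the next read, at the cost of a few seconds of latency on first access.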
What are you doing with that much space?
Storing Linux ISOs, like everyone else.
Just sitting here surprised that my proxmox backups didn’t interrupt my VMs.
O.o Should I be concerned that this is surprising?
I don’t believe there’s cause for concern. I just assumed based on the prompts while setting up the backups that it would actually restart the VMs. I was wrong.
I understand that COW file-systems can do snapshots at “instantaneous” points in time and KVM snapshots ram state as well, but I still worry that a database could be backed up at just the wrong time and be in an inconsistent state at restore. I’d rather do less frequent backups of a stopped VM and be more confident it will restore and boot correctly. Maybe I’m a curmudgeon?
I suppose it boils down to a threat model. I wouldn’t lose sleep if any of my VMs imploded and I had to rebuild them from scratch.
I finally got around to setting up my internal services with TLS. It was surprisingly easy with a Caddy docker image supporting Cloudflare DNS challenge.
I did this because various services I use are starting to require https.
Now everything is on a custom domain, https, and I can access it through Tailscale as usual.
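For anyone wanting to replicate this, the Caddyfile side is short. A sketch (domain, upstream, and token variable are placeholders; it assumes a Caddy build that bundles the Cloudflare DNS plugin, as some Docker images do):

```
# Caddyfile (hypothetical names)
*.home.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 192.168.1.10:8096
}
```

Because the DNS challenge proves ownership via a TXT record, the services never need to be reachable from the internet, which is what makes it work so nicely behind Tailscale.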









