Anyone else just sick of trying to follow guides that cover 95% of the process, or slightly miss a step, and then spending hours troubleshooting the setup just to get it to work?

I think I just have too much going on in my “lab”, to the point that when something breaks (and my wife and/or kids complain) it’s more of a hassle to try and remember how to fix or troubleshoot stuff. I only lightly document things cuz I feel like I can remember well enough. But then it’s a struggle to find the time to fix things, or stuff gets tested and 80% completed but never fully used, because life is busy and I don’t have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers, VMs, and other services. Some stuff is fine/easy or requires little effort, but other stuff just doesn’t seem worth it.

I miss GUIs, where I could fumble through the settings to fix things; it’s easier for me to look through all that than to read a bunch of commands.

Idk, do you get lab burnout? Maybe because I do IT for work too, it just feels like it’s never ending…

  • Da Oeuf@slrpnk.net · 1 month ago

    Check out the YunoHost repos. If everything you need is there (or equivalents thereof), you could start using that. After running the installation script you can do everything graphically via a web UI. Mine runs for months at a time with no intervention whatsoever. To be on the safe side I make a backup before I update or make any changes, and if there is a problem I just restore with a couple of clicks via my hosting control panel.

    I got into it because it’s designed for noobs, but I think it would be great for anyone who just wants to relax. Highly recommend.

  • pHr34kY@lemmy.world · 1 month ago (edited)

    I deliberately have not used Docker at home, to avoid complications. Almost every program is in a Debian/apt repo, and I only install frontends that run on LAMP. I think I only have 2 or 3 apps that require manual maintenance (apart from running “apt upgrade”). Nextcloud is 90% of the butthurt.

    I’m starting to turn off services on IPv4 to reduce the network maintenance overhead.

  • Melmi@lemmy.blahaj.zone · 1 month ago

    I definitely feel the lab burnout, but I feel like Docker is kind of the solution for me… I know how Docker works, it’s pretty much set-and-forget, and ideally it’s totally reproducible. Docker Compose files are pretty much self-documenting.

    Random GUI apps end up being waaaay harder to maintain because I have to remember “how do I get to the settings? How did I have this configured? What port was this even on? How do I back up these settings?” rather than keeping a couple of text config files in a git repo. It’s also much easier to revert to a working version if I try to update a Docker container and fail, or get tired of trying to fix it.
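    A compose file really can double as the documentation, too. As a minimal sketch (the service name, image tag, and paths here are made up for illustration):

    ```yaml
    # Hypothetical service: the whole deployment is readable at a glance.
    services:
      someapp:
        image: example/someapp:1.2.3   # pinned tag, so rollback is a one-line change
        ports:
          - "8080:80"                  # host port 8080 -> container port 80
        volumes:
          - ./someapp/config:/config   # settings live here; easy to back up or git-track
          - ./someapp/data:/data
        restart: unless-stopped
    ```

    Reverting a bad update is then just switching the tag back and re-running `docker compose up -d`.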

  • krashmo@lemmy.world · 1 month ago

    Use Portainer for managing Docker containers. I prefer a GUI as well, and Portainer makes the whole process much more comfortable for me.

      • krashmo@lemmy.world · 1 month ago

        No problem. I’ve been using it for a while and I really like it. There’s nothing stopping you from doing it the old-fashioned way if you find you don’t like Portainer, but once you familiarize yourself with it, I think you’ll be hooked on the concept.

    • irmadlad@lemmy.world · 1 month ago

      +1 for Portainer. There are other such options, maybe even better, but I can drive the Portainer bus.

    • WhyJiffie@sh.itjust.works · 1 month ago

      Just know that sometimes their buggy frontend loads the analytics code even if you have opted out. There’s an ages-old issue about this on their GitHub repo, closed because they don’t care.

      It’s Matomo analytics, so not as bad as some big tech, but still.

  • Decronym@lemmy.decronym.xyz (bot) · 1 month ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    Git             Popular version control system, primarily for code
    RPi             Raspberry Pi brand of SBC
    SBC             Single-Board Computer
    SSO             Single Sign-On

    [Thread #40 for this comm, first seen 29th Jan 2026, 05:20] [FAQ] [Full list] [Contact] [Source code]

  • TropicalDingdong@lemmy.world · 1 month ago (edited)

    Proxmox?

    And yes. It’s like a full-time job to homelab. Or at least a part-time job. It’s just hard, and sometimes things just don’t work.

    I guess one answer is to pick your battles. You can’t win them all. But things are objectively better than they were in the past.

  • ryokimball@infosec.pub · 1 month ago

    I don’t consider an app deployable until I can run a single script and watch it come up. For instance, I never run docker/podman containers raw, always with a compose file and/or other orchestration. Not consciously, but I’ll probably kill and restart it several times just to be sure it’s reproducible.

  • brucethemoose@lemmy.world · 1 month ago (edited)

    I find the overhead of Docker crazy, especially for simpler apps. Like, do I really need 150GB of hard drive space, an extensive, poorly documented config, and a whole nested computer running just because some project refuses to fix their dependency hell?

    Yet it’s so common. It does feel like usability has gone on the back burner, at least in some sectors of software. And it’s such a relief when I read that some project consolidated dependencies down to C++ or Rust, and it will just run and give me feedback without shipping a whole subcomputer.

    • Encrypt-Keeper@lemmy.world · 1 month ago

      This is a crazy take. Docker doesn’t involve much overhead. I’m not sure where your 150GB hard drive space comment comes from, as I run dozens of containers on machines with 30–50GB of hard drive space. There’s no nested computer, as Docker containers are not virtualization. Containers have nothing to do with a single project’s “dependency hell”; they solve your dependency hell when you’re trying to run a bunch of different services on one machine, or reproduce them quickly and easily across machines.

    • zen@lemmy.zip · 1 month ago (edited)

      Docker in and of itself is not the problem here, from my understanding. You can and should trim the container down.

      Also it’s not a “whole nested computer”, like a virtual machine. It’s only everything above the kernel, because it shares its kernel with the host. This makes them pretty lightweight.

      It’s sometimes even useful to run Rust or C++ code in a Docker container for portability, provided of course you do it right. For Rust, that typically means a multi-stage build to bring the container size down.
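      A multi-stage build along those lines might look like this (the image tags and binary name are placeholders):

      ```dockerfile
      # Stage 1: build with the full Rust toolchain (large image, discarded afterwards)
      FROM rust:1-slim AS build
      WORKDIR /app
      COPY . .
      RUN cargo build --release

      # Stage 2: ship only the compiled binary on a minimal base
      FROM debian:stable-slim
      COPY --from=build /app/target/release/myapp /usr/local/bin/myapp
      CMD ["myapp"]
      ```

      The final image ends up being the slim base plus one binary, instead of the entire build toolchain.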

      Basically, the people making these Docker containers suck donkey balls.

      Containers are great. They’re a huge win in terms of portability, reproducibility, and security.

      • brucethemoose@lemmy.world · 1 month ago (edited)

        Yeah, I’m not against the idea philosophically. Especially for security. I love the idea of containerized isolation.

        But in reality, I can see exactly how much disk space and RAM and CPU and bandwidth they take, heh. Maintainers just can’t help themselves.

        • NewNewAugustEast@lemmy.zip · 1 month ago

          Want to mention some? I have no containers using that at all.

          Perhaps you never clean up as you move forward? It’s easy to forget to prune them.

    • unit327@lemmy.zip · 1 month ago

      As someone used to the bad old days: gimmie containers. Yes, it kinda sucks, but it sucks less than the alternative. Can you imagine trying to get multiple versions of Postgres working for different applications you want to host on the same server? I also love being able to just use the host OS’s stock packages without needing to constantly compile and install custom things to make x or y work.

  • falynns@lemmy.world · 1 month ago

    My biggest problem is that every Docker image thinks it’s a unique snowflake, and how could anyone else possibly be using such a unique port number as 80?

    I know I can change it, believe me, I know I have to change it, but I wish guides would acknowledge this and emphasize choosing a unique port.
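    At least with compose-based setups, the remap is a one-liner (the service name and image here are made up):

    ```yaml
    services:
      someapp:
        image: example/someapp:latest
        ports:
          - "8081:80"   # host port 8081 -> the container's beloved port 80
    ```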

    • Auli@lemmy.ca · 1 month ago

      Why expose any ports at all? Just use a reverse proxy, expose only its port, and let all the others just happen internally.
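      A sketch of that pattern in compose (all names made up): only the proxy publishes host ports, and the apps are reachable solely over a shared internal network:

      ```yaml
      services:
        proxy:
          image: caddy:2            # any reverse proxy works; Caddy is just an example
          ports:
            - "80:80"
            - "443:443"
          networks: [internal]

        someapp:
          image: example/someapp:latest
          networks: [internal]      # no "ports:" at all; only the proxy can reach it

      networks:
        internal:
      ```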

    • unit327@lemmy.zip · 1 month ago

      Most put it on port 80 with the perfectly valid assumption that the user is sticking a reverse proxy in front of it. The container should expose 80, not publish port 80 to the host.

      • PieMePlenty@lemmy.world · 1 month ago (edited)

        There are no valid assumptions for port 80, imo. Unless your software is literally a pure HTTP server, you should assume something else has already bound port 80.
        Why do I have vague memories of Skype wanting to use port 80 for something, and of having issues with that some 15 years ago?
        Edit: I just realized this might be about containerized applications… I’m still used to running things on bare metal. Still though, 80 seems sacrilege.

    • lilith267@lemmy.blahaj.zone · 1 month ago

      Containers are meant to be used with Docker networks, which makes this a non-issue. Most of the time you want your services to listen on 80/443, since that’s the default port your reverse proxy is going to call.

  • BrightCandle@lemmy.world · 1 month ago

    I reject a lot of apps whose docker compose requires a database, caching infrastructure, etc. All I need is the one process, and they ought to use SQLite by default because my needs are never going to exceed its capabilities. A lot of these self-hosted apps are overbuilt and ship without defaults, or with poor defaults, causing a lot of extra work to deploy them.
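    The difference in deployment burden is easy to see side by side (hypothetical app):

    ```yaml
    # What many projects ship: three containers, extra secrets, separate upgrade paths.
    services:
      app:
        image: example/app:latest
        depends_on: [db, cache]
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: changeme
      cache:
        image: redis:7

    # What a SQLite default would allow: one container, one file-backed volume.
    # services:
    #   app:
    #     image: example/app:latest
    #     volumes:
    #       - ./data:/data
    ```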

    • qaz@lemmy.world · 1 month ago

      Some apps really go overboard. I tried out a bookmark collection app called Linkwarden some time ago, and it needed 3 Docker containers and 800MB of RAM.

    • MonkeMischief@lemmy.today · 1 month ago

      Databases.

      I ran Paperless-ngx for a while and everything was fine. Then suddenly I realized its version of PostgreSQL was no longer supported, so the container wouldn’t start.

      Following some guides, logging into the container itself, and then using a bunch of commands to attempt to migrate said database hasn’t really worked.

      This is one of those things that feels like a HUGE gotcha to somebody that doesn’t work with databases.

      So the container’s kinda just sitting there, disabled. I’m considering just starting it all fresh with the same data volume and redoing all that information, or giving this thing another go…

      …But yeah, I’ve kinda learned to hate things that rely on database containers that can’t update themselves or don’t have automated migration scripts.

      I’m glad I didn’t rely on that service TOO much.

      • BrightCandle@lemmy.world · 1 month ago

        It’s a big problem. I also dump projects that don’t automatically migrate their own SQLite schemas and require manual intervention. That is a terrible way to treat the customer; just update the file. Separate databases always run into versioning issues at some point, requiring manual intervention and data migration, and it’s a massive waste of the user’s time.

  • Dylancyclone@programming.dev · 1 month ago (edited)

    If you’ll let me self-promote for a second: this was part of the inspiration for my Ansible Homelab Orchestration project. After dealing with a lot of those projects that practically force you to read through the code to get a working environment, I wanted a way to reproducibly spin up my entire homelab should I need to move computers or if my computer dies (both of which have happened, and having a setup like this helped tremendously). So far the Ansible playbook supports 117 applications, most of which can be enabled with a single configuration line:

    immich_enabled: true
    nextcloud_enabled: true
    

    And it will orchestrate all the containers, networks, directories, etc for you with reasonable defaults. All of which can be overwritten, for example to enable extra features like hardware acceleration:

    immich_hardware_acceleration: "-cuda"
    

    Or to automatically get a letsencrypt cert and expose the application on a subdomain to the outside world:

    immich_available_externally: true
    

    It also comes with scripts and tests to help you add your own applications and ensure they work properly.

    I also spent a lot of time writing the documentation so no one else has to suffer through some of the more complicated applications haha (link)

    Edit: I am personally running 74 containers through this setup, complete with backups, automatic ssl cert renewal, and monitoring

    • WhiteOakBayou@lemmy.world · 1 month ago

      That’s neat. I never gave Ansible playbooks any thought because I figured they would just add a layer of abstraction, and that containers couldn’t be easier; but reading your post, I think I’ve been wrong.

      • Dylancyclone@programming.dev · 1 month ago (edited)

        No, that’s totally fair! I’m a huge fan of making things reproducible, since I’ve run into too many situations where things needed to be rebuilt, and I’m always open to ways to improve it. At home I use Ansible to configure everything, and at work we use Ansible and declare our entire Jenkins instance as (real) code. I don’t really have the time for (and I’m low-key scared of the rabbit hole that is) Nix, and to me my homelab is something that is configured (idempotently) rather than something I want to handle with scripts.

        I even wrote some pytest-like scripts to test the playbooks to give more productive errors than their example errors, since I too know that pain well :D

        That said, I’ve never heard of PyInfra, and am definitely interested in learning more and checking out that talk. Do you know if the talk will be recorded? I’m not sure I can watch it live. Edit: Found a page of all the recordings of that room from last year’s event: https://video.fosdem.org/2025/ua2220/ so I’m guessing it will be available. Thank you for sharing this! :D

        I love the “Warning: This talk may cause uncontrollable urges to refactor all your Ansible playbooks” lol I’m ready

  • zen@lemmy.zip · 1 month ago

    Yes, I get lab burnout. I do not want to be fiddling with stuff after my day job. You should give yourself a break and do something else after hours, my dude.

    BUT

    I do not miss GUIs. Containers are a massive win because they are declarative, reproducible, and can be version controlled.

    • mrnobody@reddthat.com (OP) · 1 month ago

      Yeah, since Christmas, and I know it sounds silly, but I’ve been playing a ton of video games with my kids lol. But not like CoD, more like Grounded 2, Gang Beasts, and Stumble Guys lmao

      • zen@lemmy.zip · 1 month ago (edited)

        You’re doing it right. Playing cool games with your kids sounds like a blast and some great memories :)

  • hesh@quokk.au · 1 month ago (edited)

    I wouldn’t say I’m sick of it, but it can be a lot of work. It can be frustrating at times, but also rewarding. Sometimes I have to stop working on it for a while when I get stuck.

    In any case, I like it a lot better than being Google’s bitch.

    • irmadlad@lemmy.world · 1 month ago

      I have to stop working on it for a while when I get stuck.

      I feel you there, bro. Sometimes when I’m creating a piece of music, I get to a point where I’m just not making any progress, so I’ll step away for a bit and let it simmer. Same with servers in general for me. It’s the reason I have a test server, and why I have, in the past, leaned a bit heavily on a few backups. LOL! I can screw something up quick when I’m frustrated. The reward for me is learning something new. It’s a rewarding and useful hobby for me, among others.