Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

  • erock@lemmy.ml · 5 days ago

    Here’s my homelab journey: https://bower.sh/homelab

    Basically, containers and GPUs are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also don’t support splitting the GPU between guests. At the end of the day it’s a bunch of tinkering, which is valuable if that’s your goal. I learned what I wanted; now I’m back to Arch, running everything with systemd and Quadlet.
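
    In case it helps anyone picture it, a Quadlet setup is basically one small unit file per container. A minimal sketch (the service name, image and paths are just illustrative), dropped into ~/.config/containers/systemd/:

    ```ini
    # ~/.config/containers/systemd/jellyfin.container (illustrative example)
    [Unit]
    Description=Jellyfin media server

    [Container]
    Image=docker.io/jellyfin/jellyfin:latest
    PublishPort=8096:8096
    Volume=%h/containers/jellyfin/config:/config
    # let podman-auto-update pull newer images
    AutoUpdate=registry

    [Service]
    Restart=on-failure

    [Install]
    WantedBy=default.target
    ```

    After a systemctl --user daemon-reload it starts and stops like any other systemd unit.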

  • OnfireNFS@lemmy.world · 6 days ago

    This reminds me of a question I saw a couple of years ago. It was basically: why would you stick with bare metal over running Proxmox with a single VM?

    It kinda stuck with me and since then I’ve reimaged some of my bare metal servers with exactly that. It just makes backup and restore/snapshots so much easier. It’s also really convenient to have a web interface to manage the computer

    Probably doesn’t work for everyone but it works for me

  • Magiilaro@feddit.org · 7 days ago

    My servers and NAS were created long before Docker was a thing, and as I am running them on a rolling release distribution there never was a reason to change anything. It works perfectly fine the way it is, and it will most likely run perfectly fine the next 10+ years too.

    Well, I am planning, once I find the time to research a good successor, to replace my aging HPE ProLiant MicroServer Gen8 that I use as a home server/NAS. Maybe I will then set everything up cleanly and migrate the services to Docker/Podman/whatever is fancy then. But most likely I will just transfer all the disks and keep the old system running on newer hardware. Life is short…

  • nuggie_ss@lemmings.world · 7 days ago

    Warms me heart to see people in this thread thinking for themselves and not doing something just because other people are.

  • ZiemekZ@lemmy.world · 7 days ago

    I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden etc.? Wouldn’t it be simpler if I could just run sudo apt install immich vaultwarden, just like I can do sudo apt install qbittorrent-nox today? I don’t think there’s anything that prohibits them from running on the same bare metal; in fact, I think they’d both run as well as in Docker (if not better, given the lack of overhead)!

    • boonhet@sopuli.xyz · 7 days ago

      Both your examples actually include their own bloat to accomplish the same thing that Docker would. They both bundle the libraries they depend on as part of the build

      • communism@lemmy.ml · 7 days ago

        Idk about Immich, but Vaultwarden is just a Cargo project, no? Cargo statically links crates by default, but I think it can be configured to do dynamic linking too. The Rust ecosystem seems to favour static linking in general just by convention.

        • boonhet@sopuli.xyz · 7 days ago

          Yes, that was my point: you (generally) link statically in Rust because that resolves dependency issues between the different applications you need to run. The cost is a slightly bigger, bloatier binary, but that’s generally a very good tradeoff because a slightly larger binary isn’t an inconvenience these days.

          Docker achieves the same for everything, including dynamically linked projects that default to using shared libraries (which can turn into dependency nightmares), other binaries that get called, etc. It doesn’t virtualize an entire OS unless you’re using it on macOS or Windows, so the performance overhead is not as big as people seem to think (the disk space overhead, though, can get a bit bigger). It’s also great for dev environments because you can have different devs using whatever the fuck they prefer as their main OS and Docker will make everyone’s environment the same.

          I generally wouldn’t put a Rust/Cargo project in docker by default since it’s pretty rare to run into external dependency issues with those, but might still do it for the tooling (docker compose, mainly).

        • boonhet@sopuli.xyz · 7 days ago

          True, and Docker does it more thoroughly, because any executables you call also get their own redundant copies. Run two different Node applications on bare metal and they can still disagree about the Node version, etc.

          The actual old-school bloat-free way to do it is shared libraries of course. And that shit sucks.

  • kossa@feddit.org · 7 days ago

    Well, that is how I started out. Docker was not around yet (or not mainstream enough, maybe). So it is basically a legacy thing.

    My main machine is a Frankenstein monster by now, so I am gradually moving. But since the days when I started out, time has become a scarce resource, so the process is painfully slow.

  • sj_zero@lotide.fbxl.net · 7 days ago

    I’m using proxmox now with lots of lxc containers. Prior to that, I used bare metal.

    VMs were never really an option for me because the overhead is too high for the low power machines I use – my entire empire of dirt doesn’t have any fans, it’s all fanless PCs. More reliable, less noise, less energy, but less power to throw at things.

    Stuff like docker I didn’t like because it never really felt like I was in control of my own system. I was downloading a thing someone else made and it really wasn’t intended for tinkering or anything. You aren’t supposed to build from source in docker as far as I can tell.

    The nice thing about proxmox’s lxc implementation is I can hop in and change things or fix things as I desire. It’s all very intuitive, and I can still separate things out and run them where I want to, and not have to worry about keeping 15 different services running on the same version of whatever common services are required.

    • boonhet@sopuli.xyz · 7 days ago

      Actually, Docker is excellent for building from source. Some projects only ship instructions for building in Docker, because it’s the easiest way to make sure everyone builds with tested versions of the tools.
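
      The usual pattern is a multi-stage build: pin the toolchain in a build stage, then copy only the finished binary into a small runtime image. A rough sketch (project and binary names are made up):

      ```dockerfile
      # build stage: everyone compiles with the same pinned toolchain
      FROM rust:1.80 AS build
      WORKDIR /src
      COPY . .
      RUN cargo build --release

      # runtime stage: just the binary, no compilers or build dependencies
      FROM debian:bookworm-slim
      COPY --from=build /src/target/release/myapp /usr/local/bin/myapp
      EXPOSE 8080
      CMD ["myapp"]
      ```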

  • sylver_dragon@lemmy.world · 8 days ago

    I started self hosting in the days well before containers (early 2000’s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things and with bare metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat out incompatible software. While these issues have gotten much better over the years, isolating applications avoids this problem completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better (I currently just rebuild the image), but it gets the job done.
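
    For anyone wanting to try the same approach, the compose side of that pairing tends to look roughly like this (the AppId, ports and paths below are placeholders, not any particular game’s real values):

    ```yaml
    # docker-compose.yaml: sketch of a game-server stack
    services:
      gameserver:
        build:
          context: .
          args:
            STEAM_APP_ID: "123456"      # placeholder: the game's dedicated-server AppId
        ports:
          - "27015:27015/udp"           # placeholder: whatever ports the game expects
        volumes:
          - ./saves:/data               # save data lives outside the container
        restart: unless-stopped
    ```

    The Dockerfile behind it typically just runs steamcmd with that AppId to pull the server files.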

  • SailorFuzz@lemmy.world · 7 days ago

    Mainly that I don’t understand how to use containers… or VMs, that well… I have an old MyCloud NAS and a little pucky PC that I wanted to run simple QoL services on… HomeAssistant, Jellyfin etc…

    I got Proxmox installed on it, I can access it… I don’t know what the fuck I’m doing… There was a website that let you just run shell scripts to install a lot of things… but now none of those work because it says my version of Proxmox is wrong (when it’s not?)…

    And at least VMs are easy(ish) to understand. Fake computer with OS… easy. I’ve built PCs before, I get it… Containers just never want to work, or I don’t understand wtf to do to make them work.

    I wanted to run a Zulip or Rocket.chat for internal messaging around the house (wife and I both work at home, kid does home/virtualschool)… wanted to use a container because a service that simple doesn’t feel like it needs a whole VM… but it won’t work…

    • ChapulinColorado@lemmy.world · 7 days ago

      I would give docker compose a try instead. I found Proxmox to be too much, when a simple yaml file (that can be checked into a repo) can do the job.

      Pay attention when people say things can be improved (secrets/passwords, rootless/Podman, backups, etc.), and come back to those later.

      Just don’t expose things to the internet until you understand the risks and don’t check in secrets to a public git repo and go from there. It is a lot more manageable and feels like a hobby vs feeling like I’m still at work trying to get high availability, concurrency and all this other stuff that does not matter for a home setup.
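
      To make the “simple yaml file” part concrete, a single service usually needs no more than something like this (Vaultwarden as an example; ports and paths are just illustrative):

      ```yaml
      # compose.yaml
      services:
        vaultwarden:
          image: vaultwarden/server:latest
          restart: unless-stopped
          ports:
            - "127.0.0.1:8080:80"   # bound to localhost only; put a reverse proxy in front rather than exposing it
          env_file: .env            # secrets stay out of the repo (add .env to .gitignore)
          volumes:
            - ./vw-data:/data       # bind mount: easy to back up and restore
      ```

      docker compose up -d and it’s running; the compose file itself is what goes in the repo.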

      • Lka1988@lemmy.dbzer0.com · edited · 7 days ago

        I would give docker compose a try instead. I found Proxmox to be too much, when a simple yaml file (that can be checked into a repo) can do the job.

        Proxmox and Docker serve different purposes. They aren’t mutually exclusive. I have 4 separate VMs in my Proxmox cluster dedicated specifically to Docker; all running Dockge, too, so the stacks can all be managed from one interface.

        • ChapulinColorado@lemmy.world · 7 days ago

          I get that, but the services listed by the other comment run just fine in docker with less hassle by throwing in some bind mounts.

          The 4 VMs with dedicated Dockge instances are exactly the kind of thing I had in mind for people who want to avoid something that sounds more like work than a hobby when starting out. Building the knowledge takes time, and each product introduced reduces the likelihood of it being completed anytime soon.

          • Lka1988@lemmy.dbzer0.com · edited · 7 days ago

            Fair point. I’m 12 years into my own self-hosting journey, I guess it’s easy to forget that haha.

            When I started dicking around with Docker, I initially used Portainer for a while, but that just had way too much going on and the licensing was confusing. Dockge is way easier to deal with, and stupid simple to set up.

  • HiTekRedNek@lemmy.world · 8 days ago

    In my own experience, certain things should always be on their own dedicated machines.

    My primary router/firewall is on bare metal for this very reason.

    I do not want to worry about my home network being completely unusable by the rest of my family because I decided to tweak something on the server.

    I could quite easily run OpnSense in a VM, and I do that, too. I run proxmox, and have OpnSense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OpnSense setup does not, as I only had one extra interface on that equipment, so only devices on the primary network would work)

    And tbh, that only exists because I did have a router die, and installed OpnSense into my proxmox server temporarily while awaiting new-to-me equipment.

    I didn’t see a point in removing it. So it’s there, just not automatically started.

    • AA5B@lemmy.world · edited · 7 days ago

      Same here. In particular, I like small, cheap hardware to act as appliances, and I have several Raspberry Pis.

      My example is home assistant. Deploying on its own hardware means an officially supported management layer, which makes my life easier. It is actually running containers, but I don’t have to deal with that. It also needs to be always available, so I use efficient, “right sized” hardware, and it works regardless of whether I’m futzing with my “lab”.

      • Damage@feddit.it · 7 days ago

        My example is home assistant. Deploying on its own hardware means an officially supported management layer, which makes my life easier.

        If you’re talking about backups and updates for addons and core, that works on VMs as well.

        • AA5B@lemmy.world · 7 days ago

          For my use case, I’m continually fiddling with my VM config. That’s my playground, not just the services hosted there. I want home assistant to always be available so it can’t be there.

          I suppose I could have a “production” VM server that I keep stable, separate from my “dev” VM server, but that would be more effort. Maybe it’s simply that I don’t have many services I want to treat as production, so the physical hardware is the cheapest and easiest option.

  • medem@lemmy.wtf · 8 days ago

    The fact that I bought all my machines used (and mostly on sale), and that not one of them is general purpose; id est, I bought each piece of hardware with a (more or less) concrete idea of what its use case would be. For example, my machine acting as a file server is way bigger and faster than my desktop, and I have a 20-year-old machine with very modest specs whose only purpose is being a dumb client for all the bigger servers. I develop programs on one machine and surf the internet and watch videos on the other. I have no use case for VMs besides the Logical Domains I set up on one of my SPARC hosts.

  • sem@lemmy.blahaj.zone · 7 days ago

    For me, the learning curve of containers does not match the value proposition of the benefits they’re supposed to provide.

    • billwashere@lemmy.world · 7 days ago

      I really thought the same thing. But it truly is super easy. At least just the containers like docker. Not kubernetes, that shit is hard to wrap your head around.

      Plus if you screw up one service and mess everything up, you don’t have to rebuild your whole machine.

      • dogs0n@sh.itjust.works · 7 days ago

        100% agree, my server has pretty much nothing except docker installed on it and every service I run is always in containers.

        Setting up a new service is mostly 0% risk and apps can’t bog down my main file system with random log files, configs, etc that feel impossible to completely remove.

        I also know that if for any reason my server were to explode, all I would have to do is pull my compose files from the cloud and docker compose up everything and I am exactly where I left off at my last backup point.
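
        The whole recovery really is about this much (the repo URL is just a stand-in):

        ```sh
        # fresh OS with docker installed, then:
        git clone https://git.example.com/me/compose-files.git
        cd compose-files
        docker compose up -d   # pulls images and recreates every service
        # finally, restore the bind-mounted data directories from the latest backup
        ```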

  • tychosmoose@lemmy.world · 8 days ago

    I’m doing this on a couple of machines. Only running NFS, Plex (looking at a Jellyfin migration soon), Home Assistant, LibreNMS and some really small other stuff. Not using VMs or LXC due to low-end hardware (a Pi and an older tiny PC). Not using containers due to lack of experience with them, plus a little discomfort with the central daemon model of Docker and with running containers built by people I don’t know.

    The migration path I’m working on for myself is changing to Podman quadlets for rootless, more isolation between containers, and the benefits of management and updates via Systemd. So far my testing for that migration has been slow due to other projects. I’ll probably get it rolling on Debian 13 soon.
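
    For reference, the systemd side of that rootless Quadlet setup is only a handful of commands (the unit name is just an example):

    ```sh
    loginctl enable-linger "$USER"         # let user services run without an active login session
    systemctl --user daemon-reload         # generate services from ~/.config/containers/systemd/*.container
    systemctl --user start myapp.service   # manage the container like any other unit
    systemctl --user enable --now podman-auto-update.timer   # periodic updates for AutoUpdate=registry containers
    ```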

  • 51dusty@lemmy.world · 8 days ago

    My two bare metal servers are the file server and the music server. I have other services on a Pi cluster.

    The file server, because I can’t think of why I would need a container for it.

    The music software is proprietary and requires additional workarounds to get it to work properly, or at all, in a container. It also does not like sharing resources and is CPU-heavy when playing to multiple sources.

    if either of these machines die, a temporary replacement can be sourced very easily(e.g. the back of my server closet) and recreated from backups while I purchase new or fix/rebuild the broken one.

    IMO the only reliable method for containers is a cluster, because if you’re running several containers on a device and it fails, you’ve lost several services.