Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

  • CaptainBasculin@lemmy.bascul.in · 2 months ago

    Bare metal is cheaper if you already have some old PC components lying around, and the services aren’t bound to my host PC being on. My PC uses a 600W power supply, while the old laptop running my Jellyfin + Pi-hole server uses something like 40W.

  • hperrin@lemmy.ca · 2 months ago

    There’s one thing I’m hosting on bare metal, a WebDAV server. I’m running it on the host because it uses PAM for authentication, and that doesn’t work in a container.

  • zod000@lemmy.dbzer0.com · 2 months ago (edited)

    Why would I want to add overhead and complexity to my system when I don’t need to? I can totally see legitimate use cases for Docker, and for work purposes I use VMs constantly. I just don’t see a benefit to doing so at home.

  • melfie@lemy.lol · 2 months ago (edited)

    I use k3s and enjoy benefits like the following over bare metal:

    • Configuration as code where my whole setup is version controlled in git
    • Containers and avoiding dependency hell
    • Built-in reverse proxy with the Traefik ingress controller. Combined with DNS in my OpenWRT router, all of my self hosted apps can be accessed via appname.lan (e.g., jellyfin.lan, forgejo.lan)
    • Declarative network policies with Calico, mainly to make sure nothing phones home
    • Managing secrets securely in git with Bitnami Sealed Secrets
    • Liveness probes that automatically “turn it off and on again” when something goes wrong
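
    A minimal sketch of the jellyfin.lan route (the Service name and port here are illustrative, not my exact manifest):

    ```yaml
    # Hypothetical Ingress handled by k3s’ bundled Traefik controller.
    # Assumes a Service named "jellyfin" on port 8096 and a router DNS
    # entry pointing jellyfin.lan at the k3s node.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: jellyfin
    spec:
      rules:
        - host: jellyfin.lan
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: jellyfin
                    port:
                      number: 8096
    ```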

    These are just some of the benefits for a single server. Add more servers and the benefits multiply.

    Edit:

    Sorry, I realize this post is asking why go bare metal, not why k3s and containers are great. 😬

  • splendoruranium@infosec.pub · 2 months ago

    Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

    If it aint broke, don’t fix it 🤷

  • Bogusmcfakester@lemmy.dbzer0.com · 2 months ago (edited)

    I’ve not cracked the Docker nut yet. I don’t get how I back up my containers and their data. I would also need to transfer my Plex database into its container while switching from Windows to Linux. I love Linux, but I haven’t figured out these two things yet.

    • boiledham@lemmy.world · 2 months ago

      You would leave your Plex config and DB files on the disk and then map them into the Docker container via a volume (the -v parameter if you are running the command line and not docker-compose). The same goes for any other Docker container where you want to persist data on the drive.
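
      For example (paths and image tag are illustrative, adjust to your own layout):

      ```yaml
      # Sketch of a compose service for Plex; the host paths are
      # assumptions, point them at wherever your data already lives.
      services:
        plex:
          image: plexinc/pms-docker:latest
          network_mode: host
          volumes:
            - /opt/plex/config:/config   # existing config + database stay on the host
            - /mnt/media:/data           # media library read by the container
      ```

      On the plain command line that would be -v /opt/plex/config:/config and -v /mnt/media:/data.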

    • hperrin@lemmy.ca · 2 months ago

      Anything you want to back up (data directories, media directories, DB data) gets a bind mount to a directory on the host. Then you can back it up just like everything else on the host.

    • Passerby6497@lemmy.world · 2 months ago (edited)

      All your Docker data can be saved to a mapped local disk, then backup is the same as it ever is. Throw Borg or something on it and you’re golden.

      Look into docker compose and volumes to get an idea of where to start.
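
      For example, if every volume is mapped under one folder (the layout below is made up for illustration), backup is just an archive of that folder:

      ```shell
      # Stand-in for a host folder that containers bind-mount their data into
      mkdir -p appdata/jellyfin/config
      echo 'example' > appdata/jellyfin/config/settings.xml

      # One archive captures every service's persistent state at once;
      # swap tar for borg/restic in real life.
      tar czf appdata-backup.tar.gz appdata
      ```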

    • purplemonkeymad@programming.dev · 2 months ago

      An easy option is to map the data folders for the container you are using as a volume pointing at a local folder. The container will then just put its files there, and you can back up that folder. To restore, put the files back and set the same volume mapping so the container sees them again.

      You can also use the same method to access the db directory for the migration. Typically for databases you want to make sure the container is stopped before doing anything with those files.

  • sylver_dragon@lemmy.world · 3 months ago

    I started self hosting in the days well before containers (early 2000’s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things, and with bare metal installs that has a way of adding cruft to servers and slowly pushing the system into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues of dependency hell and flat-out incompatible software. While these issues have gotten much better over the years, isolating applications avoids them completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better, I currently just rebuild the image, but it gets the job done.
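
    As an illustration of the template idea (the base image, AppId, ports and start command below are placeholders to tweak per game, not my exact file):

    ```dockerfile
    # Hypothetical steamcmd-based dedicated server image.
    FROM steamcmd/steamcmd:ubuntu

    # AppId is the per-game tweak; 896660 is Valheim's dedicated server.
    ARG APPID=896660

    # Fetch the server files at build time.
    RUN steamcmd +force_install_dir /server \
        +login anonymous \
        +app_update ${APPID} validate \
        +quit

    WORKDIR /server
    VOLUME /server/saves     # mount point for persistent save data
    EXPOSE 2456-2458/udp     # game-specific ports

    CMD ["./valheim_server.x86_64"]
    ```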

  • tychosmoose@lemmy.world · 3 months ago

    I’m doing this on a couple of machines. Only running NFS, Plex (looking at a Jellyfin migration soon), Home Assistant, LibreNMS and some really small other stuff. Not using VMs or LXC due to low-end hardware (a Pi and an older tiny PC). Not using containers due to lack of experience with them and a little discomfort with the central daemon model of Docker: running containers built by people I don’t know.

    The migration path I’m working on for myself is changing to Podman quadlets for rootless, more isolation between containers, and the benefits of management and updates via Systemd. So far my testing for that migration has been slow due to other projects. I’ll probably get it rolling on Debian 13 soon.
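
    For reference, a quadlet is just an INI-style unit file under ~/.config/containers/systemd/; a minimal sketch (image, port and volume are placeholders):

    ```ini
    # ~/.config/containers/systemd/jellyfin.container
    [Unit]
    Description=Jellyfin via rootless Podman quadlet

    [Container]
    Image=docker.io/jellyfin/jellyfin:latest
    PublishPort=8096:8096
    Volume=%h/jellyfin/config:/config

    [Service]
    Restart=on-failure

    [Install]
    WantedBy=default.target
    ```

    After a systemctl --user daemon-reload, it shows up as an ordinary jellyfin.service.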

  • 51dusty@lemmy.world · 3 months ago

    my two bare metal servers are the file server and music server. I have other services in a pi cluster.

    file server because I can’t think of why I would need to use a container.

    the music software is proprietary and requires additional complications to get it to work properly…or at all, in a container. it also does not like sharing resources and is CPU heavy when playing to multiple sources.

    if either of these machines dies, a temporary replacement can be sourced very easily (e.g., the back of my server closet) and recreated from backups while I purchase new or fix/rebuild the broken one.

    IMO the only reliable setup for containers is a cluster, because if you’re running several containers on one device and it fails, you’ve lost several services.

  • Surp@lemmy.world · 2 months ago

    What are you doing running your vms on bare metal? Time is a flat circle.

  • kiol@lemmy.world (OP) · 3 months ago

    Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?

    • Lucy :3@feddit.org · 3 months ago

      Considering I have a full backup, all services are Arch packages and all important data is on its own drive, I’m not concerned about anything

  • misterbngo@awful.systems · 2 months ago

    Your phrasing of the question implies a poor understanding. There’s nothing preventing you from running containers on bare metal.

    My colo setup is a mix of classical and podman systemd units running on bare metal, combined with a little nginx for the domain and tls termination.

    I think you’re actually asking why folks would use bare metal instead of cloud, and here’s the truth: you’re paying for that resiliency even if you don’t need it, which makes renting cloud infrastructure incredibly expensive. Most people can probably get away with a $10 VPS, but the AWS meme of needing 5 app servers, an RDS instance and a load balancer to run WordPress has rotted people. My server that I paid a few grand for on eBay would cost me about as much monthly to rent from AWS. I’ve stuffed it full of flash with enough redundancy to lose half of it before going into the colo for a replacement. I paid a bit upfront, but I’m set on capacity for another half decade plus; my costs are otherwise fixed.

    • lazynooblet@lazysoci.al · 2 months ago

      Your phrasing of the question implies poor understanding.

      Your phrasing of the answer implies poor understanding. The question was why bare metal vs containers/VMs.

      • sepi@piefed.social · 2 months ago

        The phrasing by the person you are responding to is perfectly fine and shows ample understanding. Maybe you do not understand what they were positing.

  • LifeInMultipleChoice@lemmy.world · 3 months ago

    For me it’s lack of understanding, usually. I haven’t sat down and really learned what Docker is/does. And when I tried to use it once, I ended up with errors (thankfully they all seemed contained within Docker), but I just haven’t gotten around to looking into it more than seeing suggestions to install, say, Pi-hole in it. Pretty sure I installed Pi-hole outside of one. Jellyfin outside, copyparty outside, and something else I’m forgetting at the moment.

    I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it’s not something I normally use.

    I guess I just haven’t been forced to see the upsides yet. But am always wanting to learn

    • slazer2au@lemmy.world · 3 months ago

      containerisation is to applications as virtual machines are to hardware.

      VMs share the same CPU, memory, and storage on the same host.
      Containers share the same binaries in an OS.

      • LifeInMultipleChoice@lemmy.world · 3 months ago (edited)

        When you say binaries, do you mean locally stored directories, kind of like what Lutris or Steam would do for a Windows game? (Create a fake C:\ )

        • slazer2au@lemmy.world · 3 months ago

          Not so much a fake one; rather, the actual directory is overlaid with the specific files that container needs.

          Take the Linux lib directory. It exists on the host and has Python 3.12 installed. Your Docker container may need Python 3.14, so an overlay directory is created that redirects calls to /lib/python to /lib/python3.14 instead of the regular symlinked /lib/python3.12.

          • LifeInMultipleChoice@lemmy.world · 3 months ago

            So let’s say I theoretically wanted to move a Docker container to another device, or maybe I were re-installing an OS or moving to another distro. Could I in theory drag my local Docker container to an external drive, throw my device in a lake, and pull that container onto the new device? If so … what then? Do I link the startups, or is there a “docker config” where they are all linked and I can tell it which ones to launch on OS launch, user launch, delay, or what not?

            • slazer2au@lemmy.world · 2 months ago

              For ease of moving containers between hosts, I would use a docker-compose.yaml to define how you want storage shared, what ports to present to the host, and what environment variables your application wants. Using WordPress as an example, this would be your starting point:
              https://github.com/docker/awesome-compose/blob/master/wordpress-mysql/compose.yaml

              All the settings for the database are listed under the db heading. You would have your actual database files stored in /home/user/Wordpress/db_data, and you would link /home/user/Wordpress/db_data to /var/lib/mysql inside the container with the line

              volumes:
                - ./db_data:/var/lib/mysql
              

              As the compose file will also be in /home/user/Wordpress/, you can use a relative path and drop the common prefix.

              That way, if you want to change hosts, just copy the /home/user/Wordpress folder to the new server and run docker compose up -d, and boom, your server is up. No need to faff about.

              Containers by design are supposed to be ephemeral: the runtime data is recreated each time the container is launched. The persistent data is all you should care about.

  • SavvyWolf@pawb.social · 3 months ago

    I’ve always done things bare metal, since I started self-hosting before containers were common. I’ve recently switched to NixOS on my server, which also solves the dependency-hell issue that containers are supposed to solve.
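
    For example, a whole service is a line or two in configuration.nix (the services shown are illustrative):

    ```nix
    # Fragment of a NixOS configuration.nix: services are declared in
    # version-controllable config, and a rollback undoes the whole change.
    {
      services.jellyfin.enable = true;
      services.adguardhome.enable = true;  # any other service works the same way
    }
    ```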

  • fubarx@lemmy.world · 2 months ago

    Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean installs, down to the bootloader.

    The only constant is change.