Hi! I'm new to self-hosting. Currently I am running a Jellyfin server on an old laptop. I am very curious about hosting other things in the future, like Immich or other services. I see a lot of mention of a program called Docker.

Searching for this on the internet, I am still not very clear on what it does.

Could someone explain this to me like I'm stupid? What does it do and why would I need it?

Also, what are other services that might be interesting to self-host in the future?

Many thanks!

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

  • Wytch@lemmy.zip · 6 days ago

    Thanks for asking this question. These replies are so much more helpful in understanding the basic premise than anything I’ve come across.

  • Professorozone@lemmy.world · 6 days ago

    Wow! Thank you all for the civilized responses. This all sounds so great. I am older and I feel like I’ve already seen enough ads for one lifetime and I hate all this fascist tracking crap.

    But how does that work? Is it just a network on which you store your stuff in a way that you can download it anywhere, or can it do more? I mean, to me that's just a home network. Hosting sounds like it's designed for other people to access. Can I put my website on there? If so, how do I go about registering my domain each year? I'm not computer illiterate, but this sounds kind of beyond my skill level. I'll go search Jellyfin, weird name, and see what I can find. Thanks again!

    • y0kai@lemmy.dbzer0.com · 6 days ago

      You're asking a lot of questions at one time and will be better served by understanding that you're knocking at the door of a very deep rabbit hole.

      That said, I’ll try to give you the basic idea here and anyone who can correct me, please do so! I doubt I’ll get everything correct and will probably forget some stuff lol.

      So, self hosting really just means running the services you use on your own machine. There’s some debate about whether hosting on a cloud server - where someone else owns and has physical access to the machine - counts as self hosting. For the sake of education, and because I’m not a fan of gatekeeping, I say it does count.

      Anyway, when you're running a server (a machine, real or virtualized, that is running a program connected to a network and that can, usually, be accessed by other machines on that network), who and what you share with other machines on your network or other networks is ultimately up to you.

      When using a "hosted" service, where another entity manages the server (not just the hardware, but the software and administration too, making it the opposite of self-hosting; think Netflix, as opposed to Jellyfin), your data and everything you do on or with that service belongs to the service provider and network owners. Your "saved" info is stored on their disks in their data center. There are of course exceptions, and companies that offer better infrastructure and privacy options, but that's the gist of non-self-hosted services.

      To your specific questions:

      But how does that work?

      Hopefully the above helps, but this question is pretty open ended lol. Your next few questions are more pointed, so I’ll try to answer them better.

      Is it just a network on which you store your stuff in a way that you can download it anywhere or can it do more?

      Well, kind of. If you’re hosting on a physical machine that you own, your services will be accessible to any other machine on your home network (unless you segment your network, which is another conversation for another time) and should not, by default, be accessible from the internet. You will need to be at home, on your own network to access anything you host, by default.

      As for storage of your data, self-hosted services almost always default to local storage. This means you can save anything you're doing on the hard drive of the machine the server is running on. Alternatively, if you have a network drive, you can store it on another machine on your network. Some services will allow you to connect to cloud storage (on someone else's machine somewhere else). The beauty is that you decide where your data lives.

      I mean, to me that’s just a home network. Hosting sounds like it’s designed for other people to access. Can I put my website on there?

      Like almost anything with computers and networking, the defaults are changeable. You can certainly host a service on the internet for others to access. This usually involves purchasing the rights to a domain name, setting that domain up to point to your public IP address, and forwarding a port on your router so people can connect to your machine. This can be extremely dangerous if you don't know what you're doing and isn't recommended without learning a lot more about network and cyber security.

      That said, there are safer ways to connect from afar. Personally, I use a piece of software called WireGuard. It allows devices I approve (like my phone, or my girlfriend's laptop) to connect to my network when away from home through what is called an "encrypted tunnel" or a "Virtual Private Network" (VPN). These can be a pain to set up for the first time if you're new to the tech, and there are easier solutions I've heard of but haven't tried, namely Tailscale and Netbird, both of which use WireGuard but try to make the administration easier.
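
      To give you a feel for it, here's a rough sketch of what a WireGuard server config looks like (every value below is a placeholder, not something from a real setup):

        # /etc/wireguard/wg0.conf on the home server (placeholder values)
        [Interface]
        PrivateKey = <server-private-key>
        Address = 10.0.0.1/24
        ListenPort = 51820

        [Peer]
        # e.g. your phone; each approved device gets its own [Peer] block
        PublicKey = <phone-public-key>
        AllowedIPs = 10.0.0.2/32

      The matching config on the phone or laptop then points back at your home IP or domain.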

      You can also look into reverse proxies and services like Cloudflare for accessing things away from home. These involve hosting on the internet, so security should be considered, as above. Anything that allows remote access will come with unique pros and cons that you'll need to weigh and sort out for yourself.

      If so, how do I go about registering my domain each year?

      Personally, I use Porkbun.com for cheap domains, but there are tons of different providers. You’ll just have to shop around. To actually use the domain, I’m gonna be linking some resources lower in the post. If I remember correctly, landchad.net was a good resource for learning about configuring a domain but idk. There will be a few links below.

      I’m not computer illiterate but this sounds kind of beyond my skill level.

      It was beyond my skill level when I started too. It's been nearly a year now, and I have a service that automatically downloads media I want, such as movies, shows, music, and books. It stores them locally on a stack of hard drives, and I can access them outside of my house with WireGuard as well. Further, I've got some smaller services, like a recipe book I share with my girlfriend and soon with friends and family. I've also started hosting my own AI, a network-wide ad-blocker, a replacement for Google Photos, a file-sharing server, and some other things that are escaping me right now.

      The point is that it’s only a steep hill while you’re at the bottom looking up. Personally, the hike has been more rejuvenating than tiresome, though I admit it takes patience, a bit of effort, and a willingness to learn, try new things, and fail sometimes.

      Never sweat the time it takes to accomplish a task. The time will pass either way and at the end of it you can either have accomplished something, or you’ll look back and say, “damn I could’ve been done by now.”

      I’ll go search Jellyfin, weird name, and see what I can find. Thanks again!

      Also check these out, if you’re diving in:

      YouTube:

      Guides:

      Tools:

      Hopefully this helps someone. Good luck!

  • 0^2@lemmy.dbzer0.com · 7 days ago

    Now compare Docker vs LXC vs Chroot vs Jails and the performance and security differences. I feel a lot of people here are biased without knowing the differences (pros and cons).

  • Matt@lemmy.ml · 7 days ago

    It’s the platform that runs all of your services in containers. This means they are separated from your system.

    Also, what are other services that might be interesting to self-host in the future?

    Nextcloud, the Arr stack, your future app, etc etc.

  • CodeBlooded@programming.dev · 7 days ago

    Docker enables you to create instances of an operating system running within a "container," which doesn't access the host computer unless that access is explicitly requested. This is done using a Dockerfile, which is a file that describes in detail all of the settings and parameters for said instance of the operating system. This might be packages to install ahead of time, or commands to create users, compile code, execute code, and more.

    This instance of an operating system, usually a "server," is great because you can throw the server away at any time and rebuild it with practically zero effort. It will be just like new. There are many reasons to want to do that; who doesn't love a fresh install with the bare necessities?

    On the surface (and the rabbit hole is deep!), Docker enables you to create an easily repeated formula for building a server so that you don’t get emotionally attached to a server.
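
    As a rough sketch of what such a Dockerfile can look like (the base image, package, and file names here are only illustrative):

      # Dockerfile (illustrative)
      FROM debian:stable-slim

      # packages to install ahead of time
      RUN apt-get update && apt-get install -y --no-install-recommends python3 \
          && rm -rf /var/lib/apt/lists/*

      # create an unprivileged user to run the code
      RUN useradd --create-home appuser
      USER appuser

      # copy in the code and say what runs when the container starts
      COPY app.py /home/appuser/app.py
      CMD ["python3", "/home/appuser/app.py"]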

  • jagged_circle@feddit.nl · 7 days ago

    It's an extremely fast and insecure way to set up services. Avoid it unless you want to download and execute malicious code.

      • jagged_circle@feddit.nl · 6 days ago

        Package managers like apt use cryptography to check signatures on everything they download to make sure it isn't malicious.

        Docker doesn't do this. It has a system called DCT (Docker Content Trust), but it's horribly broken (not to mention off by default).

        So when you run docker pull, you can’t trust anything it downloads.
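
        To see what I mean about it being off by default: DCT is only an opt-in environment variable (the image name below is just an example).

          # default behaviour: pulls whatever the registry serves, no signature check
          docker pull debian:stable-slim

          # opting in to Docker Content Trust for this one command
          DOCKER_CONTENT_TRUST=1 docker pull debian:stable-slim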

        • Darioirad@lemmy.world · 5 days ago

          Thank you very much! I can agree on the off-by-default part, but why is it horribly broken?

          • jagged_circle@feddit.nl · 5 days ago

            PKI.

            Apt and most release signing have a root of trust shipped with the OS, and the PGP keys are cross-signed on keyservers (web of trust).

            DCT is just TOFU (trust on first use). They disable it by default because it gives a false sense of security. Docker is just not safe. Maybe in 10 years they'll fix it, but honestly it seems like they just don't care. The well is poisoned. Avoid it. Use apt or some other package manager that actually cares about security.

            • Darioirad@lemmy.world · 5 days ago

              So, if I understand correctly: rather than using prebuilt images from Docker Hub or untrusted sources, the recommended approach is to start from a minimal base image of a known OS (like Debian or Ubuntu), and explicitly install required packages via apt within the Dockerfile to ensure provenance and security. Does that make sense?
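
              Concretely, I was imagining a minimal sketch along these lines (nginx is just a stand-in for whatever I'd actually want to run):

                # Dockerfile: build my own image instead of pulling a prebuilt app image
                FROM debian:stable-slim

                # apt verifies the Debian repository signatures for everything it installs
                RUN apt-get update \
                    && apt-get install -y --no-install-recommends nginx \
                    && rm -rf /var/lib/apt/lists/*

                # run the service in the foreground
                CMD ["nginx", "-g", "daemon off;"]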

              • jagged_circle@feddit.nl · 5 days ago

                Install the package with apt. Avoid Docker completely.

                If the Docker image maintainer has a GitHub, open a ticket asking them to publish a Debian package.

                • Darioirad@lemmy.world · 4 days ago

                  I see your point about trusting signed Debian packages, and I agree that's ideal when possible. But Docker and APT serve very different purposes — one is for OS-level package management, the other for containerization and isolation. That's actually where I got a bit confused by your answer — it felt like you were comparing tools with different goals (due to my limited knowledge). My intent isn't just to install software, but to run it in a clean, reproducible, and isolated environment (maybe more than one on the same host machine). That's why I'm considering building my own container from a minimal Debian base and installing everything via apt inside it, to preserve trust while still using containers responsibly! Does that make sense to you? Thank you again for taking the time to reply to my messages.

        • ianonavy@lemmy.world · 6 days ago

          A signature only tells you where something came from, not whether it’s safe. Saying APT is more secure than Docker just because it checks signatures is like saying a mysterious package from a stranger is safer because it includes a signed postcard and matches the delivery company’s database. You still have to trust both the sender and the delivery company. Sure, it’s important to reject signatures you don’t recognize—but the bigger question is: who do you trust?

          APT trusts its keyring. Docker pulls over HTTPS with TLS, which already ensures you’re talking to the right registry. If you trust the registry and the image source, that’s often enough. If you don’t, tools like Cosign let you verify signatures. Pulling random images is just as risky as adding sketchy PPAs or running curl | bash—unless, again, you trust the source. I certainly trust Debian and Ubuntu more than Docker the company, but “no signature = insecure” misses the point.

          Pointing out supply chain risks is good. But calling Docker “insecure” without nuance shuts down discussion and doesn’t help anyone think more critically about safer practices.
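
          For instance, the Cosign check I mentioned is a single command (the registry, image, and key file below are placeholders, and it only helps when the publisher actually signs their images):

            # check the image signature against the publisher's public key before running it
            cosign verify --key cosign.pub registry.example.com/someproject/app:1.2.3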

          • jagged_circle@feddit.nl · 5 days ago

            Oof, TLS isn't a replacement for signatures. There's a reason most package managers use release signatures: X.509 is broken.

            And yes, PGP has a web of trust to solve its PKI problem. That's why we can trust apt signatures and not Docker signatures.

    • festus@lemmy.ca · 7 days ago

      Entirely depends on who’s publishing the image. Many projects publish their own images, in which case you’re running their code regardless.

  • xavier666@lemm.ee · 7 days ago

    Learn Docker even if you have a single app. I do the same with a Minecraft server.

    • No dependency issues
    • All configuration (storage/network/application management) can be done via a single file (the compose file); see the example sketch at the end of this comment
    • Easy roll-backs possible
    • Maintain multiple versions of the app while keeping them separate
    • Recreate the setup on a different server/machine using only that single configuration file
    • Config is standardized so easy to read

    You will save a huge amount of time managing your app.

    PS: I would like to give a shout-out to Podman as a rootless alternative to Docker.
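
    Since OP already runs Jellyfin, a compose file for it can be as small as this sketch (host paths are placeholders you would adjust):

      # compose.yaml
      services:
        jellyfin:
          image: jellyfin/jellyfin:latest
          ports:
            - "8096:8096"               # web UI
          volumes:
            - ./config:/config          # server settings live on the host
            - ./cache:/cache
            - /path/to/media:/media:ro  # your media library, read-only
          restart: unless-stopped

    Then docker compose up -d starts it, and docker compose pull followed by docker compose up -d updates it.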

  • echutaaa@sh.itjust.works · 8 days ago

    It's a container service. Containers are similar to virtual machines but less separate from the host system. Docker excels at creating reproducible, self-contained environments for your applications. It's not the simplest solution out there, but once you understand the basics it is a very powerful tool for system reliability.

  • tuckerm@feddit.online · 7 days ago

    You can think of Docker as something that lets you run all of your self-hosted services inside of their own virtual machine. To each service, it looks like that service is running on its own separate computer. (A Docker container is not actually a virtual machine, it’s something much faster than that, but I like to think about it the same way. It has similar advantages.)

    This has a few advantages. For example, if there is a security vulnerability in one of your services, it’s less likely to affect your whole server if that vulnerable service is inside of a Docker container. Even if the vulnerability lets an attacker see files on your system, the only “system” they can see is the one inside of the Docker container. They can’t look at anything else on the rest of your actual computer, they can only see the Docker “virtual machine” that you created for that one service.

  • Black616Angel@discuss.tchncs.de · 7 days ago

    Please don’t call yourself stupid. The common internet slang for that is ELI5 or “explain [it] like I’m 5 [years old]”.

    I’ll also try to explain it:

    Docker is a way to run a program on your machine, but in a way that the developer of the program can control.
    It’s called containerization and the developer can make a package (or container) with an operating system and all the software they need and ship that directly to you.

    You then need the software Docker (or Podman, etc.) to run this container.

    Another advantage of containerization is that all changes stay inside the container except for directories you explicitly want to add to the container (called volumes).
    This way the software can’t destroy your system and you can’t accidentally destroy the software inside the container.
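
    As a hypothetical example (the image and folder names are just illustrative), only the one host folder you name on the command line is visible inside the container; everything else on your machine stays out of reach:

      # serve one local folder with nginx; only ./site is shared into the container (read-only)
      docker run -d --name web \
        -p 8080:80 \
        -v "$PWD/site":/usr/share/nginx/html:ro \
        nginx:alpine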

      • folekaule@lemmy.world · 7 days ago

        I know it’s ELI5, but this is a common misconception and will lead you astray. They do not have the same level of isolation, and they have very different purposes.

        For example, containers are disposable cattle. You don’t backup containers. You backup volumes and configuration, but not containers.

        Containers share the kernel with the host, so your container needs to be compatible with the host (though most dependencies are packaged with images).

        For self hosting maybe the difference doesn’t matter much, but there is a difference.

        • fishpen0@lemmy.world · 7 days ago

          A million times this. A major difference between the way most VMs are run and most containers are run is:

          VMs write to their own internal disk; containers should be immutable and not write to their internal filesystem.

          You can have 100 identical containers running, and if you are using your filesystem correctly, only one copy of that container image is on your hard drive. You can have two nearly identical containers running, and then only a small amount of the second container image (another layer) is taking up additional disk space.

          Similarly, containers and VMs use memory and CPU allocations differently, and they run with extremely different security and networking scopes, but that requires even more explanation and is less relevant to self-hosting unless you are trying to learn this to eventually get a job in it.

          • chunkystyles@sopuli.xyz · 7 days ago

            containers should be immutable and not be able to write to their internal filesystem

            This doesn’t jive with my understanding. Containers cannot write to the image. The image is immutable. However, a running container can write to its filesystem, but those changes are ephemeral, and will disappear if the container stops.

            • fishpen0@lemmy.world · 7 days ago

              This is why I said “most containers most of the time should”. It’s a bad practice to write to the inside of the container and a better practice to treat them as immutable. You can go as far as actively preventing them from writing to themselves when you build them or in certain container runtimes, but this is not usually how they work by default.

              Also, a container that is stopped and restarted will not lose its internal changes in most runtimes. The container needs to be deleted and recreated from the image for that to happen.
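
              For example, this is the kind of runtime option I mean (the image name is a placeholder; --read-only and --tmpfs are standard docker run flags):

                # root filesystem is read-only; /tmp is a throwaway tmpfs;
                # only the explicitly mounted data directory persists
                docker run --read-only --tmpfs /tmp \
                  -v "$PWD/data":/data \
                  example-image:latest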

  • LittleBobbyTables@lemmy.sdf.org · 7 days ago

    I’m not sure how familiar you are with computers in general, but I think the best way to explain Docker is to explain the problem it’s looking to solve. I’ll try and keep it simple.

    Imagine you have a computer program. It could be any program; the details aren’t important. What is important, though, is that the program runs perfectly fine on your computer, but constantly errors or crashes on your friend’s computer.

    Reproducibility is really important in computing, especially if you’re the one actually programming the software. You have to be certain that your software is stable enough for other people to run without issues.

    Docker helps massively simplify this dilemma by running the program inside a ‘container’, which is basically a way to run the same exact program, with the same exact operating system and ‘system components’ installed (if you’re more tech savvy, this would be packages, libraries, dependencies, etc.), so that your program will be able to run on (best-case scenario) as many different computers as possible. You wouldn’t have to worry about if your friend forgot to install some specific system component to get the program running, because Docker handles it for you. There is nuance here of course, like CPU architecture, but for the most part, Docker solves this ‘reproducibility’ problem.

    Docker is also nice when it comes to simply compiling the software in addition to running it. You might have a program that requires 30 different steps to compile, and messing up even one step means that the program won’t compile. And then you’d run into the same exact problem where it compiles on your machine, but not your friend’s. Docker can also help solve this problem. Not only can it dumb down a 30-step process into 1 or 2 commands for your friend to run, but it makes compiling the code much less prone to failure. This is usually what the Dockerfile accomplishes, if you ever happen to see those out in the wild in all sorts of software.
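
    To give a sense of the "1 or 2 commands" part, the friend's side typically boils down to something like this (the image name is made up):

      # build the image by following the steps in the project's Dockerfile
      docker build -t friends-app .

      # run it
      docker run --rm friends-app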

    Also, since Docker puts things in 'containers', it limits what resources that program can access on your machine (and this can be very useful). You can set it so that all the files it creates are saved inside the container and don't affect your 'host' computer. Or maybe you only want to give permission to a few very specific files. Maybe you want to do something like share your computer's timezone with a Docker container, or prevent your Docker containers from being directly exposed to the internet.

    There are plenty of other things that make Docker useful, but I'd say those are the most important ones: reproducibility, ease of setup, containerization, and configurable permissions.

    One last thing: Docker is comparable to something like a virtual machine, but the reason why you'd want to use Docker over a virtual machine is much less resource overhead. A VM might require you to allocate gigabytes of memory, multiple CPU cores, even a GPU, but Docker is designed to be much more lightweight in comparison.
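
    As a small illustration of that lighter footprint, resource caps for a container are just optional flags rather than up-front allocations (the values and image here are arbitrary examples):

      # cap this one container at 512 MB of RAM and 1.5 CPU cores
      docker run -d --memory=512m --cpus=1.5 nginx:alpine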

  • PhilipTheBucket@ponder.cat · 8 days ago

    Okay, so way back when, Google needed a way to install and administer 500 new instances of whatever web service they had going on without it being a nightmare. So they made a little tool to make it easier to spin up random new stuff easily and scriptably.

    So then the whole rest of the world said “Hey Google’s doing that and they’re super smart, we should do that too.” So they did. They made Docker, and for some reason that involved Y Combinator giving someone millions of dollars for reasons I don’t really understand.

    So anyway, once Docker existed, nobody except Google and maybe like 50 other tech companies actually needed to do anything that it was useful for (and 48 out of those 50 are too addled by layoffs and nepotism to actually use Borg / K8s / Docker (don't worry, they're all the same thing) for its intended purpose). They just use it so their tech leads can have conversations at conferences and lunches where they make it out like anyone who's not using Docker must be an idiot, which is the primary purpose of technology as far as they're concerned.

    But anyway in the meantime a bunch of FOSS software authors said “Hey this is pretty convenient, if I put a setup script inside a Dockerfile I can literally put whatever crazy bullshit I want into it, like 20 times more than even the most certifiably insane person would ever put up with in a list of setup instructions, and also I can pull in 50 gigs of dependencies if I want to of which 2,421 have critical security vulnerabilities and no one will see because they’ll just hit the button and make it go.”

    And so now everyone uses Docker and it’s a pain in the ass to make any edits to the configuration or setup and it’s all in this weird virtualized box, and the “from scratch” instructions are usually out of date.

    The end

    • tuckerm@feddit.online · 7 days ago

      I’m an advocate of running all of your self-hosted services in a Docker container and even I can admit that this is completely accurate.

    • i_am_not_a_robot@discuss.tchncs.de · 7 days ago

      Borg / k8s / Docker are not the same thing. Borg is the predecessor of k8s, a serious tool for running production software. Docker is the predecessor of Podman. They all use containers, but Borg / k8s manage complete software deployments (usually featuring processes running in containers) while Docker / Podman only run containers. Docker / Podman are better for development or small temporary deployments. Docker is a company that has moved features from their free software into paid software. Podman is run by RedHat.

      There are a lot of publicly available container images out there, and most of them are poorly constructed, obsolete, unreproducible, unverifiable, vulnerable software, uploaded by some random stranger who at one point wanted to host something.

    • 0x0@lemmy.zip · 7 days ago

      Incus (formerly LXC/D, which Docker used to be based on) is on my to-learn list.
      Docker is not.