Some services run really well behind a reverse proxy on 443, but others can be a real hassle… And sometimes just opening other ports would be easier than trying to get everything to work through 443.

An example that comes to mind is SSH: yes, you can use SSLH to forward requests coming in on 443 to 22, but it’s so much easier to just leave 22 open…

Now, for SSH, if you have certificate authentication or a strong password, I think you can feel quite safe, but what about other random ports? What risks am I exposing my server to if I open some of them when a service needs it? Is the effort of trying to pass everything through 443/80 worth it?

  • lorentz@feddit.it · 2 hours ago

    It is not just a matter of how many ports are open, it is about the attack surface. You can have a single port 443 open with the best reverse proxy, but if you have a crappy app behind it which allows remote code execution, you are fucked no matter what.

    Each open port exposes one or more services to the internet. You have to decide how much you trust each of these services to be secure and how much you trust your password.

    While we can agree that SSH is a very safe service, if you allow password login for root and the password is “root”, the first scanner that passes by will get control of your server.

    As others mentioned, having everything behind a VPN is the best way to reduce the attack surface: VPN software is usually written with security in mind, so you reduce the risk of zero-day attacks. Also, many VPNs use certificates to authenticate the user, making guessing your way in virtually impossible.
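
    To put “virtually impossible” in perspective, here is a quick back-of-the-envelope comparison (my own numbers, not exact for any particular VPN): guessing an 8-character lowercase password versus guessing a random 256-bit key of the kind certificate/key-based VPNs rely on.

        # Rough keyspace comparison; the password alphabet and key size are
        # illustrative assumptions, not properties of any specific VPN.
        password_space = 26 ** 8   # 8 lowercase characters
        key_space = 2 ** 256       # a random 256-bit key

        print(f"password space: {password_space:.2e}")   # ~2.09e+11
        print(f"key space:      {key_space:.2e}")        # ~1.16e+77
        print(f"key space is {key_space / password_space:.1e} times larger")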

  • 0x0@lemmy.zip · 3 hours ago

    Firewalls, containers, separate subnets (or VLANs if possible), VPNs.
    Keep the really public stuff on a VPS, though, and the private stuff on your home server. Connect them via WireGuard (using e.g. Headscale).

  • Possibly linux@lemmy.zip · 8 hours ago

    With SSH it is easier to do key authentication. Certificate authentication is supported but it is a little more hassle. Don’t use password authentication as it is deprecated and not secure.

    The key with SSH (OpenSSH specifically) is that it is heavily audited, so it is unlikely to have any issues. The problem is when you start exposing self-hosted services with a lot of attack surface. You need to be very careful when exposing services, as web services are very hard to secure and can be the source of a compromise that you may or may not be aware of.

    It is much safer to use an overlay VPN or some other frontend for authentication, like mTLS or an authenticated reverse proxy.
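
    If you want a feel for what mTLS means in practice, here is a minimal sketch using Python’s standard ssl module (the file names are placeholders, and in a real setup you would normally terminate mTLS at the reverse proxy rather than in the app itself):

        import socket
        import ssl

        # Require clients to present a certificate signed by our own CA before
        # any application code is reachable. Paths below are just placeholders.
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        context.load_cert_chain(certfile="server.crt", keyfile="server.key")
        context.verify_mode = ssl.CERT_REQUIRED          # reject clients without a cert
        context.load_verify_locations(cafile="client-ca.crt")

        with socket.create_server(("0.0.0.0", 8443)) as server:
            with context.wrap_socket(server, server_side=True) as tls_server:
                conn, addr = tls_server.accept()         # handshake fails without a valid client cert
                print("authenticated client:", conn.getpeercert().get("subject"))
                conn.close()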

  • skankhunt42@lemmy.ca · 8 hours ago

    It’s not so much about the ports, it’s about what you’re running that’s accessible to the public.

    If you have a single website on 443 and SSH on 22 (or a non-standard port like 6543), you’re generally considered safe. That’s two services, and someone would need to break one of the two to get in.

    If you have a VPN on 4567 and everything behind the VPN then someone would need to hack the VPN to get in.

    If you have 100 different things behind 443 then someone just needs to find a hole in one to get in.

    Generally ssh, nginx, and a VPN are all safe, and they should be on their own ports.

    • sfjvvssss@lemmy.world · 5 hours ago

      Sorry to nitpick, but I feel like being precise here is important. Nginx is a project, ssh a protocol, and a VPN an overlay network, so more of a concept. All three can be run anywhere on the spectrum between quite secure and super insecure. Also, safe and secure are two different things; I guess you meant secure, so no big deal.

    • rumba@lemmy.zip · 6 hours ago

      Everything you expose is fine until somebody finds a zero day.

      Everything these days is built from a ton of publicly maintained packages. All it takes is for one of those packages to fall into the wrong hands and get a malicious update, which happens all the time.

      If you’re going to expose a web service yourself, use Anubis and fail2ban.

      Put everything that doesn’t absolutely need to be publicly reachable behind a VPN.

      Keep all of your software updated, constant vigilance.
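
      For the fail2ban part, this toy sketch shows the idea it automates: count failed logins per source IP in the auth log and flag repeat offenders for banning (the log path and threshold are just examples; use the real tool, not this):

          import re
          from collections import Counter

          THRESHOLD = 5                       # example cutoff, tune to taste
          failed = Counter()

          # Count failed SSH password attempts per source IP.
          with open("/var/log/auth.log") as log:
              for line in log:
                  match = re.search(r"Failed password for .* from (\S+)", line)
                  if match:
                      failed[match.group(1)] += 1

          for ip, count in failed.items():
              if count >= THRESHOLD:
                  print(f"{ip}: {count} failed attempts -> ban candidate")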

        • sfjvvssss@lemmy.world · 2 hours ago

          While this helps get the volume down, it just adds a layer of obscurity, and the service behind it should still be treated and maintained as if it were fully public-facing.

  • Tavi@lemmy.blahaj.zone · 9 hours ago

    Not a sysadmin, just casual IT.

    If it is open, it is going to get hit by scanners, scrapers, everything under the sun, even if it is secure. Generally, 443 for your websites via a reverse proxy with an IP whitelist + password is okay. Not special, lets you add subdomains, very convenient.

    Now, there isn’t anything special about any given port, but you still need to set up some form of access control. If it is an API, have some sort of API key in place. Implement 2FA. Try to isolate the service from the machine. Isolate the machine from bare metal. Keep the bare-metal machine isolated from your home network. Take up farming. Change the default port and add some form of access alerts/logs. Have some sort of fail2ban service in place, because you will be firehosed with scripts and bad traffic.
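
    For the “some sort of API key” part, here is a bare-bones sketch using only Python’s standard library (the key, port, and header name are placeholder choices; a real deployment would load the key from the environment and sit behind TLS):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        API_KEY = "change-me"   # placeholder; never hard-code a real key

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Reject any request that does not carry the expected key.
                if self.headers.get("X-Api-Key") != API_KEY:
                    self.send_error(401, "missing or wrong API key")
                    return
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"ok\n")

        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()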

    Maybe some of the stuff I recommend is paranoid overkill, but I don’t know enough to cut corners. Security is a hassle, a breach is a nightmare.

    • Possibly linux@lemmy.zip · 8 hours ago

      IP whitelists are not terribly secure and are quite a hassle.

      Instead, use an overlay VPN or some sort of extra security layer like mTLS or Authelia.

  • 𝕸𝖔𝖘𝖘@infosec.pub · 9 hours ago

    Imagine opening all the windows in your flat. Then leaving them open for a month. What would happen? How many insects would make their new home in your home? How many critters and cats would do the same?

    Now, each window is a port. Your flat is your network. Each critter or cat is a bad actor. Each insect is a bot or virus.

    • Possibly linux@lemmy.zip · 8 hours ago

      To expand on this a bit:

      A lot of attacks are automated, since the goal is to compromise as many hosts as possible. These hosts are then used in a botnet or sold to people on shady websites to use as proxies.

  • non_burglar@lemmy.world · 10 hours ago

    Presuming you have not limited edge port 22 to one or two IPs and that you are not translating a high external port to 22 internally, the danger is that you are allowing the entire internet to hammer away at your SSH. With this setup, you will most definitely see evidence of break-in attempts in your SSH endpoint and firewall logs.

    Zero-days for SSH do exist, so it’s just a matter of time before you’re compromised if you leave this open.

    • Possibly linux@lemmy.zip · 8 hours ago

      This is security theater

      Flaws in SSH do happen, but they are very rare. The solution to this is defense in depth, which is different from security by obscurity.

  • Sanctus@anarchist.nexus · 10 hours ago

    It just widens your attack surface for the ghost army of bots that roam the net knocking on ports; you don’t want to be someone else’s sap. I would imagine most home attacks fall into three categories: script kiddies just war driving, targeted attacks on someone specific, or just plain ol’ looking for sensitive docs for identity theft or something. It’s still the net, man. If you leave your ass hanging out, someone’s gonna bite it in a new way every time.

  • ryokimball@infosec.pub · 10 hours ago

    If you are trying to access several different services over the internet to your home network, you are better off setting up a home VPN than trying to manage multiple public-facing services. The more you publish directly to the public, the more difficult it is to keep up with everything, and it likely needlessly expands your threat exposure. Plus, you never know when a new exploit gets published against any of the services you have exposed.

    • Dagnet@lemmy.world · 10 hours ago

      Self-hosting newbie here. What if those services are Docker containers? Wouldn’t the threat be isolated from the rest of the machine?

      • oddlyqueer@lemmy.ml · 9 hours ago

        It’s an extra hurdle, but it’s far from a guaranteed barrier. There’s a whole class of exploits called container escapes (or hypervisor escapes if you’re dealing with old-school VMs) that specifically focus on escalating an attack from a compromised container into whatever machine is hosting the container.

      • non_burglar@lemmy.world · 10 hours ago

        Is your container isolated from your internal network?

        If I were to compromise your container, I’d immediately pivot to other systems on your private network.

        Why do the difficult thing of breaking out of a container when there’s a good chance I can use the credentials I got breaking into your container to access other systems on your network?

      • Technus@lemmy.zip · 10 hours ago

        No. Docker containers aren’t a full sandbox. There’s a number of exploits that can break out of a container and gain root access to the host.

        • Possibly linux@lemmy.zip · 8 hours ago

          Yes and no

          Breaking out of Docker in a real-life context would require either a massive misconfiguration or a major security vulnerability. Chances are an attacker isn’t going to have much in the way of lateral movement, but it is always good to have defense in depth.

          • Technus@lemmy.zip · 5 hours ago

            If someone’s self-hosting, I’d be willing to bet they don’t have the same hardened config or isolation that a cloud provider would.