I generated a 16-character (upper/lowercase) subdomain and set up a virtual host for it in Apache, and within an hour I was seeing vulnerability scans.

How are folks digging this up? What’s the strategy to avoid this?

I am serving it all with a single wildcard SSL cert, if that’s relevant.

Thanks

Edit:

  • I am using a single wildcard cert, with no subdomains attached/embedded/however those work (a way to check this is sketched below)
  • I don’t have any subdomains registered with DNS.
  • I attempted dig axfr example.com @ns1.example.com; it returned "zone transfer DENIED"
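
One way to double-check the wildcard-cert point above: inspect which names the served certificate actually exposes. A hedged sketch, with example.com standing in for the real domain:

    # Print the SAN list of whatever cert the server presents
    # (the -ext flag needs OpenSSL 1.1.1+).
    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
      | openssl x509 -noout -ext subjectAltName
    # A pure wildcard cert should show DNS:*.example.com (perhaps plus
    # DNS:example.com) and no individual subdomains.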

Edit 2: I’m left wondering: is there an Apache endpoint that returns all configured virtual hosts?

Edit 3: I’m going to go through this hardening guide and try again with a new random subdomain: https://www.tecmint.com/apache-security-tips/

  • SwissOS@sh.itjust.works · 14 hours ago

    Do you use an external DNS resolver when accessing your subdomain? I can only guess that it’s the DNS leaking it.

  • TieDyePie@lemmy.world · 1 day ago

    If you do a GET / request against the bare IP (over plain HTTP too), does it yield a redirect to your proper FQDN? It shouldn’t return anything; to stay stealthy, you likely don’t want to expose anything on direct-IP connections and should rely solely on your vhosts.
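
    A quick way to test that; a hedged sketch, with 203.0.113.10 standing in for your server’s real public IP:

        # Plain HTTP straight at the bare IP.
        curl -v http://203.0.113.10/
        # HTTPS at the IP; curl sends no SNI for an IP literal, and -k
        # skips validation since no cert name is expected to match.
        curl -vk https://203.0.113.10/
        # If either response carries a Location: header (or serves a cert)
        # naming your hidden subdomain, direct-IP requests are leaking it.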

  • toebert@piefed.social · 1 day ago

    I can’t say I know the answer but a few ideas:

    • did you access it with a browser? Maybe it snitches on you or some extension does?
    • did you try to resolve it with a public DNS server at any point (are you sure nothing forwarded the request to one)?

    You could try it again: create the domain in the config and then do absolutely nothing. Don’t try to confirm it works in any way. If you don’t see the same behaviour, you can do one of the above, then the other, and see when it kicks in. If it gets picked up without you doing anything… then pass!
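
    A minimal "do nothing" version of that test; a sketch assuming a Debian-style Apache layout and a hypothetical vhost file named canary.conf:

        # Enable the new vhost and reload, without ever visiting it.
        sudo a2ensite canary
        sudo apachectl graceful
        # Then just watch the access log and wait -- no browser, no DNS.
        sudo tail -f /var/log/apache2/access.log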

  • foggy@lemmy.world · edited · 2 days ago

    https://crt.sh/

    When a CA issues an SSL/TLS certificate, it’s required to submit it to public Certificate Transparency (CT) logs (append-only, cryptographically verifiable ledgers). This was designed to detect misissued or malicious certificates.

    Red and Blue teams alike use this resource (crt.sh) to enumerate subdomains.
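
    A hedged sketch of what that enumeration looks like, using crt.sh’s JSON output (example.com is a placeholder; assumes jq is installed). Note that a single wildcard cert should surface here only as *.example.com, which is why the OP’s edit about the cert matters:

        # List every unique name logged for subdomains of example.com.
        curl -s 'https://crt.sh/?q=%.example.com&output=json' \
          | jq -r '.[].common_name' | sort -u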

  • eleijeep@piefed.social · 2 days ago

    You need to look at the DNS server used by whatever client is resolving that name. If it’s going to an external recursive resolver instead of using your own internal DNS server then you could be leaking lookups to the wider internet.
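
    A few hedged ways to see where lookups are actually going (192.168.1.1 is a placeholder for an internal DNS server):

        dig sub.example.com                  # uses the system resolver
        dig @192.168.1.1 sub.example.com     # query the internal server directly
        # On systemd-resolved hosts, this lists the configured upstreams.
        resolvectl status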

  • stratself@lemdro.id · 2 days ago

    My guess would be NSEC zone walking, if your DNS provider supports DNSSEC. But that shouldn’t work with unregistered or wildcard domains.

    The next guess would be that during setup, someone somewhere got hold of your SNI (and/or outgoing DNS requests). Maybe your ISP/VPN service actually logs them and announces them to the world.

    I suggest that next time you try setting up without any over-the-internet traffic at all, e.g. always use curl with the --resolve flag on the same VM as Apache to check that it’s working.
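
    A minimal sketch of that --resolve approach, with the long random name as a placeholder; resolution never leaves the box:

        # Pin the hostname to localhost inside curl itself.
        curl -v --resolve AbCdEfGhIjKlMnOp.example.com:443:127.0.0.1 \
          https://AbCdEfGhIjKlMnOp.example.com/
        # curl still sends the real SNI, so the correct vhost and the
        # wildcard cert get exercised, but no DNS query is ever made.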

  • Fair Fairy@thelemmy.club · 2 days ago

    Crawlers typically crawl by IP.

    Are you sure they’re not just using the IP?

    You need to expressly configure Apache to drop the connection when the requested domain is invalid.

    I use a similar pattern and see zero crawls.
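
    A hedged sketch of that pattern: a catch-all default vhost that refuses unknown names (Debian-style paths; the snakeoil files are Debian’s self-signed placeholder cert):

        # /etc/apache2/sites-available/000-catchall.conf -- loaded first
        # (000- prefix), so it becomes the default for any request whose
        # SNI/Host header matches no other vhost.
        <VirtualHost *:443>
            ServerName catchall.invalid
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            # Apache has no nginx-style "return 444", so deny everything.
            <Location />
                Require all denied
            </Location>
        </VirtualHost>
        # Enable with: sudo a2ensite 000-catchall && sudo apachectl graceful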

      • mic_check_one_two@lemmy.dbzer0.com · 1 day ago

        It can be both server- and DNS-provider-side. For instance, Cloudflare allows you to set rules for what traffic is allowed, and you can set it to automatically drop traffic for everything except your specific subdomains. I also have mine set to ban an IP after 5 failed subdomain attempts. That alone does a lot of heavy lifting, because it ensures your server is only hit by requests that have already figured out a working subdomain.

        Personally, I see a lot of hacking attempts aimed at my main www. subdomain, looking for WordPress. Luckily, I don’t run WordPress. But the bots are 100% out there, casually scanning for WordPress vulnerabilities.

  • Fedditor385@lemmy.world · 2 days ago

    If you have a browser with search suggestions enabled, everything you type in the URL bar gets sent to a search engine like Google to generate suggestions. I wouldn’t be surprised if Google uses this data to check what it knows about the domain you entered, and, if it sees that it knows nothing, sends a bot to scan it for more information.

    More generally, any browser you use to access a domain might send what you type to some company’s backend, and voilà, you’ve leaked the name.

    • Derpgon@programming.dev · 2 days ago

      Easily verified by creating another batch of subdomains and using a browser that doesn’t do tracking, like Waterfox.

    • kumi@feddit.online · edited · 2 days ago

      What you can do is segregate networks.

      If the browser runs in, say, a VM with only access to the intranet and no internet access at all, this risk is greatly reduced.

  • oranki@sopuli.xyz · 2 days ago

    Maybe that particular subdomain is being treated as the default virtual host by Apache? Are the other subdomains receiving scans too?

    I don’t use Apache much, but NGINX sometimes surprises you with what it picks when a default isn’t explicitly defined.
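
    A quick way to check how Apache actually mapped things; this also touches the OP’s Edit 2: by default there’s no HTTP endpoint that lists vhosts (mod_info can expose one if deliberately enabled), but the command line will dump the parsed config:

        # Shows every configured vhost and which one is the default for
        # each address:port pair (apache2ctl -S or httpd -S on some distros).
        apachectl -S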

  • eli@lemmy.world · edited · 2 days ago

    I run my webservers behind a pfSense firewall with SSL offloading (using a wildcard cert) on a static IP, and use HAProxy to route subdomains to individual servers. Even though I’ve seen my fair share of scans, I only ever expose port 443 and keep things updated.

    Recently, though, someone on here mentioned routing everything over Tailscale via a VPS. I didn’t want to pay for a VPS and frankly can’t even find one that’s reasonably priced in the US (bandwidth limits, mainly), so I threw Tailscale onto my pfSense box, set up split DNS with my domain name in Tailscale’s admin panel, and then reconfigured HAProxy to listen on my Tailscale interface. Even got IPv6 working (a huge pain, due to what seems to be a bug). Oh, and set up pfBlocker.

    My current plan is to run my webservers behind Tailscale, keep my game servers public, and probably segment those servers onto a different VLAN/subnet/DMZ/whatever. And maybe have a read-only www/blog landing page on 443, with its config/admin panel accessible only via my Tailscale.

    Anyway, back on topic: I run my game servers without advertising them anywhere (wildcard cert) and use whitelisting only, yet I still see my Minecraft servers get hit constantly on port 25565.

    So there’s not much you can do except minimize exposure as much as possible.

  • yeehaw@lemmy.ca · edited · 2 days ago

    Reverse DNS? Or vuln scans just hitting IPs. Don’t need DNS for that.
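
    A hedged check for a PTR record that could leak a hostname (203.0.113.10 is a placeholder for the public IP):

        dig -x 203.0.113.10 +short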