gerdesj 4 days ago

I've run DNS servers in the past - BIND and pdns. I've now gone all in ... because ... well it started with ACME.

As the OP states, you can get a registrar to host a domain for you and then create a subdomain anywhere you fancy, and that includes at home. Do get the glue records right, and do use dig to work out what is happening.

Now, with a domain under your own control, you can use CNAME records in other zones to point at your zones, and if you have dynamic DNS support on your zones (RFC 2136) then you can support ACME, i.e. Let's Encrypt, ZeroSSL and co.
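
For example (all names here hypothetical), the static zone gets a CNAME pointing the challenge label into a zone you can update dynamically, and the ACME client writes the TXT record over there:

    ; in example.com (static, hosted at the registrar):
    _acme-challenge.www.example.com. IN CNAME _acme-challenge.acme.example.net.

    ; in acme.example.net (your RFC 2136-enabled zone), the client adds and removes:
    _acme-challenge.acme.example.net. 60 IN TXT "<validation token>"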

Sadly certbot doesn't do (or didn't, last I checked) CNAME redirects for ACME. However, acme.sh and simple-acme do, and both are absolutely rock solid. Both projects are used by a lot of people and well trodden.

acme.sh is ideal for unix gear, and if you follow this bloke's method of installation it ends up usefully centralised: https://pieterbakker.com/acme-sh-installation-guide-2025/

simple-acme is for Windows. It has loads of add-on scripts to deal with particular scenarios. Those scripts seem to be deprecated, but they work rather well. Quite a lot of magic here that an old-school Linux sysadmin is glad of.

The PowerDNS auth server supports dynamic DNS, and you can filter access by IP and TSIG key, per zone and/or globally.
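
If memory serves, the PowerDNS side looks roughly like the below (key name and network are placeholders; check the pdns docs before copying):

    # pdns.conf
    dnsupdate=yes
    allow-dnsupdate-from=192.0.2.0/24

    # per zone: generate a TSIG key and restrict updates to it
    pdnsutil generate-tsig-key acmekey hmac-sha256
    pdnsutil set-meta acme.example.net TSIG-ALLOW-DNSUPDATE acmekey
    pdnsutil set-meta acme.example.net ALLOW-DNSUPDATE-FROM 192.0.2.0/24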

Join the dots.

[EDIT: Speling, conjunction switch]

  • adiabatichottub 18 hours ago

    I'm a fan of uACME:

    https://github.com/ndilieto/uacme

    Tiny, simple, reliable. What more can you ask?

    • est 8 hours ago

      I ended up vibe-coding an ACME client for my custom TLS server.

      It's a chat server, but over curl. You can try it here:

      curl -NT. https://chat.est.im/hackernews

      (Note: IPv6 only for the moment)

    • dwedge 14 hours ago

      > don't expect it to automatically set up your webserver to use the certificates it obtains.

      This makes me so happy. acme.sh and certbot trying to do this is annoying; Caddy trying to get certs by default is annoying. I ended up on a mix of dehydrated and Apache mod_md, but I think I like the look of uACME, because dehydrated just feels clunky.

    • DaSHacka 18 hours ago

      Neat, I've used lego (https://github.com/go-acme/lego) but will certainly have to give uacme a look, love me a simple ACME client.

      acme.sh was too garish for my liking, even as a guy that likes his fair share of shell scripts. And obviously certbot is a non-starter because of snap.

      • adiabatichottub 18 hours ago

        Certbot has earned my ire on just about every occasion I've had to interact with it. It is a terrible program and I can't wait to finish replacing it everywhere.

        The new setup is using uacme and nsupdate to do DNS-01 challenges. No more fiddling with issues in the web server config for a particular virtual host, like some errant rewrite rule that prevents access to .well-known/.
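
        In case it helps anyone, a hook in that style looks roughly like this (paths and key name are mine; uacme hands the hook five arguments per its man page):

            #!/bin/sh
            # sketch of a uacme DNS-01 hook driving nsupdate
            method=$1 type=$2 ident=$3 token=$4 auth=$5
            [ "$type" = dns-01 ] || exit 1
            case "$method" in
              begin)
                printf 'update add _acme-challenge.%s. 60 TXT "%s"\nsend\n' \
                  "$ident" "$auth" | nsupdate -k /etc/uacme/tsig.key ;;
              done|failed)
                printf 'update delete _acme-challenge.%s. TXT\nsend\n' \
                  "$ident" | nsupdate -k /etc/uacme/tsig.key ;;
              *) exit 1 ;;
            esac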

        • Spivak 12 hours ago

          I mean, certbot handles the "just issue me a cert via DNS-01 and I'll do the rest" flow just fine. Massive overkill of a program for just that use-case, but it's been humming along for me for years at this point. What's the selling point of uACME?

      • catlifeonmars 10 hours ago

        Lego is solid. I use it with Route53 to do automatic renewal of LE certs for various endpoints before the certs expire.

  • rdevilla 14 hours ago

    > Sadly certbot doesn't do (or it didn't) CNAME redirects for ACME.

    Are you certain? Not at a real machine at the moment so hard for me to dig into the details but CNAMEing the challenge response to another domain is absolutely supported via DNS-01 [0] and certbot is Let's Encrypt's recommended ACME client: [1]

        Since Let’s Encrypt follows the DNS standards when
        looking up TXT records for DNS-01 validation, you can
        use CNAME records or NS records to delegate answering
        the challenge to other DNS zones. This can be used to
        delegate the _acme-challenge subdomain to a validation
        specific server or zone.
    
    ... which is a very common pattern I've seen hundreds (thousands?) of times.

    The issue you may have run into is that CNAME records are NOT allowed at the zone apex, as RFC 1033 states:

       The CNAME record is used for nicknames. [...] There must not be any other
       RRs associated with a nickname of the same class.
    
    ... of course making it impossible to enter NS, SOA, etc. records for the zone root when a CNAME exists there.

    P.S. doing literally fucking anything on mobile is like pulling teeth encased in concrete. Since this is how the vast majority of the world interfaces with computing I am totally unsurprised that people are claiming 10x speedups with LLMs.

    [0] https://letsencrypt.org/docs/challenge-types/

    [1] https://letsencrypt.org/docs/client-options/

  • ozim 16 hours ago

    I think CNAME redirections not being supported is a reasonable choice. It would make my life easier as well, but it opens all kinds of bad possibilities that bad actors would definitely use.

    • dwedge 14 hours ago

      Can you give me an example where this is a problem? If someone can create a CNAME they can create a TXT (ignoring the possibility of an API being restricted to just one).

      Without CNAME redirects I wouldn't be able to automatically renew wildcard SSL certs for client domains whose DNS has no API. Even if they do have an API, doing it this way stops me from needing to deal with two different APIs.

      • rdevilla 11 hours ago

        GP comment is just vague distilled model AI slop.

  • 9dev 4 days ago

    Seconded. Don’t use certbot; it’s an awful piece of user-hostile software, starting from snap being the only supported installation channel. Everything it does wrong, acme.sh does right.

    • tryauuum 17 hours ago

      just installed certbot yesterday on Ubuntu 24.04, from the default repos, without any snaps

      • mediumsmart 16 hours ago

        same on Debian trixie. certbot works fine for me. Zone records in BIND, generate the dnskey, cron job to re-sign it daily, and you're off to the races. no problems, no snaps.

    • locknitpicker 16 hours ago

      > starting from snap being the only supported installation channel.

      This sounds like you are complaining about Ubuntu, not the software you wish to install in Ubuntu.

      • jcgl 13 hours ago

        Unfortunately, it's more than that: the Linux installation instructions on the certbot website[0] give options for pip or snap. Distro packages are not mentioned.

        [0] https://certbot.eff.org/

        • pgporada 12 hours ago

          Make a PR or an issue on the project.

          • jcgl 12 hours ago

            I feel no need to. I'm quite certain that the certbot folks are aware of the existence of distro packages and even know how to check https://pkgs.org/download/certbot for availability. One might guess that they only want to supply instructions for upstream-managed distribution channels rather than dealing with e.g. some ancient version being shipped on Debian.

Adachi91 7 hours ago

I've been running BIND for quite a long time now, and I've been very happy with it; very few issues other than my own folly. Since I'm not on a static IP, my IP has changed 4 times in the past 15 years (once due to a router change, three times due to Comcast outages), and I didn't catch the last IP swap for over a month.

Which brings me to a rather big gripe about other resolvers not respecting TTL: 70% of https://www.whatsmydns.net/ reported it could not query the A records, while 30% were like "Yeah, here you go" from their cache.

I fixed the glue and got everything back up. I need to write an automated script that checks every day whether my IP has changed and alerts me to update my glue record at my registrar.
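
Something along these lines would probably do; the IP-echo service and public resolver here are just examples:

    #!/bin/sh
    # compare the current public IP against what the world sees for the NS host
    current=$(curl -s https://icanhazip.com)
    published=$(dig +short A ns1.example.com @9.9.9.9 | head -n1)
    if [ "$current" != "$published" ]; then
      printf 'IP changed: published=%s current=%s\n' "$published" "$current" \
        | mail -s 'update the glue record!' me@example.com
    fi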

I use a lot of mix-and-match scripts to maintain other aspects, like DNS challenges for e.g. Let's Encrypt: I'll use their hooks to update my DNS, re-sign it (DNSSEC), complete the challenge, then clean up. For my more personal domains I don't use DNSSEC, so I just skip right ahead.

I quite enjoy handling my own DNS records. BIND has been really good to me, and I love its `view "external"` and `view "internal"` scopes, so I can give the world my authoritative records and internally serve my intranet and other services like Pi-hole (which sits behind BIND).
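
For anyone who hasn't seen BIND views, the named.conf split looks roughly like this (networks and file paths made up):

    view "internal" {
        match-clients { 192.168.1.0/24; localhost; };
        zone "example.com" {
            type master;
            file "zones/internal/example.com.zone";
        };
    };

    view "external" {
        match-clients { any; };
        zone "example.com" {
            type master;
            file "zones/external/example.com.zone";
        };
    };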

stego-tech 8 hours ago

I've found that teaching DNS is an excellent gateway to learning about how the internet itself works, especially to "green" tech folks who go blank-faced when you get into protocols, IPs, etc.

Break out a piece of mail, connect the dots, and you see their eyes light up with comprehension. "Oh, so that's how my computer gets to google.com; it's just like how my postman knows where to deliver my mail!" Then a critical component is demystified, and they want to learn more.

Running a DNS server is honestly such a good activity for folks in general.

1vuio0pswjnm7 6 hours ago

You can also serve a root.zone on that DNS server, and it does not have to be a carbon copy of ICANN's root.zone. I have been doing this for over 15 years. I've tried many DNS server software projects over that time, and I always come back to djbdns.

Multiple comments in this thread refer to TLS certificates.

Why is payment to and/or permission from a third party "necessary" to encrypt data in transit over a computer network, whether it's a LAN or an internet? What does this phoney "requirement" achieve?

For example, why is it "necessary" to purchase a domain name registration from an "ICANN-approved" registrar in order to use a TLS certificate?

Is obtaining a domain name registration from an "ICANN-approved" registrar proof of identity for purposes of "authentication"? What purpose does _purchasing_ a registration serve? For example, similar to "free" Let's Encrypt certificates, domain names could also be "free".

Whatever "authentication" ICANN and its "approved" registries and registrars are doing, e.g., none, is it possible someone else could do it better using a different approach

This comment is not asking for answers to these questions; the questions are rhetorical. Of course the questions may trigger defensive replies; everyone is entitled to an opinion, and opinions may differ.

  • m3047 4 hours ago

    Yes, you can have a different root zone which includes some or all of ICANN's root servers, or none of them. However if the root zone doesn't match ICANN's then DNSSEC will fail ("fruit of the poisoned tree"). But you could sign your alternate, custom root, and issue DNSSEC keys all the way down.

    You don't need ICANN for TLS or encryption. You can create your own CA and sign your own certs. In fact, this is typically how it's done when authenticating, for example, the clients of a web server using certs (you install the cert in the browser).

    You can use your CA to sign a cert for your ICANN-registered domain and install it in the web server; there are no internet police who are gonna stop you. Web browsers will complain about this "self-signed cert", unless you install your CA's public key in your browser. (Security-wise, you probably shouldn't go around installing random people's CA certs in your browser. You need to trust them not to issue certs for e.g. google.com. On the other hand you need to trust China and Morocco not to do that already, so maybe you're willing to accept that risk.)
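
    A minimal sketch of that with openssl (names and paths made up; modern browsers also want a SAN, hence the extension):

        # one-time: create the CA
        openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
          -keyout myca.key -out myca.crt -subj "/CN=My Private CA"

        # per host: key + CSR, then sign with the CA (bash process substitution)
        openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr \
          -subj "/CN=host.example.internal"
        openssl x509 -req -in host.csr -CA myca.crt -CAkey myca.key \
          -CAcreateserial -days 825 -out host.crt \
          -extfile <(printf 'subjectAltName=DNS:host.example.internal')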

    > Is obtaining a domain name registration from an "ICANN-approved" registrar proof of identity for purposes of "authentication".

    People make the mistake of conflating an FQDN or address with identity all the time. People point at resources in domains which don't exist (this includes DNS resources), and people register those abandoned domains and then click "forgot password" and take over whatever account was tied to that email address in that domain.

    I don't know that ICANN requires any proof. There are CAs which have enhanced identity verification; this applies to the certs they issue for both servers and clients / people.

    > What purpose does _purchasing_ a registration serve.

    Makes you a member of ICANN's club. There are pseudo-TLDs which are registered in ICANN's tree where you can register a (sub)domain, without interacting with ICANN at all.

    Rhetorically speaking, of course.

defanor 17 hours ago

I prefer and use the Knot DNS server for authoritative DNS (and either knot-resolver or Unbound for caching DNS servers) myself: it is quite feature-rich, including DNSSEC, RFC 2136 support, and an easy master-slave setup. Apparently it supports database-backed configuration and zone definitions too, but I find file-based storage to be simpler.
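
From memory, a zone definition in knot.conf is pleasantly terse, something like:

    zone:
      - domain: example.com
        file: "/var/lib/knot/example.com.zone"
        dnssec-signing: on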

  • adiabatichottub 16 hours ago

    The database for configuration and zone data is strictly internal and not tied to an external relational database, like what's shown in the article.

BatteryMountain 14 hours ago

Get a mini PC with 2x LAN ports + a MediaTek WiFi 6/7 module. Install Proxmox. Make 3 VMs: OpenWrt (or router firmware of choice), Unbound, and AdGuard Home. Plug your fibre into one LAN port and the rest of your network into the other. In Proxmox, set up PCIe passthrough for one of the LAN ports and the WiFi card. Set up OpenWrt to connect to your ISP and point its DNS at your AdGuard Home server. Point your AdGuard Home server at your Unbound server as upstream. This is a good starting point if you want to get a feel for running your own router + DNS.

You don't need to use off-the-shelf garbage routers; x86/x64 routers are the best. On OpenWrt I configure a special traffic queue so that I don't get bufferbloat, so my connection is super stable and low latency. Combined with the AdGuard + Unbound DNS setup, my internet connection is amazingly fast compared to traditional routers.
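
The traffic-queue bit is (presumably) the sqm-scripts/CAKE setup; from memory, /etc/config/sqm ends up looking something like this, with the interface and rates as examples (set them a bit below your line rate):

    config queue 'wan'
        option enabled '1'
        option interface 'eth1'
        option download '900000'
        option upload '450000'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'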

Better yet, set up SSH to the Proxmox server and ask Claude Code to set it up for you; works like a charm! Claude can call ssh and dig and verify that your DNS chains work, it can test your firewall and ports (basically running pen tests against yourself...), and it can sort out almost any issue (I had an Intel WiFi card with firmware locks on broadcasting in the 5 GHz spectrum in AP mode; MediaTek doesn't have that; Claude helped try to override the firmware in the kernel, but the Intel firmware wouldn't budge). It can set up automatic nightly updates that are safe, it can help you set up recovery/backup plans (which run before updates), it can automate certain Proxmox tasks (periodic snapshotting of VMs), and best of all, it can document the entire infrastructure comprehensively each time I make changes to it.

  • ssl-3 14 hours ago

    That seems like a lot of steps that could be reduced to:

      1.  Run OpenWRT
      2.  Use it for the DNS of one's own choosing
    • BatteryMountain 9 hours ago

      Sorry had too much caffeine this morning before I typed that.

icedchai 9 hours ago

I've been running authoritative and caching DNS servers since the 90's. BIND is still my go-to because I am familiar with it.

vardalab 6 hours ago

I have run Technitium for 4 or so years now in recursive mode; it handles all my homelab needs and is faster as well. Now that it has clustering support, I have three instances in my Proxmox cluster.

kev009 15 hours ago

Unbound and NSD for me; I always run my own recursor and authority.

WaitWaitWha 16 hours ago

Running dnsmasq on an old RasPi with a USB SSD. No problems, no issues. It just quietly runs in the background.

  • teddyh 9 hours ago

    DNSMasq is a DNS resolver, not a DNS server.

    • nine_k 9 hours ago

      It's both, and more, in a way. But it's primarily a DNS tweaking tool, and does not support things like zone transfers. Which you usually don't need with a small-scale personal setup anyway.

  • henrebotha 15 hours ago

    dnsmasq on an RPi Zero 2W is the backbone of my self-hosted setup. Combined with Tailscale, it gives me access from anywhere to arbitrary domains I define myself, with full HTTPS thanks to Caddy.

    • ssl-3 13 hours ago

      At home, I put all of my network infrastructure software in one basket because that seems like the right path towards maximizing availability[1]: It provides one point of potential hardware failure instead of many.

      For me, that means doing routing, DNS, VPN, and associated stuff with one box running OpenWRT. It works. It's ridiculously stable. And rather than having a number of things that could break the network when they die, I only have 1 thing that can do so.

      That box currently happens to be a Raspberry Pi 4 that uses VLANs as Ethernet port expanders, but it is also stable AF with a [shock! horror!] USB NIC. I picked that direction years ago mostly because I have a strong affinity towards avoiding critical moving parts (like cooling fans) in infrastructure.

      But those details don't matter. Any single box running OpenWRT, OPNsense, pfSense, Debian, FreeBSD, or whatever, can behave more-or-less similarly.

      [1]: Yeah, so about that. If the real-world MTBF for a system that relies upon 1 box is 10 years, then the MTBF for a system relying on 2 boxes to both keep working is only 5 years. Less is more.
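
      (For two boxes with independent, constant failure rates, the arithmetic behind that is:

          \text{MTBF}_{\text{both}} = \frac{1}{\lambda_1 + \lambda_2} = \frac{1}{1/10 + 1/10} = 5\ \text{years}

      with each box failing at a rate of 1/10 per year.)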

fullstop 8 hours ago

I've been running tinydns for decades now. I don't even think about it anymore.

  • adiabatichottub 4 hours ago

    We did as well for about 20 years. It is a very solid program and does everything it promises. Unfortunately it lacks modern features, and development is sparse to say the least, so we ended up moving to Knot. I'd still recommend tinydns for really simple deployments, though.

dwedge 14 hours ago

I've been tempted by this because I self-host everything else, but "adding an entry to postgres instead of using the Namecheap GUI" is overkill; just use a DNS provider with an API.

For the last few days I've been migrating everything to the LuaDNS format, stored in GitHub, and then I have GitHub Actions triggering a script that converts it to octoDNS and applies it.

I could have just used either, but I like the LuaDNS format and didn't want to be stuck using them as a provider.

  • zyberzero 8 hours ago

    I self-host DNS as well, but I just use plain old BIND zone files, and it works well enough across a bunch of domains (and RFC 2136 for my dynamic IP at home) that I haven't bothered to look into database-stored records. I just need to remember the pesky serial number so that the changes get properly replicated :)
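
    (The serial lives in the SOA record; secondaries only pull a new copy when it increases. Values here are illustrative:)

        @ IN SOA ns1.example.com. hostmaster.example.com. (
                2024060101 ; serial, e.g. YYYYMMDDnn -- bump on every change
                3600       ; refresh
                900        ; retry
                1209600    ; expire
                300 )      ; negative-caching TTL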

  • megous 10 hours ago

    DNS servers are manageable via standard utilities and a standard protocol, with tools like nsupdate, if you enable it.
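
    For example, something like this (server, zone, and key file are placeholders):

        nsupdate -k /etc/tsig.key <<'EOF'
        server ns1.example.com
        zone example.com
        update delete home.example.com. A
        update add home.example.com. 300 A 203.0.113.7
        send
        EOF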

emithq 18 hours ago

One thing worth noting if you're using your own DNS for Let's Encrypt DNS-01 challenges: make sure your authoritative server supports the RFC 2136 dynamic update protocol, or you'll end up writing custom API shims for every ACME client. PowerDNS has solid RFC 2136 support out of the box and pairs well with Certbot's --preferred-challenges dns-01 flag. BIND works too but the ACL configuration for allowing dynamic updates from specific IPs is fiddly to get right the first time.
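
For reference, the certbot side with the dns-rfc2136 plugin looks roughly like this (all values are placeholders):

    # /etc/letsencrypt/rfc2136.ini (chmod 600)
    dns_rfc2136_server = 203.0.113.1
    dns_rfc2136_name = acmekey.
    dns_rfc2136_secret = <base64 TSIG secret>
    dns_rfc2136_algorithm = HMAC-SHA256

    certbot certonly --dns-rfc2136 \
      --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
      -d example.com -d '*.example.com'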

rmoriz 14 hours ago

Still running DNS without a database and immutable. Push-based deployment.

  • deltarholamda 10 hours ago

    I do this as well. I have a decent number of domains in my control, but not hundreds, so editing a text file and updating a hidden master is a perfectly reasonable workflow.

    • rmoriz 7 hours ago

      I built a Ruby CLI (back when Thor was a thing) that does most of the stuff, but I still edit the zone itself using vim ("dns edit <zone>" launches vim). PowerDNS has a nice CLI to deal with DNSSEC and the BIND (file) backend, so I don't have to deal with it by hand.

      Of course I am the only user. But YAGNI works for me.

micw 15 hours ago

I'd like to run my personal DNS server for privacy reasons on a cheap VPS. But how can I make it available to me only? There's no auth on DNS, right?

  • m3047 3 hours ago

    Let me address a sibling comment first:

    stub resolver (client) -> OPTIONAL forwarding resolver (server) -> recursing / caching resolver (server) -> authoritative server. "Personal DNS server" doesn't disambiguate whether your objective is recursive or authoritative... or both (there is dogma about not using the same server for both auth and recursion; if you're not running your resource as a public benefit you can mostly ignore it). If it's recursive, I don't know why you'd run it in the cloud and not on-prem.

    You'll find that you can restrict clients based on IP address, and you can configure which interfaces / addresses the server listens on. The traditional auth / non-repudiation mechanism is TSIG, a shared secret. It's traditionally utilized for zone transfers, but it can be utilized for any DNS request.

    The traditional mechanism for encryption has been tunnels (VPNs), but now we have DoH (web-based DNS requests) and DoT (literally putting nginx in front of the server as a TCP connection terminator, if it's not built in). These technologies are intended to protect traffic between the client and the recursing resolver. Encryption between recursing resolvers and auths is a work in progress. DNSSEC will protect the integrity of DNS traffic between recursives and auths.

    I don't know how big your personal network is; for privacy / anonymity of the herd, you might want to forward your local recursing resolver's traffic to a cloud-based server and co-mingle it with some additional traffic. Check the server's documentation to see whether you can protect that forwarder -> recursive traffic with DoT, or you're not gaining any additional privacy; it's extra credit, and mostly voodoo if you don't know what you're doing. I don't bother; I let my on-prem recursives reach out directly to the auths. Once the DNS traffic leaves my ISP it's all going in different directions, or at least it should be, notwithstanding the pervasive centralization of what passes for the federated / distributed internet at present.
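
    The nginx-as-DoT-terminator trick is just the stream module; a sketch, with made-up cert paths:

        stream {
            server {
                listen 853 ssl;
                ssl_certificate     /etc/ssl/dns.example.net.crt;
                ssl_certificate_key /etc/ssl/dns.example.net.key;
                proxy_pass 127.0.0.1:53;   # plain TCP DNS behind the terminator
            }
        }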

  • albertgoeswoof 14 hours ago

    It can't be fully secure, but you can use a domain or path with a UUID or similar, such that no one could guess your DNS endpoint, over DoT or DoH. In theory someone might log your DNS query and then replay it against your DNS server, though.

    You could also add whitelisting on your DNS server for known IPs, or at least ranges, to limit exposure; add rate limiting / detection of patterns you wouldn't exhibit, etc.

    You could rotate your DNS endpoint address every x minutes on some known algorithm implemented client- and server-side.

    But in the end it's mostly security through obscurity, unless you go via your own tailnet or similar.

  • thesuitonym 8 hours ago

    A personal DNS server provides no privacy. Even if you were using a caching resolver, it would barely even provide any obfuscation.

    If you want DNS that is only for you, edit your hosts file.

  • teddyh 9 hours ago

    The article is about running your own DNS server, which is, and must always be, available to everyone. What you are talking about is running a DNS resolver, but that is not the topic.

  • khoirul 15 hours ago

    You could run it within a Tailscale VPN network. In fact, Headscale (a self-hosted Tailscale control server) has a very basic DNS server built in.

    • Xylakant 15 hours ago

      That assumes a device that can join a VPN. I'd like to run a DNS server for a group of kids playing Minecraft on a Switch. Since they're not in the same (W)LAN, I can't do it at the local network level. And the Switch doesn't have a VPN client.

      • ssl-3 14 hours ago

        Perhaps it seems obvious to some, but it's not obvious to me, so I need to ask: what's the advantage of a selectively-available DNS for kids playing Minecraft on a Nintendo Switch instead of regular DNS [whether self-hosted or not]?

        All I can think of is that it adds obscurity, in that it makes the address of the Minecraft server more difficult to discover or guess (and thus keeps everything a bit more private/griefing-resistant while still letting kids play the game together).

        And AXFR zone transfers are one way that DNS addresses leak. (AXFR is a feature, not a bug.)

        As a potential solution:

        You can set up DNS that resolves the magic hardcoded Minecraft server name (whatever that is) to the address of your choosing, and that has AXFR disabled. In this way, nobody will be able to discover the game server's address unless they ask that particular DNS server for the address of that particular name.

        It's not airtight (obscurity never is), but it's probably fine. It increases the size of the haystack.

        (Or... Lacking VPN, you can whitelist only the networks that the kids use to play from. But in my experience with whitelisting, the juice isn't worth the squeeze in a world of uncontrollably-dynamic IP addresses. All someone wants to do is play the game/access the server/whatever Right Now, but the WAN address has changed so that doesn't work until they get someone's attention and wait for them to make time to update the whitelist. By the time this happens, Right Now is in the past. Whitelisting generally seems antithetical towards getting things done in a casual fashion.)

        • Xylakant 13 hours ago

          Ok, why would I want to do that? Because when Microsoft bought Minecraft, they decided to split the ecosystem into the Java Edition (everyone playing on a computer) and the Bedrock Edition (consoles, tablets, ...), and cross-play is not possible on the official Realms. That rules out the option of just paying to rent a Realm for the group.

          So we're hosting our own Minecraft server and a suitable connector for cross-play, and it's easy to join on tablets, computers and so on, because there's a button that allows you to enter an address. But on the Switch, Microsoft in its wisdom decided that there'd be no "join random server" button. However, there are some official Realm servers, and they just happen to host a lobby, and the client understands some interface commands sent by the server (1). Some folks in the community devised a great hack: you host a lobby yourself that presents a list of servers of your choice. But to do that, you need to bend the DNS entries of a few select hostnames that host the "official" lobbies so that they now point to your lobby. Which means you need to run a resolver that is capable of resolving all hostnames, because you need to set it in the Switch's networking settings as the primary DNS server.
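
          In dnsmasq terms the resolver itself is only a few lines, something like the below (the real lobby hostnames vary, so these are placeholders):

              # lie about the lobby hostname, resolve everything else upstream
              address=/lobby.example.net/203.0.113.10
              server=9.9.9.9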

          Now, there are people in the community that run resolvers, and that might be one option, but I'm honestly a bit picky about who gets to see what hostnames my kids' Switch wants to resolve.

          Whitelisting networks is impossible - it's residential internet.

          The reason I'd be interested in running this behind a VPN is that I don't want to run an open resolver and become part of an amplification attack. (And sadly, the Switch 1 does not have a sufficiently modern DNS stack for me to just enable DNS cookies and be done with it. The Switch 2 supports them.)

          Sorry if this sounds complicated. It's just hacks on hacks on hacks. But it works.

          (1) Judging from the look and feel, this is actually implemented as a Minecraft game interface, and the client just treats it as a game server. It even reports the number of players hanging out in the lobby.

          • MayeulC an hour ago

            Thank you for the explanation, it was most interesting; I had no idea Bedrock could be coerced into talking to Java servers.

            Here are a few ideas:

            1. Geoblocking. Not ideal, but it can make your resolver public for fewer people.

            2. What if your DNS only answers queries for a single domain? Depending on the system, the fallback DNS server may handle other requests?

            3. You could always hand out a device that connects to the WLAN. Think a cheap ESP32. It only needs to be powered on when doing the resolution. Then you have a bit more freedom: IPv6 RAs + VPN, or try hijacking DNS queries (will not work with client isolation), or set it as the resolver (may need manual config on each LAN, impractical).

            4. IP whitelist, but ask them to visit an HTTP server from their LAN if it does not work (the Switch has a browser, I think); this will give you the IP to allow, and you can even password-protect it.

            I'd say 2 is worth a try. 4 is easy enough to implement, but not entirely frictionless.

          • ssl-3 12 hours ago

            Thanks. I suspected that this is where things were heading. I don't see a problem with using hacks-on-hacks to get a thing done with closed systems; one does what one must.

            On the DNS end, it seems the constraints are shaped like this:

              1.  Provides custom responses for arbitrary DNS requests, and resolves regular [global] DNS
              2.  Works with residential internet
              3.  Uses no open resolvers (because of amplification attacks)
              4.  Works with standalone [Internet-connected] Nintendo Switch devices
              5.  Avoids VPN (because #4 -- Switch doesn't grok VPN)
            
            With that set of rules, I think the idea is constrained completely out of existence. One or more of them need to be relaxed in order for it to get off the ground.

            The most obvious one to relax seems to be #3, open resolvers. If an open resolver is allowed, then the rest of the constraints fit just fine.

            DNS amplification can be mitigated well enough for limited-use things like this Minecraft server in various ways, like implementing per-address rate limiting and denying AXFR completely. These kinds of mitigations can be problematic with popular services, but a handful of Switch devices won't trip over them at all.

            Or: a VPN could be used. But that will require non-zero hardware for remote players (which can be cheap-ish, but not free), that hardware will need power, and the software running on that hardware will need to be configured for each WLAN it needs to work with. That path is something I wouldn't wish upon a network engineer, much less a kid with a portable game console. It's possible, but it feels like a complete non-starter.

            • Xylakant 12 hours ago

              Yep, I agree. It's essentially impossible given the constraints. I'm mostly responding to a post that says "just run it on a VPN" with an example that just can't run on a VPN.

              (3) would be easy to handle if DNS cookies were sufficiently well supported, because they solve reflection attacks and that's the most prominent one. Rate limiting also helps.

              At the moment I'm selectively running the DNS server when the kids want to play, because we're still at the supervised, pre-planned play-session stage. And I hope that by the time they plan their own sessions, they've all moved on to a Switch 2.

          • 0x00cl 12 hours ago

            You could run a DNS server and configure the server with a whitelist of allowed IPs on the network level, so connections are dropped before even reaching your DNS service.

            For example, any Red Hat-based Linux distro comes with firewalld. You could set rules that by default block all external connections and only allow your kids' and their friends' IP addresses to connect to your server (and only on port 53). So your DNS server will only receive connections from the whitelisted IPs. Of course, the downside is that if their IP changes, you'll have to troubleshoot and whitelist the new IP, and there is the tiny possibility that they might be behind CGNAT, where their IPv4 is shared with another random person who is looking to exploit DNS servers.
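
            A sketch of that with firewalld (interface and addresses are examples):

                # default-deny on the public interface, then allow one player's IP on 53/udp
                firewall-cmd --permanent --zone=drop --change-interface=eth0
                firewall-cmd --permanent --zone=drop --add-rich-rule='rule family="ipv4" source address="198.51.100.7" port port="53" protocol="udp" accept'
                firewall-cmd --reload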

            But I'd say that is a pretty good solution, no one will know you are even running a DNS service except for the whitelisted IPs.

            • Xylakant 12 hours ago

              They're all playing from home, connected to their residential internet. I don't know their IP addresses.

              • simoncion 11 hours ago

                Correct me if I misunderstand what you're trying to do:

                What you want to do is, on each LAN that has a Switch that you want to play on your specific Minecraft server, report that the IP for the hostname the Switch would ordinarily connect to is the server that you're hosting?

                If you're using OpenWRT, it looks like you can add the relevant entries to '/etc/hosts' on the system and dnsmasq will serve up that name data. [0] I'd be a little shocked (but only a little) if something similar were impossible on all non-OpenWRT consumer-grade routers.

                My Switch 1 is more than happy to use the DNS server that DHCP tells it to. I assume the Switch 2 is the same way.

                [0] <https://openwrt.org/docs/guide-user/base-system/dhcp.dnsmasq>

                • Xylakant 9 hours ago

                  I can do that for my network, but the group is multiple kids who play from their homes. I'm not going to teach all of those parents how to mess with their network. There are just way too many things that can go wrong. Also, it won't work if the kid is traveling.

          • akdev1l 6 hours ago

            From all this, what I got is that Microsoft is connecting to some random servers without using TLS and then somehow outputting that data straight into the Nintendo Switch.

      • albertgoeswoof 14 hours ago

        Why do you want to do this? What would you redirect / override on this?

  • mlhpdx 10 hours ago

    Run it over WireGuard? I have this setup: cloud-hosted private DNS protected by Noise/ChaCha20. Only my devices can use it, because only they are configured as peers.
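
    On the client side it's just the DNS= line in a wg-quick config (keys and addresses made up):

        [Interface]
        PrivateKey = <client private key>
        Address = 10.8.0.2/32
        DNS = 10.8.0.1          # the private resolver on the VPS

        [Peer]
        PublicKey = <server public key>
        Endpoint = vps.example.com:51820
        AllowedIPs = 10.8.0.0/24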

  • Loeffelmaenn 15 hours ago

    I just use a VPN like Tailscale or WireGuard. You can normally also tell clients what DNS to use when on the VPN.

  • slow_typist 15 hours ago

    SSH tunnels are a possibility.

bpbp-mango 11 hours ago

I run PowerDNS as both authoritative server and recursor at my ISP job. Great piece of software.

tzury 14 hours ago

Just remember: if you run your own DNS, and you do so for a mission-critical platform, the platform is exposed to UDP DDoS attacks that will be hard to detect, let alone prevent.

Unless, of course, you invest 5-6 figures of US dollars in equipment, at which point you can look back and ask yourself whether you were better off with Google Cloud DNS, AWS Route 53 and the like.

  • jcgl 13 hours ago

    Not that I disagree with the fact that these risks exist, but how is that different from running any other service for a mission-critical platform?

    The main thing I can think of is DNS amplification attacks, but that's more your DNS server being used as part of a DDoS attack rather than being targeted for one. Also (afaik) resolvers are more common targets for DNS amplification than authoritative.

    • tzury 11 hours ago

      Large-scale DNS vendors have multiple millions of dollars' worth of network-layer traffic-filtering equipment pipelined in front of their DNS servers (or in-house solutions, in cases such as Google).

      • jcgl 6 hours ago

        Yes, of course. But my question was why are you focusing on DNS here? Everything you've said so far is true of setting up literally any public service. Considering how cheap DNS is to serve in the common case, running an authoritative DNS server seems no less risky than running, say, a web server.

      • mlhpdx 10 hours ago

        Does that mean running your own DNS in the cloud is a better answer? This is what I do.

      • megous 10 hours ago

        Virtual private cloud services where you host the DNS server may also include DDoS protection.

        • tzury 10 hours ago

          May or may not. You open the UDP ports, you get flooded, they block all incoming traffic, and one way or another your assets are not resolvable.

          One must distinguish between application-layer (HTTP/S) attacks and UDP network-layer attacks; cloud vendors won't implicitly protect you from network-layer attacks unless you purchased such a service from them.

          • jcgl 6 hours ago

            Sure, but if the services are available, you can just purchase as-needed. If the problem never comes up, you're golden.

          • megous 9 hours ago

            So you buy it. I checked the prices at our provider, and it's something like $20+/month extra and they use some HW from https://www.riorey.com/

            A far cry from needing $1e6 worth of HW ourselves.

deepsun 18 hours ago

How do you make it do DNSSEC?

  • jcgl 13 hours ago

    Knot (as suggested by others) is good. As are BIND and PowerDNS. These are the big authoritative servers I think of, at least, and all of them allow for basically hands-free DNSSEC; just flip a switch and you'll have it. I've run DNSSEC with all three and have no complaints.

    And when using such turn-key DNSSEC support, I think there's very little risk to enabling it. While other commenters pointing out its marginal utility are correct, turn-key DNSSEC support that Just Works™ de-risks it enough for me that the relatively marginal utility just isn't a concern.

    Plus, once you've got DNSSEC enabled, you can at the very least start to enjoy stuff like SSHFP records. DANE may not have any real-world traction, but who knows what the future may bring.

  • adiabatichottub 18 hours ago

    If you don't absolutely have to, then don't.

    That is to say, if you misconfigure it, or try to turn it off, you will have an invalid domain until the TTL runs out, and it's really just not worth the headache unless you have a real use case.

    • deepsun 16 hours ago

      I consider it as basic a security measure as SSL. Otherwise any MitM can easily redirect users to a phishing resource.

      I did DNSSEC for a company website; it worked with zero maintenance for several years, on cloud-provided DNS. I'd want the same on self-hosted DNS too.

      • 0x073 15 hours ago

        "Otherwise any MitM can easily redirect users to a phishing resource."

        Yes, but with today's HTTPS/TLS usage it's almost irrelevant for normal websites.

        If bad actors can create valid TLS certs, they can solve the DNSSEC problem.

        • throw0101d 10 hours ago

          > If bad actors can create valid tls certs they can solve the dnssec problem.

          I think you have it backwards: not running DNSSEC can mean that bad actors (at least at a certain level) can MITM the DNS queries that are used to validate ACME certs.

          It is now mandated that public CAs have to verify DNSSEC before issuing a cert:

          * https://news.ycombinator.com/item?id=47392510

          So if you want to reduce the risk of someone creating a fake cert for one of your properties, you want to protect your DNS responses.

          • 0x073 7 hours ago

            If you mean MITM between the DNS server and the CA (e.g. Let's Encrypt), that's on the level of BGP hijacking (which to me means a government is involved), and means they can just use a CA (e.g. Fina CA 2025 with Cloudflare).

            I think the risk doesn't change much (except for big corps/banks).

  • petee 8 hours ago

    If you're a masochist you can do it manually; just make sure you have a good grasp of what's going on first [1].

    Simplistically, you need a DS record at your registrar, then sign your zones before publishing. You can cheat and make the KSK not expire, which saves some aggravation. I've rolled my own by hand for 10 years with no DNSSEC-related downtime.
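
    With BIND's tools, the by-hand version is roughly this (algorithm and file names illustrative):

        # generate a KSK and a ZSK
        dnssec-keygen -a ECDSAP256SHA256 -f KSK example.com
        dnssec-keygen -a ECDSAP256SHA256 example.com

        # sign the zone; smart signing (-S) picks up keys from the key directory
        dnssec-signzone -S -K . -N INCREMENT -o example.com db.example.com

        # derive the DS record to paste in at the registrar
        dnssec-dsfromkey Kexample.com.+013+*.key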

    [1] DNSSEC Operational Practices https://datatracker.ietf.org/doc/html/rfc6781

karel-3d 12 hours ago

If you run the PowerDNS auth server, consider fronting it with dnsdist (also from PowerDNS).

(disclaimer: I contribute a tiny bit to dnsdist.)
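
For the curious, a minimal dnsdist.conf along those lines looks something like this (addresses made up):

    -- listen publicly, forward to the auth server on a loopback port
    setLocal("203.0.113.1:53")
    newServer({address="127.0.0.1:5300"})
    -- cheap abuse control: drop clients that exceed 20 qps
    addAction(MaxQPSIPRule(20), DropAction())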

justsomehnguy 6 hours ago

> writing zone files with some arcane syntax that BIND 9 is apparently famous of

gawd just install webmin ffs