Open DNS resolvers, from bad to worse (apnic.net)
98 points by zdw on May 13, 2022 | hide | past | favorite | 44 comments


In the past year I designed two UDP protocols (for connection measurements and a game server) and last week wrote a DNS server. In my own protocols, I always made sure that the sender has to put in more bytes until a >2^64-bit secret was echoed. Only with DNS this does not seem to be possible. At best, you can refuse a query in an equal number of bytes, but useful responses necessitate amplifying.

Every nameserver out there, from duckduckgo to hacker news, will send back larger responses because it must echo the query.

Does anyone know why this is not considered an issue? Are we just waiting for open resolvers to be eliminated and attackers to switch over to this lesser amplification factor before we start fixing it?

The only solution given the current protocol, considering reasonable compatibility, is to use rate limiting per source IP, which means that someone can use source IP spoofing to block benign sources. This problem can be mitigated with DNS cookies, but I don't know if those are universally supported enough to simply reject any clients that don't support DNS cookies yet. It also means state keeping per client (hello IPv6). If clients would just send back a slightly larger packet than the response they expect, and servers didn't have to echo the query, amplification protection would be much easier to implement.
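To make the asymmetry concrete, here's a minimal sketch (my illustration, not anything from the DNS specs) of why client-side padding kills the attack: a reflector is only useful to an attacker when the response is larger than the request, so padding the request to the expected response size caps the amplification factor at 1.

```python
def amplification_factor(request_len: int, response_len: int) -> float:
    """Bytes out per byte in, as seen by the spoofed victim."""
    return response_len / request_len

def padded_request_len(request_len: int, expected_response_len: int) -> int:
    """Pad the request so the server never sends more than it received."""
    return max(request_len, expected_response_len)

# A typical ~40-byte query answered with a 400-byte response is a 10x
# amplifier; padding the query to 400 bytes removes the incentive entirely.
q, r = 40, 400
assert amplification_factor(q, r) == 10.0
assert amplification_factor(padded_request_len(q, r), r) == 1.0
```

The sizes above are illustrative; real query/response sizes vary with the name, record type, and EDNS options.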


There's no ultimate solution, but there are a few things being done in common DNS servers that can mitigate the issue:

* Large answers like ANY queries or large DNSSEC records should either not be supported or only be supported via TCP (see, e.g., RFC 8482 for ANY).

* DNS software can implement response rate limiting: https://kb.isc.org/docs/aa-00994

This doesn't prevent all amplification, but it prevents strong amplification, i.e. you'll be less valuable for attackers.
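As an illustration of the second point, a heavily simplified per-source token bucket might look like the sketch below. Note that ISC's actual RRL is more sophisticated (it buckets by response content, not just source, and can answer with truncated TC=1 replies to push legitimate clients to TCP), so `SourceRateLimiter` and its parameters are hypothetical.

```python
import time

class SourceRateLimiter:
    """Hypothetical per-source token bucket, much simpler than real RRL."""

    def __init__(self, rate=5.0, burst=10.0):
        self.rate = rate    # tokens (responses) replenished per second
        self.burst = burst  # bucket capacity
        self.buckets = {}   # src_ip -> (tokens, last_update)

    def allow(self, src_ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(src_ip, (self.burst, now))
        # refill proportionally to elapsed time, capped at burst
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[src_ip] = (tokens - 1.0, now)
            return True
        self.buckets[src_ip] = (tokens, now)
        return False
```

A server would drop or truncate responses when `allow()` returns False; the downside discussed elsewhere in this thread is that a spoofer can deliberately exhaust a victim's bucket.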


Thanks for the reply, it seems I've already done everything currently possible in my implementation then :/


Your idea for the secret to prevent spoofing is interesting and reminiscent of the verification secret in the SCTP packet header: https://en.m.wikipedia.org/wiki/SCTP_packet_structure


It's indeed a common technique, not my original idea. SCTP dates from 2000, but good old TCP already does this with its sequence numbers (albeit only 32-bit, so while they can't be fully relied on for security, they do specifically prevent amplification).


> In my own protocols, I always made sure that the sender has to put in more bytes until a >2^64-bit secret was echoed.

I have a hard time parsing this statement. What do you mean?


Yeah, I packed a lot into there, sorry. What I meant is that the client initially has to send more data into the connection than the server returns, which means including some padding in the client hello packet. This rule is only relaxed after the client has echoed an unguessable random token, which has a similar effect to a TCP handshake (a spoofed-source client wouldn't be able to echo the value), so I know the client is legit and can safely send big answers to small requests without the risk of DoS'ing an innocent third party. It also prevents an attacker from triggering the rate limiting and thus blocking an innocent third party's access.
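A toy version of that handshake might look like the sketch below (hypothetical names, and a 128-bit token rather than the 64-bit minimum described above):

```python
import secrets

class Server:
    """Toy echo-token handshake: big replies only after source verification."""

    def __init__(self):
        self.pending = {}      # addr -> token awaiting echo
        self.verified = set()  # addrs that echoed correctly

    def on_hello(self, addr):
        token = secrets.token_bytes(16)  # unguessable by a spoofer
        self.pending[addr] = token
        return token  # small reply: no amplification yet

    def on_echo(self, addr, echoed):
        # constant-time compare; a correct echo proves the client
        # actually receives traffic at this source address
        expected = self.pending.pop(addr, None)
        if expected is not None and secrets.compare_digest(expected, echoed):
            self.verified.add(addr)
        return addr in self.verified
```

Only addresses in `verified` would be eligible for responses larger than their requests.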


Presumably this could be mitigated by the DNS request needing padding?


Pretty sucky to have to bloat a fundamental protocol of the internet, and thus, all traffic, forever, to avoid amplification attacks.


Queries are already larger than necessary, and often two are sent where one is needed (which adds UDP, IP, Ethernet, probably also WiFi header overhead) to get both v4 and v6 while the DNS protocol technically supports multiple queries in one packet.

While I don't disagree with your point in principle, it must be said that if we cared about forty extra bytes just so the most common records could be answered without amplification (a quite significant problem currently), then we should start by optimizing the existing unnecessary inefficiencies.


The inefficiency you find fault with was caused by exactly the kind of thinking you're proposing, so it doesn't make sense for it to serve as justification for that thinking being alright.

Many people regret the dual-lookup solution, as it causes nearly twice the pps load on public servers these days, all to work around issues with servers of the time. In ~2005 it was seen as easier to implement, since clients could opt in with nothing needed from the server (plus, how much longer could IPv6 deployment take, right? /s), and now in 2022 it seems impossible to un-implement, as you'd need to make sure every authoritative server and client in the world stopped relying on the behavior. This in no way implies that going forward every solution should get a free pass on being inefficient; it just means we allowed the easy, bloated answer before and it bit us in a way we can't easily fix now.


Currently I'm using NextDNS [0] and am happy with its ultra-low-latency servers.

Fast. Blocks ads. Respects privacy. It even supports anonymized EDNS Client Subnet (ECS).

Sometimes it still picks the wrong ultra-low-latency server, but overall it's good and fits my use case.

[0] https://nextdns.io/


I was a NextDNS user until recently. I now use the following setup:

- Pi-hole on a Raspberry Pi at home. It uses Cloudflare as its upstream DNS resolver and provides ad blocking for all devices connected to my home network.

- The RPi also runs Tailscale, and I set Pi-hole as the DNS resolver for my tailnet. All mobile devices are always connected to Tailscale, so they too indirectly use Pi-hole.


I find it a little confusing.

Looking at their site, it says "all devices, all networks". Do you need to set up some sort of firewall rule internally to capture UDP 53?

It talks about controlling usage of apps etc. on kids' phones, but a savvy kid will set up DNS over HTTPS with a mobileconfig profile (or the Android equivalent), the same way we all found ways to bypass filters as kids.

It seems to me like a remote Pi-hole with a curated ACL list and the same amount of setup. Am I being too cynical?


Love it too. You set it up and it just works. Basically once a year PayPal notifies me I paid them $20, and in exchange I don’t see any ad, ever, anywhere.


That's awesome! What about YouTube ads? I'm using a Pi-hole but I can't seem to cover the Google ad servers in my region (northern Europe).


If I'm not mistaken, YouTube serves ads from its first-party domains, making them hard to block from a DNS perspective. Blockers such as uBlock Origin do it by blocking elements or code, not DNS lookups.


Cool, that makes sense. I have quite a few false positives with Google Ads.


Like the other person said, these are basically unblockable by DNS and if you block them by other means it causes a lot of “collateral damage”.

So my statement was not entirely correct :-)


What is it that Google, Cloudflare, OpenDNS, etc are doing exactly that make their open DNS resolvers acceptable?


The author's proposed solution is to reduce the number of open resolvers.

In 2009, someone else identified the amplification-for-DDoS problem; he associated it with DNSSEC rather than open resolvers, and proposed a different solution.

https://cr.yp.to/talks/2009.08.11/slides.pdf See page 43

I use DNSCurve on the home network. I also disable EDNS0 when I run DNS servers. I have no problems as a result of these choices. I can remember the internet before the post-2008 campaign for DNSSEC and before EDNS0. From the end-user perspective, IMO the internet worked just fine without DNSSEC and EDNS0.

The need for DNSSEC is due to the use of shared DNS caches run by third parties, e.g., Google, Cloudflare, OpenDNS, etc. As such, another solution to amplification risks is to stop using third-party DNS caches. This obviates the need for DNSSEC.


> stop using third party DNS caches

What is the best way to go about this? AFAIU the root DNS servers will refuse requests for recursive resolutions. So I understand the need for a self-hosted caching resolver in the middle. But is it actually necessary? If I configure my iPhone DNS to use a root server IP [0], will it perform the recursive resolution locally? And if so, is there any downside, other than increased latency, to contacting the root servers directly?

I guess privacy and trust might be issues, when asking untrusted computers (any of the intermediate resolvers) to resolve an address for your client. You would certainly give the host of a domain more information when you contact it than you would by using a third party caching resolver as an intermediary. It’s basically a question of spraying small bits of activity to many resolvers, or all your activity to one resolver. Which is a smaller attack surface? Depends.

[0] https://www.iana.org/domains/root/servers


Since the cache poisoning days, I have always run a local custom root.zone. These days I use it to redirect traffic to a forward proxy that holds DNS mappings in memory, but one can put any DNS data into that zone, e.g., the A record for example.com. This reduces the number of DNS queries to resolve example.com (assuming it is not cached) to one instead of three or more. One can prevent any DNS queries from being sent over the network if one runs an authoritative DNS server serving a custom zone with the required DNS data listening on the loopback. In any event, an authoritative DNS server that functions as a root server does accept recursive queries.

For this approach, I require bulk DNS data. However, I do not need DNS data for every domain on the internet. Consequently, it is not a large amount of data. Contemporary computers usually have more than enough storage space to hold it.

When I request zone files from czds.icann.org or extract DNS data from publicly available UDP scans, two methods of obtaining bulk DNS data, I am not disclosing any information about what domains I plan to look up via DNS in the future. Obtaining bulk DNS data does not reveal as much as doing piecemeal lookups, not to mention doing them contemporaneous with HTTP requests to the IP address returned by the queries. Intermediary DNS caches will see all lookup activity, as would authoritative nameservers for a domain if I query them directly. Obtaining bulk DNS data generally does not suffer from this problem.

Even more, I am not sure I understand the privacy issue around an authoritative nameserver seeing one's requests, e.g., from a stub resolver or a local cache. If I am going to visit example.com, then the operator of example.com will see the HTTP request. Why would I care if the operator of example.com's nameserver sees the DNS requests? If the concern is that network observers, not the operator of example.com, will see the contents of DNS requests made to example.com's authoritative nameservers, then that is exactly what DNSCurve prevents. Encrypt the DNS packets.

In the end, I believe bulk DNS data served from the loopback provides the greatest privacy, not to mention the best performance.


"AFAIU the root DNS servers will refuse requests for recursive resolutions."

All the DNS data served by the "DNS root servers" is public and available for download via HTTP or FTP request.[FN1] As such, IMHO, there is rarely a reason to query those servers for the data they serve.[FN2] I already have it. For example, the IP address for the "a" com nameserver probably will not change in my lifetime. I am so sure of this that I have it memorised as 192.5.6.30.

A simple illustration of how to reduce DNS queries to one and reduce remote DNS queries to zero. Compile nsd as a root server using --enable-root-server. Add DNS data to a root.zone. Bind nsd to a localhost address, e.g., 127.8.8.8 and serve the zone. Configure the system resolver to query nameserver 127.8.8.8.

      mkdir /tmp/nsd-01
      cd /tmp/nsd-01
      echo . 86400 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2021101501 1800 900 604800 86400 > root.zone
      echo example.com 1 IN A 93.184.216.34 >> root.zone
      test ! -d /nonexistent||rm -rf /nonexistent
      groupadd nsd-01
      useradd nsd-01 -g nsd-01 -s /bin/false -b /nonexistent -d /nonexistent 
      chown nsd-01 /tmp/nsd-01 
      chgrp nsd-01 /tmp/nsd-01 
      printf 'server:\nipv4-edns-size: 0\nzonesdir: "/tmp/nsd-01"\nzone:\nname: "."\nzonefile: "root.zone"\n' > conf
      nsd -u nsd-01 -4 -a 127.8.8.8 -c conf -l log -P pid -f db
      cp /etc/resolv.conf resolv.conf.bak
      echo nameserver 127.8.8.8 > /etc/resolv.conf
      drill -oRD example.com 
      # cleanup
      mv resolv.conf.bak /etc/resolv.conf
      kill $(cat pid)
      rm -rf /tmp/nsd-01
      sleep 1
      userdel nsd-01
      
FN1. How to get the DNS data that the root servers provide, without using DNS:

   x=$(printf '\r\n');
   sed "s/$/$x/" << eof|openssl s_client -connect 192.0.32.9:443 -ign_eof
   GET /domain/root.zone HTTP/1.1
   Host: www.internic.net
   User-Agent: www
   Connection: close

   eof
ftp://192.0.32.9/domain/root.zone

FN2. Except in emergencies where I have no access to locally stored root.zone DNS data. Also because I have the "a" root server IP address memorised as 198.41.0.4, I sometimes use it as a ping address.


DNSCrypt requires responses sent over UDP to never be larger than the question, or else the client must retry with larger padding in the question, or over TCP. That makes amplification impossible.
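The client-side consequence of that rule can be sketched roughly as follows: if a UDP response comes back truncated, grow the query's padding (so the server is allowed a bigger answer) or fall back to TCP. The function name, doubling strategy, and size ceiling below are my illustration, not DNSCrypt's actual algorithm or constants.

```python
MAX_UDP_PAD = 1252  # illustrative ceiling, not a DNSCrypt constant

def next_query_size(current_size: int, response_truncated: bool):
    """Return (next query size, transport) after a response."""
    if not response_truncated:
        return current_size, "udp"          # answer fit; nothing to do
    if current_size < MAX_UDP_PAD:
        # retry with more padding, permitting a larger UDP answer
        return min(current_size * 2, MAX_UDP_PAD), "udp"
    return current_size, "tcp"              # padding exhausted; use TCP
```

Either way, the server never sends more bytes over UDP than the client put in, so there is nothing to amplify.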


Oh neat, somewhat related: I wanted to use ANY but realized that, per RFC 8482, most DNS servers will no longer respond to it meaningfully, so you have to query each record type individually.

So I wrote digany(1) for the primary use case of backing up DNS records.[1]

[1]: https://github.com/andrewmcwattersandco/digany
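For anyone wanting the gist without the tool: since ANY no longer enumerates records, a backup has to iterate over record types explicitly. A trivial sketch (type list abbreviated, and not digany's actual code):

```python
# Common record types worth backing up; the full set is much longer.
COMMON_TYPES = ["A", "AAAA", "CNAME", "MX", "NS", "SOA", "TXT", "SRV", "CAA"]

def backup_queries(name: str, types=COMMON_TYPES):
    """Build one dig invocation per record type for the given name."""
    return [f"dig +short {name} {t}" for t in types]
```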


Anyone use alternative DNS resolvers? (Apart from the usual suspects of):

    1.1.1.1
    8.8.8.8
    9.9.9.9


I used to use AdGuard's DNS (https://adguard-dns.io/). It looks like they're going to be launching an upgraded version that you can customize in the future.

Right now, I'm using NextDNS (https://nextdns.io). It lets me add ad/tracking block lists, block specific domains (especially ones that serve annoying embeds on many sites), and set up rewrites so that I can have my-fake-domain.com resolve to localhost. I can use their DNS-over-HTTPS with Chrome/Edge/Firefox, and I can set up my router to use their DNS. It's basically my own little Pi-hole without needing the Raspberry Pi (which are very hard to come by these days).

(Anxiously awaits someone on HN telling me that I shouldn't be using NextDNS)


I just responded to another comment; I don't see the purpose of it. However (like many here, I assume), I have home servers I can trivially run Pi-hole or similar software on, so maybe it's for people who don't want to run/manage it themselves?


Beyond running AdGuard, I use OpenDNS (208.67.220.220/208.67.222.222), as I can set up some block categories like phishing/malware as a layer of last resort.

I could probably switch, but I've been using them since ~2010 (back before these other ones were available) with no issues.


Still one of the usual suspects but I prefer to use 1.1.1.2 for peace of mind.


Yup.

I've used them as backup DNS resolvers on all machines.


I have run a totally open 'unbound' instance in my personal infrastructure for many years now.

Anyone in the world can query it and get correct name resolution (if they scanned for it and found it).

Convince me that this is negative ...


This DNS server likely participates in DDoS attacks. Attackers craft queries with a maximum amplification factor and spoof the victim's IP as the source address. As a result, the attacker's bandwidth gets multiplied by the amplification factor, and the attacker's IP is hidden from the victim.

Rate limiting, blocking abusive IPs, and blocking large transfers like zone transfers can help.


You could be party to a dns amplification attack: https://www.cisa.gov/uscert/ncas/alerts/TA13-088A

& a quote from the article:

>Our findings revealed that we can reduce the overall potential by 80% if we patch around 20% of the most potent amplifiers (see Figure 1).


Given that DoH/DoT use TCP anyway, maybe it's just time for DNS clients to switch to TCP by default too.

You won’t be able to disable UDP on DNS servers for a long time, if ever, but you could then more aggressively rate limit/restrict UDP queries vs TCP.


Oh, related: there was also a draft floating around for DNS over QUIC, which of course uses UDP. I wonder if it has similar issues?



QUIC isn’t stateless so won’t have the same issues. It’s effectively TCP over UDP.
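For what it's worth, QUIC also has an explicit anti-amplification rule (RFC 9000, section 8.1): until the client's address is validated, a server may send at most three times the bytes it has received from that address. A sketch of the bookkeeping (hypothetical class, not from any QUIC implementation):

```python
class AmplificationBudget:
    """Track the RFC 9000 pre-validation 3x send limit for one address."""

    LIMIT = 3  # max bytes sent per byte received before validation

    def __init__(self):
        self.received = 0
        self.validated = False

    def on_receive(self, n: int):
        self.received += n

    def may_send(self, sent_so_far: int, n: int) -> bool:
        if self.validated:
            return True  # address proven reachable; no limit applies
        return sent_so_far + n <= self.LIMIT * self.received
```

So a spoofed initial packet can at worst elicit 3x its own size, and nothing more until the handshake proves the source address is real.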


I use https://libredns.gr/ - great open source DNS service with DoT, DoH and adblocker.


[flagged]


Not sure you've understood the article, or at least I don't understand your question in relation to it. What would make a DNS server the "best" in this regard?

From my point of view, anything supporting cookies and RRL is going to be sufficient, which means any standard server out there is fine to use.


I don’t think the end user is the target audience for the article.


It's not an article for end users.


What does the article provide that the paper doesn't? Seems like it serves a marketing purpose more than a technical one.

https://link.springer.com/chapter/10.1007/978-3-030-98785-5_...



