It is a common misconception that a DNS TXT record can only contain 255 bytes. This is not true; a TXT record can actually hold much more than that.
RFC 1035 defines the <character-string> object, which consists of a length octet (one byte) followed by that many bytes of text. There is no null terminator, so the maximum length of a single <character-string> is 255 bytes of usable text.
However, a TXT record can contain one or more <character-string> objects, which the DNS client will stitch together into one long string. For a standard DNS record, the length is ultimately limited by the RDLENGTH field of the resource record format (RFC 1035, section 4.1.3). RDLENGTH is a 16-bit unsigned integer, so the maximum payload length of a resource record is 65,535 octets (~64 KB). Accounting for the length octet in each <character-string>, that leaves a maximum of 65,279 octets of usable text in a single TXT record.
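A minimal sketch of that encoding, just for illustration (this covers only the RDATA layout, not a full DNS message):

```python
def encode_txt_rdata(data: bytes) -> bytes:
    """Encode bytes as TXT RDATA: a sequence of <character-string>s,
    each a 1-byte length octet followed by up to 255 bytes of text."""
    if not data:
        return b"\x00"  # a single zero-length <character-string>
    if len(data) > 65279:  # 255 full chunks + one 254-byte chunk fills RDLENGTH (65,535)
        raise ValueError("too long for a single TXT record")
    out = bytearray()
    for i in range(0, len(data), 255):
        chunk = data[i:i + 255]
        out.append(len(chunk))  # length octet
        out += chunk            # up to 255 bytes of text
    return bytes(out)

def decode_txt_rdata(rdata: bytes) -> bytes:
    """Stitch the <character-string>s back together, as most clients do."""
    out, i = bytearray(), 0
    while i < len(rdata):
        n = rdata[i]
        out += rdata[i + 1:i + 1 + n]
        i += 1 + n
    return bytes(out)

payload = b"x" * 1000
assert decode_txt_rdata(encode_txt_rdata(payload)) == payload
```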
This will work, but requires the DNS server to respond to TCP requests. UDP connections that are more typically used for DNS have a limit of about 1500 bytes in length.
> which the DNS client will stitch together into one long string.
Not necessarily. In practice, yes, this is what clients looking for long SPF or DKIM records actually do. But there isn’t anything guaranteeing this if you invent your own use for TXT records.
> UDP connections that are more typically used for DNS have a limit of about 1500 bytes in length.
UDP datagrams can be up to 64K using IP fragmentation, but the original DNS protocol limits UDP payload sizes to 512 bytes. Using EDNS, larger payloads can be negotiated (4096 bytes is a common limit).
DNS also has an overall message size limit of 65535 bytes, so if you try to create a max-size TXT record, the server will not be able to answer queries for it, nor transfer the zone to secondary servers.
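As a rough illustration of those limits in practice, here is a sketch using the third-party dnspython package (the query name and resolver address are just examples): advertise a larger EDNS(0) buffer over UDP and fall back to TCP if the response comes back truncated.

```python
import dns.flags
import dns.message
import dns.query

# Advertise a 4096-byte EDNS(0) UDP payload size instead of the classic 512.
q = dns.message.make_query("example.com", "TXT", use_edns=0, payload=4096)
resp = dns.query.udp(q, "8.8.8.8", timeout=5)

# If the answer still didn't fit, the server sets the TC bit; retry over TCP.
if resp.flags & dns.flags.TC:
    resp = dns.query.tcp(q, "8.8.8.8", timeout=5)

for rrset in resp.answer:
    print(rrset)
```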
Yes, if you spend time in SMTP land you will see lots of domains with multiple TXT records for things like SPF and DKIM. Also, services like Barracuda, G Suite, O365, etc. use a TXT record to verify you control the domain. Let's Encrypt uses that mechanism as well.
The dns_cloudflare plugin [1] also automates the cert generation and follow-on renewals via cron or nginx proxy manager. It uses a Cloudflare API key, modifies the TXT record, issues/renews the cert, and deletes the record.
Waaay back in 2012, I wrote a TCP tunnel-over-DNS-TXT as a mental exercise for myself, as DNS traffic was (is still?) allowed through the Captive Portals that corp/academia/soulless-hospitality throw up on their Wi-Fi networks - I had it preconfigured as a tunnel for RDP traffic into my home box (WS2003) - the bandwidth was appalling (only slightly better than 56K) but the feeling of stickin' it to the man is unbeatable.
It's just unfortunate now that there's no way you'll squeeze a semi-usable desktop experience through dial-up-tier RDP anymore. IIRC, I had to use 16-color mode on a 640x480-sized desktop (good enough for e-mail!).
Of course, today, people just whip out their phone's tether/hotspot.
And now it's fun with WireGuard: if the only access to the internet is DNS over UDP, you can easily route all your traffic over it :)
I set it up and it works, of course. Then I tried to use it with public limited Wi-Fi (hotels, airports), but I have yet to find what I thought was more common: a connection where the only open traffic is DNS.
DNS as a password manager is kind of an own-goal, as you give the attacker the means to mount an offline attack against your encrypted passwords at the outset. They can simply retrieve the encrypted blobs over the network, and begin.
Putting that aside, as instantiated by the author, there are some problems with the encryption methodology:
1. CBC doesn't provide integrity
There's no guarantee given to you by the construction that the password you (successfully!) decrypted is the same password you encrypted. If you're symmetrically encrypting anything nowadays, you should be using some form of authenticated encryption.
2. `openssl enc`'s -pbkdf2 flag defaults to 10k iterations, which is off by more than an order of magnitude from today's recommendations for a comparable use case (protecting password vaults).
3. using PBKDF2 in the first place.
There are more modern KDFs nowadays (scrypt, Argon2) that are resistant to more kinds of attacks [0] and should probably be used instead; a rough sketch combining one with an AEAD follows after point 4.
4. using `openssl enc` in the first place.
OpenSSL has always cautioned against using the `enc` command for anything serious, so I feel obligated to mention it.
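For what it's worth, here is a minimal sketch of the kind of construction points 1-3 argue for (scrypt for key derivation plus an authenticated cipher), using the Python cryptography package. The parameters are illustrative, not a vetted recommendation:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def encrypt(passphrase: bytes, plaintext: bytes) -> bytes:
    salt = os.urandom(16)
    # Memory-hard KDF instead of PBKDF2; n/r/p values here are only examples.
    key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase)
    nonce = os.urandom(12)
    # AES-GCM authenticates the ciphertext, so tampering is detected on decrypt.
    ct = AESGCM(key).encrypt(nonce, plaintext, None)
    return salt + nonce + ct

def decrypt(passphrase: bytes, blob: bytes) -> bytes:
    salt, nonce, ct = blob[:16], blob[16:28], blob[28:]
    key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase)
    return AESGCM(key).decrypt(nonce, ct, None)  # raises InvalidTag if modified

blob = encrypt(b"correct horse battery staple", b"hunter2")
assert decrypt(b"correct horse battery staple", blob) == b"hunter2"
```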
Sorry if this seems like I'm talking out of school, but in case anyone reading was inspired to use this methodology for encrypting their own passwords, I wanted to give a proper accounting of what I think the limitations are.
> OpenSSL has always cautioned against using the `enc` command for anything serious, so I feel obligated to mention it.
Can you talk more about this? I can't find anything in the OpenSSL wiki and my searching skills haven't revealed much except about the possibility of the ciphertext being modified when using AES-256-CBC.
I had thought it used to be all over the manpages for the openssl command, but I can't seem to locate it now. It's my understanding that the openssl command line tool wasn't (and isn't) designed for serious production cryptographic use. It was developed as a way to test the functionality of the library, not as a robust way to encrypt files/data on the command line.
Is this a serious issue for this use case? If the attacker tries to tamper with the ciphertext, chances are it will cause the decrypted plaintext to be gibberish with many invalid/unprintable ASCII characters. That should alert the user that something's up.
> chances are it will cause the decrypted plaintext to be gibberish with many invalid/unprintable ascii characters.
Not necessarily. If the plaintext you're trying to modify is in the first ciphertext block (which in this scenario is likely the case), you can modify a byte in the IV (assuming the IV is stored alongside the ciphertext) to modify the corresponding byte in the first plaintext block without a trace.
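A quick demonstration of that malleability with the Python cryptography package (the key, IV, and "password" are of course made up):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
password = b"hunter2 hunter2!"  # exactly one 16-byte block, so no padding needed

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ct = enc.update(password) + enc.finalize()

# Attacker flips one byte of the stored IV. In CBC, P1 = D(C1) XOR IV, so the
# corresponding plaintext byte flips the same way, with no other corruption.
evil_iv = bytearray(iv)
evil_iv[0] ^= ord("h") ^ ord("X")

dec = Cipher(algorithms.AES(key), modes.CBC(bytes(evil_iv))).decryptor()
print(dec.update(ct) + dec.finalize())  # b"Xunter2 hunter2!"
```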
> Is this a serious issue for this use case?
In my opinion this whole use case is an issue. Why give an attacker access to your encrypted passwords?
> you can modify a byte in the IV (assuming the IV is stored alongside the ciphertext) to modify the corresponding byte in the first plaintext block without a trace.
So with this vulnerability, the attacker can... cause you to enter a wrong password. That's kind of annoying, but at the end of the day it's a DoS attack, and even though using AEAD ciphers would prevent this specific attack, it won't prevent other DoS attacks (e.g. blocking/mangling all DNS traffic).
The most fun I had with DNS was having the Great Firewall of China start blocking Walmart's MX record IP addresses for their email (back when they just did an IP block based on DNS resolution of a blocked domain).
Ooh, that's pretty clever. I wonder if this might be useful for things like managing cloud servers or docker clusters. Apart of course from its more obvious use of managing a set of "personal" machines (I'm picturing a computer lab for example).
> With only four TXT records that’s ~1KB of compressed data. Some demoscene intros can be stored in TXT records.
A somewhat useful thing that can be stored in TXT is a self-contained script with an encoded payload, e.g. to bypass a firewall/proxy. You can even bootstrap it from one record and have it read the rest of the payload from the other records.
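A rough sketch of that bootstrap idea using the third-party dnspython package (the record names, the chunk-count convention, and the base64 encoding are all assumptions, not anything standardized):

```python
import base64
import dns.resolver

def fetch_txt(name: str) -> str:
    # Join all <character-string>s of all TXT answers for this name.
    answers = dns.resolver.resolve(name, "TXT")
    return "".join(part.decode() for rdata in answers for part in rdata.strings)

# Hypothetical layout: bootstrap.<domain> holds the chunk count, and
# payload0.<domain>, payload1.<domain>, ... hold base64 chunks of the script.
def fetch_payload(domain: str) -> bytes:
    count = int(fetch_txt(f"bootstrap.{domain}"))
    b64 = "".join(fetch_txt(f"payload{i}.{domain}") for i in range(count))
    return base64.b64decode(b64)

script = fetch_payload("example.com")  # the bootstrap would then run this
```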
I used to do things like this. Then my DNS host was bought by another company (after 18 years with them) and their new system started flagging my YouTube iframe rickrolls, aimed at whois/HTTP sites that blindly copy and embed record contents, as "hacking attempts". I had to remove them or else I couldn't change my records.
Shades of the old NIS+/AUTH_DH/mech_dh scheme, which Sun used starting in 1987 for NFS security and authentication. It goes like this:
- there is a name service called 'publickey' (i.e., with a getent-style API, a 'files' backend, so /etc/publickey, and a NIS+ backend for domain-based authentication)
- each publickey(5) entry has:
  - the name ("netname") of the entity
  - a DH group identifier
  - a DH public key
  - the corresponding DH private key, encrypted with the entity's password
To log you in, the system would prompt for a username and password, look up your entry in the publickey(5) name service, and, if found, decrypt the private key with the password and confirm that it matches the public key.
To authenticate to a remote service the system would find that service's publickey(5) entry, compute the shared DH secret, and send a message with the local and remote names, a nonce, and a proof of knowledge of the shared secret.
Anyways, you could store publickey(5) in DNS, naturally, if you wanted, though that was never implemented because mech_dh simply died of disuse.
It's worth noting that the DNS is mostly public. Yes, you can make zone iteration hard, but not impossible. So if you publish secrets encrypted in low-entropy keys (like typical passwords) then those will be subject to cracking.
I do think mech_dh, modernized with ECDH and PQ key agreement, could make a lot of sense to revive. I could totally see a new scheme that's a cross between mech_dh, Kerberos, and JWT.
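A toy of what that modernized flow might look like (X25519 standing in for classic DH, an HMAC over a nonce as the proof of knowledge; the names and framing are invented for illustration, this is not how mech_dh was actually specified):

```python
import hashlib
import hmac
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each party publishes its public key in a directory: publickey(5),
# or equally well a DNS TXT record.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
directory = {
    "alice.example": alice_priv.public_key(),
    "bob.example": bob_priv.public_key(),
}

# Alice authenticates to Bob: look up Bob's key, compute the shared secret,
# and prove knowledge of it over a fresh nonce.
shared = alice_priv.exchange(directory["bob.example"])
nonce = os.urandom(16)
proof = hmac.new(shared, b"alice.example|bob.example|" + nonce, hashlib.sha256).digest()

# Bob recomputes the same secret from his private key and Alice's public key.
expected = hmac.new(
    bob_priv.exchange(directory["alice.example"]),
    b"alice.example|bob.example|" + nonce,
    hashlib.sha256,
).digest()
assert hmac.compare_digest(proof, expected)
```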
Someone should make another one of these "unlimited data storage with youtube, discord, etc" videos, just with DNS TXT records. Globally distributed, unlimited data storage for (almost) free!
Idea for a password manager: an alternative DoH server (with its own root, which solves the DNSSEC issues) with proper authentication that returns the username and password for each site as TXT records.
I was just asking https://www.perplexity.ai for some geek examples of what I can do with TXT records, and the answer I got was a summary of this HN page and a link to it.
(adding content to HN carries some big responsibilities nowadays ...)
About 25 years ago I played an adventure game someone had written entirely in DNS TXT records. You made each move with nslookup (which shows how long ago this was).
> With an API, one could programatically update TXT records.
Just run BIND yourself and you can update your records with very simple programming!
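For instance, against a zone configured to allow dynamic updates (RFC 2136), a TXT record can be set programmatically. A minimal sketch with the third-party dnspython package; the zone, record name, and server address are placeholders, and a real setup would also authenticate the update with a TSIG key:

```python
import dns.query
import dns.rcode
import dns.update

# Replace the TXT record at secret.example.com in the example.com zone.
update = dns.update.Update("example.com")
update.replace("secret", 300, "TXT", "hello-from-the-api")

response = dns.query.tcp(update, "127.0.0.1")  # your authoritative server
print(dns.rcode.to_text(response.rcode()))     # "NOERROR" on success
```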
When you start using DNS as a generalized key/value store, there are some tuning / optimizations to be aware of:
* Production grade caching / recursing servers retry aggressively. There is debouncing in this implementation.
* Tune your EDNS packet size (in your caching server) to make sure you aren't triggering retries unnecessarily. (And frags are bad and Francisco Franco is still dead.)
* Empty non-terminals are rare enough in "happy eyeballs" use that (de)optimizations in the name of things like privacy are known to happen. You should consider disabling qname minimization if that's something your caching server does.
I guess it would be a very solid way to present shasums for binaries. Nowadays software often puts them on the same webpage that links to the binary. While that's better than nothing, having them in a TXT record makes it more secure.
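A sketch of what verifying against such a record could look like (the record name and its format are made up, and the third-party dnspython package is assumed):

```python
import hashlib
import dns.resolver

def verify(path: str, txt_name: str) -> bool:
    # Hypothetical convention: the TXT record holds the hex SHA-256 of the file.
    rdata = next(iter(dns.resolver.resolve(txt_name, "TXT")))
    published = b"".join(rdata.strings).decode().strip().lower()
    with open(path, "rb") as f:
        local = hashlib.sha256(f.read()).hexdigest()
    return local == published

print(verify("myapp-1.2.3.tar.gz", "myapp-1.2.3.checksum.example.com"))
```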
Messing with DNS is so fun. I did a lightning talk on storing secrets in DNS [1] for my vpn tokens and also have a URL shortener that uses DNS as its storage mechanism [2]