Why .us fails to validate for some resolvers (and why algorithm rollovers are hard)

Written by Roland van Rijswijk in category: General, Resilience


If you perform DNSSEC validation on your resolver, you may have noticed lots of validation failures for the .us top-level domain since yesterday or early today (depending on the contents of your cache). You’re probably wondering why this happens and what you can do about it. Here’s a short explanation.

The maintainers of the .us domain appear to be working on an algorithm rollover from RSASHA1 to RSASHA256 (the algorithm used to sign resource records in the zone). To that end, they introduced a new KSK yesterday with algorithm 8 (RSASHA256).

So far so good, right? Or is it? Well, it turns out that introducing this key has an unexpected side effect. Section 2.2 of RFC 4035 states:

There MUST be an RRSIG for each RRset using at least one DNSKEY of each algorithm in the zone apex DNSKEY RRset. The apex DNSKEY RRset itself MUST be signed by each algorithm appearing in the DS RRset located at the delegating parent (if any).

Now what does this complicated statement mean?

Well, in effect, it means that all RRsets must be signed using all algorithms available in the DNSKEY RRset. Furthermore, the DNSKEY RRset itself must be signed by all the algorithms specified in the DS set in the parent zone.
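To see the problem for yourself, you can compare the algorithms in the DNSKEY RRset with those in the DS RRset at the parent; the +multi option makes dig annotate each key with its algorithm and key id:

dig us. DNSKEY +multi
dig us. DS

As long as the DNSKEY RRset contains an algorithm for which not every RRset in the zone carries an RRSIG, strict validators will take offence.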

The result is that if you want to perform an algorithm rollover, you need to be very careful. Luckily, the folks at the IETF have written down DNSSEC operational best practices that describe how to do these difficult things safely: RFC 4641bis (still in draft).

One final word: not all resolvers handle this issue equally strictly. Unbound treats zones that fail to meet this requirement as insecure, whereas BIND accepts them despite the error. So your mileage may vary…
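If your resolver does reject .us outright and you need a temporary escape hatch, Unbound lets you declare a zone insecure by hand. A minimal sketch for unbound.conf, assuming a recent Unbound; remember to remove the override once the rollover has completed:

server:
    # Temporarily treat .us as unsigned (this disables DNSSEC protection
    # for the whole TLD -- remove again after the rollover!)
    domain-insecure: "us."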


.nl signed, our clock is now ticking ;-)

Written by Roland van Rijswijk in category: General


SIDN – the registry for the .nl domain – announced today that the .nl zone has been signed successfully. We are of course very happy with this, because it means that in the near future we can submit DS records for our domains under the friends-and-fans programme that SIDN has announced will be coming soon.

For us this means that the clock is ticking; our implementation has entered its final stages and we are about ready to start testing with the production setup.


Cryptographic sanity: NSEC3 parameters

Written by Rick van Rein in category: Crypto, Security


One of the factors that delayed the adoption of DNSSEC has been the privacy of the information stored in DNS. This is a topic of debate, as DNS was always designed as a public database, but the Internet of today cannot be ruled purely from technical motivations.

The problem lies in securely denying that a DNS record exists. If such denials were not secured (meaning: signed), it would still be quite possible to mount a DoS attack against a domain by spoofing denials. Although signing solves that problem, it introduces another. All other signing in DNSSEC can be done offline, but signing a denial that includes the name being denied would have to be done online, and that could introduce an opportunity for leaks of key material. How to solve that conflict of interests?

The solution is to sign denials for holes; that is, to create offline signatures stating that no names exist between X and Z. The offline signer simply sorts all names in the zone, determines what holes exist between them, and signs a statement for each such hole. This original solution was dubbed NSEC, but it was considered an invasion of privacy because it enabled “walking” a zone, that is, hopping from hole to hole to enumerate all the names in a zone. To solve this problem, NSEC3 was created. NSEC3 does not sign holes between names, but holes between secure hashes of names. While it is still possible to “walk” a zone for the hashes of its names, it is not feasible to derive the names from which those hashes were computed.
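To get a feeling for what signers and validators actually compute here, the ldns toolset ships a small utility that applies the NSEC3 hash to a name; the salt and iteration count below are arbitrary example values (check the manual page of your version for the exact flags):

# SHA1 (algorithm 1), 5 iterations, hex salt DEAD -- all example values
ldns-nsec3-hash -a 1 -t 5 -s DEAD www.example.org.

The output is the base32-encoded hash that would appear as the NSEC3 owner name in the zone.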

We emphasize that anything that is truly private does not belong in public DNS. You should set up an internal view on your zone, define records in /etc/hosts, or perhaps define an internal-only zone, but you should not rely on this mechanism to guard your secrets. That being said, it would be good to discourage massive attacks on zones, such as spammers harvesting entries that look like, or contain, email addresses, or the harvesting of SSH server keys and addresses, or X.509 and PGP certificates.

The way NSEC3 works is by repeated hashing with a plain hash algorithm, each time incorporating a “salt” into the hash. A salt is a bit of extra data that helps to scramble the outcome of the hashing operation. A first step is to use a different salt for each zone. This salt should also be as random as possible; it should not be easy to influence the process that generates it. Furthermore, since entropy (which may be read as “randomness” in this case) can be as weak as its weakest link, it is a good idea to use a long salt. Longer than the hash outcome would be silly, though; so if the hash algorithm used is SHA1, take 160 bits’ worth of salt, and if SHA256 is used, take 256 bits. If multiple algorithms are used, match the longest one.
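A simple way to obtain such a salt from a decent entropy source, assuming OpenSSL is installed:

# 20 random bytes = 160 bits, matching the SHA1 digest length
openssl rand -hex 20

The hexadecimal output can be pasted directly into the salt field of your signer’s configuration.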

The question that remains is how often the hash should be run over a bit of data. The trick with repetition is that even weak hash algorithms become very strong when the number of iterations is taken high enough. Common hash algorithms run about 5 rounds over the data to be hashed, and doubling that already means a severe complication for any attacker. This is because the algorithms are designed to scatter the data that they are given as randomly as possible. If it takes N attempts to crack one pass of an algorithm, then it takes N*N attempts to crack two passes. In general, cracking becomes more difficult in an exponential relation to the number of passes made; 5 passes really amount to raising the cracking effort to the fifth power! From the perspective of computational load on a secure resolver, 5 hashing passes are acceptable as well; their influence on the validation process remains minimal in comparison to RSA validation.

Note that NSEC3 hashing impacts not only validating caches, but the authoritative name servers for a domain just as well. This is because an NXDOMAIN response can only be given after establishing the hash of the name being denied, which selects the NSEC3 “hole” that the authoritative name server sends to the secure resolver.

In short, given the relatively public nature of DNS, a single pass should suffice; if you feel that this article treats DNS data as more public than you can afford, you may alternatively choose two or up to five passes, and thus attain a quadratic up to fifth-power cracking effort.
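As an illustration of where these parameters end up: with BIND’s dnssec-signzone they are passed on the command line, -3 taking the hex salt and -H the iteration count. Note that the NSEC3 iterations field counts additional passes on top of the first hash, so “up to five passes” corresponds to -H 4. A sketch with example values:

SALT=$(openssl rand -hex 20)
dnssec-signzone -3 $SALT -H 4 -o example.org example.org.zone

OpenDNSSEC users would set the equivalent values in the kasp.xml signing policy instead.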


How many keys #2

Written by Roland van Rijswijk in category: Architecture, Crypto, Technical

For a paper I’m writing on state-of-the-art cryptography and applications of cryptography, I’ve drawn a picture of the complete trust chain required to validate the answer to a query for www.surfdnssec.org (which is in one of our test domains and is a CNAME pointing to this blog). It really drives home, in a single picture, how complex DNSSEC can be…
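If you want to walk the same chain on the command line instead of on paper, ldns’s drill can chase the signatures from the answer up to a trust anchor. A sketch, assuming the root zone’s key has been stored in root.key:

# -S chases the signature chain; -k supplies the trusted key to chase to
drill -S -k root.key www.surfdnssec.org A

Every DNSKEY and DS record that scrolls past corresponds to one link in the picture.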


Reloading signed zones into BIND

Written by Rick van Rein in category: Procedures, Resilience, Technical, Timing


In our signer, we use OpenDNSSEC to construct signatures and BIND as a hidden primary to reveal the outcome to the public authoritative name servers. We found a few interesting problems with this setup that we needed to work around.

As described under idempotence, we regularly upload lists of zones that need signing. These lists may vary over time, so we needed a way of telling BIND about the altered list of zones to publish. This is something that OpenDNSSEC 1.1.1 does not support — actually, varying zone lists will only be supported from version 1.2 on. In 1.1.1 there is a configurable command to reload zones into BIND though, and we used this hook to script around the elementary rndc reload for BIND.

The reason we wanted to respond to OpenDNSSEC’s notifications at the time a zone changes is that a BIND configuration that refers to not-yet-existing .signed versions of zones makes BIND unstable; after being halted, it could not be started again until all .signed zones existed, and this could have jeopardised the continuous availability of previously signed zones.

Now, when we receive the notification from OpenDNSSEC (that is, when our notification script is run), we scan over all .signed files, generate a BIND configuration entry for each of them in a generated zone list, and run rndc reload. A further modification was needed to remove zones; we did that by removing .signed files as soon as they went missing from our uploaded zone lists. All fairly straightforward scripting.

A more interesting problem occurred when we noticed that BIND would not pick up signed zones in all situations. As it turned out, two consecutive rndc reload statements might lead to the second being ignored. The cause is almost certainly that BIND uses stat() internally to see if a file has changed, by checking its last-change timestamp. A second change within the same clock second would not be noticed.

The solution was straightforward once we had found the problem: we surrounded our script with a lock and a waiting time, in such a way that a second zone upload would have to wait for 2 seconds (just to be sure) before commencing:

  1. Acquire an exclusive lock for this notification script (wait if needed)
  2. Recreate the zone list for BIND
  3. Notify bind with rndc reload
  4. Wait for 2 seconds
  5. Release the lock for this notification script
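A minimal sketch of such a wrapper, assuming the flock utility from util-linux; regen-zonelist is a hypothetical placeholder for whatever script recreates your zone list:

#!/bin/sh
# Serialise concurrent notifications and keep consecutive reloads
# at least 2 seconds apart, so BIND's timestamp check cannot miss one.
(
    flock -x 9                         # 1. acquire the exclusive lock
    /usr/local/sbin/regen-zonelist     # 2. recreate the zone list for BIND
    rndc reload                        # 3. notify BIND
    sleep 2                            # 4. wait for 2 seconds
) 9> /var/run/bind-notify.lock         # 5. lock released when subshell exits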

With this installed, we have not run into any more of these timing problems. Even if future versions of OpenDNSSEC support such facilities around the notification command, the locking scheme will still be useful where multiple sources can run the notifier script; for instance, we run it from the script that takes in the zone list as well as from OpenDNSSEC.


Cryptographic sanity: Key sizes

Written by Rick van Rein in category: Crypto, Timing

In our architecture, we consider three levels of users:

  • End users who understand DNS at a conceptual level
  • Operators who understand DNS at an operational level
  • Security officers who are mindful about the cryptographic intricacies of DNSSEC

After initial setup has been done, a security officer only needs to oversee the secure operation of the system, and keep up to date with cryptanalytic advances by monitoring the security landscape. Any organisation which is security-aware has a few people running around who can fulfill that role.

The main responsibility of the security officer is to balance key sizes against validity periods of such keys. DNSSEC operational practices include two keys to help straighten that balance:

  • Zone Signing Key (or ZSK) for signing individual records in a zone
  • Key Signing Key (or KSK) for signing the ZSKs

The idea behind this separation is that only a few signatures made with a KSK need to be validated, so it can have a higher security qualification than the ZSK. Having a longer-lived KSK means fewer rollovers of that key, and thus fewer of the complicated interactions with the parent zone. The result of a lower-grade ZSK is that it can quickly sign, and help validate, quite a few DNS records, but the disadvantage is that it cannot safely be used for long periods. This means that the ZSK must be replaced every month or so. Software like OpenDNSSEC or ZKT is designed to automate such processes, and because it all happens internally to the zone, it need not cause anxiety. The KSK is a different matter altogether; its validity is stated by the parent zone, which signs and publishes secure hashes of the KSK(s) that it has been told to trust through a secondary channel like EPP. The mere fact that other parties are involved in setting up or tearing down a KSK, making it more complicated to change, means that it is better to design that key for a higher level of security.

The first security choice to make about the keys used is their algorithm. The options are roughly DSA, RSA and Elliptic Curve algorithms. There is no established DNSSEC algorithm for Elliptic Curve signatures, although it is foreseen to be added at some point. Of the remaining options, DSA signatures take longer to verify than RSA, and the security cannot be upgraded as easily as RSA, where you can pick any key size which is a multiple of 8. So RSA is our public key algorithm of choice.

Now, how long should the ZSK and KSK be? Generally advised lengths are 1024 bits for a one-month ZSK and “longer” for the KSK. Note that “longer” need not mean double the length: the search space doubles with every few bits added to a key, and in general the cracking effort grows exponentially with key size. Longer keys, however, also waste resources on resolvers, which we prefer to keep as fast as possible. For a one-year period we would currently consider 1280 bits a good KSK size. Alternatively, we could go for 2048 bits, simply state that the key is intended for “several years to come”, and document a key rollover procedure in so much detail that we needn’t re-invent it when the time comes to roll the KSK. General guidelines for this class of decisions were initiated by Lenstra and Verheul, whose work is available as an online resource on key size estimates: it gives a clear indication of key sizes and how many years they ought to remain reliable from a given start year. These estimates are generally considered conservative and excellent.
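To make the numbers concrete, this is roughly what generating such a key pair looks like with BIND’s tools; the zone name is an example, and the sizes follow the reasoning above:

# 1024-bit ZSK, to be rolled every month or so
dnssec-keygen -a RSASHA256 -b 1024 -n ZONE example.org
# 1280-bit KSK, intended to last about a year
dnssec-keygen -a RSASHA256 -b 1280 -f KSK -n ZONE example.org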

Before resource records are signed with RSA, the data to be signed is first securely hashed. Since a weak hash could break the signature in spite of the strength of RSA, it is advisable to pick wisely. MD5 is known not to be good enough. SHA1 is also making cryptographers feel a little edgy, because advances are being made in cracking it. The job has not been completed, but it eventually will be. The currently advised algorithm for DNSSEC signatures is RSA/SHA-256. This uses the SHA-256 algorithm, which produces a 256-bit hash, so that even the easiest attack (a so-called birthday attack) would take an incredible amount of computing power — beyond the limits of what is believed practical for many years to come.

The last point where cryptography comes into play is NSEC3. We will discuss that topic separately.


Red-Hatted Trouble… IPv6 and BIND 9.7 on RHEL 5.x

Written by Roland van Rijswijk in category: Technical

We prefer to run our infrastructure on open platforms; our DNS infrastructure runs on top of Red Hat Enterprise Linux 5.x. Since deploying DNSSEC we have run into a number of problems, and to save you the trouble of running into and solving these on your own, here’s a short breakdown:

IPv6

Since we operate a modern, cutting-edge network, all our services have native IPv6 support. Security is of equal importance to us, so we usually operate a two-tiered model with an ACL both on the switch and in the host-based firewall (so if there is an issue with one of the two, the other will hopefully still function). We did the same for IPv6, which means that we have an ACL on the switches behind which the DNS servers sit, and we use ip6tables on the servers themselves.

Unfortunately, this caused us some major headaches. First of all, stateful IPv6 filtering is broken in Linux kernels ≤ 2.6.20, and Red Hat uses kernel 2.6.18 for Enterprise Linux 5.x. This means that it is not possible to create stateful firewall rules (for TCP connections).

It is possible to work around this issue with some creative rule-writing skills. Unfortunately, this triggered problem number 2: enabling ip6tables has disastrous effects on the MTU for IPv6. We only noticed this because we suddenly saw problems on our resolvers (which perform DNSSEC validation): validations started failing because the resolver was unable to retrieve DNSKEY sets. When we traced back in our administration what had changed on the resolver, we noticed that the problems coincided with the enabling of ip6tables. Further analysis of packet traces showed the cause: first of all, the MTU for IPv6 decreased from around 4K to around 1300 bytes (close to the minimum IPv6 MTU). This meant that not all DNSKEY answers would fit into a single packet. No problem, you would say, the packets will be fragmented. Well, as it turns out: yes and no. Packets are indeed fragmented by the sender after path MTU discovery, but our receiving host failed to reassemble the fragments.

The only way to prevent these problems is to not use ip6tables. This is also our advice to anyone running Red Hat Enterprise Linux 5.x, at least if you are using the server for DNS operations: don’t use ip6tables; instead, rely on the firewalling capabilities of your switches and other network infrastructure.

Update: from a number of e-mail exchanges with readers of this blog, I have learned that everybody seems to see different results. Some have no problems at all, while others report that enabling ip6tables makes checks like the DNS-OARC Reply-Size Tester fail completely. So your mileage may vary, but expect problems if you use ip6tables on your DNSSEC server (resolver or authoritative).
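The Reply-Size Tester mentioned above is also a convenient way to check your own setup: query it through the resolver under test and it reports the largest reply that actually reached you:

dig @localhost rs.dns-oarc.net txt +short

On a healthy path you should see a reported limit close to your EDNS buffer size, well above the roughly 1300 bytes we observed with ip6tables enabled.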

Packaged BIND 9.3

A second problem we ran into more recently concerns the version of BIND that comes packaged with Red Hat Enterprise Linux 5.x: a 9.3 variant of BIND with specific Red Hat patches applied. For an ordinary DNS server this suffices, but if you want to serve signed zones you run into the problem that it does not support NSEC3.

The only solution is to take a Fedora package of a newer version and rebuild it. We took the latest BIND 9.7 release, which we found using rpmfind.net. This package requires some extra patching; here’s how to do it.

1. Download the source RPM

2. Don’t attempt to install it directly. Unfortunately, that will fail with the following error:

# rpm -ivh bind-9.7.1-2.P2.fc13.src.rpm
...
error: unpacking of archive failed on file /usr/src/redhat/SOURCES/Copyright.caching-nameserver;4c57cc0d: cpio: MD5 sum mismatch

3. Instead, unpack it in a separate directory with rpm2cpio and cpio:

# cat bind-9.7.1-2.P2.fc13.src.rpm | rpm2cpio | cpio -i
15269 blocks

4. You will end up with a directory that contains all the files in the cpio archive. Now use the following commands to manually ‘install’ the source RPM:

# mv bind.spec /usr/src/redhat/SPECS/bind-9.7.1-2.P2.spec
# mv * /usr/src/redhat/SOURCES

5. Now you need to add a patch to the mix. The autoconf packages that come with RHEL 5.x are too old to support all the macros in the BIND 9.7 configuration scripts. This is easily resolved with the following patch:

diff -u unpatched/configure.in patched/configure.in
--- unpatched/configure.in 2010-07-05 14:02:20.000000000 +0200
+++ patched/configure.in 2010-07-05 14:03:48.000000000 +0200
@@ -282,7 +282,8 @@
AC_C_INLINE
AC_C_VOLATILE
AC_CHECK_FUNC(sysctlbyname, AC_DEFINE(HAVE_SYSCTLBYNAME))
-AC_C_FLEXIBLE_ARRAY_MEMBER
+# RvR: this breaks things on RHEL5
+#AC_C_FLEXIBLE_ARRAY_MEMBER

#
# Older versions of HP/UX don't define seteuid() and setegid()

6. To add the patch, copy it to the /usr/src/redhat/SOURCES directory (we’ll assume you call it “bind97-ac.patch”) and edit the bind-9.7.1-2.P2.spec file in /usr/src/redhat/SPECS.

7. In the spec file, add the following line to the list of patches:

Patch300: bind97-ac.patch

8. Then add the following line in the %prep section:

%patch300 -p1 -b .97ac

9. You’re done. You can now build the RPMs (the package builds multiple RPMs) with the following commands:

# cd /usr/src/redhat/SPECS
# rpmbuild -bb ./bind-9.7.1-2.P2.spec

Once you have the RPMs, you can install the relevant ones just like you would install the regular package from Red Hat.


Cryptographic sanity: How many keys?

Written by Rick van Rein in category: Crypto, Resilience, Security


In our architecture, we opt for Hardware Security Modules (HSMs) as secure key stores. This helps us with high availability of key material, and thus of our signed domains, but it also imposes some limitations: an HSM can generally store only a limited number of keys. Had we opted for a smart card, this would have been an even worse problem.

The number of keys supported by an HSM ranges from hundreds to thousands, depending on the license accompanying the device. While this may suffice for many small-scale needs, it is not automatically sufficient for a registrar like SURFnet. Smart cards, which store only a few keys (up to ten or so), are easily outgrown by anyone. It would be especially troublesome if the popularity of DNSSEC could outgrow the capacity of the chosen key store.

An extreme option would be to use a single key for all domains. A few generations are likely to co-exist during rollover procedures, but other than that, all domains would gradually move from commonly shared key A to commonly shared key B, then all would roll to the same key C, and so on. This requires so few keys that a smart card may have sufficient resources to work that way. But this approach may cause extra concerns if private keys have to be shared during zone transfers; it might also make it easier for one domain owner to falsify the contents of another party’s zones. As a general rule, it is better to avoid any need to depend on proper behaviour among clients. Finally, separating clients may improve the possibilities for legal agreements about key ownership and responsibilities.

The other extreme would be to assign an individual key for each zone. This is the approach that can easily outgrow the potential of an HSM, let alone a smart card, because domains are cheap and popular.

The middle ground on which we settled is to represent all zones of one SURFnet-connected institution with a single key. So all domains of the same university are signed with the same key, but zones from different universities are signed with different keys.

Given our choice of OpenDNSSEC, this translates into a so-called signing policy per customer, with the setting to share keys enabled within each such policy. Moreover, it is advisable to save space by not storing public keys in the HSM, which will be an option in OpenDNSSEC 1.2 and beyond. This is useful for a number of reasons:

  • The HSM limits the number of objects, not keys. Removing public keys doubles capacity.
  • Private key objects are usually embellished with public key data.
  • Public key data is public, and present in DNS, KASP database, and who knows where else.

If we were really tight on key storage space, we could consider not storing the short-lived ZSK in the HSM, but since at least the private part of the ZSK is still a true security object, we consider that too drastic to be attractive. Sharing keys and dropping public keys have no security implications, and are therefore much better-suited optimisations of HSM storage.
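For reference, the knob that enables key sharing in OpenDNSSEC lives in the kasp.xml signing policy. A heavily abbreviated sketch, with one policy per institution; the policy name is an example:

<Policy name="example-university">
  ...
  <Keys>
    ...
    <ShareKeys/>  <!-- reuse the same keys for all zones under this policy -->
  </Keys>
</Policy>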


HSM backup considerations

Written by Rick van Rein in category: Architecture, Resilience, Technical, Timing

When you start to support DNSSEC, you are suddenly supposed to manage the keys used to sign your domains. This is a typical task for a security officer. Typical concerns are concealing the private keys from prying eyes, and avoiding the loss of keys for as long as the outside world needs them to trust your domain.

The market offers quite a range of technical solutions to manage keys securely, as this is a general cryptographic concern; the most common solutions are:

  • You can store keys on disk on a physically secure machine, possibly with password-based encryption
  • You can store keys on a cryptographic smart card, which is designed to conceal private keys
  • You can store keys in a Hardware Security Module (or HSM), which is a protected device designed to guard secret keys

These solutions vary in price and performance, as well as in the level of security attained. Since SURFnet is not just responsible for its own keys but also for those of its connected institutions, and because DNSSEC key management has a direct effect on domain uptime, we have chosen to work with a full-blown HSM. Or, more accurately, a pair of HSMs that act as one virtual HSM device in high-availability mode; if one HSM fails, we can replace it while the other takes over all duties.

Cryptographic hardware (as well as software simulations such as the SoftHSM that is developed alongside OpenDNSSEC) is usually accessed through the industry-standard PKCS #11 API; in the case of a redundant HSM solution, all the high-availability issues are best resolved beneath that API, so we never get to see the replication mechanisms, or even the failure of a single HSM. In a picture:

High-Availability pair of HSMs accessible as one PKCS #11 instance

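To make the abstraction tangible: anything that speaks PKCS #11 can address the pair as if it were a single token. For instance, listing the private keys with OpenSC’s pkcs11-tool; the module path is vendor-specific and shown here as a placeholder:

pkcs11-tool --module /usr/lib/vendor-pkcs11.so --login --list-objects --type privkey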

The hidden high-availability facilities mean that we can follow the HSM manufacturer’s instructions for any HSM-related emergency procedures, which saves us a lot of work.

We have opted for one more extension: a backup made on one of the HSMs. The instant replication between the HSMs mainly covers hardware failure; backups have the added value of supporting recovery from operational failures. In the normal situation the HSMs store the same values, so making backups in both locations hardly adds to data safety. However, if keys are backed up before they are first published, it is always possible to recover the vital material that makes DNSSEC tick. This can be a great asset in protecting the secure chain that DNSSEC builds. The complete picture now becomes:

One of the identical pair of HSMs will be backed up regularly



HOWTO turn BIND into a Validating Resolver

Written by Rick van Rein in category: Procedures, Security, Technical, Users

This instruction explains how to set up DNSSEC validation with the BIND resolver for DNS. A companion article on Unbound also exists. Note that Unbound was written with security in mind from the ground up, and carries less history than BIND.

Install. We used BIND 9.7.1-P2 on Debian Linux. Variations should work; there is even a prebuilt binary for Windows. Aside from the general practice of always running the latest BIND for security reasons, it is specifically good to use 9.7 and beyond, because it can keep up to date with root zone keys as they roll over.

The best option on Linux is to build from source code (although pre-built packages are available for several distributions, and recent versions of BIND are also included in the ports tree of several BSDs). The source can be obtained from https://www.isc.org/software/bind/.

The build is straightforward enough; by default it installs everything under /usr/local, which we will assume here, because you may not want to override any existing setup in /etc and /usr. So you would do:

./configure
make
make install

Configure trust. First, find the trust anchor for the root zone and verify that it is reliable. To obtain the root zone’s DNSKEY records, simply do:

dig . dnskey
...
;; ANSWER SECTION:

.			71410	IN	DNSKEY	256 3 8 AwEAAb...
                                                        QjHQ3F...
                                                        DNFv34...
                                                        8Icy19hR
.			71410	IN	DNSKEY	257 3 8 AwEAAagA...
                                                        FVQUTf6v...
                                                        bfDaUeVP...
                                                        X6RS6CXp...
                                                        W5hOA2hz...
                                                        Qageu+ip...
                                                        QxA+Uk1ihz0=

You need the one with flags value 257 behind the DNSKEY record type (the second key in the example above). That value indicates that this is a Secure Entry Point for the zone or, as we usually call it, the Key Signing Key. Keys with value 256 are subordinates, known as Zone Signing Keys.

Save the key definition to a temporary file and run dnssec-dsfromkey on it to hash it into a DS record:

/usr/local/sbin/dnssec-dsfromkey -2 -f /path/to/keyfile .
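At the time of writing, this should produce the following DS record, which you can cross-check against the trust anchor that IANA publishes at https://data.iana.org/root-anchors/:

. IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5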

Use the outcome of this command to verify the reliability of the selected DNSKEY. If it is reliable, edit the config file for BIND in /usr/local/etc/named.conf. Aside from your usual configuration details for a resolving cache, you will have to create a section and fill it with the DNSKEY fields, like this:

managed-keys {
        "." initial-key 257 3 8 "AwEAAagAIKlVZrp...";
};

Also set the following in the options configuration section:

dnssec-enable yes;
dnssec-validation yes;

You probably want to ensure that the older DLV setup option dnssec-lookaside is disabled, unless you have decided to mix this alternate channel with the trust paths for the root zone. Also check that any other trusted-keys are gone, as should be the case in a pristine setup of BIND. If you have been relying on ITAR in the past, this is the time to clean up that temporary trusted key.

Run. Now fire up BIND:

/usr/local/sbin/named -c /usr/local/etc/named.conf

If no errors show up in syslog and the daemon starts listening on port 53, it should perform as usual, except that it now actively seeks DNSSEC approval on domains that carry a chain of signatures from the root zone down.

The first sign that this is working is that BIND creates a file named managed-keys.bind holding the Key Signing Key currently in use by the root zone.

A successful reply includes the Authenticated Data (AD) flag, which serves as an assurance to stub resolvers that do not validate for themselves, in this case human eyeballs. A small session showing this flag would look like:

janneke$ dig @localhost +dnssec br ds
...
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 1
...

Why query for DS records, you wonder? As at any parent/child transition in DNS, the TLD names in the root zone are present in both the parent and the child name servers. The DS record is the only record that is normally present only in the parent, so this answer is certain to come from the parent; that is the root zone, because we asked for a TLD name. Further down the tree, things start to depend on more complex constructions. More on that later!
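A complementary test is a query for a domain whose signatures are deliberately broken; a validating resolver must refuse to pass on the answer. At the time of writing, dnssec-failed.org serves such an intentionally mis-signed test zone, so the following query should return status SERVFAIL rather than an address, whereas a non-validating resolver would happily answer:

janneke$ dig @localhost www.dnssec-failed.org a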

And that’s all; you’re done. The only thing you may want to ensure is that new keys are pulled in when the root zone rolls over its keys, specifically its Key Signing Key. This should happen automatically with the setup given here, but it’s probably better to be safe than sorry.
