
Where/Why has DNS gone wrong?

Slides for the IAB Workshop: Design Expectations vs Deployment Reality in Protocol Development (dedrws)
Author: Jim Reid
Last updated: 2023-02-07


For your consideration:

Where/Why has DNS gone wrong?

DNSSEC has largely failed. Although it works from a strictly technical
perspective, it has flopped in the market. There are three main reasons for
this failure:

1) The business incentives are perverse and the cost/benefit analyses don’t
make sense.

Signing incurs costs and introduces new risks. The benefits are unclear or
non-existent, except when TLD registries offer discounts for signed
delegations. And although ICANN can compel new gTLDs to use DNSSEC, that
doesn’t extend to the delegations in those gTLDs.

Validation introduces extra costs and risks too, also with poor or unclear
benefits. Almost no major ISPs do DNSSEC validation, though there are two or
three notable exceptions.

The incentives are misaligned: signers don't really benefit from signing;
validators do. And validators don't benefit (much) from validation; those who
sign do, or might.

2) There are still no killer apps or compelling use cases

DANE seemed to be the last chance to get widespread adoption and use of DNSSEC.
It hasn’t happened yet. Maybe STIR will eventually drive uptake of DANE and
bring about increased use of DNSSEC.
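For context, DANE works by publishing certificate data in DNS as TLSA records. A minimal sketch of how the record data for the common "3 1 1" form is computed, using Python's standard library; the hostname and the placeholder key bytes are hypothetical, and a real deployment would extract the DER-encoded SubjectPublicKeyInfo from the server's certificate:

```python
import hashlib

def tlsa_rdata(spki_der: bytes) -> str:
    """Build the RDATA for a 'TLSA 3 1 1' record: usage 3 (DANE-EE),
    selector 1 (SubjectPublicKeyInfo), matching type 1 (SHA-256)."""
    return f"3 1 1 {hashlib.sha256(spki_der).hexdigest()}"

# Placeholder bytes stand in for real DER-encoded SubjectPublicKeyInfo.
record = tlsa_rdata(b"placeholder-spki-bytes")
print(f"_443._tcp.mail.example.com. IN TLSA {record}")
```

Note that the record only means what it claims if the zone is signed and the client validates, which is exactly why DANE was seen as a driver for DNSSEC adoption.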

3) DNSSEC is hard

It adds complexity to routine DNS operations and administration. This is not
well understood and rarely documented properly. DNSSEC tools for signing and
troubleshooting tend to be crude, clumsy or both. Rolling KSKs is difficult, as
is managing keys. There are few protocols or tools to help, and there's no
common set of procedures/protocols across registries and registrars to make
this work smoothly. These problems have largely been ignored, so DNSSEC
remains hard to deploy and operate.

There are also some problems with the DNSSEC protocol. For example the lack of
a clear error code to indicate a validation failure is a serious shortcoming
and some corner-case replay attacks are possible. Clients can't easily tell
whether a SERVFAIL was caused by a DNSSEC validation problem or by something
else. In fact, they can't even tell the difference between a genuine and a
spoofed SERVFAIL response. That wouldn't be the case if DNSSEC offered
transaction/message integrity rather than just data integrity. Oops.
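The root of the error-signalling problem is that the DNS header only has a 4-bit RCODE field, so every failure mode collapses into the same code. A small stdlib-only sketch that parses the RCODE from a response header; the two hand-built headers are hypothetical, but the point holds on the wire: a validation failure and an ordinary backend outage are byte-for-byte indistinguishable, both RCODE 2 (SERVFAIL):

```python
import struct

SERVFAIL = 2  # RFC 1035 RCODE 2: "server failure", with no further detail

def rcode(dns_response: bytes) -> int:
    """Extract the 4-bit RCODE from the low nibble of the DNS header flags."""
    flags = struct.unpack("!H", dns_response[2:4])[0]
    return flags & 0x000F

# Two hypothetical 12-byte response headers (flags 0x8182: QR=1, RD=1,
# RA=1, RCODE=2). One resolver hit a DNSSEC validation failure, the other
# a backend outage. A client cannot tell them apart.
validation_failure = struct.pack("!HHHHHH", 0x1234, 0x8182, 1, 0, 0, 0)
backend_outage     = struct.pack("!HHHHHH", 0x5678, 0x8182, 1, 0, 0, 0)

assert rcode(validation_failure) == rcode(backend_outage) == SERVFAIL
```

And because nothing in the response is authenticated end to end, an attacker who spoofs that same header produces an equally plausible SERVFAIL.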

The last-mile issues and the lack of decent APIs are still to be resolved in
any meaningful way. [res_mkquery() and friends are 25+ years old!!] DoT and DoH
might solve the former. But these introduce their own baggage and problems.
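To illustrate the API gap: in the absence of a modern standard resolver API, applications that want anything beyond getaddrinfo() end up hand-rolling wire-format packets, much as res_mkquery() did decades ago. A minimal sketch in Python with no DNS library (the query name and transaction ID are hypothetical):

```python
import struct

def make_query(qname: str, qtype: int = 1, txid: int = 0x1A2B) -> bytes:
    """Hand-roll a DNS query packet, the job res_mkquery() did in the old
    libresolv API: a 12-byte header plus a QNAME in length-prefixed labels."""
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD=1, 1 question
    qname_wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"  # root label terminator
    question = qname_wire + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

query = make_query("www.example.com")
# Sending it over UDP port 53 with socket.sendto() is omitted here.
```

That every non-trivial DNS client has to reinvent (or vendor) this is a symptom of the missing-API problem, not a recommendation.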

With all that background, DNSSEC (and DNS) protocol development has largely
been shunned by key stakeholders such as vendors, major ISPs and registrars.
They’ve just not seen this as relevant to their business. Or they’ve been
unable to make progress and have given up.

But it gets worse. The IETF’s efforts on DNS and DNSSEC have not kept pace
with today’s problems and policy concerns - like privacy and GDPR. So ad-hoc
solutions like DoH emerge which (sort of) address some of these issues. But
they introduce even bigger concerns such as the aggregation and centralisation
of DNS/DoH service amongst a very small number of powerful, dominant players.
And once again, key stakeholder groups are rarely part of the discussion.
History might well be repeating itself with DoH - like when TLD registries
realised late in the day that DNSSEC-bis was unacceptable because of zone
enumeration and the IETF had produced a protocol that a significant chunk of
the DNS community was never going to deploy.

Of course protocols that emerge are by definition a result of those who
participate. The IETF can’t reasonably be expected to consider the views of
those who don’t show up or fail to voice their concerns. Even so, introducing
some form of impact analysis into the standardisation process might be
worthwhile. By way of example, the dprive WG is proposing to work on DoT
between resolving and authoritative DNS servers because it will offer better
privacy and security. However the WG has not consulted operators of busy
authoritative servers about whether this is a good idea, what the operational
impacts might be, or whether they would deploy it if/when an RFC gets
published.
The IETF attitude to new DNS features is sometimes inconsistent.
EDNS Client Subnet (ECS) got waved through because it documented a common
practice by some CDNs. Yet Response Policy Zones (RPZ), another widely used
DNS feature, failed
to get picked up by the IETF. Views are widely used, not just in BIND, but this
isn’t documented in an RFC. This inconsistent behaviour is harmful because it
encourages the adoption of proprietary (or undocumented) solutions that make
interoperability more difficult. It might one day lead to forum shopping.

DNS has been around for a long time. But why has the IETF never come up with a
name server management/control protocol? Why does every major DNS
implementation ship with its own management tool and protocol? Why is there no
provisioning protocol for adding/removing zones from authoritative DNS servers?
Could/should the WGs be directed to work on these problems?

Who’s going to fix these concerns? How?