More Than 70% of DNS Root Queries Are Junk

In short

  • Research shows that 72% of all incoming traffic to a DNS root server is unwanted.
  • Large Internet Service Providers, legacy Internet of Things devices, and botnets are the main culprits.
  • To improve the efficiency and resilience of the DNS, a coordinated, multi-layer effort involving resolvers, root operators, software developers, and standards bodies is required to clean up these queries.

Imagine if seven out of every ten emails in your inbox were spam. Now picture that happening every second, across every corner of the Internet, not just in your inbox, but in the very backbone of how the Internet works. That’s the reality facing the Internet’s Domain Name System (DNS) root servers, where the overwhelming majority of incoming traffic is “junk.”

Recent research shows that over 72% of queries to root DNS servers are unwanted, most of them asking about domains that don’t even exist. While this traffic might sound like harmless noise, it’s actually a serious challenge for the stability, security, and performance of the Internet.

This blog post unpacks what this junk traffic is, why it matters, and what can be done to reduce its impact.

What Are Root Servers and Why Do They Matter?

The DNS is the Internet’s phonebook. It translates human-friendly names (like google.com) into machine-readable IP addresses (like 142.251.46.174). At the very top of this system sit the root servers: a globally distributed set of 13 logical servers that serve the DNS “root zone” and direct queries to the name servers for top-level domains (TLDs), such as .com, .org, or country codes like .uk.

Every time you type a web address, there’s a chance your query will interact, directly or indirectly, with the root servers. They are critical to the Internet’s smooth functioning, but they’re also under constant load. Unfortunately, much of that load comes from queries that serve no useful purpose.
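
To make this concrete, here is a minimal sketch, assuming the dnspython library (pip install dnspython), of the kind of query a resolver sends to a root server. The root does not answer for names inside .com; it returns a referral pointing at the .com TLD name servers.

```python
import dns.message
import dns.query

ROOT_SERVER = "198.41.0.4"  # a.root-servers.net

# Ask a root server which name servers handle the .com TLD.
query = dns.message.make_query("com.", "NS")
response = dns.query.udp(query, ROOT_SERVER, timeout=5)

# Depending on the server, the referral NS records appear in the
# answer or the authority section of the response.
for section in (response.answer, response.authority):
    for rrset in section:
        print(rrset)
```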

What Is the Source of Unwanted DNS Traffic?

Our analysis of traffic to the B-Root server (b.root-servers.net) revealed that more than 72% of all incoming queries were unwanted. Two major culprits stood out:

  1. Repetitive Queries
    These are frequent requests for the same TLDs in short intervals, usually caused by misconfigured systems or overly aggressive retry mechanisms.
  2. Queries for Non-Existent TLDs (NXDOMAIN)
    These dominate the unwanted traffic. They often come from:
    • Malware generating random domain names.
    • Typos and misconfigured software.
    • Legacy IoT devices with outdated DNS settings.
    • Malicious scanning or reconnaissance.

Since these TLDs don’t exist, the root servers can’t resolve them. Yet they must still process each query and send a response, wasting precious computing power and bandwidth.
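
Because the set of delegated TLDs is small and public, it is cheap to check whether a query could ever succeed. Below is a minimal Python sketch of that idea; it assumes a downloaded copy of IANA’s TLD list (https://data.iana.org/TLD/tlds-alpha-by-domain.txt), and the file name and function names are illustrative.

```python
def load_valid_tlds(path: str = "tlds-alpha-by-domain.txt") -> set[str]:
    """Load IANA's published TLD list into a lowercase set."""
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}

def is_junk_query(qname: str, valid_tlds: set[str]) -> bool:
    """Flag a query whose rightmost label is not a delegated TLD."""
    tld = qname.rstrip(".").rsplit(".", 1)[-1].lower()
    return tld not in valid_tlds

valid = load_valid_tlds()
print(is_junk_query("example.com.", valid))    # False: .com is delegated
print(is_junk_query("printer.local.", valid))  # True: .local is not
```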

Why Should Anyone Care?

If most of this junk traffic doesn’t affect whether your favorite website loads, why does it matter? The answer lies in the resilience and performance of the Internet.

  1. Performance degradation
    Junk queries consume resources, which can add delays during peak loads or attacks. Even small amounts of added latency ripple across millions of users.
  2. Reduced resilience
    Root servers have finite capacity. The more bandwidth and processing wasted on junk, the less headroom remains for legitimate queries, especially in times of stress, such as cyberattacks or traffic surges.
  3. Higher costs and energy use
    Billions of pointless queries mean billions of wasted CPU cycles. This translates into higher operational costs and greater electricity use, raising sustainability concerns.
  4. Security risks
    Unwanted traffic can mask or foreshadow cyber threats. Sudden spikes in certain patterns, for example, may indicate new malware campaigns or botnet activity.

Who’s Sending All This Junk?

Pinpointing every source is difficult, but the data points to several recurring offenders:

  • Large Internet Service Providers with misconfigured routers or customer devices.
  • Legacy Internet of Things devices still running outdated firmware.
  • Botnets that generate huge numbers of fake domain requests.

When mapped to network-level data (Autonomous Systems, or ASNs), some providers stand out as responsible for disproportionately large volumes of NXDOMAIN traffic. That creates opportunities for targeted outreach and remediation.
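
In practice, that mapping can be done with an offline IP-to-ASN database. Here is a rough sketch assuming the pyasn library and a prebuilt database file (the file name is a placeholder):

```python
from collections import Counter
import pyasn

asndb = pyasn.pyasn("ipasn_db.dat")  # placeholder path to a prebuilt DB

def top_nxdomain_asns(client_ips, n=10):
    """Count NXDOMAIN-generating client IPs per origin AS."""
    counts = Counter()
    for ip in client_ips:
        asn, _prefix = asndb.lookup(ip)
        if asn is not None:
            counts[asn] += 1
    return counts.most_common(n)
```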

From Junk to Insight

Ironically, while unwanted traffic is a burden, it also provides valuable insight. By studying patterns in junk queries, researchers can detect new threats, identify misconfigured systems in the wild, and guide improvements to DNS protocols.

In this way, junk traffic isn’t just noise. It’s also a diagnostic tool for the Internet’s health. But for that to work, we need ongoing monitoring, collaboration, and proactive measures to reduce the load on root servers.

Time to Clean Up the DNS

Cleaning up the DNS is not a job for root servers alone. It requires a coordinated, multi-layer effort involving resolvers, root operators, software developers, and standards bodies.

Here are some concrete steps:

For DNS Resolvers

  • Validate TLDs locally: Use up-to-date lists of valid TLDs so resolvers can block invalid queries before they reach the root.
  • Implement Response Policy Zones (RPZs): Define custom responses for clearly invalid queries, such as returning 0.0.0.0.
  • Cache and rate-limit NXDOMAIN responses: This prevents repetitive floods of the same invalid requests (see the sketch after this list).
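
A resolver that remembers recent negative answers never needs to re-ask the root about a name it already knows does not exist. The toy sketch below shows the idea only; real resolvers implement negative caching per RFC 2308, with TTLs taken from the zone’s SOA record, which this simplification ignores.

```python
import time

class NegativeCache:
    """Toy NXDOMAIN cache: answer repeat queries locally for `ttl` seconds."""

    def __init__(self, ttl: float = 900.0):
        self.ttl = ttl
        self._seen: dict[str, float] = {}  # qname -> expiry timestamp

    def remember_nxdomain(self, qname: str) -> None:
        self._seen[qname.lower()] = time.monotonic() + self.ttl

    def is_cached_nxdomain(self, qname: str) -> bool:
        expiry = self._seen.get(qname.lower())
        if expiry is None:
            return False
        if time.monotonic() > expiry:
            del self._seen[qname.lower()]  # expired; forget it
            return False
        return True
```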

For Root-Server Operators

  • Detect and mitigate abusive patterns: Large query volumes from a single subnet can be flagged and rate-limited (see the sketch after this list).
  • Enhance caching and filtering: Smarter software can reduce overhead and improve efficiency.
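
As a rough illustration of per-subnet rate limiting, the sketch below buckets clients by /24 and drops queries from subnets that exceed a fixed one-second budget. It is deliberately simplified (IPv4 only, in-memory state, illustrative threshold); real deployments rely on purpose-built mechanisms such as DNS Response Rate Limiting (RRL), which this does not replicate.

```python
import ipaddress
import time
from collections import defaultdict

LIMIT_PER_SECOND = 1000  # illustrative per-/24 budget

# subnet -> [window_start, query_count]
windows = defaultdict(lambda: [0.0, 0])

def allow_query(client_ip: str) -> bool:
    """Return False once a /24 exceeds its budget in the current second."""
    subnet = ipaddress.ip_network(f"{client_ip}/24", strict=False)
    now = time.monotonic()
    window = windows[subnet]
    if now - window[0] >= 1.0:  # start a fresh one-second window
        window[0], window[1] = now, 0
    window[1] += 1
    return window[1] <= LIMIT_PER_SECOND
```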

For Standards Bodies and Infrastructure Planners

  • Improve DNS protocols: Insights from unwanted traffic can guide new standards at the IETF to make DNS more resilient.
  • Expand anycast deployments: More root-server instances in underserved regions help spread the load and reduce the risk of localized overload.

The fact that more than 70% of DNS root queries are junk is both staggering and unsustainable. Left unchecked, it erodes performance, resilience, and trust in one of the Internet’s most critical systems.

The good news is that solutions exist. By validating queries locally, filtering invalid requests, improving root-server operations, and pushing for smarter standards, we can make the DNS ecosystem more efficient and secure. My Pulse Research Fellowship aims to address this problem by proposing a new approach for DNS resolvers. This solution (to be detailed in a future blog post) would tackle most unwanted queries at the source, before they even enter the larger DNS ecosystem.

The next time your browser loads a website in milliseconds, remember that a vast, unseen infrastructure made it possible, and much of that infrastructure is currently bogged down with digital junk. It’s time to clean it up for a faster, safer, and more resilient Internet.

Dipsy Desai is a Ph.D. Student at the University of Southern California and 2025 Pulse Research Fellow.

The views expressed by the authors of this blog are their own and do not necessarily reflect the views of the Internet Society.


Photo by Claudio Schwarz on Unsplash
