.home gets half a billion hits a day. Could this put new gTLDs at risk?

Kevin Murphy, July 17, 2013, 10:12:24 (UTC), Domain Tech

New gTLDs could be in jeopardy following the results of a study into the security risks they may pose.
ICANN is likely to be told to put measures in place to mitigate the risk of new gTLDs causing problems, and its chief security officer, Jeff Moss, said “deadlines will have to move” if global DNS resolution is put at risk.
His comments referred to the potential for clashes between applied-for new gTLD strings and non-existent TLDs that are nevertheless already widely used on internal networks.
That’s a problem that has been increasingly highlighted by Verisign in recent months. The difference here is that the study’s author does not have a .com monopoly to protect.
Interisle Consulting, which has been hired by ICANN to look into the problem, today released some of its preliminary findings during a session at the ICANN 47 meeting in Durban, South Africa.
The company looked at domain name look-up data collected from one of the DNS root servers over a 48-hour period, in an attempt to measure the potential scope of the clash problem.
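To give a flavour of what that kind of measurement involves, here's a minimal sketch, in Python, of how per-TLD query counts might be tallied from a root server query log. The plain-text log format and file name are assumptions for illustration only; Interisle's actual methodology and data pipeline have not been published.

```python
from collections import Counter

# Hypothetical input: one query name per line, e.g. "bthomehub.home."
# Real root server captures are pcap/DSC data, not a plain text file.
LOG_FILE = "root-queries-48h.txt"

def tld_of(qname: str) -> str:
    """Return the rightmost label of a query name, lower-cased."""
    labels = qname.rstrip(".").lower().split(".")
    return labels[-1]

counts = Counter()
with open(LOG_FILE) as f:
    for line in f:
        qname = line.strip()
        if qname:
            counts[tld_of(qname)] += 1

# Top 17 TLDs by query volume, as in the table from Interisle's slides
for tld, n in counts.most_common(17):
    print(f"{tld:>15}  {n:,}")
```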
Some of its findings are surprising:

  • Of the 1,408 strings originally applied for in the current new gTLD round, only 14 do not currently have any root traffic.
  • Three percent of all requests were for strings that have been applied for in the current round.
  • A further 19% of requests were for strings that could potentially be applied for in future rounds (that is, the TLD was syntactically well-formed and not a banned string such as .local; a rough version of that check is sketched below this list).
  • .home, the most frequently requested invalid TLD, received over a billion queries over the 48-hour period. That’s compared to 8.5 billion for .com.

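To make the categories in that breakdown concrete, here's a rough sketch of how a queried string might be bucketed. The applied-for and banned lists below are tiny illustrative stand-ins, not ICANN's actual application data or reserved-names list, and the syntax check is simplified.

```python
import re

# Tiny illustrative subsets; the real lists are far longer
APPLIED_FOR = {"home", "corp", "mail", "hsbc", "site"}          # 2012-round examples
BANNED = {"local", "localhost", "example", "test", "invalid"}   # reserved-style strings

# Simplified LDH syntax: letters, digits, hyphens; 1-63 chars;
# no leading or trailing hyphen; not all-numeric
LDH = re.compile(r"^(?!-)(?!.*-$)[a-z0-9-]{1,63}$")

def classify(tld: str) -> str:
    tld = tld.lower().rstrip(".")
    if tld in APPLIED_FOR:
        return "applied for in the current round"
    if tld in BANNED or tld.isdigit() or not LDH.match(tld):
        return "not a possible future TLD"
    return "could be applied for in a future round"

for s in ("home", "local", "belkin", "-bad-", "12345"):
    print(f"{s:>8} -> {classify(s)}")
```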
Here’s a list of the top 17 invalid TLDs by traffic, taken from Interisle’s presentation (pdf) today.
[Chart: Most Queried TLDs]
Had the list covered the top 100 most-requested TLDs, 13 of them would have been strings applied for in the current round, Interisle CEO Lyman Chapin said in the session.
Here are the most-queried applied-for strings:
[Chart: Most Queried TLDs]
Chapin was quick to point out that big numbers do not necessarily equate to big security problems.
“Just occurrence doesn’t tell you a lot about whether that’s a good thing, a bad thing, a neutral thing, it just tells you how often the string appears,” he said.
“An event that occurs very frequently but has no negative side effects is one thing, an event that occurs very infrequently but has a really serious side effect, like a meteor strike — it’s always a product of those two factors that leads you to an assessment of risk,” he said.
For example, the reason .ice appears prominently on the list appears to be solely due to an electricity producer in Costa Rica, which “for some reason is blasting .ice requests out to the root”, Chapin said.
If the bad requests are only coming from a small number of sources, that’s a relatively simple problem to sort out — you just call up the guy responsible and tell him to sort out his network.
In cases like .home, where much of the traffic is believed to be coming from millions of residential DSL routers, that’s a much trickier problem.
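The "how many sources" question can be answered from the same capture data, provided source addresses are retained. A hedged sketch, assuming a hypothetical two-column log of source IP and query name:

```python
from collections import defaultdict

# Hypothetical two-column log: "<source-ip> <query-name>" per line
sources_per_tld = defaultdict(set)

with open("root-queries-with-sources.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed lines
        src, qname = parts
        tld = qname.rstrip(".").lower().rsplit(".", 1)[-1]
        sources_per_tld[tld].add(src)

# A TLD with huge volume from a handful of sources (like .ice) is one phone call;
# one queried by millions of distinct sources (like .home) is a much harder fix.
for tld in ("ice", "home"):
    print(f".{tld}: {len(sources_per_tld[tld]):,} distinct sources")
```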
The reverse is also true, however: a small number of requests doesn’t necessarily mean a low-impact risk.
There may be a relatively small number of requests for .hospital, for example, but if the impact is even a single life support machine blinking off… probably best not to delegate that gTLD.
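Putting Chapin's two factors together, the assessment is essentially query frequency multiplied by severity of impact. A toy illustration, with entirely invented impact scores, shows how a low-volume string can still come out as the bigger risk:

```python
# All numbers are invented for illustration; the study assigns no such scores.
candidates = {
    ".home":     {"queries_per_day": 500_000_000, "impact": 1},        # diffuse, low-severity leakage
    ".hospital": {"queries_per_day": 50_000,      "impact": 100_000},  # rare, potentially life-critical
}

for tld, c in candidates.items():
    print(f"{tld}: relative risk = {c['queries_per_day'] * c['impact']:,}")
```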
Chapin said that the full report, which ICANN said could be published in about two weeks, does contain data on the number of sources of requests for each invalid TLD. Today’s presentation did not, however.
As well as the source of the requests, the second-level domains being requested are also an important factor, but they do not seem to have been addressed by this study.
For example, .home may be getting half a billion requests a day, but if all of those requests are for bthomehub.home — used today by the British ISP BT in its residential routers — the .home registry might be able to eliminate the risk of data leakage by simply giving BT that domain.
Likewise, while .hsbc appears on the list it’s actually been applied for by HSBC as a single-registrant gTLD, so the risk of delegating it to the DNS root may be minimal.
There was no data on second-level domains in today’s presentation and it does not appear that the full Interisle report contains it either. More study may be needed.
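If that second-level data were available, the analysis would be straightforward: tally the labels immediately under a single undelegated TLD and see how concentrated the traffic is. A sketch, again assuming a hypothetical plain-text log of query names:

```python
from collections import Counter

def sld_under(qname: str, tld: str):
    """Return the second-level label if the query name ends in the given TLD."""
    labels = qname.rstrip(".").lower().split(".")
    if len(labels) >= 2 and labels[-1] == tld:
        return labels[-2]
    return None

sld_counts = Counter()
with open("root-queries-48h.txt") as f:   # hypothetical log, one query name per line
    for line in f:
        sld = sld_under(line.strip(), "home")
        if sld:
            sld_counts[sld] += 1

total = sum(sld_counts.values())
for sld, n in sld_counts.most_common(5):
    print(f"{sld}.home  {n:,}  ({n / total:.1%} of .home queries)")
```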
Donuts CEO Paul Stahura also took to the mic to ask Chapin whether he’d compared the invalid TLD requests to requests for invalid second-level domains in, say, .com. He had not.
One of Stahura’s arguments, expounded at length in the comment thread on this DI blog post, is that delegating TLDs with existing traffic is little different to allowing people to register .com domains with existing traffic.
So what are Interisle’s recommendations likely to be?
Judging by today’s presentation, the company is going to present a list of risk-mitigation options that are pretty similar to what Verisign has previously recommended.
For example, some strings could be permanently banned, or there could be a “trial run” — what Verisign called an “ephemeral delegation” — for each new gTLD to test for impact before full delegation.
It seems to me that if the second-level request data was available, more mitigation options would be opened up.
ICANN chief security officer Jeff Moss, who was on today’s panel, was asked what he would recommend to ICANN CEO Fadi Chehade today in light of the report’s conclusions.
“I am not going to recommend we do anything that has any substantial SSR impact,” said Moss. “If we find any show-stoppers, if we find anything that suggests impact for global DNS, we won’t do it. It’s not worth the risk.”
Without prompting, he addressed the risk of delay to the new gTLD program.
“People sometimes get hung up on the deadline, ‘How will you know before the deadline?’,” he said. “Well, deadlines can move. If there’s something we find that is a show-stopper, deadlines will have to move.”
The full report, expected to be published in two weeks, will be opened for public comment, ICANN confirmed.
Assuming the report is published on time and has a 30-day comment period, that brings us up to the beginning of September, coincidentally the same time ICANN expects the first new gTLD to be delegated.
ICANN certainly likes to play things close to the wire.


Comments (4)

  1. Acro says:

    ET … Call .home !

  2. Jay Daley says:

    1 billion DNS queries a day is not actually that much for a DNS provider. So it’s not the volume that will be the issue for the TLD, it’s what happens when people register domains in those TLDs that are already being queried. We can only assess the impact of that if we see those second levels. Let’s hope Interisle conduct that analysis.
    Paul Stahura makes some sensible points, there are plenty of examples where the same thing has already been handled well by the industry without an ICANN edict. Take wpad.* for example, where I suspect the aggregate global traffic would match the top invalid TLDs.

    • Kevin Murphy says:

      Interesting.
      A billion hits may not be much, but this study says that it’s one eighth of the volume of .com.
      That seems like a lot.

  3. Do keep in mind that figures for non-existent TLDs will tend to be higher. First of all, there is the fact that non-existent TLDs will only be cached for 48 hours, while information for existing TLDs will be cached for 96 hours. So twice as many requests for non-existent TLDs will reach the root servers, while those for existing TLDs will be answered from the cache and never reach the root.
    And because of that same cache, requests for large TLDs are somewhat limited by the number of recursive nameservers out there. Starting from the assumption that every recursive nameserver in existence gets at least one request for a .com domain name every 96 hours, every single one of them will send a request about .com to the root every 96 hours, regardless of whether it receives one or one million requests about .com domain names during that period.
    Being able to examine the logs of some large recursive nameservers (Google, OpenDNS?) would probably show more exact numbers about what is going on at the end-user level.
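The caching argument in the comment above can be put into rough numbers. Taking the TTL figures exactly as stated there (48 hours for a non-existent TLD, 96 hours for an existing one, both simplifications rather than measured root-zone values), each recursive resolver asks the root about a popular non-existent TLD twice as often as about a popular existing one:

```python
# Back-of-the-envelope check of the caching argument above.
# TTL figures are the commenter's assumptions, not measured root-zone values.
NEGATIVE_CACHE_HOURS = 48    # how long a resolver remembers that a TLD does not exist
DELEGATION_CACHE_HOURS = 96  # how long it remembers an existing TLD's delegation

RESOLVERS = 1_000_000        # hypothetical population of busy recursive resolvers
PERIOD_HOURS = 96

nonexistent = RESOLVERS * PERIOD_HOURS // NEGATIVE_CACHE_HOURS
existing = RESOLVERS * PERIOD_HOURS // DELEGATION_CACHE_HOURS

print(f"root queries for a popular non-existent TLD: {nonexistent:,}")   # 2,000,000
print(f"root queries for a popular existing TLD:     {existing:,}")      # 1,000,000
```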
