IPv4 / IPv6 Statistics

So some population of IPv6 clients is flooding your server. In your position, I’d want to know how many different addresses are involved, and whether they are also serving NTP or, more likely, running some broken SNTP client you might be able to get fixed by the operator, with help from their ISP and/or the responsible software developer. If you share some offending IP addresses here, others can check whether they’re getting hit as well.

You can use packet capture or, if you happen to run ntpd/ntpsec, ntpq -nc mru | head -50 after disabling the firewall rule.
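If you go the MRU-list route, the output lends itself to a quick tally of which addresses hit you hardest. A minimal sketch in Python (the sample text and its field layout are illustrative, not real output; the script merely assumes the client address is the last whitespace-separated field, which varies between ntpd versions):

```python
from collections import Counter

def count_clients(mrulist_text):
    """Tally lines per client address from mrulist-style output,
    assuming the address is the last whitespace-separated field
    (field layout varies between ntpd versions -- adjust as needed)."""
    counts = Counter()
    for line in mrulist_text.splitlines():
        fields = line.split()
        if not fields:
            continue
        addr = fields[-1]
        # crude sanity check: IPv6 contains ':', dotted IPv4 has three '.'
        if ':' in addr or addr.count('.') == 3:
            counts[addr] += 1
    return counts

# illustrative sample, not real ntpq output
sample = """\
lstint avgint rstr r m v count rport remote address
    35     33    0 . 3 4   100   123 2001:db8::1
    12     10    0 . 3 4    50   123 192.0.2.7
     7      9    0 . 3 4    30   123 2001:db8::1
"""
top = count_clients(sample).most_common(3)
print(top)  # noisiest client addresses first
```

From there it is easy to see whether you are dealing with one broken client or a whole population.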

Where do you get these statistics from? My vantage point (DNS-level information) paints an entirely different picture (although Belgium is hard for me to research), with usually only about 5% of NTP servers operating IPv6. I suspect Belgium will be higher though.

Your personal traffic measurements are encouraging, though.

I get them from my MikroTik router, which reports packets/s and also shows a graph. I’ve seen combined IPv4/6 peaks of up to 2200 packets/s. My IPv4 speed is set to 1 Gbps relative to others but a few days ago I changed the IPv6 speed to 2 Gbps (from 1 Gbps) relative to others.

In this pool, Belgium currently has 22 IPv4 servers and 18 IPv6-enabled ones.

Here is what I’m seeing:

IPv4: [traffic graph on Imgur]

IPv6: [traffic graph on Imgur]

Our statistics for ntp[0-3].fau.de (and ntp[0-3].ipv6.fau.de, respectively) go back quite a long time, and although they have some holes and errors, I think they’re still quite useful. And they paint a dire picture: the IPv6 percentage has not only stagnated, it even seems to be declining ever so slightly.

Regarding misbehaving clients (those that run into our rate limits): we see far fewer of these on IPv6, an order of magnitude fewer. They’re almost nonexistent there.


Excuse me, but… where are the AAAA records for these FQDNs?

UPDATE:

Oh wait, you probably meant ntp[0-3].ipv6.fau.de ?

I wonder what would happen if you got rid of the ‘ipv6’ subdomain and just added the AAAA records to ntp[0-3].fau.de?
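In zone-file terms that would just mean publishing AAAA records alongside the existing A records under the same names, roughly like this (a sketch using documentation-prefix addresses, not the real fau.de data):

```
ntp0  IN  A     192.0.2.10
ntp0  IN  AAAA  2001:db8::10
ntp1  IN  A     192.0.2.11
ntp1  IN  AAAA  2001:db8::11
```

Dual-stack clients would then pick a protocol per their own address-selection policy, and the separate ipv6 names could be kept as aliases during a transition.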


Yeah, I was surprised to find that, at first sight, those servers seemed not to support IPv6 even in 2025 (they wouldn’t have been the only ones, though…). It took me a while to realize the IPv6 addresses are behind a separate name. Where relevant, I now have two entries for this otherwise good (and NTS-enabled) upstream, rather than having just one entry and letting the daemon decide, or rotate over time. A bit of a waste of resources on both ends.

Like Marco, I also wonder what the picture would look like if the more intuitive names also served IPv6, rather than requiring the user to grasp that IPv6 sits behind a different name, understand what the choice is all about, and then explicitly configure a client with what are ultimately redundant entries.


I also wonder what the picture would look like if the more intuitive names also served IPv6, rather than requiring the user to grasp that IPv6 sits behind a different name, understand what the choice is all about, and then explicitly configure a client with what are ultimately redundant entries.

It probably would not change much, because most clients come via the NTP Pool, not via our own hostnames.

Fair point. Unfortunately, the pool itself contributes to distorting the picture by only having one subdomain return IPv6 addresses.


EDIT: Regarding trustworthiness of NL figures, see EDIT after the figure. [end EDIT]

The Netherlands seem rather advanced in actual IPv6 usage of the pool, with IPv6 traffic share currently at about 40% for an instance I recently added there, vs. just 26% IPv6 adoption rate according to Google (though there is some skew in favor of IPv6 in this case as I enabled IPv6 full throttle well before enabling IPv4 as well, and then ramped up the netspeed setting slowly only for IPv4 to probe the conditions in this zone I didn’t have prior first-hand experience with). Maybe @marco.davids had a hand in promoting IPv6 NTP in the Netherlands despite the pool-side imbalance? :wink:

The NL zone seems to be extremely well served in general, with a bitrate/packet rate of typically less than 400 kbit/s / 500 pps for both IPv6 and IPv4 at full throttle being the lowest by some margin among the country zones I have/had servers in so far. The best so far had been Germany, with typically less than about 1.3Mbit/s / 1900 pps with IPv6 and IPv4 both at full throttle, and after being in the pool for some time.

EDIT: I need to revisit my numbers for the Netherlands. They are too good to be true. Triggered by some other oddities, I now realize that the provider seems to employ some sort of DDoS protection, likely for IPv4 only. It started with three monitors on the production system no longer getting through, which I initially put down to some routing/connectivity issues. But now, with the server added to the beta system and its much higher number of monitors, even more of them fail. It looks suspiciously as if a portion of the IPv4 NTP traffic might be falling prey to the DDoS system, noticeably lowering the absolute IPv4 numbers and thus potentially skewing the IPv6/IPv4 ratio significantly in favor of IPv6. I need to dig deeper, e.g., contact the provider. Also, for a “reality check”, I’d be interested to hear from other people with servers in the NL zone how much traffic they see, in absolute numbers for both IPv6 and IPv4, and what their derived IPv6 share is. [end EDIT]

Neither is anything compared to, e.g., some Asian zones. SG, for instance, sees sustained bursts easily exceeding 10 Mbit/s at a 12 Mbit netspeed setting for IPv4 (IPv6 essentially negligible in comparison, even at a full-throttle netspeed setting). I faintly remember South Korea being similar, or even higher.

Which still is nothing compared to what has been reported for China, where a 512 kbit netspeed setting, the lowest officially* available, can apparently easily result in double- or even triple-digit Mbit/s of actual traffic. That would overwhelm any basic server, not least in the Asia/Pacific region, where not only bandwidth but especially traffic volume is generally more expensive than, e.g., in North America or Europe. It is interesting to see those differences in traffic costs reflected in cloud service providers’ offers. There is an older explanation by Cloudflare as to why, but a web search will also yield more recent examples and explanations.

E.g., I recently had to take an instance out of the MY zone as both its bandwidth limit and its monthly traffic quota were easily exceeded, even at a 1 kbit netspeed setting for both IPv6 and IPv4. Malaysia as such seems to have a relatively high IPv6 adoption rate, but there are only 3 or 4 IPv6 servers in the zone, which results in them being overloaded during much of the busier part of the day. (I’ve seen short sustained peaks above 3 Mbit/s, which, artificial bandwidth limits aside, is fine for typical cloud infrastructure, but the corresponding packet rate may be challenging, e.g., for typical consumer Internet routers.) I.e., the IPv6 servers are constantly cycling out of and into the pool, so that there often is only a single active server in the zone, or the pool even falls back to the Asia continent zone because there is no active server in the country zone. And obviously, when a server is the only one active in a zone, the netspeed setting is essentially irrelevant.

I was surprised, though, to see that a 12 Mbit IPv4 netspeed setting in India yields an average of only slightly above 2 Mbit/s (and almost unnoticeable traffic at a 1 kbit netspeed setting), given that some older threads in this forum referred to the IN zone as being underserved as well (though maybe I misunderstood or misremember). Unfortunately (and strangely), none of my instances in India comes/came with IPv6. It would have been interesting to see what the IPv6 traffic share is in India, given the generally high IPv6 adoption rate there.

Typically less than 200 kbit/s / 200 pps for IPv6 at full throttle in Japan, with an IPv6 adoption rate of about 55% (according to Google), but I don’t have IPv4 there, so I can’t say what the IPv6/IPv4 relation is, nor what the NTP traffic situation is overall. I got an 800 GB monthly traffic quota (in+out) for pretty much the lowest-end VPS conceivable there, while for just a few euros more per year, I could easily get (and have gotten) upper single- or double-digit TB of traffic volume, or even unmetered traffic, in Europe or North America.

* I.e., from among the values offered by the server management page, without resorting to low-level access to pool internal mechanisms.

Another data point…

My server is in the UK, which apparently has around 51% IPv6 adoption and a similar level of ‘IPv6 preferred’. It has one IPv4 and one IPv6 address registered with the pool both with the default netspeed. I just monitored it for 10 minutes analysing the external NTP traffic (excluding traffic originating from my server or my local network). During that period there were:

8209 IPv4 requests
989 IPv6 requests

So for that period ~12% of requests were via IPv6.
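For what it’s worth, the quoted ~12% is IPv6 relative to IPv4; as a share of all requests it comes out slightly lower. A quick check of the arithmetic:

```python
# Observed in the 10-minute window from the post above
v4_requests = 8209
v6_requests = 989

share_of_total = v6_requests / (v4_requests + v6_requests)  # IPv6 among all requests
ratio_to_v4 = v6_requests / v4_requests                     # IPv6 relative to IPv4

print(f"share of total: {share_of_total:.1%}")  # ~10.8%
print(f"ratio to IPv4:  {ratio_to_v4:.1%}")     # ~12.0%
```

Either way, the order of magnitude is the same: roughly one request in nine or ten arrives over IPv6.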


Another data point:

APNIC IPv6 capability

4 * 12 = 48% if all DNS records in the pool would have had an AAAA record instead of just one ? :thinking:

(or am I being too optimistic now?)


@marco.davids That is the real question! Sadly we don’t know the answer. Definitely worth IPv6-enabling another one of the pool zones, I think. Maybe 3? Or 0?


I’ve been monitoring this for a while now and overall the percentage of IPv6 requests is actually only 8%, so a bit less than for the much smaller sample I did earlier.

A bit off topic, but something else I noticed is that while ~54% of requests originate from the UK (which is where my server is located), around 29% come from China, which seems insane. I’m guessing lazy people and/or device vendors are just using pool.ntp.org instead of cn.pool.ntp.org.

Actually, from within China, they essentially amount to the same thing. pool.ntp.org will give a client servers from the country zone the pool’s DNS server thinks the client is in. cn.pool.ntp.org will give a client servers from the CN zone, regardless of where the client is.

Also note that the pool itself recommends for people to use the global zone rather than country zones (“In most cases it’s best to use pool.ntp.org to find an NTP server”).

Similarly for vendors of devices or software using the pool, whom the pool urges to register a vendor zone. Those vendor zones currently are effectively just an alias for the global pool; they have their own name so that they could get different handling should the need arise, but such differentiated handling has yet to be implemented.

In fact, one of the reasons for not yet having IPv6 in more zones is that some legacy vendor zones are explicitly set up without IPv6, and adding IPv6 to more numbered subzones, or even to the unnumbered zones, would implicitly change that. So vendor-zone handling would first need to be revamped to decouple it from the general zones before IPv6 can be added to additional general numbered subzones, or even to the unnumbered general zones.

I guess this is for IPv4. The CN country zone is severely underserved, so I guess instead of getting very crappy service from servers in the local zone, people opt to explicitly solicit servers from better-served zones, even if those are far away.


Again, with my poor math skills, I’d guess this would add up to ~32% if all pool names had an AAAA record.

Such a missed opportunity…
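The back-of-the-envelope behind the ~32% figure, with its assumptions spelled out (clients spread evenly over the four numbered zones and behaving identically, both of which are guesses):

```python
# Naive extrapolation: the observed ~8% IPv6 share was measured while only
# 1 of the 4 numbered pool zones serves AAAA records. Assuming an even
# client spread over the four zones and unchanged client behaviour:
observed_share = 0.08
aaaa_zones_now = 1
aaaa_zones_all = 4

projected_share = observed_share * aaaa_zones_all / aaaa_zones_now
print(f"{projected_share:.0%}")  # 32%
```

The same scaling applied to the earlier 12% sample gives the 48% figure mentioned above; the spread between the two mostly reflects the different measurement windows.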


The clients are adapting to this reality. I find that recent Debian installs with chrony from various VPS providers tend to ship the default

pool 2.debian.pool.ntp.org iburst

so the end users are steered towards IPv6 when possible.


Yes, fortunately, this is being picked up by more and more distributions, so at least new installations will have this.

At the same time, NTPsec as shipped with Debian (and probably Ubuntu as well) uses all four numbered zones, with only a tos maxclock 11 setting limiting the number of servers solicited. So it is somewhat biased against IPv6.
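To make the bias concrete: in the general pool, only the 2 subzone currently returns AAAA records. A configuration drawing on all four numbered zones therefore looks roughly like this (a sketch in chrony syntax using the generic zone names; the actual Debian/NTPsec defaults use their own vendor zone names):

```
# Only 2.pool.ntp.org has AAAA records; 0, 1 and 3 are IPv4-only,
# so three of these four directives can never yield IPv6 servers.
pool 0.pool.ntp.org iburst
pool 1.pool.ntp.org iburst
pool 2.pool.ntp.org iburst
pool 3.pool.ntp.org iburst
```

Three quarters of the pool directives can thus never contribute IPv6 servers, regardless of client capability.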

Same for the standard BusyBox-based sysntpd on OpenWrt, which I believe is also picked up by the ntpd classic package, and the ntpdate package derived from the ntpd classic sources. The latter has all four numbered vendor zones as fallback, while chronyd on the other hand comes with its own IPv6-enabled default configuration.

Tasmota is using IPv6-enabled zones as default. Which is interesting as only a subset of devices has support for IPv6 enabled in default builds. Interesting as well that they don’t seem to have a vendor zone (see last paragraph below). And they don’t only use the global zone, but additionally the 2.europe.pool.ntp.org and 2.nl.pool.ntp.org continent/country zones, I guess to ensure the devices get time wherever they are in the world, even in severely underserved or somewhat broken country zones.

It would be interesting to see what systems shipping systemd-timesyncd use by default; that probably depends on the distribution and how it customizes this. I currently don’t have such a system at hand that I could quickly check (replacing timesyncd is typically one of the very first things I do on a new system; I will check next time this comes up).

Anyone installing OpenNTPD from the Debian repositories probably is getting pool.ntp.org by default (seems the package hasn’t been updated in quite a while, though…).

The default in ntpd-rs seems to be the un-numbered vendor zone as well, though they at least mention which numbered generic (i.e., non-vendor) pool zone to use to also get IPv6 servers (albeit misleadingly mentioning this only with reference to IPv6-only machines), as the vendor zone does not support IPv6 at all (see next paragraph).

And new vendor zones, if one can be obtained at all, apparently are IPv4-only by default, and it seems somewhat difficult to get IPv6 added later on even on request. So if vendors take the vendor zone topic seriously, potentially a lot of new devices/software installations will be IPv4 only.


Interesting, I am learning things here that I didn’t know!

Shocked to hear this - why is it? :slightly_frowning_face:

I’ll give it a try anyway and have just created this issue.
