Gradually add/remove server to/from pool in parallel to score increase/decrease


I was wondering whether it would be possible, as well as make sense, to have the pool gradually add/remove a server to/from the pool (i.e., scale the inclusion in DNS responses) in parallel to the server’s score increasing/decreasing, vs. the current binary on/off at score 10?

I’ve been wondering about that as part of recurring discussions on this forum related to challenges in underserved regions, but am now experiencing that myself first hand (though not in a way that I couldn’t deal with by my own means).

E.g., instead of the netspeed setting being considered in a binary on/off fashion when the score crosses the 10-point boundary, it could work more like this:

  • 0 < score < 10: fraction of inclusion in DNS = netspeed * 0% (= 0, as now)
  • 10 <= score < 15: fraction of inclusion in DNS = netspeed * (score - 10) / 5 (new, somewhere between 0% and 100% of “netspeed”)
  • 15 <= score <= 20: fraction of inclusion in DNS = netspeed * 100% (= full “netspeed”, as now for the entire range 10-20)
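As a rough illustration of the proposed mapping, here is a hypothetical sketch (not pool code; `dns_share` is just a name for this example, and the thresholds are the ones proposed above):

```shell
#!/bin/sh
# Hypothetical sketch of the proposed gradual DNS-inclusion weighting:
#   score < 10        -> weight 0 (out of the pool, as now)
#   10 <= score < 15  -> weight (score - 10) / 5 (new, linear ramp)
#   15 <= score <= 20 -> weight 1 (full netspeed, as now)
# The effective DNS share would then be netspeed * weight.
dns_share() {
    awk -v s="$1" 'BEGIN {
        if (s < 10)      w = 0;
        else if (s < 15) w = (s - 10) / 5;
        else             w = 1;
        printf "%.2f\n", w;
    }'
}

dns_share 8     # -> 0.00
dns_share 12.5  # -> 0.50
dns_share 17    # -> 1.00
```

A linear ramp keeps the existing “in above 10, out below 10” boundary while avoiding the step change in traffic at score 10.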

One issue that was reported time and time again in various threads related to under-served zones is that adding a new server to such a zone is a challenge: once the score crosses the boundary of 10 points, the server gets hit “right away” with the full traffic load corresponding to its netspeed share in that zone*. That can be enough to immediately drive a less beefy server’s score down again, potentially leading to a yo-yo effect in the server’s score, with a matching traffic pattern. This in turn may have repercussions on other such servers in that zone, leading to a domino effect of servers dropping from the zone, as described a few times in this forum.

Having a more gradual “DNS share” increase does not solve the underlying problem of a zone being under-served, but might make it a bit easier to add servers to the zone, and might help in keeping the number of servers in such a zone steadier, helping all the servers in the zone.

@ask, you previously hinted at working on something like that, though at the time more in the context of dealing with “weird” server behavior. While I think the conclusion at the time was that the specific “tests” considered then might not have been useful (potentially triggering default rate limits of some server implementations), I think the functionality of such more nuanced inclusion in the pool would generally be useful in adequately-served zones as well (not only in under-served ones).

Namely, by generally reducing clients’ exposure to servers that aren’t scoring optimally, be it because a server is on its (temporary) way out of the pool, i.e., transitioning in -5 downward scoring steps from high values to values below 10, or because there are semi-persistent issues with a server, e.g., with connectivity or maintaining a good offset, so that the server’s score oscillates at the lower end of the 10…20 range, or sometimes dips below 10 points.

The above “formula” is just a proposal, trying to keep the general “in the pool above 10, out of the pool below 10” approach, limiting the gradual part to the lower half of the “in the pool” range to keep a sufficient score range where full “netspeed” is reached, and being relatively simple (linear relationship between score and DNS inclusion share in the transition area). But this could obviously be tweaked, e.g., as far as threshold values are concerned, especially after some potential real-life experience with such an approach.

* This description is a simplification; the actual process is a bit more differentiated, but it still results in a potentially very steep/sudden increase in traffic load that is prone to cause issues, e.g., in my case a “DDoS protection” outside of my control temporarily blocking all NTP traffic to my server.


I really like this idea. Obviously it would need to be tested to see how practical it is, but a graduated weighting makes a lot of sense if a host is drifting between 10 and 20.


Zones are NOT under-served, you really have to stop making this point.
People should use the global pool and NOT local pools.

As the global pool does have enough servers, even for countries that have few or no servers of their own.
It doesn’t matter. It doesn’t.

@ask should point ALL local zones to the world-pool regardless.

In the past, internet peering was expensive and not as good/fast as today; the local zones are therefore obsolete.

In short, there is no such thing as areas that are under-served; there really isn’t any country that is starving for time.

People that still use it, should stop doing so.

There is just 1 pool:

Stop using outdated links, unless you are a company and you have your own pool-url.

My 2cts Bas.


Sure, as in the past, I agree, and have myself previously advocated for that. But as previously noted, until you have reached out to each and every client out there, and convinced them to change their ways, server operators in certain countries (and I am explicitly not using the term “zone”) are confronted with the challenge of being assigned too many clients, and need to deal with that. In a dream world, we wouldn’t have the issue. In reality, we do, and as it looks, for some time still.

Also, use of the “local” zones is not the only issue; the current GeoDNS approach is. Even if you use the global zone, once the GeoDNS server gets an idea of the country you are in, you’ll again end up limited to the servers of your country zone, even though you configured the global zone, not a local one. See the sources mentioned here, and then try it yourself.


Have you checked this? As is typical, the biggest WORLD servers get the most requests.

See my check for Belgium:

bas@workstation:~$ nslookup

Non-authoritative answer:

bas@workstation:~$ nslookup	name =

Authoritative answers can be found from:

Belgium isn’t starved. Also note, the last server is also Cloudflare.

Typically we are used as backup for major servers… does that matter? Nope.



A single query does not show you the entire set of servers that are allocated to a country zone, at least not if there are 4 or more servers in the zone. So it shows exactly nothing as far as the topic of under-served zones is concerned.

Then why do you bring it up? Because Belgium isn’t starved, other zones can’t be, either? I don’t understand.


See the list of servers returned for from, and the number of times, when queried from Singapore over the last few minutes:

  4 IPv4 Address:
  9 IPv4 Address:
 10 IPv4 Address:
 17 IPv4 Address:
 22 IPv4 Address:
 23 IPv4 Address:
 26 IPv4 Address:
 29 IPv4 Address:
 31 IPv4 Address:
 34 IPv4 Address:
 36 IPv4 Address:
 37 IPv4 Address:
 51 IPv4 Address:
 51 IPv4 Address:
 52 IPv4 Address:
 78 IPv4 Address:
180 IPv4 Address:
382 IPv4 Address:
515 IPv4 Address:
529 IPv4 Address:

Doesn’t look like the world to me. Even with the short sample period, “the world” should have more servers than that.

Will run the test overnight, let’s see tomorrow if there are significantly more servers in that list.


If I run the same test from a location in Germany, I get 150+ servers after less than two minutes of running it. Above 175 since writing the previous number, and still rising…

Would be interesting to see what the view from within Belgium looks like.

200+ and still rising, even if a bit slower now…

Singapore at 20 now for a while.


Well I got curious now and did dig +short >> pool.txt in a loop with a short sleep in it. From Norway:
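For reference, the sampling approach described above might look roughly like this. The collection loop itself is sketched in the comments (the queried zone name and the sleep interval are assumptions, as the original command’s target was not shown); the executable part is the counting step that produces the per-address tallies:

```shell
#!/bin/sh
# Collection (sketch; zone name and interval are example values):
#
#   while true; do
#       dig +short 2.pool.ntp.org >> pool.txt
#       sleep 5
#   done
#
# Counting: tally how often each unique address was returned,
# most frequent first. This is what `sorted_pool.txt` would hold.
count_samples() {
    sort "$1" | uniq -c | sort -rn
}
```

Running `count_samples pool.txt > sorted_pool.txt` yields lines of the form `count address`, like the listings in this thread.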

113 NO, Norway
100 NO, Norway
98 NO, Norway
91 NO, Norway
85 SE, Sweden
82 NO, Norway
55 NO, Norway
38 NO, Norway
33 NO, Norway
31 NO, Norway
31 EE, Estonia
29 NO, Norway
19 NO, Norway
19 NO, Norway
9 NO, Norway
5 NO, Norway
4 NO, Norway
4 NO, Norway
3 NO, Norway
3 NO, Norway
2 NO, Norway

I got my own servers 198 times :smile:


Cool, thanks for sharing!

A bit less than the number of servers currently reported active for Norway, but running the test a bit longer would likely increase the number.

I’m at 34 and 355 for Singapore and Germany now. More than the active servers reported for Singapore, less than reported active for Germany.


By the way, how did you get the nice geolocating for the IP addresses?


Had ChatGPT write me a little script :grin:


#!/bin/bash

# File containing the sorted unique IPs (the output of sort and uniq -c)
input_file="sorted_pool.txt"

# Output file to store the results with country codes
# (file name is an example; the original assignment was not shown)
output_file="pool_with_countries.txt"

# Read each line from the file
while IFS= read -r line; do
    # Extract the count and the IP address from the line
    count=$(echo "$line" | awk '{print $1}')
    ip=$(echo "$line" | awk '{print $2}')

    # Perform geoiplookup on the IP, extract the country code
    country=$(geoiplookup "$ip" | awk -F: '{if (NF>1) print $2}' | awk '{print $1, $2}')

    # Append the result in the format "count IP country"
    echo "$count $ip $country" >> "$output_file"
done < "$input_file"

echo "Data with country codes collected in $output_file."

sorted_pool.txt would be the output from sort and uniq

Want me to do that?

Ah, smart, hadn’t thought about that. :smile: Thanks!

Would add some first-hand/hands-on insight into the topic, complementing/underlining the systematic findings in the paper/blog referred to earlier. So in that sense, not strictly needed, but nice if not too much trouble.

But it also potentially opens up another can of worms: Why does one see more servers in a zone than are reported active for it? What if one sees (significantly) fewer? This may touch upon the controversial topics of DNS TTL settings and DNS caching, among others.

I’m just lazy :sweat_smile:

3512 IP samples now, up from 1112. 26 unique IP addresses, 3 more than before, but still not all 30 in the “no” zone for IPv4.

314 NO, Norway
310 NO, Norway
299 NO, Norway
282 NO, Norway
275 NO, Norway
259 SE, Sweden
155 NO, Norway
124 NO, Norway
119 NO, Norway
117 NO, Norway
114 EE, Estonia
109 NO, Norway
96 NO, Norway
42 NO, Norway
38 NO, Norway
36 NO, Norway
25 NO, Norway
16 NO, Norway
13 NO, Norway
8 NO, Norway
7 NO, Norway
6 NO, Norway
4 NO, Norway
4 NO, Norway

Got my own servers 582 times, 16.6% :smiley: Granted, they’re on max net speed.

Thanks! Very interesting. I don’t have a reference with details at hand, but seeing servers from neighboring countries is somehow baked into the system as well, I think. Like the one server each from Sweden and Estonia in your case; my client in Singapore gets servers from Japan, Hong Kong, and even the USA. Plus the global Cloudflare anycast addresses.

So one is not entirely limited to servers of one’s own zone, but also a far cry from having access to the global set of servers.

Need to re-read the paper/blog to understand why (if I recall correctly) they really had zones where they got only one server. As per our findings, I would have expected at least a small number of servers from neighboring countries to seep in. But maybe in the case of the main example in Africa, there aren’t any servers in neighboring countries sufficiently “close by”. Or maybe going through a recursive resolver, vs. directly to the origin server, makes a difference.

Anyway, something for another day. But either way, an overhaul of the current GeoDNS approach would be quite welcome…

One thread I had in mind seems to indicate that I mis-remembered. I.e., according to one statement, in this example referring to the German country zone:

But the thread is also about the inverse problem: why does a server see clients from another country when it isn’t part of that country zone?

But, that’s a topic for another day…

The Swedish server is part of multiple zones

The Estonian one is just in the Norwegian zone, so the geoip database in Debian is probably just wrong on this one

Ah, true, thanks! I guess it was a bit too late already last night…

Here now the outcome of having my tests run overnight:

Nameserver   Domain   Client set ECS   Unique servers
…            …        no               24
…            …        no               51
…            …        yes              49
…            …        no               24
…            …        no               152

So the global zone is better than the country zone in this case, at least with certain nameservers. It looks like is causing the mixing in of a few servers from neighboring countries, I guess maybe due to their design for privacy, trying to hide clients’ locations from upstream nameservers. The country zone really only returns servers from that zone, even with And the continent zone stays somewhat short of the full number of servers that are supposedly in that zone. I guess the more servers there are in a zone, the more the smaller ones (lower netspeed) get “squeezed out” by the ones with larger netspeeds/shares.

Below the country mix for the servers returned by for the global zone.

So it looks like the recommendation for the global zone is still somewhat right, though it depends on the nameserver used, isn’t really much better, and certainly yields far fewer servers than the large number in the global pool overall. I.e., certainly not the silver bullet some of us, including myself, were hoping for/expecting. The continent zone might be a bit better as far as the number of servers is concerned, but it also yielded some servers with delays greater than 300 ms, and noticeable offsets of 30+ ms.

@bas, let’s hope we don’t need to wait too much longer to see our dreams of an improved GeoDNS service come true. Though I fully trust @ask is aware of the various issues, as hinted at in various threads in this forum, where he describes his ideas for changes, e.g., also addressing the concerns raised in the paper and illustrated by these tests. And that he is working diligently on addressing them, balancing his available resources with the priorities, first one being keeping the pool service stable, as in available.

1  JP, Japan
2  SG, Singapore
2  JP, Japan
2  JP, Japan
3  JP, Japan
3  HK, Hong Kong
4  HK, Hong Kong
4  JP, Japan
5  HK, Hong Kong
7  AU, Australia
7  HK, Hong Kong
9  HK, Hong Kong
16  JP, Japan
17  HK, Hong Kong
19  JP, Japan
30  SG, Singapore
30  JP, Japan
31  SG, Singapore
37  SG, Singapore
37  JP, Japan
42  SG, Singapore
49  JP, Japan
64  SG, Singapore
69  SG, Singapore
74  HK, Hong Kong
76  JP, Japan
80  SG, Singapore
102  SG, Singapore
129  SG, Singapore
147  SG, Singapore
213  SG, Singapore
245  HK, Hong Kong
428  HK, Hong Kong
458  SG, Singapore
466  SG, Singapore
472  JP, Japan
488  SG, Singapore
494  SG, Singapore
504  US, United States
518  US, United States
521  SG, Singapore
553  JP, Japan
564  SG, Singapore
573  HK, Hong Kong
881  SG, Singapore
924  SG, Singapore
962  HK, Hong Kong
3284  SG, Singapore
4139  SG, Singapore
8531  IP Address not found
8552  IP Address not found

Yes and no; the choice of nameserver can make a big difference in how much the results vary.
If you use a nameserver that caches for a long time, then you will end up with the same results more often.

The nameservers I typically use, also the fastest: and

I only use others for backup.

As for being “squeezed out”, that is correct: when you lower the netspeed, you get fewer requests. As it should be :slight_smile:

How many servers do you want/need? As you do not know if a zone is having problems with time-keeping.

I doubt any of them do.

Again, I fail to see your point.

Also, many ISPs have their own NTP servers and advertise those via their modems/routers if you don’t change anything.

It’s too simplistic to count servers and presume there are time-serving issues.

My 2 cts,


That is where the multiple monitors come into play: they test from all over the planet, and servers that are bad are removed from the pool until they are good again.

Before, there was just one monitor, and the monitor ITSELF was unstable, thus marking servers as bad while they were good.

This isn’t the case anymore.

The pool only gives out good and stable servers, and it really doesn’t matter whether they are 50 or 500 km away.

I use servers from all over Europe as references to make sure my time is correct.
However, they are hand-picked stratum 1 servers, just as a reference for my GPS.

Thanks for sharing this. I have a couple of comments that may be relevant.

I suspect all these nameservers are anycast, as are Cloudflare’s and their IPv6 equivalents. That means the server you reach depends on where your queries are coming from. Essentially, you are querying the closest instance, not as the crow flies, but based on internet routing. @ask has described how the pool DNS service accounts for the locale of the query source, as seen by the authoritative nameservers. This requires any methodical survey of DNS to spread its queries around the world.

Moreover, I am aware of a project which by design attempts to harvest all the pool server IP addresses, and in supposedly-rare circumstances, query a substantial portion of them. I have expressed strong disapproval of this strategy for fear it would eventually destroy the pool as we know it. It seems conceivable that there are intentional or unintentional mitigations already in place that make such harvesting very difficult.