Deduplication of pool servers

I was doing some experiments with detecting duplicated NTP servers in the pool. It seems that about 30% of IPv4 addresses have an IPv6 duplicate and about 75% of IPv6 addresses have an IPv4 duplicate. There are also some servers that have multiple IPv4 or IPv6 addresses. One server seems to be accessible under 7 different IPv4 addresses. I’m wondering what the reason for doing that was. To give it more bandwidth than is allowed by the highest setting (1000 Mbit)?

Do you think it would make sense to try to deduplicate servers in responses from the pool.ntp.org DNS? It could theoretically improve reliability and accuracy for NTP clients, at least for the initial set of NTP servers they get at startup, before an address is replaced for being unreachable. Each address in the pool could have a server ID, which would be periodically updated, and the pool DNS wouldn’t return multiple addresses with the same server ID.
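On the DNS side, a minimal sketch of what I mean (the data structure and function name are just illustrative, not pool.ntp.org code): pick addresses for one response so that no two of them share a server ID.

```python
import random

def pick_addresses(candidates, count=4):
    """Pick up to `count` addresses for one DNS response such that no
    two of them share a server ID.  `candidates` is a list of
    (address, server_id) pairs, the IDs coming from periodic
    duplicate-detection probes."""
    random.shuffle(candidates)
    picked, seen_ids = [], set()
    for addr, server_id in candidates:
        if server_id in seen_ids:
            continue  # another address of an already-selected server
        seen_ids.add(server_id)
        picked.append(addr)
        if len(picked) == count:
            break
    return picked
```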

The program I’ve used is here:


Not before the management system can provide bandwidth settings greater than 1000 Mbps (say 2.5/5/10 Gbps). Some admins may wish the pool could utilize their time service capacity to the maximum. Even my 16-year-old Celeron 1G can sustain 500 Mbps in the tw+hk+mo pool… :imp:

My suggestion was just to deduplicate servers in individual DNS responses, not to disable them permanently in the pool. For example, if an NTP client resolves 0.pool.ntp.org at startup to get four NTP servers, it shouldn’t end up with just two distinct servers, one having three votes in the source selection algorithm and the other having just one.

However, I think it’s actually a good question whether pool members should be able to increase traffic to their servers by adding multiple addresses. NTP clients are a finite resource. If I add multiple addresses of the same server to the pool, other servers in the zone will get fewer clients, right? Maybe it would help if there were an easier way to get traffic from zones where the number of servers is too small.

That’s really interesting (and not totally surprising).

An idea I’ve had (for after working on filling in underserved zones) would be to add a mechanism so that a particular user, or the servers in a particular ASN/subnet/etc., would be limited to no more than X% of the total capacity of a zone, to minimize this sort of issue.
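For illustration only (the weighting model and names here are my assumptions, not how the pool’s DNS actually selects servers), a cap like that could scale down each operator’s effective netspeed so their servers never exceed a fixed share of the zone:

```python
from collections import defaultdict

def capped_weights(servers, cap=0.20):
    """Scale per-server selection weights so that no single operator
    gets more than `cap` (a fraction) of the zone's total weight.
    `servers` maps address -> (operator, netspeed)."""
    per_operator = defaultdict(float)
    for addr, (op, speed) in servers.items():
        per_operator[op] += speed
    total = sum(per_operator.values())
    if total == 0:
        return {}
    weights = {}
    for addr, (op, speed) in servers.items():
        share = per_operator[op] / total
        if share > 0:
            weights[addr] = speed * min(1.0, cap / share)
        else:
            weights[addr] = 0.0  # operator with no declared netspeed
    return weights
```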

Hi Miroslav,

This is an interesting experiment.
Could you explain in a few words how you detect duplicate servers?
I am not enough of a Python guru to understand your program.

Kind regards
Hans

It sends NTP requests to the servers and then it compares some values from the responses, like reference timestamps, reference ID, stratum, etc. If two different IP addresses respond with the same set of values, they are probably different addresses of the same server.

The most specific value is the reference timestamp, which is the time of the last measurement that updated the server’s clock. With some exceptions, like hardware stratum-1 NTP implementations, it’s very unlikely that two independent servers have reference times matching to within 1/4 of a nanosecond, the resolution of the 64-bit NTP timestamp format (2^-32 seconds).
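Not the actual program, but the idea fits in a few lines of Python (function names are mine; it sends a plain NTPv4 client request over a UDP socket):

```python
import socket
import struct
from collections import defaultdict

def ntp_fingerprint(addr, timeout=2.0):
    """Send one client request to `addr` and return (stratum, reference
    ID, reference timestamp) from the response.  Identical tuples from
    two addresses suggest they belong to the same server."""
    family = socket.AF_INET6 if ':' in addr else socket.AF_INET
    sock = socket.socket(family, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        # 48-byte request: LI=0, VN=4, mode=3 (client), rest zeroed
        sock.sendto(b'\x23' + 47 * b'\x00', (addr, 123))
        data, _ = sock.recvfrom(512)
    finally:
        sock.close()
    if len(data) < 48:
        return None
    stratum = data[1]
    ref_id = data[12:16]
    # Reference timestamp: 32-bit seconds + 32-bit fraction (~233 ps units)
    ref_sec, ref_frac = struct.unpack('!II', data[16:24])
    return (stratum, ref_id, ref_sec, ref_frac)

def find_duplicates(addrs):
    """Group addresses that return identical fingerprints.  All addresses
    should be probed close together, since the reference timestamp
    changes whenever the server updates its clock."""
    groups = defaultdict(list)
    for addr in addrs:
        try:
            fp = ntp_fingerprint(addr)
        except OSError:
            continue  # unreachable or filtered address
        if fp is not None:
            groups[fp].append(addr)
    return [g for g in groups.values() if len(g) > 1]
```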

Hi Miroslav,

Many thanks for your answer.
So it’s relatively simple. I tested it with my server: yes, it works.
You have to make N × N comparisons per subdomain.
Now comes my second question: how do you get all these IP addresses?

Kind regards
Hans

I have a script that periodically resolves the pool name and collects the addresses. In smaller zones it doesn’t take that many DNS queries to get most of them; in larger zones it takes a while. I’m not sure if there is a better way.
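For what it’s worth, the collection loop can be as simple as this sketch (the zone name, round count, and interval are placeholders; 2.pool.ntp.org is used because that zone also returns AAAA records):

```python
import socket
import time

def collect_pool_addresses(name='2.pool.ntp.org', rounds=100, interval=10):
    """Repeatedly resolve a pool DNS name and accumulate the addresses
    seen.  Each query returns only a few addresses from the zone's
    rotation, so many rounds are needed to cover a large zone."""
    seen = set()
    for _ in range(rounds):
        try:
            for info in socket.getaddrinfo(name, 123, proto=socket.IPPROTO_UDP):
                seen.add(info[4][0])
        except socket.gaierror:
            pass  # transient DNS failure; try again next round
        time.sleep(interval)
    return seen
```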

OK, I do the same.
I thought maybe I’d learn a better method.

// Hans

Hey, did you ever get around to implementing something like this?