I am facing a challenge with how NTP Pool servers are selected for clients. It appears that clients are sometimes directed to servers that are geographically distant or have higher latency, even when closer or faster options are available.
This issue is particularly noticeable in regions that are not densely covered by the NTP Pool. As far as I understand, the DNS-based load balancing mechanism should prioritize servers that are geographically closer to clients, but it doesn't always seem to work as expected. This raises a few questions about how the current system handles server selection, especially for regions with sparse server coverage.
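To make the behaviour visible, here is a minimal sketch (Python 3, standard library only; `de.pool.ntp.org` is just a placeholder for whatever country or continent zone you actually use) that prints the addresses the pool's DNS hands to your resolver. Run it a few times to see the rotation:

```python
import socket

# Pool zones to compare: the global zone, a numbered zone, and a country zone.
ZONES = ["pool.ntp.org", "2.pool.ntp.org", "de.pool.ntp.org"]

for zone in ZONES:
    try:
        # getaddrinfo returns the records the pool's GeoDNS chose for this
        # resolver; repeating the query shows how the set rotates.
        infos = socket.getaddrinfo(zone, 123, proto=socket.IPPROTO_UDP)
    except socket.gaierror as exc:
        print(f"{zone}: lookup failed ({exc})")
        continue
    addrs = sorted({info[4][0] for info in infos})
    print(f"{zone}: {', '.join(addrs)}")
```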
Are there any plans to improve the accuracy of the client-to-server mapping to reduce latency and enhance time synchronization accuracy? Additionally, is there a way for server operators to provide feedback or be more involved in fine-tuning the selection process to better serve underrepresented regions?
I am interested in any ongoing developments or future plans that aim to address these issues. Insights into better server configuration practices or strategies that can help mitigate these problems would be greatly appreciated.
I have checked the guide at https://community.ntppool.org/t/about-the-pool-development-category/22-cpq about the development of the NTP Pool system, but I still need help.
Just to make it a little more concrete: which regions/countries are you seeing this in? What are the closer options that the system skips?
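One way to gather that data is a short sketch using Python 3 with the third-party ntplib package (an assumption on my part; any SNTP client works). It queries every IPv4 address the pool zone just returned and prints the measured delay, offset, and stratum, so you can compare them against the nearby servers you expected to be chosen:

```python
import socket
import ntplib  # pip install ntplib

ZONE = "pool.ntp.org"  # or your country zone, e.g. "de.pool.ntp.org"

# Addresses the pool DNS handed to this client right now (IPv4 only here).
addrs = sorted({info[4][0] for info in socket.getaddrinfo(
    ZONE, 123, family=socket.AF_INET, proto=socket.IPPROTO_UDP)})

client = ntplib.NTPClient()
for addr in addrs:
    try:
        resp = client.request(addr, version=3, timeout=2)
    except Exception as exc:
        print(f"{addr:>15}  no response ({exc})")
        continue
    # delay = round-trip time, offset = estimated clock difference,
    # stratum = distance from the reference clock.
    print(f"{addr:>15}  delay={resp.delay * 1000:6.1f} ms  "
          f"offset={resp.offset * 1000:7.1f} ms  stratum={resp.stratum}")
```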
Yes, there are plans. They depend primarily on more code being written, but I'm slowly making progress on it.
Generally speaking, the system currently prioritizes balancing the load according to server operator preferences over better handling of the situation you mention.
In addition to what @ask has written:
Take a look at GitHub - abh/geodns (a DNS server with per-client targeted responses) and improve it if you can.
But I guess latency especially is hard for the GeoDNS to estimate; sometimes geographically close locations do not have direct links, so a server farther away may have better latency. I don't see a way to predict that from the viewpoint of the GeoDNS server.
On the other hand, what do you do in underserved regions? You'll want to return 4 IPs and thus have to add whatever is available somewhere. A DNS response with 4 close but totally overloaded servers is not helpful for the client either.
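As a purely hypothetical illustration of that trade-off (invented data and logic, not the pool's or GeoDNS's actual code): prefer the sparse country zone, then pad the answer up to four addresses from a wider zone, weighting by each operator's netspeed setting so overloaded servers are picked less often.

```python
import random

# Hypothetical inventory: (address, netspeed weight) per zone.  In the real
# pool the weight would come from each operator's netspeed setting.
SERVERS = {
    "xx":     [("198.51.100.1", 10), ("198.51.100.2", 3)],   # sparse country zone
    "global": [("203.0.113.10", 100), ("203.0.113.11", 50),
               ("203.0.113.12", 25), ("203.0.113.13", 10)],
}

def pick(zone, want=4):
    """Prefer servers from `zone`, then pad from the global list,
    sampling without replacement and weighted by capacity."""
    chosen = []
    for name in (zone, "global"):
        candidates = [s for s in SERVERS.get(name, []) if s not in chosen]
        weights = [w for _, w in candidates]
        while candidates and len(chosen) < want:
            srv = random.choices(candidates, weights=weights, k=1)[0]
            i = candidates.index(srv)
            candidates.pop(i)
            weights.pop(i)
            chosen.append(srv)
    return [addr for addr, _ in chosen]

print(pick("xx"))  # e.g. ['198.51.100.1', '198.51.100.2', '203.0.113.10', '203.0.113.12']
```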
I set up my own stratum 0 source for under $50 and use time.nist.gov, whose servers are all stratum 1. You never know what stratum you will get with us.pool.ntp.org.
Anyone running an enterprise should be running their own internal NTP server.
Not only does that make you a good citizen by not bombarding public NTP servers, it also ensures all clients use the exact same time, even when the WAN is down. It also prevents miscreants from profiling what services you may be running based on your clients' public NTP polls.
@ask I think that, along with contributing a server address and specifying its "speed divisor", server operators SHOULD be given the optional ability to provide roughly 10-15 IP ranges for which their servers would also (or exclusively) be listed. Not an AS number, since its networks may be spread geographically, but specific ranges the operator has found by traceroute to be close. Server operators are usually tech-savvy people, and some have multihomed (dual) ISPs, so I think this would have a big impact on how traffic is directed. The DNS probe you provide for embedding in a web page is actually already a solution, but what I am proposing would work well alongside it.
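As a sketch of how such operator-supplied ranges could be matched (again hypothetical; the pool has no such feature today, and the server names and prefixes below are made up), Python's standard ipaddress module already covers the prefix test:

```python
import ipaddress

# Hypothetical operator-provided prefixes: "clients from these ranges
# are close to my server, so prefer it for them."
OPERATOR_PREFERENCES = {
    "ntp1.example.net": ["203.0.113.0/24", "2001:db8:10::/48"],
    "ntp2.example.net": ["198.51.100.0/22"],
}

def preferred_servers(client_ip):
    """Return the servers whose declared ranges contain the client."""
    addr = ipaddress.ip_address(client_ip)
    matches = []
    for server, prefixes in OPERATOR_PREFERENCES.items():
        for prefix in prefixes:
            net = ipaddress.ip_network(prefix)
            # Version check avoids comparing IPv4 clients against IPv6 ranges.
            if addr.version == net.version and addr in net:
                matches.append(server)
                break
    return matches

print(preferred_servers("203.0.113.42"))  # -> ['ntp1.example.net']
print(preferred_servers("192.0.2.7"))     # -> []
```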