The ntpd “pool” option has been working well since ntp-dev 4.2.7p249 in January of 2012, over a decade ago. It didn’t hit ntp-stable until 4.2.8 in December of 2014. It has been in NTPsec since the project began in 2015.
This is a better option for both users and pool server operators than multiple “server #.pool.ntp.org” lines, for several important reasons. From a user’s perspective, pool associations in ntpd are preemptible, meaning ntpd will drop them automatically if they fail to contribute to the time solution, whether because their clock is off, their delay is too variable, or they have simply stopped serving NTP. Moreover, ntpd will replace those preempted (discarded) associations with other pool IP addresses, requerying DNS as needed. Clients therefore gravitate automatically toward servers that work and provide good service, which means better time for the user.
This is also a win for pool server operators: clients using “pool” rather than multiple “server” lines will stop sending traffic to a server’s IP address when it stops serving NTP, rather than continuing to bang on it for as long as the client’s ntpd is running, sometimes years.
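To make the contrast concrete, here is a minimal sketch of the two styles (the hostnames are the standard pool DNS names):

# Old style: four fixed associations. If one of these servers goes
# away, this ntpd keeps sending traffic to its address indefinitely.
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst

# Preferred: a single preemptible pool association. ntpd discards
# servers that stop contributing and requeries DNS for replacements.
pool pool.ntp.org iburst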
If you run a pool server and aren’t using “pool”, please give it a whirl and see for yourself how well it works. Below is a sample configuration for pool clients; it also works well for non-stratum-1 pool servers.
So how about updating the “Use the pool” directions on ntppool.org to encourage this better and kinder alternative to multiple “server” lines?
=== ntp.conf ===
driftfile /etc/ntp.drift
# Tight restrictions for the public, but loosen them for servers
# we use for time. Note that the absence of nopeer on "restrict source"
# is important; otherwise pool associations will not spin up.
# These restrictions do not allow the public to query via ntpq (noquery)
# or set up event traps used by network monitoring tools to keep tabs
# on remote ntpd instances (notrap). "limited" and "kod" refuse to
# respond to clients that query too often, by default more than once
# every 2 seconds in a burst or more than once per 8 seconds long term.
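# (With classic ntpd these defaults can be tuned via the discard
# directive; e.g. "discard minimum 2 average 3" restates the defaults.)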
# Adding kod sends occasional "kiss of death" responses to clients
# exceeding the rate limit, providing no useful time and requesting
# that the client back off its polling interval, which it will if it
# is running ntpd and its maxpoll allows.
restrict default nopeer noquery nomodify notrap limited kod
restrict source noquery nomodify notrap limited kod
# Allow status queries and everything else from localhost and local net.
# If there are hosts used as time sources in the local network, they
# will be subject to the "restrict source" restrictions above, so they
# will not be able to use ntpq with this host.
restrict 127.0.0.1
restrict ::1
restrict 192.168.0.0 mask 255.255.0.0
# Require at least two sources in reasonable agreement before adjusting
# the clock. The default minsane is 1 "for legacy purposes." Lower
# maxclock from the default 10 to a more reasonable number for the pool.
tos minsane 2 maxclock 5
pool pool.ntp.org iburst
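One way to confirm that the pool associations have spun up is ntpq’s peers display. The server names, delays, and offsets below are illustrative placeholders, not real output; the “.POOL.” placeholder line and the mobilized servers beneath it are what to look for:

$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 pool.ntp.org    .POOL.          16 p    -   64    0    0.000    0.000   0.001
*t1.example.net  192.0.2.10       2 u   35   64  377   11.234    0.412   0.310
+t2.example.net  203.0.113.5      2 u   29   64  377   18.905   -0.120   0.522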
Yes, but my lack of markup skills makes the example ntp.conf in that first one very ugly compared to this one. I did comment several times on the GitHub pull request. I just hope someone finally updates the “how to use the pool” instructions to point to the much better way for users and server operators.
That first one was flagged as spam by several folks, undoubtedly due to the unintentional screaming, so it was hidden. I reposted here and would be fine with hiding the other one if I knew how.
Hi, the NTP Pool is effectively a one-man band as far as making changes goes, so the chances of getting anything done, however sensible and appropriate, before the next ice age are as close to zero as makes no difference…!
A “bus factor” of 1 is a serious information security concern. No project in that state has ever passed any of the security audits I’ve done. It’s a concern on its own, but it also makes other critical security practices impossible (separation of privileges, encryption key management, ransomware remediation, disaster recovery, etc.).
Have none of the vendors ever raised that concern during calls/meetings?
From Wikipedia: “The bus factor is the minimum number of team members that have to suddenly disappear from a project before the project stalls due to lack of knowledgeable or competent personnel.”
I would say the NTP Pool has been in this state for years, though the scattered DNS servers not so much. @ask is it, and he needs to share the project.
I suspect the vendors who support the project … who are they? … don’t know or don’t care.
Happy to help improve things, but as I said in the other thread, I’ve tried to raise the issue with @ask in private and then in public many times, and I get no response. Zero, nada, nothing…
It’s very embarrassing to see the emails from vendors behind the scenes saying “Hey, my project is going live shortly, I’ve emailed sixteen times, can I get an answer please?!” while @ask is the only one with admin access to do whatever’s required. I decided the only things left for me to do were a) withdraw my volunteer support to try to help force a crisis, and b) point out now and again that the pool is a one-man band and what a HUGE risk that is…
What @ask has done is fantastic, but keeping it all to himself while having very little time to commit is a recipe for disaster.
Red flags should be waving, and sirens sounding!! Use NTP Pool at your own risk with your eyes open!