Number of configured servers

So, I have 6 good stratum 1 time servers, connected with a delay of 3-15 ms. Can I add prefer to one server, or do I not need to do this? What does “prefer” mean?

grep server /etc/ntp.conf | grep -v '#'
server -4 ntp3.*** iburst
server -4 ntp2.*** iburst

Should I add prefer for the “national time standard” server, or do I not need this? Like this
server -4 ntp3.*** iburst prefer
server -4 ntp2.*** iburst

?

Could it add more clock offset stability?

Hi, personally I’d leave ntpd to make its own mind up, because you can guarantee that once you set one as preferred, it, or the link to it, will go down! :wink:

“prefer”: http://doc.ntp.org/4.2.6/prefer.html#prefer

The latency of the internet is always going to be the biggest factor. Depending on which flavour of daemon you’re running, it may be worth looking at things like temperature compensation in chrony, …or investing in a local GPS receiver.
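For the chrony route, temperature compensation is set with the tempcomp directive. A minimal sketch, assuming a hwmon sensor path and placeholder calibration coefficients that would have to be measured for your own hardware:

# chrony.conf -- read the board temperature every 30 s and apply a quadratic
# frequency correction; the sensor path and coefficients below are examples only
tempcomp /sys/class/hwmon/hwmon0/temp1_input 30 26000 0.0 0.000183 0.0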

You really shouldn’t use the ‘prefer’ keyword on a production machine. I’ve only used it in testing, to force a local time source to be selected faster. You really should just let the ntp selection algorithm do its thing.

Do not use iburst; many servers see that as abuse.

Prefer marks the line you want to use above the others.
It means ntpd switches to another source a bit later than normal.

I use prefer also, as it tells your NTP what line you trust the most.
It doesn’t mean it will stick to it.

For example, I want the PPS from my GPS to be the most important source, so there prefer is normal.
If you do not have your own stratum 0 sources, then do not use prefer; let NTP work it out.
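To illustrate the GPS/PPS case, here is a sketch of how that often looks in ntp.conf, assuming gpsd feeding the shared-memory driver (type 28) together with the PPS/atom driver (type 22); the unit numbers and refids are examples only. Per the prefer documentation linked above, prefer goes on the source that numbers the seconds, so the PPS signal can be paired with it:

# GPS time-of-day via gpsd shared memory, marked prefer so the PPS driver
# has a prefer peer to pair with
server 127.127.28.0 minpoll 4 maxpoll 4 prefer
fudge  127.127.28.0 refid GPS
# kernel PPS via the atom driver, which disciplines the clock tick
server 127.127.22.0 minpoll 4 maxpoll 4
fudge  127.127.22.0 refid PPS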

Many of the stratum 1 servers you connect to will kick you off if you use iburst.

If an internet-facing NTP server is blocking iburst, then it might be misconfigured. Quoting from the manual:

iburst
When the server is unreachable, send a burst of packets instead of the usual one. This option is valid only with the server command and type s addresses. It is a recommended option with this command. Additional information about this option is on the Poll Program page.

and

burst
When the server is reachable, send a burst of packets instead of the usual one. This option is valid only with the server command and type s addresses. It is a recommended option when the maxpoll option is greater than 10 (1024 s). Additional information about this option is on the Poll Program page.

If ordinary, always-connected clients use burst, it puts unnecessary load on the server. But iburst should be okay.
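So a typical client configuration keeps iburst and leaves burst out; for example (the hostnames here are placeholders, not the original servers):

server -4 ntp2.example.net iburst   # extra packets only while the server is unreachable
server -4 ntp3.example.net iburst
# no "burst" keyword: that would send extra packets on every poll while reachable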

Why would you want to burst at all?
The only thing you do is put extra load on the server.
Regardless of which one you use, or even both.

Trust me, many Stratum1/2 servers do not want people to burst.
Or simply poll too often.

Try it yourself: poll a lot and (many) servers will stop responding.
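If over-polling is the worry, the poll interval can be pinned on the server line; a sketch with a placeholder hostname, keeping the interval at the usual defaults of 64 s to 1024 s:

server -4 ntp2.example.net iburst minpoll 6 maxpoll 10   # 2^6 = 64 s up to 2^10 = 1024 s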

You have to remember the age of NTPD… ‘burst’ was created back in the days of flaky dial-up connections, when packet loss and random latency were more serious issues. I would not use it, nor recommend it these days, unless you are on dial-up in the middle of nowhere or doing some sort of development testing in a closed environment.


True, but then why is the option still available in the latest version of ntpd?
Surely it should have been deprecated a while back.

LOL… Take a look at the NTPD source code, it is a horrendous mess. Nothing gets deprecated or removed. There’s support for non-existent technologies like Omega, GOES, LORAN, and others. OS and hardware support that hasn’t been used in decades, from companies long defunct… All there…

There are lots of knobs that a typical user shouldn’t (and usually doesn’t) touch with NTPD, but the options are in there for advanced users and special use cases… You have to give props to the NTPD documentation, it is very thorough.

Yeah, I know… I’m from the BBS age… that nobody remembers.
NTP was unheard of at the time, and timekeeping was a manual thing done with a wristwatch :slight_smile:

The funny thing is that they never fixed the RTC to make it tick right; even today it ticks wrong… LOL

At the time, file sharing went via a single system that could afford a dial-up internet connection.

Latency? You were happy if the connection didn’t break at all.

FWIW, https://www.ntppool.org/en/use.html makes no mention of iburst, the example configuration it provides does not use it, and it advises “don’t play tricks with burst”.

I don’t think iburst is terrible, but it’s not officially confirmed to be polite to use it with the Pool.

When it comes to rate limiting, the big issue is, IMO, when a large number of clients are behind NAT and potentially using the same Pool servers. Many clients using iburst at once can look like abuse.

Meanwhile, the standard rules for stratum one or two servers listed on the NTP wiki say that iburst is cool and burst is verboten. (But servers on the lists are free to use other rules.)

I noticed recently that iburst and burst are the defaults on Cisco IOS: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/bsm/command/bsm-cr-book/bsm-cr-n1.html#wp3294676008

Can this be verified? iburst is okay… burst is bad… I’m really hoping that’s a documentation error…