Thinking of joining, is load balancing okay?

Hi Folks,

I’m shifting our NTP service from running on core switches/routers to 4 dedicated hosts running chrony. I figured it might be worthwhile to give back to the community by joining the NTP Pool.

However, I’m thinking of setting up a single virtual IP that load balances across the four stratum 2 hosts. Would it cause issues if I submitted the load-balanced VIP to the pool?

The load balancing algorithm would be something like ‘least packets’, and I would probably leave persistence off, meaning each request could be answered by a different host. Alternatively, I could ‘sticky’ it to a single host unless that host goes down, but that doesn’t help with load distribution.

Thanks in advance.


It’s possible to do it either way, but NTP is really designed around separate per-server IPs.

No two servers have exactly the same time. If you randomly load balance between four servers, clients will think you have one server with a clock that randomly drifts back and forth by however many microseconds separate the four clocks.

Hashed/sticky load balancing would be better, though not perfect, and ought to result in virtually equal load distribution.

If you have separate IPs, load will average out to basically equal, but they’ll have spikes at different times. If each server can handle its spikes, that’s not an issue, though.
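To make the jitter argument above concrete, here’s a toy Python sketch. The per-server offsets are invented illustrative values, not measurements; the point is only that random balancing makes four stable clocks look to a client like one unstable clock:

```python
import random

# Hypothetical per-server clock offsets from true time, in seconds
# (illustrative values only, not real measurements).
server_offsets = [12e-6, -8e-6, 25e-6, -15e-6]

random.seed(1)

def spread(samples):
    """Peak-to-peak range of the offsets a client observes."""
    return max(samples) - min(samples)

# Sticky balancing: every query lands on the same backend,
# so the client sees one stable offset.
sticky = [server_offsets[0] for _ in range(100)]

# Random ('least packets'-style) balancing: successive queries can hit
# different backends, so the single apparent server seems to jump back
# and forth between the four clocks.
balanced = [random.choice(server_offsets) for _ in range(100)]

print(f"sticky spread:   {spread(sticky) * 1e6:.1f} microseconds")
print(f"balanced spread: {spread(balanced) * 1e6:.1f} microseconds")
```

With sticky balancing the observed spread collapses to zero; with random balancing it grows to the full distance between the fastest and slowest backend clocks.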


I was wondering if the slight changes in time would cause a problem. I’ll test a few different LB configurations and see what works best.


Agreeing with @mnordhoff, I’ll also add this: There’s no real reason to do virtual IP load balancing like that for NTP. Just set the pool speeds for each address to get your desired traffic load. I’m guessing that your new dedicated chrony boxes can handle more traffic than you’ll want to spend the network bandwidth on.

Unless you don’t have routable IPs available, I’d also agree that the LB idea is not optimal, for more reasons than I would care to type. It doesn’t gain you anything load-wise, and it can create complications for polling clients depending on how close and consistent the time is across the group.

I would just add each host individually. That way the pool system monitors each one, and if one goes out of whack it will be removed from the available pool until its score goes back up.


Thanks for the input guys.

I’ve split it into two vips, ‘ntp1’ goes to inside host 1 unless it’s down, then it uses host 2. ‘ntp2’ is configured the same but uses hosts 3 and 4.

This way regular weekly patching of the hosts will only cause the time to jump a little when the primary host behind each vip is restarted.

I’ll test for a while to ensure it stays consistent.

The other alternative is to just stagger the updates and have all four systems in the pool. Unless the downtime for updates is more than a minute or two, it shouldn’t affect your score most of the time (and when it does, it shouldn’t have much effect), and it functionally won’t affect service much (clients will just try again later).
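As a sketch of that staggering idea, hypothetical crontab entries (hostnames, times, and the `patch-and-reboot` script are all invented for illustration) that keep the four maintenance windows apart so only one pool server is down at a time:

```
# /etc/cron.d/weekly-patching -- hypothetical schedule; each host
# carries only its own line, offset 30 minutes from its neighbours.
# host1:
0 3 * * 1   root  /usr/local/sbin/patch-and-reboot
# host2:
30 3 * * 1  root  /usr/local/sbin/patch-and-reboot
# host3:
0 4 * * 1   root  /usr/local/sbin/patch-and-reboot
# host4:
30 4 * * 1  root  /usr/local/sbin/patch-and-reboot
```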


I thought about your comment last night and realised that there’s no reason I can’t assign IPs to the backup ‘vservers’ and have all four responding while still having the first two fault tolerant.

Putting NTP behind load balancers has the same issues as running NTP over anycast. Please see the existing discussions of why that is not preferred.


There’s absolutely no need to be concerned about patching and such for pool servers. NTP clients have built-in controls for dealing with unreliable servers, and a short patching window is totally expected for pool servers. There’s no need for building fault tolerance into your service precisely because of this client-side behaviour. Well-configured clients will automatically try a new server if the one they’ve been using has stopped responding.
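To illustrate that client-side behaviour, here’s a minimal chrony client configuration sketch. The `pool`, `iburst`, `maxsources`, `driftfile`, and `makestep` directives are standard chrony directives; the point is that a `pool` directive lets the client itself replace any server that stops responding, so your patching windows don’t need server-side fault tolerance:

```
# /etc/chrony.conf (client side) -- minimal sketch
# The pool directive resolves several servers from one name and
# automatically replaces any source that becomes unreachable.
pool pool.ntp.org iburst maxsources 4
driftfile /var/lib/chrony/drift
makestep 1.0 3
```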


Dear friend!
You are overcomplicating the process.
You can serve through a router, and even without a static IP.
The main thing is to keep the connection up; this is my experience, and I don’t impose it on anyone.
Good luck to you :slight_smile:

For the record, the NTP Best Current Practice is now out of draft, and can be found at: