I have access to a few servers in Shanghai, and this is what it looks like (hostmasks changed because this isn’t a reflection on the various NTP servers):
So there are some good sources in there, but reachability to all sources is a serious problem.
To me, it makes sense to have a monitoring server in China dedicated to just the China pool. Scores outside of China would not apply to the cn zone, and the China monitoring server scores would not apply to other zones.
For the others with inconsistent reachability, I wonder if it’s the Great Firewall dropping the UDP packets.
It was a v4 address. I regret not saving it anywhere.
But here’s another IP that’s behaving the same. I can reach it no problem from multiple places outside China.
From Shanghai:
$ ntpdate -qu 203.135.184.123
server 203.135.184.123, stratum 0, offset 0.000000, delay 0.00000
6 Jan 17:42:32 ntpdate[8336]: no server suitable for synchronization found
$ sleep 90 ; ntpdate -qu 203.135.184.123
server 203.135.184.123, stratum 0, offset 0.000000, delay 0.00000
6 Jan 17:44:08 ntpdate[8339]: no server suitable for synchronization found
$ sleep 300 ; ntpdate -qu 203.135.184.123
server 203.135.184.123, stratum 0, offset 0.000000, delay 0.00000
6 Jan 17:49:57 ntpdate[8351]: no server suitable for synchronization found
From Hong Kong:
$ ntpdate -qu 203.135.184.123
server 203.135.184.123, stratum 1, offset 0.004057, delay 0.18932
6 Jan 17:50:50 ntpdate[30102]: adjust time server 203.135.184.123 offset 0.004057 sec
From Singapore:
$ ntpdate -qu 203.135.184.123
server 203.135.184.123, stratum 1, offset 0.013801, delay 0.23509
6 Jan 17:50:41 ntpdate[30840]: adjust time server 203.135.184.123 offset 0.013801 sec
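The manual probes above can be wrapped in a small retry loop so you don’t have to keep re-running `sleep`/`ntpdate` by hand. A sketch, assuming the same `ntpdate -qu` interface as above (“stratum 0” in the reply line indicates no usable response); the host and intervals are just the values from these posts:

```shell
# probe_ntp HOST COUNT DELAY: query HOST COUNT times, DELAY seconds apart,
# and report whether each query got a usable reply ("stratum 0" in the
# ntpdate -q output means no response came back).
probe_ntp() {
    host=$1; count=$2; delay=$3
    i=0
    while [ "$i" -lt "$count" ]; do
        out=$(ntpdate -qu "$host" 2>&1)
        case "$out" in
            *"stratum 0"*) echo "$host: no response" ;;
            *)             echo "$host: reachable" ;;
        esac
        i=$((i + 1))
        if [ "$i" -lt "$count" ]; then
            sleep "$delay"
        fi
    done
}

# Usage: probe_ntp 203.135.184.123 3 90
```

Running this from vantage points inside and outside China at the same time makes the asymmetry easy to demonstrate.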
The number listed on the continental page includes both IPv4 and IPv6 servers. In countries with near-zero IPv6 connectivity, the v6 servers are mostly useless, but they are still there…
I am experimenting in the CN zone with Net speed alternating between 3 Mbit and 10 Mbit, but I’m seeing spikes of up to 150 Kpps, and my small (A0) Azure VM starts losing packets (even to my Zabbix server, not only NTP).
Are you doing some special tuning of ntp and/or linux kernel for such high loads?
Or should it work out of the box? Right now I’m not sure whether it is insufficient tuning on my side or just a consequence of being in the Azure cloud (maybe they throttle the bandwidth somehow).
Here is a graph from Zabbix: https://www.dropbox.com/s/fakkjqlydedb458/azure-ntp-spikes.png?dl=0
BTW, I see many clients (using “mrulist”) with an average interval between packets of under 5 seconds.
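Those fast clients can be pulled out of the MRU list with a quick filter. A rough sketch, assuming `avgint` is the second column in your ntpq version’s `mrulist` output (the column layout varies between versions, so check yours first):

```shell
# List clients whose average inter-packet interval is under 5 seconds.
# Assumed column layout: lstint avgint rstr r4 mv count rport remote
ntpq -c mrulist 2>/dev/null |
    awk 'NR > 2 && $2 ~ /^[0-9.]+$/ && $2 < 5 { print $NF, "avgint=" $2 }'
```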
Currently I get around 5-10 Mbit of NTP traffic. I am impressed with how well my little Raspberry Pi NTP server has coped with it. The CPU is utilized between 30 and 50%.
However, I had to give my software-based firewall (OPNsense) a lot more memory to cope with the many extra states, and add 2 more processors to it (it is a VM running on a VMware ESXi server; I am using NAT to the internal network where the NTP server is). The OPNsense community has helped tune it a little and even offered to add some tuning options so I can drop states faster, so hopefully I’ll be able to get more performance out of it when I get my new LeoNTP server tomorrow.
I don’t use connection tracking. I have just increased net.core.somaxconn to 1000 and net.core.netdev_max_backlog to 5000, and it seems to have helped a bit, so today I increased them again to 10000/50000 and will see. I didn’t find any buffer/network tuning parameters in NTP itself…
I already opened a support ticket with Azure, but they will only answer on weekdays. I have a public IP on Azure, so basically that is 1:1 NAT, so there shouldn’t be any connection-tracking issue either.
Yes. Add them and email server-owner-help (the full address should be on the manage site). Reference this post in the email in case John or Arnold takes the ticket.
That’s not unusual to see; Microsoft might even have hard packets-per-second limits set on their VMs. That would affect all traffic even before it hits your VM.
Increasing net.core.rmem_max / net.ipv4.udp_mem can help with that.
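For reference, the kernel parameters mentioned in this thread could be collected in a sysctl fragment along these lines. The values below are illustrative placeholders, not recommendations; size them to your traffic and memory:

```
# /etc/sysctl.d/90-ntp-tuning.conf (illustrative values, not recommendations)

# Maximum UDP/socket receive buffer, so bursts are queued rather than dropped.
net.core.rmem_max = 8388608

# How many packets the kernel will backlog per NIC before dropping.
net.core.netdev_max_backlog = 10000

# min / pressure / max memory (in pages) for all UDP sockets combined.
net.ipv4.udp_mem = 65536 131072 262144
```

Apply with `sysctl --system` (or `sysctl -p` on the file) and watch the drop counters to see whether it helps.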
There don’t seem to be many resources for tuning NTP for high packets per second. That would be useful to create. Having a multithreaded server would be a huge help too.
The capacity available to the China pool has gone from practically nothing to pretty reasonable over the last week and a half. Hopefully we’re getting to where it’s possible for an operator in China to add a server and manage the load by configuring the net speed.