Adding servers to the China zone

I have access to a few servers in Shanghai, and this is what it looks like (hostmasks changed because this isn’t a reflection on the various NTP servers):

Shanghai server 1

MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^- 0.cn.pool.ntp.org             2   7   175    20    -14ms[  -14ms] +/-  267ms
^- 1.cn.pool.ntp.org             2   7    31   216    -70ms[  -70ms] +/-  205ms
^- 2.cn.pool.ntp.org             2   6   377    84  +5121us[+5121us] +/-  154ms
^- 3.cn.pool.ntp.org             1   6   175    60  +7624us[+7624us] +/-  101ms
(non pool.ntp.org servers omitted)

Shanghai server 2

MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^- 0.cn.pool.ntp.org             2   6   377    48    -52ms[  -52ms] +/-  208ms
^+ 1.cn.pool.ntp.org             2   6   377    45  -1601us[-1690us] +/-   22ms
^? 2.cn.pool.ntp.org             0   8     0   10y     +0ns[   +0ns] +/-    0ns
^* 3.cn.pool.ntp.org             2   6   377    31   +662us[ +573us] +/-   10ms
(non pool.ntp.org servers omitted)

So there are some good sources in there, but reachability to all sources is a serious problem.

To me, it makes sense to have a monitoring server in China dedicated to just the China pool. Scores outside of China would not apply to the cn zone, and the China monitoring server scores would not apply to other zones.


Thank you for looking this up and reporting the results.

Was that an IPv6 IP?

For the others with inconsistent reachability, I wonder if it’s the Great Firewall dropping the UDP packets.

[quote]Was that an IPv6 IP?

For the others with inconsistent reachability, I wonder if it’s the Great Firewall dropping the UDP packets.[/quote]

It was a v4 address. I regret not saving it anywhere.

But here’s another IP that’s behaving the same. I can reach it no problem from multiple places outside China.

From Shanghai:

$ ntpdate -qu 203.135.184.123 
server 203.135.184.123, stratum 0, offset 0.000000, delay 0.00000
 6 Jan 17:42:32 ntpdate[8336]: no server suitable for synchronization found
$ sleep 90 ; ntpdate -qu 203.135.184.123
server 203.135.184.123, stratum 0, offset 0.000000, delay 0.00000
 6 Jan 17:44:08 ntpdate[8339]: no server suitable for synchronization found
$ sleep 300 ; ntpdate -qu 203.135.184.123
server 203.135.184.123, stratum 0, offset 0.000000, delay 0.00000
 6 Jan 17:49:57 ntpdate[8351]: no server suitable for synchronization found

From Hong Kong:

$ ntpdate -qu 203.135.184.123
server 203.135.184.123, stratum 1, offset 0.004057, delay 0.18932
 6 Jan 17:50:50 ntpdate[30102]: adjust time server 203.135.184.123 offset 0.004057 sec

From Singapore:

$ ntpdate -qu 203.135.184.123
server 203.135.184.123, stratum 1, offset 0.013801, delay 0.23509
 6 Jan 17:50:41 ntpdate[30840]: adjust time server 203.135.184.123 offset 0.013801 sec

I should mention that there is no single “great firewall”; it’s a collection of filtering points at network edges all over China.

It can be a capacity problem as well as deliberate dropping.

It could also be a network closer to the NTP servers that’s dropping traffic.

For various reasons, networks around the world often consider traffic from China “bad” and indiscriminately drop packets from Chinese sources.

So it could be a bit from column A and a bit from column B :slight_smile:

Here is a list of IPv4 NTP servers I have spotted in cn zone this week.

http://leobodnar.com/LeoNTP/ntp_servers_cn.html


If it is still relevant, feel free to add http://www.pool.ntp.org/user/tmberg into the mix.


Very nice, thank you!

BTW: the “Asia” zone server list quotes the number of servers in China as 48: China — cn.pool.ntp.org (48). However, according to the “China” zone server list, there are presently 33 active servers in the zone. So there is a discrepancy.

Perhaps I should actually ask @ask?


The number listed on the continental page includes both IPv4 and IPv6 servers. In countries with near-zero IPv6 connectivity, the v6 servers are mostly useless, but they are still counted… :imp:


I am experimenting in the CN zone with the net speed alternating between 3 Mbit and 10 Mbit, but I’m seeing spikes of up to 150 kpps, and my small (A0) Azure VM starts losing packets (even to my Zabbix server, not only NTP).

Are you doing any special tuning of ntpd and/or the Linux kernel for such high loads?

Or should it work out of the box? Right now I’m not sure whether it’s insufficient tuning on my side or just a consequence of being in the Azure cloud (maybe they are throttling the bandwidth somehow).
Here is a graph from Zabbix: https://www.dropbox.com/s/fakkjqlydedb458/azure-ntp-spikes.png?dl=0

BTW, I see many clients (via “mrulist”) with an average interval between packets under 5 seconds.
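For anyone wanting to reproduce this check: the MRU list can be dumped non-interactively, and classic ntpd (4.2.8+) can rate-limit over-chatty clients. A sketch, assuming ntpd with `ntpq` installed (the `restrict`/`discard` lines are illustrative, not a recommendation for any particular config):

```shell
# Dump ntpd's most-recently-used client list; the "avgint" column is the
# average interval in seconds between packets from each client.
ntpq -c mrulist localhost

# In ntp.conf, "limited kod" makes ntpd rate-limit and send Kiss-o'-Death
# packets to clients that query faster than the discard thresholds:
#   restrict default kod limited nomodify noquery
#   discard average 3 minimum 1
```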


I’ve played a bit with the connection tracking limit on my boxes.
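For reference, a common approach on Linux is to raise the conntrack table size and shorten the UDP timeout, or to exempt NTP from tracking entirely. A sketch assuming iptables with nf_conntrack loaded (sysctl names are current Linux ones; defaults and limits vary by kernel and memory):

```shell
# Raise the connection-tracking table size and expire UDP entries faster.
sysctl -w net.netfilter.nf_conntrack_max=1048576
sysctl -w net.netfilter.nf_conntrack_udp_timeout=10

# Better yet, skip conntrack for NTP altogether so port-123 floods
# never consume state table entries:
iptables -t raw -A PREROUTING -p udp --dport 123 -j NOTRACK
iptables -t raw -A OUTPUT -p udp --sport 123 -j NOTRACK
```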

Maybe you could ask Azure support whether they are throttling the bandwidth.


Currently I get around 5-10 Mbit of NTP traffic. I am impressed with how well my little Raspberry Pi NTP server has coped with it. CPU utilization is between 30 and 50%.

However, I had to give my software firewall (OPNsense) a lot more memory to cope with the many extra states, and add two more processors to it (it is a VM running on a VMware ESXi server; I am using NAT to the internal network where the NTP server sits). The OPNsense community has helped tune it a little and even offered to add some tuning options so I can drop states faster, so hopefully I’ll be able to get more performance out of it when I get my new LeoNTP server tomorrow.
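For others hitting the same wall: OPNsense sits on FreeBSD pf, where the relevant knobs live in pf.conf. A hedged sketch of the direction the tuning takes (the values are illustrative guesses, not tested recommendations):

```
# pf.conf fragment: allow more states and expire UDP ones faster
set limit states 1000000
set timeout udp.first 20
set timeout udp.single 10
set timeout udp.multiple 30
```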


I don’t use connection tracking; I have just increased net.core.somaxconn to 1000 and net.core.netdev_max_backlog to 5000, and it seems to have helped a bit, so today I increased them again to 10000/50000 and will see. I didn’t find any buffer/network tuning parameters in NTP itself…

I have already opened a support ticket with Azure, but they will only answer on weekdays. I have a public IP on Azure, which is basically 1:1 NAT, so there shouldn’t be any connection-tracking issue either.


Does Azure have DDoS protection?

I had a provider where I triggered it regularly.


This one can be added to the CN pool.

http://www.pool.ntp.org/scores/212.47.249.141


Is it possible to add servers to the China zone only? e.g. ones that aren’t currently in any zone?

If so, I’ll set up a couple of VMs just for CN.

Yes. Add them and email server-owner-help (the full address should be on the manage site). Reference this post in the email in case John or Arnold takes the ticket.


Please add all my 8 ntp servers to the cn zone:
http://www.pool.ntp.org/user/iocc

They are all 1000 Mbit.


That’s not unusual to see; Microsoft might even have hard packets-per-second limits set on their VMs. This would affect all traffic before it even hits your VM.

For instance: “EC2 classic instance[s], which [are] limited to 200 kpps” - Linux Performance in Cloud: 2 Million Packets Per Second on a Public Cloud Instance

somaxconn is for TCP connections, so it won’t affect UDP. netdev_max_backlog should help though.

The output of “netstat -su” can help pinpoint problems as well:

5932 packet receive errors
RcvbufErrors: 19

or

0 packet receive errors
0 receive buffer errors
0 send buffer errors

net.core.rmem_max / net.ipv4.udp_mem can help with that.
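Those counters can be watched over time to confirm whether buffer tuning is working; a minimal sketch for extracting them, assuming the Linux `netstat -su` output formats shown above (the helper name is my own):

```shell
# Pull the UDP receive-buffer-error count out of `netstat -su`-style
# output on stdin; some kernels print "receive buffer errors" instead
# of "RcvbufErrors:", hence the two patterns.
parse_rcvbuf_errors() {
  awk '/RcvbufErrors|receive buffer errors/ { print $1 == "RcvbufErrors:" ? $2 : $1 }'
}

# Typical use against the live counters:
#   netstat -su | parse_rcvbuf_errors
#
# If the number climbs while ntpd is running, raise the socket receive
# buffer cap, e.g.:
#   sysctl -w net.core.rmem_max=8388608
```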

There don’t seem to be many resources for tuning NTP for high packet rates; that would be useful to create. Having a multithreaded server would be a huge help, too.


Done, thank you.

The capacity available to the China pool has gone from practically nothing to pretty reasonable over the last week and a half. Hopefully we’re getting to where it’s possible for an operator in China to add a server and manage the load by configuring the net speed.

select date,count_active as active,
count_registered as registered,
netspeed_active as netspeed
from zone_server_counts
where zone_id = 60 and ip_version = 'v4'
and date > '2016-12-01';

+------------+--------+------------+----------+
| date       | active | registered | netspeed |
+------------+--------+------------+----------+
| 2016-12-02 | 1      | 12         | 10000    |
| 2016-12-03 | 0      | 13         | 0        |
| 2016-12-04 | 2      | 12         | 11000    |
| 2016-12-05 | 0      | 13         | 0        |
| 2016-12-06 | 0      | 12         | 0        |
| 2016-12-07 | 3      | 12         | 200384   |
| 2016-12-08 | 2      | 12         | 200000   |
| 2016-12-09 | 2      | 12         | 110000   |
| 2016-12-10 | 1      | 12         | 1000     |
| 2016-12-11 | 0      | 12         | 0        |
| 2016-12-12 | 0      | 12         | 0        |
| 2016-12-13 | 1      | 12         | 10000    |
| 2016-12-14 | 0      | 12         | 0        |
| 2016-12-15 | 1      | 12         | 384      |
| 2016-12-16 | 1      | 12         | 10000    |
| 2016-12-17 | 0      | 12         | 0        |
| 2016-12-18 | 1      | 12         | 10000    |
| 2016-12-19 | 0      | 13         | 0        |
| 2016-12-20 | 1      | 14         | 100000   |
| 2016-12-21 | 1      | 14         | 100000   |
| 2016-12-22 | 1      | 14         | 1000     |
| 2016-12-23 | 4      | 14         | 300384   |
| 2016-12-24 | 2      | 14         | 100384   |
| 2016-12-25 | 2      | 14         | 110000   |
| 2016-12-26 | 1      | 14         | 100000   |
| 2016-12-27 | 2      | 14         | 110000   |
| 2016-12-28 | 3      | 14         | 120000   |
| 2016-12-29 | 2      | 14         | 100384   |
| 2016-12-30 | 3      | 14         | 120000   |
| 2016-12-31 | 2      | 14         | 101000   |
| 2017-01-01 | 5      | 14         | 210896   |
| 2017-01-02 | 2      | 15         | 1100000  |
| 2017-01-03 | 2      | 16         | 150000   |
| 2017-01-04 | 6      | 17         | 1160896  |
| 2017-01-05 | 32     | 42         | 12331384 |
| 2017-01-06 | 29     | 42         | 9743896  |
| 2017-01-07 | 33     | 43         | 10615396 |
| 2017-01-08 | 31     | 44         | 8385896  |
| 2017-01-09 | 33     | 44         | 10361896 |
| 2017-01-10 | 32     | 44         | 10351896 |
| 2017-01-11 | 28     | 44         | 9321408  |
| 2017-01-12 | 31     | 46         | 10132408 |
| 2017-01-13 | 34     | 47         | 9668408  |
| 2017-01-14 | 37     | 47         | 10765408 |
+------------+--------+------------+----------+

After adding the iocc servers the last numbers are:

2017-01-14	44	55	19745408

(So the existing servers should see a big decrease in the load and maybe be able to tweak their net speeds up a bit … :slight_smile:)
