Adding servers to the China zone

I can confirm that it's a firewall problem on our end, and we are going to try to find a good solution for it next week.

This MUST be done at the global level. Merely shifting .cn traffic to other Asian countries will cause those already unhealthy zones to fail altogether.

Please add our server to the CN pool:

https://www.pool.ntp.org/scores/185.255.55.20

I think faelix's proposal would be an easy stopgap to implement. Ntpd can deal with inaccurate NTP servers; it's certainly much worse to provide only 3-4 overloaded servers than 50 servers that might not be totally accurate.

An hour ago the zone had 0 active servers; now it has 2. This is the worst possible situation. Simply serving all the servers in the zone would be better than what we're doing now, until we can implement a long-term solution.


I started monitoring servers in China a few minutes ago. I'll upload the results here around this time tomorrow.

Didn't you just explain it yourself?

"Monitoring packets do get lost because they are in contention with other 400mln requests per hour being taken care of."

The purpose of the monitoring server is not to draw pretty graphs. It's just a side effect. The real purpose of the monitoring server is to make sure the NTP servers in the pool are healthy. If they seem to be dropping packets, they are clearly not healthy and should be marked as such. If you disagree, I would like to hear your reasoning for why you think it would be OK for NTP servers to drop requests.
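To make that concrete: a health check is tiny. Something like this sketch is all the monitor fundamentally has to do per probe (my own illustration, not the pool's actual monitoring code; the IP is just the server linked above):

```python
#!/usr/bin/env python3
# Minimal NTP health probe: send one client-mode request and treat
# a timeout as a failed check. Standard library only.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 and 1970 epochs

def probe(server, timeout=2.0):
    """Return the server's transmit time (Unix epoch), or None on timeout."""
    packet = b'\x1b' + 47 * b'\x00'   # LI=0, version=3, mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        try:
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(512)
        except socket.timeout:
            return None
    secs, frac = struct.unpack('!II', data[40:48])  # transmit timestamp
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

if __name__ == '__main__':
    t = probe('185.255.55.20')
    if t is None:
        print('no response - this is what counts against the score')
    else:
        print('offset vs local clock: %+.3f s' % (t - time.time()))
```

If the reply doesn't come back, the monitor can't tell a lossy link from a dead server - and from a client's point of view there is no difference.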

You seem to be maxing out your 100 Mbit connection's capacity. I would try to reduce the number of incoming packets to a level that your connection can handle without dropping packets. This may mean that you would be serving roughly 350 million requests per hour, but with the difference that you would be in the pool constantly instead of dropping out every now and then. With this arrangement you would actually get more requests served per day.

I don't know your netspeed setting in the pool, but I would recommend the lowest setting (384 kbit). Sadly, the situation in the pool is that even with that setting you can still get request rates that exceed 100 Mbit/s. In any case, the netspeed setting would be the first one to check. Actually, seeing that you serve the @ zone, it looks like your netspeed setting is at least 768 kbit/s.
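For reference, the arithmetic behind those numbers: roughly 400 million requests per hour is exactly where a 100 Mbit/s link saturates. (The framing overhead here is my assumption of minimal untagged Ethernet frames.)

```python
# How many NTP requests/s fit on a 100 Mbit/s link?
NTP_PAYLOAD = 48             # bytes in a standard client packet
OVERHEAD = 8 + 20 + 38       # UDP + IPv4 headers + Ethernet
                             # (14 hdr + 4 FCS + 20 preamble/gap)
wire_bits = (NTP_PAYLOAD + OVERHEAD) * 8       # 912 bits per request
max_rps = 100e6 / wire_bits                    # ~110,000 requests/s
print(f'{max_rps:,.0f} req/s = {max_rps * 3600 / 1e6:,.0f} mln/hour')
```

And responses are the same size, so the outbound direction saturates at the same point; anything beyond that has to be dropped somewhere.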

If setting that to 384 kbit/s is not sufficient, you could ask the pool admins to move that server to serve exclusively .cn traffic, i.e. remove "@", "uk" and "europe". That would also reduce the request volume a bit, so that you might be better able to stay under 100 Mbit/s.

Edit: This same response also applies to @Hedberg.

1 Like

I have to point out that UDP is not a guaranteed delivery service. UDP packets can get dropped, and will get dropped at any bandwidth setting, due to collisions. This is a stochastic and unavoidable process.
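To put rough numbers on that (the loss rates and probe counts below are made-up illustrative values, not measurements):

```python
# Toy calculation: chance that at least one monitor probe times out,
# assuming independent per-packet loss. A probe only succeeds if both
# the request and the reply survive.
for loss in (0.01, 0.05, 0.10):            # assumed per-packet loss
    probe_ok = (1 - loss) ** 2             # request AND reply arrive
    for probes in (5, 20):                 # probes per scoring window
        p_timeout = 1 - probe_ok ** probes
        print(f'loss {loss:.0%}, {probes} probes: '
              f'P(>=1 timeout) = {p_timeout:.0%}')
```

Even at 5% per-packet loss, about 87% of 20-probe windows contain at least one timeout.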

The Chinese zone now has three or four servers, all chock full, serving users at wire speed. Throttling one or two of them down to the level where the monitoring station can reliably get through would require reducing their traffic to probably 20-30% of what it is now.
Please explain how forcibly reducing the load on two out of four servers will help the situation.

This is unbelievably backwards. To keep one single monitoring station happy, I am supposed to reject 200-300mln requests per hour just to make sure the line is not busy when LA is calling.

I am happy to run servers at full 100 Mbps. I am only asking you to stop dropping me out. You are suggesting that I instead reduce server traffic and everything will be fine. But where will the rejected users go?

Also, it does not matter what netspeed settings the servers have if there are only 4 of them in the zone. All four of them will be dished out in the same DNS query.
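Anyone can verify that with a one-line lookup (a sketch; cn.pool.ntp.org is the zone's standard name):

```python
# A single DNS query returns up to four A records. With only four
# servers in the zone, every client gets all of them at once, so
# the netspeed weighting has nothing left to balance.
import socket
_, _, addrs = socket.gethostbyname_ex('cn.pool.ntp.org')
print(addrs)
```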

Leo

2 Likes

@avij
The kbps setting doesn't work very well, and not at all when there are only a couple of hosts left in a zone.

Maybe it's not useful…?
The NTP requests from China almost kill my VDSL router.


The requests are not dropped by the server, but by the router.

I don't know why my server gets so many NTP requests from China…
even though my server is not in the china zone:
https://www.ntppool.org/scores/106.104.162.193

It is in the asia zone and when .cn crashes all traffic goes to asia.

Are you sure about that, @Hedberg?
I have a server in Asia (Singapore), and I don't see any more traffic than usual, with a bandwidth setting of 500 Mbit.
https://www.ntppool.org/scores/209.58.172.142

There has been a significant increase in the number of .cn servers in the last two hours.
http://www.leobodnar.com/balloons/files/ntppool/cn_pool_rotation.html
Let's see how long .cn will last this time.

Note that none of the servers manages to stay above the 10-point mark for long.
http://www.leobodnar.com/balloons/files/ntppool/ntp_servers_cn.html

Leo

P.S. These pages are not dynamic; I copied them over from my local host.

Leo, what are the funky bars on the pool rotation page? Are they supposed to be country flags or something?

I believe that means the server was returned by some of the cn pool domains (with colors below).

Looks like 4 servers (including mine) were serving the whole zone in the beginning, and were returned by all subdomains. The load was about 130 Mbps, so probably all the 100 Mbps servers were kicked over and over again. The load on my server is only ~30 Mbps now; hopefully that's the maximum, since I have set my server's bandwidth to 1000M.
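For what it's worth, a rotation view like Leo's can be reproduced with a trivial poller (my own sketch, not Leo's actual script; the subdomain names follow the usual 0-3 pattern):

```python
# Log, once a minute, which server IPs each cn subdomain hands out.
import socket
import time

SUBDOMAINS = ['%d.cn.pool.ntp.org' % i for i in range(4)]

while True:
    stamp = time.strftime('%Y-%m-%d %H:%M:%S')
    for name in SUBDOMAINS:
        try:
            addrs = sorted(socket.gethostbyname_ex(name)[2])
        except socket.gaierror:
            addrs = []                  # zone empty or lookup failed
        print(stamp, name, addrs, flush=True)
    time.sleep(60)
```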

I just purchased a year in Hong Kong at hostus.us for $25 that I will dedicate to this purpose. It is pending at the moment, I imagine to be set up or something, but I will let you know, if they don't gank me.

Noah
https://logiplex.net
tick.nj.logiplex.net

This right here. Half a TiByte of disk on 1 gig, and 256 MiByte of memory. It's not much, but it should be able to handle a few thousand requests a second.

Got some fresh data directly from China; raw data here.

The servers listed here were all (at some point) part of the cn pool; I ran a simple script to monitor them from within China. @gfk, you may want to look at this.
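The script itself is nothing special; it boils down to roughly this (simplified, reusing the probe() sketch posted earlier in this thread; the server list is a placeholder for whatever the cn zone contained at the time):

```python
# Check each server several times and report the timeout rate.
SERVERS = ['185.255.55.20']   # placeholder - fill in the cn zone IPs
TRIES = 10

for server in SERVERS:
    lost = sum(probe(server) is None for _ in range(TRIES))
    print(f'{server}: {TRIES - lost}/{TRIES} answered, '
          f'{lost / TRIES:.0%} lost')
```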

1 Like

@LeoBodnar: Please, for the sake of everyone, stop demanding. The pool is run by volunteers, including the monitoring system code and hosting. If you are right, explain why in a polite, respectful, and constructive manner, and people will listen.

At the risk of repeating @avij's post but not doing a good job of it, let's talk about your figures: your graph above shows your server topping out at 110K rps. All clients which query your server when you're at maximum rps will get dropped - not just those which get dropped due to occasional link congestion, as is the case with all UDP. If the monitoring station is one of those, it will get dropped. If it's other clients, they will get dropped. So from the perspective of those clients, your NTP server isn't responding. And that's what we see in the CSV record for your server - occasional I/O timeouts.

What you seem to be arguing for is a special exception for your server, that it should be just left in the pool all the time, regardless of its ability to serve clients, because those clients will try again later, and some of them will likely get through (because with a random-ish distribution of requests, different clients will get dropped each time).

This might seem reasonable to you, but can you see how it might not be ideal for those clients who were hoping to get their NTP requests serviced? If there are 3 hosts which are not dropping packets and yours is, it's much more reasonable to simply drop yours out of the zone when it reaches saturation point, and put it back in when traffic drops a bit. Alternatively, you could tune your bandwidth setting down a bit so that your server is just below the threshold of dropping packets. (I recognise this might be hard with the bandwidth levels currently available in the pool management interface, but that's something that should be relatively easy to improve in the code.)

Manual handling of individual servers isn't practical for a project like the NTP pool. If you can think of a better approach that doesn't require manual handling, but can be implemented automatically by the monitoring system, I'm sure everyone would be keen to see your patches to the pool code.

1 Like

OK, that VPS I mentioned is set up. It's a lousy OpenVZ container, but IPv4 offset is showing .007 from the USA and IPv6 is showing an offset of .03 from the USA. Not too shabby, and it's up and doing nothing but serving NTP for a year. Too bad I couldn't do a bunch of fancy scheduling on the machine to amp things up a bit, but it shouldn't need it since NTP is all it is doing.

It's currently in the asia and hk zones and will have to wait for its score to rise. The address:

tick.hk.china.logiplex.net

I added it at 10 mbit initially, and I will test loads because I hear so much craziness in that area of the world.

Let me know if there is anything else that I can do.

Edit: the HK server averaged 4.32 MiByte a second on a 100 MiByte file test from logiplex.net in NJ/USA, so this should suffice, but I still need to test load.

Edit: can I take this out of global? I'd like it to be just for China, since that was the initial purpose of this thread. How do I modify that? Asia might be alright.

Noah
https://logiplex.net

Dropping a powerful host just because it is overloaded is NOT a good idea when there are very few servers available. In recent weeks the tw zone has had only ONE IPv4 server working, and it got slammed out of the pool during the busy evening hours. Clients querying the tw zone at that time will not get a reply from any server anyway.

Dropping one server when there are only 4 servers means the other 3 servers will be hit with 33% more traffic. This is not a small amount for a large zone like cn.

3 Likes