Adding servers to the China zone

Ok, fixed.

For the moment the CN pool seems stable with around 20 servers.
I have experimented with different bandwidth settings and right now
with 20 working servers I get approx this amount of traffic for each
setting below:

100 Mbit/s - 2000 KB/s
250 Mbit/s - 2800 KB/s
500 Mbit/s - 3500 KB/s

Around 4000 KB/s the VM starts to drop packets and can't keep up with
the packet rate, so I will stay on 500 Mbit/s for now.
It's good to have margins, as spikes might come.
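As a rough cross-check of those traffic figures (my arithmetic, not from the post): an NTP request is 48 bytes of payload, which with UDP, IPv4, and Ethernet headers comes to roughly 90 bytes on the wire, so the KB/s numbers translate to packet rates along these lines:

```shell
# Rough KB/s-to-pps conversion, assuming ~90 bytes per NTP packet
# on the wire (48 NTP + 8 UDP + 20 IPv4 + 14 Ethernet).
for kbs in 2000 2800 3500; do
  awk -v k="$kbs" 'BEGIN { printf "%d KB/s ~= %d pps\n", k, k * 1000 / 90 }'
done
# 3500 KB/s works out to roughly 38888 pps
```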


On my main CN zone server, I am seeing around 10 Mbit/s at the 1000 Mbit/s setting. Before the 20th of March, I was seeing as much as 30 Mbit/s. Current packet numbers are about 20 kpps, down from around 40 kpps before the 20th.



For anyone who is interested in this: here are some monitoring results from China yesterday. Thanks for your help; the CN zone is running well.

The problem now is delay: most servers have delays of more than 100 ms. We still need servers inside China to solve that.

Theoretically we could have pool servers from other East Asian zones supporting the China zone to achieve lower latency for Chinese users; however, most Asian zones are themselves short of pool servers, so it might not be a practical solution. I guess we still need a monitor inside China before we can really have pool servers inside China.

I guess I'm hard to please, but … This is about one of my servers, https://www.pool.ntp.org/scores/173.255.246.13

As you can see, the score is (as of this writing) a perfect 20, so it's not dropping any packets.

That server is configured to serve the .cn zone only, and the bandwidth setting was at 384 kbit/s. Last week I noticed that at the current rate I would not reach my bandwidth cap, so I thought I'd increase its bandwidth setting in the pool. I raised it to 512 kbit/s, waited a day, and nothing seemed to happen. Then I raised it to 1 Mbit/s; nothing. The speed setting has been at 1000 Mbit/s for a few days now, and I'm still getting pretty much as much traffic as I had at 384 kbit/s. Stats: http://biisoni.miuku.net/stats/ntppackets.html (see the monthly graph from week 13 onwards).

While I think it's generally good that I'm not hitting my bandwidth caps like I used to, I'm not sure everything is working as it should if I can't increase the number of queries I get by raising my bandwidth setting in the pool. I also have stats on when my server is included in the .cn zone, and they show the expected results: http://biisoni.miuku.net/stats/cnparticipation.html

Has the situation in China improved this much recently? For the record, there are currently 20 servers in the .cn zone, which seems like a healthy amount.

I would think that with 20 servers, changing the BW setting should have some effect, but you have to realize the zone is still drastically underserved relative to demand. So it could just be that, no matter what the setting, the server is going to be utilized at a certain level to try to cope with all the queries.

Or Ask could have made some code changes specific to the CN zone… That is always a possibility.

Updated my `nf_conntrack` to support a large number of connections, and https://www.ntppool.org/scores/192.99.68.38 seems to be staying in the pool so far, responding to about 15 kqps at a score of 11 just fine. This causes a load average of about `0.20`.

Noah

Unless you are behind a NAT router, there is no need to raise your nf_conntrack limits if you use iptables raw NOTRACK rules:

sudo iptables -t raw -A PREROUTING -p udp --dport 123 -j NOTRACK
sudo iptables -t raw -A PREROUTING -p udp --sport 123 -j NOTRACK
sudo iptables -t raw -A OUTPUT -p udp --sport 123 -j NOTRACK
sudo iptables -t raw -A OUTPUT -p udp --dport 123 -j NOTRACK
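A quick way to confirm the rules are taking effect (my suggestion, assuming `conntrack-tools` is installed): the NOTRACK rule counters should climb while NTP traffic flows, and port-123 entries should stop accumulating in the conntrack table.

```shell
# Show the raw-table rules with packet/byte counters; the NOTRACK
# counters should increase while NTP queries arrive.
sudo iptables -t raw -L -n -v

# Count UDP port-123 entries still present in the conntrack table
# (should trend toward zero once the NOTRACK rules are in place):
sudo conntrack -L -p udp 2>/dev/null | grep -c 'port=123'
```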

@kennethr

Thanks. I use conntrack to allow related inbound connections, as I use that VPS for other small things.

I will read up on that when I've got time.

I upped the aforementioned VPS to 250 Mbit/s and will monitor it throughout the day.

Noah
https://logiplex.net

You can still use conntrack as normal; the examples he gave above just explicitly exclude inbound and outbound NTP packets from tracking, so they don't fill up your conntrack table.

@NoahMcNallie

Only NTP will be NOTRACKed; all other traffic will still have conntrack.

But you made a good point: if someone uses the server for more services and sees the conntrack table fill up, they need to raise it :slight_smile:

For others, here is the command to raise it to, for example, 262k:

 sudo sysctl -w net.netfilter.nf_conntrack_max=262144

Then, to make it persistent, edit /etc/sysctl.conf
and add the line below:

net.netfilter.nf_conntrack_max=262144
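To sanity-check the change (an optional step, not from the post), you can compare the new limit against the table's current occupancy:

```shell
# Current limit vs. current number of tracked connections:
sysctl net.netfilter.nf_conntrack_max
cat /proc/sys/net/netfilter/nf_conntrack_count
```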

OK, thanks. I had previously raised the hash size and nf_conntrack_max to 1 million, but I will add the raw table rules.

Also worth noting: if you increase your nf_conntrack_max, you should increase your hashsize as well, with something along the lines of:

echo 1049600 > /sys/module/nf_conntrack/parameters/hashsize

But don't quote me on this.
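A commonly cited rule of thumb (which matches the hedging above, so treat it as a guideline rather than gospel) is to size the hash table at roughly a quarter of `nf_conntrack_max`, i.e. about four conntrack entries per hash bucket:

```shell
# Guideline only: hashsize ~= nf_conntrack_max / 4.
CONNTRACK_MAX=262144
HASHSIZE=$(awk -v m="$CONNTRACK_MAX" 'BEGIN { print int(m / 4) }')
echo "$HASHSIZE"    # prints 65536

# Then, as root:
#   echo "$HASHSIZE" > /sys/module/nf_conntrack/parameters/hashsize
```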


My Montreal VPS entered the pool again, and I'm using the following to watch the PPS:

Sun Apr 7 22:07:52 EDT 2019: 14777 packets captured
Sun Apr 7 22:07:53 EDT 2019: 15293 packets captured
Sun Apr 7 22:07:54 EDT 2019: 15435 packets captured
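The command behind the output above isn't shown; a loop along these lines (my reconstruction, assuming tcpdump and an `eth0` interface, neither confirmed by the post) would produce that kind of per-second "packets captured" output:

```shell
# Print a per-second count of NTP packets; needs root and tcpdump.
# tcpdump reports "N packets captured" on stderr when terminated,
# so the pipeline below discards stdout and parses stderr.
while true; do
  COUNT=$(timeout 1 tcpdump -ni eth0 'udp port 123' 2>&1 >/dev/null \
          | awk '/packets captured/ { print $1 }')
  echo "$(date): ${COUNT:-0} packets captured"
done
```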

Will give feedback

Edit: pretty steady 24 kpps atm.
Edit: OVH DDoS filters just killed everything.
Edit: I contacted OVH, explained that I am part of the NTP Pool project and should be receiving ~30 kqps on port 123/udp, and they wanted me to email them a traceroute from myself to the VPS.
Edit: I chewed OVH out, and it looks like they may have removed my filter limit. I also increased the default `nofile` to 409600. I'm going to keep monitoring this one. It's currently responding to about 10-15 kqps and still serving an offset of 0.002890 s to my New Jersey VPS at 30% CPU load on the 10 Mbit/s setting in the global and cn zones.
Edit: the VPS made it to 11.9 with no OVH DDoS email - looks to be a wrap, and it's currently paid for half a year.
Edit: I'm going to go to 25 Mbit/s and see what happens.
Edit: I rescheduled it to a `nice` of -20.
Edit: it appears to be doing fine at 50 Mbit/s and is back up around 25-35 kpps, give or take. I'll wait an hour and crank it up to 100.
Edit: I put another year on it. It's a good performer for the price.
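For reference, the renice step mentioned in the edits could look like this (my sketch; it assumes the daemon is `ntpd`, and negative nice values require root):

```shell
# Raise the NTP daemon's CPU scheduling priority to the maximum
# (-20 is the highest priority on Linux).
sudo renice -n -20 -p "$(pidof ntpd)"
```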

This is from one of my Hong Kong VPSes while Montreal is under 30% CPU load:

[root@tick2 security]# ntpdate -q montreal.ca.logiplex.net
server 2607:5300:201:3100::509d, stratum 0, offset 0.000000, delay 0.00000
server 192.99.68.38, stratum 2, offset -0.005333, delay 0.24536
6 Apr 18:03:41 ntpdate[19745]: adjust time server 192.99.68.38 offset -0.005333 sec

These KVM VPSes, logiplex.net and montreal.ca.logiplex.net, are paid up for more than a year. They've been modified to support high concurrency (network and process scheduling tweaks, `ulimit` increases, bypassing nf_conntrack problems) and tested to persist across reboots.
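The `ulimit` changes mentioned are not spelled out; a typical way to make a higher open-file limit survive a reboot (a sketch using the 409600 value from an earlier edit; the exact files vary by distro and init system) is:

```shell
# Raise the open-file limit for the current shell:
ulimit -n 409600

# Persist it via /etc/security/limits.conf (read by pam_limits):
#   * soft nofile 409600
#   * hard nofile 409600

# For a daemon managed by systemd, set it in the unit file instead:
#   [Service]
#   LimitNOFILE=409600
```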

Update 201908041337EST5EDT: Made it to 13 today and rising, doing about 25 kpps / 12.5 kqps at the 250 Mbit/s setting. China seems to be running problem-free at the moment.

OVH DDoS filters are still evident. I've contacted them five times about it, and they fail to respond. They only become active about a fourth of the time before the VPS falls out of the pool (with no sign of what triggers it, as opposed to the other times when they don't become active). Maybe the filters will learn. There is nothing more I can do.

Put yet another year on it.
Montreal is paid until Nov. 2022.
When it's not in the pool or in mitigation, it currently idles around 4-6 kpps.

Update 201908150711EST5EDT: As of today, I am seeing the following with a 10.9 score on the 500 Mbit/s setting, which is where I will keep it, to be safe in case I am not around for a while:

Noah

@iocc
Hi, please move this server to the CN zone.
https://www.ntppool.org/scores/157.230.35.12
Thanks.

Ok, added to the .CN zone.

Thanks. Can you also remove it from .sg zone? (157.230.35.12)

Okay, you got it. Removed.

@iocc
Please add https://www.ntppool.org/scores/159.69.6.238 to CN and remove it from DE.
Thanks.

Ok, added CN and removed DE.
The CN pool now actually has 28 servers for IPv4.
Good job everyone :slight_smile: