Default zones for new servers

After my ISP changed my connection (hoping to solve some line stability issues), I was given new IP addresses. I added them and scheduled the old ones for deletion.

However, I just noticed that the newly added IP addresses are also assigned to the global zone. The old IP addresses were assigned only to Europe and NL. I’m not aware of having done anything differently. Going through the process of adding a new IP address again, I didn’t see that I could change anything with respect to the automagically assigned zones, except for leaving a comment.

I had set net speed initially to 100 Mbit for both the IPv4 and the IPv6 address. The IPv4 address is receiving a lot more traffic than before (without the global zone) while the IPv6 address hardly receives any. Before the distribution was roughly equal.

I have now lowered net speed to 50 Mbit for IPv4 and increased to 250 Mbit for IPv6. I’ll see how that changes things, but I may want to remove both IPs from the global pool. Can I do that myself or do I need to put in a request?

Is there some information regarding the global zone, in particular how much traffic it adds to individual servers and what its user base is? I wouldn’t mind providing services to some underrepresented regions.

If you set the netspeed low enough the server will be removed from the global zone. Or you can email the address listed on the page and one of us will take care of it!

Holland has 6% and 7% of the global “capacity” for IPv4 and IPv6 respectively, but only about 1% and 2% of the queries.

I’m working on an update to how the zones are built to better distribute traffic, in particular for underserved zones. It should make the “netspeed selector” more predictable in how it affects the number of queries sent to a server.


Thanks for the info. I may tinker with the net speed settings some more.
Currently I have:

IPv4 at 50 Mbit: ~5 KB/s of traffic, 70 packets/s (average)
IPv6 at 250 Mbit: <1 KB/s of traffic, <10 packets/s (average)

It’s the big discrepancy between IPv4 and IPv6 that baffles me. Before that address change they were roughly on par.

I guess the majority of queries are still IPv4, since the majority of DNS pools are IPv4 only.

I’m working on an update to how the zones are built to better distribute traffic, in particular for underserved zones. It should make the “netspeed selector” more predictable in how it affects the number of queries sent to a server.

Please do, because in asia/ph there is a ton of misconfigured IPv4 CPE going berserk. We are alone in there with Cloudflare, and the lowest setting still results in a full DDoS of 50 Mbit/s of junk on IPv4. IPv6 is better, as there seems to be more modern CPE that uses NTP in a sane fashion.

asia/ph needs a much more granular speed setting. Even 512 Kbit results in 50 Mbit/s of traffic.


@ask just an idea. Why not ask servers that are on a low netspeed if they want to take part in zones that don’t have many servers, and then divide a small part of the load to them.
I mean, my servers are set to a low load, but I would be more than happy to help serve zones that hardly have any servers. Just help them out until they have enough servers themselves.
E.g. you double such a load where 1 Mbit is set, but then (temporarily) assign 1 Mbit to the server’s own zone and 1 Mbit to the starving zone. That way those zones have more servers to get time from.

As here in Belgium we have pretty good upload and high-speed data connections, we could easily help out for some time. Give the choice to the NTP sysop when a low load is set.

Just an idea.

Bas.

I am new here, having returned to the pool after a few years off doing other things, but I am not sure that would work as you expect. Happy to be proved wrong.

I have had a server running in Sydney for a month or so. It has IPv4 set to 50 Mbit and it receives about half a million requests per hour.
This morning I started a server in Singapore, a starved zone. It is currently set to 512 kbit, and in the last hour it received 1.4 million requests; over the last few hours, just over a million each hour (iptables counts of outgoing packets).
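
For anyone wanting to collect similar numbers, here is a minimal sketch of counting outgoing NTP packets with iptables; the chain name is made up for illustration, the poster didn’t share their exact rules:

# dedicated chain so the counter doesn’t interfere with existing rules
iptables -N ntp-out
iptables -A ntp-out -j RETURN
# count every outgoing packet with UDP source port 123 (NTP responses)
iptables -I OUTPUT -p udp --sport 123 -j ntp-out

# list the packet counters and zero them, e.g. once per hour from cron
iptables -L ntp-out -v -n -x -Z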

A setting of 1 Mbit in one region (Sydney) is not the same as 1 Mbit in another region (Singapore).

If someone in Europe had their system set to 1 Mbit, and it were then doubled with 1 Mbit from their own region and 1 Mbit from Singapore, they may be flooded with a lot more requests than expected.

As I said, I’m happy to update my understanding of how the whole system works if the above is wrong.

Beware: if you set a server to 1 Mbit or more, it will be added to the global pool as well.
So it can get a lot of extra requests, as it appears in more DNS responses serving NTP requests.

If you do not want this, set it to 512 kbit and it will stay in your area and not go global.
For this reason all my servers are set to 512 kbit, so they serve Europe only.

I see little point in serving e.g. Asia, as the distance is too far to get much accuracy.
There are better servers that run globally to do that, like Cloudflare to name one.

Distance isn’t the important factor, jitter is. A client in Asia might have a stable, low-jitter path to your server despite a very high latency (delay). Yes, in general, higher delay tends toward higher jitter, but it really depends on the specifics of the particular path(s) between two endpoints.

Even jitter doesn’t matter if the packets are (hardware) timestamped, but most are not.
A lot of NICs can’t timestamp at all, not even in software.

My NIC:

ethtool -T enp1s0f0
Time stamping parameters for enp1s0f0:
Capabilities:
	hardware-transmit
	software-transmit
	hardware-receive
	software-receive
	software-system-clock
	hardware-raw-clock
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
	off
	on
Hardware Receive Filter Modes:
	none
	ptpv1-l4-sync
	ptpv1-l4-delay-req
	ptpv2-event

While most motherboard NICs have no such capabilities:

ethtool -T eno1 
Time stamping parameters for eno1:
Capabilities:
	software-transmit
	software-receive
	software-system-clock
PTP Hardware Clock: none
Hardware Transmit Timestamp Modes: none
Hardware Receive Filter Modes: none

In short, they are not as accurate, and thus fare even worse when jitter happens.

However, one must enable (hardware) timestamping in Chrony.
By adding this:

# Hardware timestamping on NIC, either hwtimestamp eth0 or * for all
hwtimestamp *   

It’s not enabled by default.

Then if you check the status, it should show this:

mei 16 11:41:18 server chronyd[477009]: Enabled HW timestamping (TX only) on enp1s0f0

Ergo it will counter jitter a lot better and also improve accuracy for the client.

One can set it anyway, but the line will not show if it can’t be enabled/used.
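
For what it’s worth, one way to check is to grep the journal for that message — assuming chronyd runs under systemd; adjust the unit name to your distribution:

journalctl -u chronyd | grep "HW timestamping"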


Hardware timestamping can reduce the jitter caused by CPU load, but for a machine that isn’t busy it won’t do much. Typically the bulk of the jitter is coming from the variability in the network delay between the client and server.


Yes, your understanding is correct.

I’m a little surprised by the ratios you see though.

Australia has about 1% of the worldwide queries for IPv4 NTP servers and 3% of the server capacity. Singapore has 0.75% of the queries and 1% of the capacity (so it’s strictly speaking not underserved!)

I think what’s happening is that the number of servers in Singapore is so low that the load balancing according to the specified netspeed works poorly.

Another explanation, as I think @Bas mentioned, is that the SG server might get queries from other Asian countries, whereas the Australian server gets queries from “Oceania”, which has fewer countries without local servers; the new system I’m tinkering with will fix this as well.


I would love to see an option where I can select ‘starved’ countries to be used as, e.g., a backup, to give them an extra server.

I mean, I don’t want to serve the world, but I might want to help out Australia or Greenland, to name a few.
Maybe an option to select an extra continent wouldn’t be such a bad thing.

Now it’s @Global, @Europe, @Country etc…

Why not give the option to select e.g. @Europe and @Asia? But not the world.

Where it will still autoselect, but you can select a continent or one or more countries as an extra service.