Assigning a “reasonable limit” isn’t easy, and depends on the clients.
Clients sitting behind a large NAT may be served by a local caching DNS server, so the NTP Pool's DNS load balancing is of little help there. Individual clients can sometimes be recognized by studying the requests in detail: UDP source port, IPv4 Identification field, etc.
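As a rough illustration of that idea (the addresses and helper name are made up; real traffic analysis is much messier), one could lower-bound the number of distinct clients behind a single address by counting distinct UDP source ports seen from it in a short window:

```python
# Hypothetical sketch: estimate a lower bound on the number of distinct
# clients behind one NAT'd IPv4 address by counting distinct UDP source
# ports observed in a short window. A NAT usually maps different inside
# clients to different outside ports; real detection would also look at
# the IPv4 Identification field, poll intervals, etc.
from collections import defaultdict

def min_clients_per_ip(packets):
    """packets: iterable of (src_ip, src_port) tuples from one window."""
    ports = defaultdict(set)
    for ip, port in packets:
        ports[ip].add(port)
    # Each distinct source port is at least one distinct flow.
    return {ip: len(p) for ip, p in ports.items()}

window = [("203.0.113.7", 40001), ("203.0.113.7", 40001),
          ("203.0.113.7", 51312), ("198.51.100.2", 123)]
print(min_clients_per_ip(window))
# {'203.0.113.7': 2, '198.51.100.2': 1}
```

This only gives a lower bound: a NAT may reuse ports across clients, and one client can use several ports over time.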
There are plenty of misconfigured / malfunctioning clients. FortiGate clients have been plaguing the NTP Pool for nearly three years. Network problems (e.g., L2 / L3 loops) may be short lived or persist for some time. There is currently a bug with Linux systemd-timesyncd that sometimes causes high request rates. The Wikipedia article has some historical examples
Setting the rate limit low may facilitate a denial-of-service attack. Client A sends low-rate NTP requests to the NTP server. Attacker B sends high-rate NTP requests to the same server, but sets the source IP address to that of client A. The NTP server throttles or shuts down responses to client A, so client A is no longer synchronized to this NTP server. I think you’ll find that chrony and ntpd take different approaches to rate limiting.
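A toy sketch of why naive per-source-IP limiting is exploitable (this is not any real server's algorithm; the IPs and limit are invented):

```python
# Toy per-source-IP rate limiter: allow at most `limit` requests per
# window, then drop. The key is the packet's source IP, which an
# off-path attacker can spoof in UDP.
class RateLimiter:
    def __init__(self, limit):
        self.limit = limit
        self.seen = {}   # source IP -> requests this window

    def allow(self, src_ip):
        n = self.seen.get(src_ip, 0) + 1
        self.seen[src_ip] = n
        return n <= self.limit

rl = RateLimiter(limit=5)

# Attacker B floods the server with requests spoofing client A's address...
for _ in range(100):
    rl.allow("192.0.2.10")       # source forged to look like client A

# ...so client A's own, legitimate request is now refused.
print(rl.allow("192.0.2.10"))    # False: A is denied service
```

Mitigations differ: responding to only a fraction of over-limit requests (rather than hard-dropping) keeps a spoofing victim loosely synchronized while still damping abuse.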
Some major NTP servers implement rate limiting. The NIST web site states: “All users should ensure that their software NEVER queries a server more frequently than once every 4 seconds.”
Honestly, I couldn’t care less about CGNAT. It may be a matter of fact, but I have nothing to do with a business running out of IP addresses. That’s what IPv6 is for. I’ll stick with the principle of one client per IP address, as has always been the intent of the TCP/IP protocols. I have no sympathy for this not jibing with the business plans of huge corporations.
It’s not just about businesses. A lot of ISPs run their customer connections behind CGNAT, because IPv4 space is scarce. Those end customers often cannot do anything to improve their IPv4 situation, and are stuck with what their ISP gives them.
IMO it’s important to care about serving clients behind NAT properly. It’s not how the internet was originally designed, but it’s the reality we need to serve. Denying them service is not going to make any meaningful difference towards improving the situation, so why deny service needlessly?
ISPs are businesses. And, if their business model is hampered by the scarcity of IPv4 addresses, they have had a huge incentive to move to IPv6 for over 5 years. No one was caught by surprise by the dearth of IPv4 addresses.
If they are not properly served by my abiding to the design of the TCP/IP protocols to limit abuse of my resources because they violate the design, it’s their problem, not mine. On the contrary, my situation is indeed improved.
ISPs are businesses yes - but their business is largely not hampered by the lack of IPv4 addresses, because CGNAT exists and works very effectively to serve very large numbers of customers via only a small IPv4 allocation.
NTP does not use TCP, it uses UDP. And it works just fine with NAT; doing so does not “violate the design”.
If you are going to set your limits so low that they are only suitable for one client behind any given IPv4 address, why are you even in the NTP pool? That hasn’t been the reality for a very long time, and if you’re setting your limits that low, all you’re doing is denying service to a whole lot of entirely legitimate users who cannot do anything about it. You don’t achieve anything else, and the limit you’ve chosen is far lower than simply what is needed to prevent abuse of your resources.
The whole point of putting servers in the pool is to help facilitate people getting accurate time. If you’re going to use it to make political statements at the expense of actually fulfilling the purpose of the pool, then IMO go do that elsewhere. I don’t think it’s appropriate to be compromising the pool for it.
But they can do something about it: their NTP clients can determine that ebahapo’s server is not working, and switch to one of the thousands of working servers in the pool. It’s no different from any other network path failure.
I’ll just say that the standard NTP client is designed according to the principles of the TCP/IP protocols (among which UDP is found), which NAT, CG or not, violates.
This whole thread is about what QPS/QPH limit from a given IPv4 address is reasonable, such that traffic below it should not be considered misuse of a server. I suggest waiting for the research results from @marco.davids’ graduate student. In the meantime, let’s improve IPv6 reachability, without forgetting the strategic goal of phasing out IPv4.
I meant that they often cannot do anything about their being behind NAT. Certainly their NTP client will generally have the ability to fail over to another server that actually answers. My point is that an IPv4 address being for just one client is an invalid assumption these days, and to implement throttling rules on that basis is to deliberately withhold service from pool clients - and serving those clients is the whole reason the pool exists in the first place.
Why include a server in this project, if you aren’t going to allow it to actually do the job it’s intended for?
UDP is an IP protocol. It has nothing whatsoever to do with TCP. NTP is a protocol which runs on top of UDP. It, too, has nothing whatsoever to do with TCP.
Whether or not you consider NAT to violate the spirit of IP design is irrelevant. The reality is that a vast number of clients are behind NAT. If you have placed your server in the NTP pool, then IMO you have an obligation to provide at least a reasonable level of service to most pool clients who ask. If you’re refusing service to large numbers of behind-NAT clients (which, if your intent is simply to prevent abuse of your resources, you do not need to do), then IMO you have no business being in the pool. Either serve the clients you have implicitly agreed to serve, or get out.
Like I said before, if you want to make a political statement, the NTP pool is not the place for it. Improving IPv6 uptake is a great goal, and I will happily support those who campaign for that in a more appropriate venue.
To determine what an actual good limit is, yes. To determine that there is often more than one client behind an IPv4 address… no, we don’t need to wait for research to find that out; it’s already well known that there are many such addresses. The question that research will help answer is how many users might be behind such an address, not whether it’s a thing.
The reality is that the standard NTP software assumes that a client has a unique IP address, as standard practice in TCP/IP. NAT is an abuse and I’ll treat it as such. Enough said.
Please be advised that the NTP Pool doesn’t properly support IPv6 yet. Only 2.pool.ntp.org has IPv6 addresses. That should change, yes. But it hasn’t so far.
Our graduate student collected 24 hours of NTP queries on our 30 globally ‘anycasted’ NTP nodes, resulting in 13.67 billion NTP queries from ≈158.7 million unique clients. The data was anonymised and then analysed.
One of the findings is that a number of clients send far more NTP queries than they are expected to according to the RFCs and the polling interval, and by no means all of it is abusive traffic. Some clients send as much as ~100 qps on average over a 24-hour period. But bear in mind that queries can come in bursts, going as high as 2500 qps or more. Besides clients that are broken or abusive, we concluded that a considerable share of the ‘overly active’ clients are simply benign clients sitting behind (CG)NAT.
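Back-of-the-envelope arithmetic with the numbers above puts those outliers in perspective: the fleet-wide average works out to roughly one query per client every 17 minutes, so a ~100 qps "client" is about five orders of magnitude above average.

```python
# Average per-client query rate from the measurement figures above.
queries = 13.67e9        # total NTP queries in 24 hours
clients = 158.7e6        # unique client IPs observed
seconds = 24 * 3600

per_client_per_day = queries / clients
per_client_qps = per_client_per_day / seconds

print(round(per_client_per_day))    # ~86 queries per client per day
print(round(per_client_qps, 5))     # ~0.001 qps on average
```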
As a result, many queries, including numerous valid ones, are not being answered due to rate limiting. This behaviour almost exclusively occurs with IPv4 clients - whereas virtually all IPv6 clients behave properly (and are treated as such). This is yet another indication that (CG)NAT is getting in the way, perhaps more than we would hope. SNTP clients in particular, which tend to burst simultaneously at regular intervals, suffer from this.
For me this leads to the conclusion (as was already mentioned by others in this thread) that a reasonable rate limit (for IPv4) is hard to determine. There are simply too many legitimate requests from clients sitting behind (CG)NAT that cannot easily be distinguished from broken or abusive ones. Capping at 10,000 queries per hour is definitely too harsh, but even default, recommended settings cause problems for IPv4 clients in combination with large-scale (CG)NAT. Allowing a significant burst before blocking seems like the best approach.
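One common way to "allow a significant burst before blocking" is a token bucket: sustained traffic is capped at a refill rate, but a full bucket lets a burst through. A minimal sketch (the rate and burst parameters are invented for illustration, not a recommendation):

```python
class TokenBucket:
    """Allow bursts up to `burst` requests, refilling at `rate` tokens/sec."""
    def __init__(self, rate, burst):
        self.rate = rate            # tokens added per second
        self.burst = burst          # bucket capacity
        self.tokens = burst         # start full, so an initial burst passes
        self.last = 0.0             # timestamp of the previous request

    def allow(self, now):
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(rate=1.0, burst=50)   # sustain 1 qps, tolerate bursts of 50

# 60 requests arriving at once: the first 50 pass, the rest are dropped.
burst = [tb.allow(0.0) for _ in range(60)]
print(sum(burst))                      # 50

# After a 10-second quiet period the bucket has refilled 10 tokens.
print(tb.allow(10.0))                  # True
```

In practice one bucket per source IP (with burst sized for the biggest NATs you want to tolerate) gets closer to serving (CG)NAT'd clients without leaving the server wide open.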
That needs to be fixed, as it can easily be exploited for DoS attacks on clients. Also, the recommendation to use ntpd seems out of date. I don’t remember anyone complaining that other software “isn’t working”, but I do remember a large number of security issues that ntpd had and still hasn’t fixed.
Correct, and it’s also written with worldwide conditions in mind; remember there are a great number of areas where a 1 Mbps line still costs more than most of us pay for 1 Gbps. It’s also very dated: ntpd isn’t the only NTP software these days, and many Linux distributions no longer include it by default.
Correct. If you read the current IETF guidelines for NTP, they state, and I quote: “Kiss-o’-Death (KoD) packets can be used in denial of service attacks. Thus, the observation of even just one RATE packet with a high poll value could be a sign that the client is under attack.”
There is no way in the TCP/IP stack to tell apart what is behind NAT and what is not. Through and through, including in the standard NTP server and typical firewall tools, it is assumed that each end has a unique IP address and vice versa.
Raising the rate limits to accommodate CGNAT opens up pool servers to being overwhelmed, at best, or to DoS attacks, at worst.
The solution to the IPv4 shortage is IPv6, not NAT.