How to deal with a bad NTP client / attacker?

Hi all,

Currently I have a problem with a 1&1 customer in Germany. The client attacks one of my NTP servers (I assume it’s a broken NTP client) with ~75 req/s. I wrote to the abuse team 14 days ago but never got a reply. The client is still there, changing its IP every 24 hours.

Until yesterday I blocked the single IP, which meant adapting the firewall rule every day because of the changing IP. As of today I’m blocking the whole /24, which is a bit overkill.

I already have a rate limiter of 4 req/s with a burst of 128. But I’m not sure if it is useful to reduce the rate limiter below 4 req/s, since many legitimate clients can share one address behind CG-NAT.
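For reference, a per-source limit like that could look roughly as follows on an iptables-based firewall (a sketch only; the names are illustrative, not my exact rules):

# Allow up to 4 NTP requests per second per source IP, with a burst of 128; drop the rest.
iptables -A INPUT -p udp --dport 123 -m hashlimit \
    --hashlimit-name ntp-limit --hashlimit-mode srcip \
    --hashlimit-upto 4/second --hashlimit-burst 128 -j ACCEPT
iptables -A INPUT -p udp --dport 123 -j DROP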

How do you guys handle such a broken NTP client / attacker?

Greetings
Matthias

2 Likes

This is a boring answer, but personally, I just enable rate limiting in the NTP daemon and proceed to mostly ignore abuse unless it’s really extreme.

Abusive traffic isn’t great, but the NTP daemon can drop it efficiently enough – again, unless it’s really extreme.
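With ntpd, that’s roughly the following in ntp.conf (a sketch; adjust the flags to your needs):

# "limited" drops queries that exceed the daemon's rate limit; "kod" occasionally
# answers an over-limit client with a Kiss-o'-Death packet instead of silence.
restrict default kod limited nomodify notrap nopeer noquery
restrict -6 default kod limited nomodify notrap nopeer noquery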

4 Likes

Not all NTP servers implement rate limiting. E.g., the appliances.

1 Like

You might want to consider delisting your IPv4 address from the pool, where most abuse (among which I include CG-NAT) is found, and keeping only the IPv6 one.

Short of that, try this restriction line:

restrict default non-ntpport ignore

This way, you only serve well-behaved servers and block buggy clients and the NAT hack.

1 Like

How long does it do that for?

If it’s brief, I wouldn’t worry about it.

I have an automated ruleset on my edge firewall that deals with the larger offenders.
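The general idea, sketched with iptables and ipset (set name and thresholds here are illustrative, not the actual rules):

# Anything already flagged as an abuser is dropped outright for an hour.
ipset create ntp_abusers hash:ip timeout 3600
iptables -A INPUT -p udp --dport 123 -m set --match-set ntp_abusers src -j DROP
# Sources exceeding a high threshold are added to the abuser set automatically.
iptables -A INPUT -p udp --dport 123 -m hashlimit \
    --hashlimit-name ntp-abuse --hashlimit-mode srcip \
    --hashlimit-above 50/second --hashlimit-burst 200 \
    -j SET --add-set ntp_abusers src --exist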

1 Like

That’s not a good idea for a public server. Most clients use a random source port. RFC 9109 recommends that NTP clients do so.

5 Likes

And IPv4 NAT means you can’t predict much of anything about the public source port of queries.

3 Likes

It’s up to the server operator to decide who their public is.

Thanks for the answers so far. It’s very interesting to know how others handle such problems.

@ebahapo
Sure, it’s up to the operator. But as the pool is open to everybody, this should also be on the operator’s mind. IMHO it’s pretty useless to have a public pool when not all people are served.

1 Like

Besides, last time I looked, probes from pool monitoring servers came from non-123 ports. With “restrict default non-ntpport ignore” you would get dropped out of pool DNS rotation very quickly. Therefore “non-ntpport” is a very bad idea for a pool NTP server.

Unless additional restrict lines are added to allow for the AS where the monitors reside. :smirk:
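For instance (the monitor network below is made up; a more specific restrict entry overrides the default):

restrict default non-ntpport ignore
restrict 203.0.113.0 mask 255.255.255.0 kod limited nomodify notrap nopeer noquery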

That’s one way among many of looking at this service. The fact of the matter is that people running the ntp or the chrony daemon default to using port 123 and could benefit from the pool unhindered by the restriction above. As for the buggy and malicious clients, such as the one reported by the OP, no such luck.

Giving any kind of preferential treatment to monitor servers is another bad idea. And with the new distributed monitoring system there are monitor servers in a number of networks around the world, making this approach even less useful.

I don’t think this statement is accurate.

1 Like

I only suggest restrict version to filter out queries from clients using an obsolete NTP version (v3 or older).
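For example, in ntp.conf (the other flags are just the usual defaults for a public server):

# "version" denies packets that don't match the current NTP version, so
# NTPv3-and-older queries are dropped while normal v4 clients are served.
restrict default version kod limited nomodify notrap nopeer noquery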

1 Like

My understanding is that special-casing the response to pool monitors is specifically frowned upon because the monitors are supposed to reflect the experience of J. Random User, many of whom are stuck behind NAT on IPv4-only ISPs. More will be as time progresses, the price of IPv4 addresses continues to rise, and large ISPs increasingly turn to CGNAT.

Please reconsider whether your server is a candidate for pool.ntp.org. There are (poorly-maintained) public server lists you could add your server to. In that case it’s entirely up to you who you claim to serve and who you actually serve.

2 Likes

It is for ntpd and ntpdate, which use the same socket for both client and server functions:

The official port number for NTP (that ntpd and ntpdate listen and talk to) is 123.

As I said above, my first recommendation, which I myself follow, is to add only an IPv6 server to the pool, which avoids most bad and buggy actors. IPv4 is just too much of a headache for a small pool server operator.

1 Like

If we notice that a server works better / differently for the monitors than for a “random IP” then that’d be a reason to remove that server from the pool.

It’s appropriate to limit unreasonable request patterns, but deliberately blocking reasonable clients following the RFC requirements and recommendations isn’t appropriate for a pool server.

Given infinite time to do all the possible projects, I’d love to have RIPE Atlas query all the pool servers now and then as an extra check beyond the regular monitors that things are working okay.

2 Likes

Yes, agreed.

The monitors do a configurable number of queries to each NTP server. It helps get better results (picking the “best” answer), and it also makes sure the server isn’t (very) unreasonably rate limiting clients (a rate-limit answer to any of the queries will “fail” the server).

It’s not used yet, but one of the changes in progress (with the validation system) is for the system to decide on a “netspeed”, up to the configured netspeed, to add some more nuance and partially “take out” a server if it’s behaving weirdly. For example, the monitor could once in a while send 15 queries (over ~30 seconds) instead of 3 or 4, to encourage operators to configure their systems to allow that to work.

That’s totally reasonable! The NTP Pool Project generally doesn’t run NTP services (ironically, I guess!) so it can focus on the other services that make the system work; but for testing I do have a few IPv6 IPs in the system…

That would exceed the default rate limiting of ntpd. The default rate limiting is designed to allow iburst, which sends a handful of queries 2 s apart; it was originally 8 queries in a burst at startup and is now 6.

The default rate limiting rejects queries less than 2 seconds apart outright, and uses a leaky bucket to limit queries to an average of 8 seconds apart. When rate limiting is enabled via a restrict ... limited configuration, violations of the rate limit are simply dropped without a response. If both kod and limited restrictions apply to the querying address, a KoD response is occasionally sent, to minimize the chance of ntpd being used to reflect KoD responses to a third party via requests with forged source IP addresses.

So to stay within the default rate limits, I’d suggest never sending queries less than 2 s apart and no more than 8 per 64 seconds.
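In ntp.conf terms, the defaults described above correspond roughly to (spelled out only for illustration):

# "minimum" is the shortest allowed interval in seconds; "average" is the
# leaky-bucket average in log2 seconds (3 -> 8 s). These are the defaults.
discard minimum 2 average 3
restrict default kod limited nomodify notrap nopeer noquery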

2 Likes