A recently published paper with the title mentioned above was shared on the mailing list of the IETF NTP working group yesterday: https://arxiv.org/pdf/2602.12321
I have read it, and all I see is a DDoS attack.
Mainly a lot of figures, numbers and unrelated stuff.
They talk a bit about inserting wrong time, but not how.
Also, they left Stratum 1 servers alone; why?
We have been discussing overloaded zones/servers for some time.
What are they trying to prove with this paper? I don't get it; it doesn't say anything we didn't know already.
BTW, I googled for 'Monopoly Attack' and such an attack name doesn't seem to exist.
Even AI doesn't know it and suggests several things it could be.
Another weird thing: one of the first paragraphs states "NTP Pool, hereafter called pool", yet it is never referred to as "pool" but keeps being called "NTP Pool". Why?
Strange paper…very strange.
They do, starting on page 12 under "C. Monopolization Attack". They calculate how many "malicious" servers you'd need to insert into various zones to capture 50% of the queries and potentially answer with the "wrong" time.
That also answers your question about the attack name: essentially, contribute enough servers to control the majority of NTP requests in a given zone. Then you can potentially skew the time in that zone.
But they left out part of the discussion of the monitoring system there, instead discussing the possibility of removing your server from the pool and still receiving NTP queries for a while.
So I don't see an immediate threat to the pool there, even if in their example for the .hu zone the number of servers you'd need to control is quite small.
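To get a rough feel for those numbers, the "capture a client's majority" question can be sketched as a sampling problem. This is a simplification with made-up figures: it assumes the pool returns 4 uniformly sampled servers per DNS query, while the real pool weights selection by the operator's declared netspeed.

```python
from math import comb

def prob_majority_malicious(zone_size, malicious, returned=4, needed=2):
    """Probability that at least `needed` of the `returned` servers a client
    gets from one DNS query are attacker-controlled, modelled as uniform
    sampling without replacement (a simplification; the real pool weights
    servers by declared netspeed)."""
    total = comb(zone_size, returned)
    return sum(
        comb(malicious, k) * comb(zone_size - malicious, returned - k)
        for k in range(needed, returned + 1)
    ) / total

# Illustrative numbers only: a zone of 30 servers, 10 of them malicious.
print(round(prob_majority_malicious(30, 10), 3))  # → 0.407
```

Under those toy assumptions, even a third of a small zone gives the attacker a majority of a client's 4 sources about 40% of the time, which matches the paper's point that small country zones are the cheap targets.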
Sounds to me like a number of academic papers: sure, there is potential for misuse, but you'd need a fair amount of resources for a questionable outcome. If I wanted to profit from skewing the time of, e.g., a stock exchange, I would hope that the stock exchange doesn't rely on the pool to timestamp its transactions.
It is a bit like the papers attributing the majority of Tor exit nodes to the CIA or other three-letter agencies… There is always a grain of truth, but we'll see how it pans out in the future.
Perhaps a simple measure to take away from this paper is to salt DNS replies with hosts outside the country's zone, drawn from the continental and global zones, to mitigate an attack directed at a country. Something like: out of the 4 servers returned, only 2 would be from the same country as the client; of the other 2, one would be from the same continent and the other from the global zone.
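A sketch of that composition rule, with hypothetical server names and zone lists; the pool's real GeoDNS selection works differently, this just shows the 2+1+1 mix:

```python
import random

def salted_response(country, continent, global_zone):
    """Pick 2 servers from the client's country zone, 1 from its
    continental zone, and 1 from the global zone, so that no single
    country zone can supply a majority of the 4 returned servers."""
    picks = (random.sample(country, 2)
             + random.sample(continent, 1)
             + random.sample(global_zone, 1))
    random.shuffle(picks)
    return picks

# Hypothetical server lists for a .hu client:
print(salted_response(["hu1", "hu2", "hu3"], ["de1", "fr1"], ["us1", "jp1"]))
```

With that mix, an attacker flooding one country zone can never control more than 2 of a client's 4 sources from that zone alone.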
The problem is that this won't work, as the monitors will see it. So does chrony when you have peers: it will label them falsetickers.
I fail to see how the pool can be affected, as it won't allow falsetickers to be spread to clients.
You cannot simply insert falsetickers. Looks to me like a paper for politicians to get funding over a non-issue.
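A toy illustration of why a lone wrong-ticking source gets outvoted, as long as honest sources hold the majority. This is heavily simplified: chrony and ntpd's real selection algorithms compare each source's offset together with its error bounds (ntpd's intersection algorithm), not a fixed tolerance.

```python
def falsetickers(offsets, tolerance=0.05):
    """Toy majority check: flag any source whose offset (in seconds)
    disagrees with the median of all sources by more than `tolerance`.
    Real NTP clients use per-source error intervals instead."""
    ordered = sorted(offsets.values())
    median = ordered[len(ordered) // 2]
    return [name for name, off in offsets.items() if abs(off - median) > tolerance]

# Three honest sources near zero offset, one source 5 s off:
print(falsetickers({"a": 0.001, "b": -0.002, "c": 0.003, "d": 5.0}))  # → ['d']
```

The flip side is exactly the paper's point: if the attacker supplies the majority of a client's sources, the median sits on the attacker's side and the honest servers get flagged instead.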
They also said they did it in Hungary… yeah, right, you can't. That is what I meant: they never describe how they did it, as you would have to hack the entire DNS in Hungary, then redirect all NTP traffic to your servers, etc. An impossible task. Or hack some backbone router. But then the pool monitors would also notice it and take them all out.
Like I said, impossible.
Hey Bas,
they wrote in their paper how they did it. No need for hacking anything. Just join some 10 servers to the pool in the .hu zone, serving the correct time to get included in the NTP Pool DNS.
Then schedule the servers for deletion. As you know, and as the authors confirmed, the client traffic doesn't stop instantaneously. The monitoring now ignores the servers, because they are to be deleted anyway, but you can serve any time you want. So after you have scheduled your 10 servers in the .hu zone for deletion, just serve time from the past or the future, as you like, to the clients "stuck" on your servers. Even if the monitors notice, what can they do? Your servers aren't included in new DNS responses anyway, since you scheduled them for deletion, but for a while you can serve any time you want. And if you have some 10 servers all serving the same "wrong" time, there is a good chance that clients from that small region have 2 or more of your rogue servers in their solution and track your "wrong-ticking" servers as the "right" time.
They did write that, correct.
But an NTP client notices the drift; chrony does.
So you have to do it over a very long time.
However, that requires the clients to be dumb and not re-check DNS for long periods of time.
This is not a pool problem, as the pool rotates very fast.
That means the poor clients have a problem. But they state the pool itself can be attacked, and that is not the case.
Yes, we both know that. But there are enough old clients out there (ntpd without the "pool" directive, IoT devices with SNTP, systemd-timesyncd, heck, even Windows by default uses SNTP).
Just look in this forum or the old mailing lists, where people complain that their Windows AD or database replication broke because one server dished out the wrong time for a short period.
People just look at the settings and complain to the NTP Pool when their thermostat has the wrong time and switched the heating off. It's not an NTP Pool problem per se, but that's where they complain. The only time manufacturers get contacted is when server operators notice some strange behaviour or very high query volumes (see Fortigate, or some Russian(?) IoT speaker/device in recent times).
So in people's minds the pool is to blame for all woes when the time is wrong. And it's an easy assumption to make, even if we know that we try to mitigate such problems as much as possible. But here we are, stuck with supporting an old protocol that relied on correct client configuration to derive the correct time, while firmware developers just copy some old (SNTP) code into their devices and ship it.
The pool itself is partly to blame for this as well, because it still recommends very ancient settings (the server instead of the pool directive):
https://www.ntppool.org/en/use.html
Mentioned already many moons ago.
Yeah I know
There is still this open PR on GitHub ( Update HowTo on using the pool by penguinpee · Pull Request #218 · abh/ntppool · GitHub ), and if I someday learn a bit more about GitHub, I might try to improve the situation.
But here we are now, with our old protocol that needs to be backwards compatible and has evolved different configuration syntaxes across versions and vendors.
So if anyone feels like improving it: the content of the website is there for all to see, and you can improve it by creating a pull request.
Not quite. Well, I only have chrony to test with (I don't use ntpd myself).
If I use the pool parameter, it takes ALL the NTP servers that are offered for that section of the pool.
When I use the server token, it takes only the first(?) listed NTP server that works(?), but in any case it takes just 1 server.
So in fact it should not matter, other than that the pool parameter takes all the offered servers, in my test 4 of them, and then selects the best one.
Maybe ntpd is different, but I fail to see how using 1 server instead of 4 pool servers makes a difference in requests. If anything, the pool parameter produces more traffic, not less.
Please correct me if I'm wrong on this; I only tested with chrony.
The old way takes only 1 server that is offered, so how can it be better than pool, which uses 4 servers?
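For reference, the two styles side by side in chrony.conf syntax (hostnames illustrative; `maxsources` defaults to 4 anyway):

```
# old style: the hostname is resolved and chrony keeps a single source
server 0.pool.ntp.org iburst

# pool style: every returned address becomes a source, and unreachable
# ones are replaced by re-resolving the name
pool pool.ntp.org iburst maxsources 4
```

The traffic difference is per client, not per request pattern: a pool client spreads its queries over 4 servers and can drop a bad one, while a server-directive client hammers one IP until restart.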
This is a DNS caching matter on old devices, or some similar mechanism, where they keep getting the same NTP server IP over and over and over again until the device resets, whereas the pool rotates all the time.
I fail to see how this is a problem of the pool.
In my opinion we should find a way to include fail2ban to kill/auto-ban abusers and report them back to the pool. Then, if the same IP ranges keep popping up, investigate.
Fail2ban is perfect for this type of abuse inspection, fully automatic. You just need to feed it logs, so maybe ntpd and chrony could write a suspicious.log for fail2ban to inspect.
Though feeding it a massive log would put a lot of pressure on our systems.
I mean, chrony already knows the big requesters… it shouldn't be too hard to block them at the firewall for some time.
I do the same for spammers and attackers.
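As a stop-gap that needs no fail2ban at all, chrony ships a built-in per-client rate limiter for the server side; a chrony.conf sketch with illustrative values:

```
# Respond to at most roughly one request per 2^1 = 2 s per client,
# allow short bursts of up to 16 requests, and still answer 1 in 2^2
# of the over-limit requests so well-behaved clients can back off
ratelimit interval 1 burst 16 leak 2
```

That handles the hammering itself; fail2ban or firewall rules would still be needed for the "report it back to the pool" part.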
No, the "server" directive in ntpd used to resolve the hostname once on start-up and keep trying that IP for as long as your ntpd was running. That was no foul play by caching DNS resolvers, just the architecture of ntpd. For your convenience you could put a hostname or an IP there, and it was expected to be a long-lived time server. And since it was a server line with one server expected, it took just one IP even if multiple were returned by the DNS lookup. So the approach of resolving the hostname once on start-up made sense.
Then the "pool" directive was introduced later, when the NTP Pool was well established, more and more people ran servers, and the distros shipped default configuration files. This directive
a) takes all IP addresses from the lookup and adds them as sources,
b) replaces non-responsive servers (after, I think, 1 hour) with new IPs resolved from the hostname in the config file,
c) removes duplicate IPs (if you have several pool lines, for example, and they include a common server).
Unfortunately the "pool" directive came late in the grand scheme of things, so not every ntpd (especially on LTS distros or embedded devices) understands it yet, and many setups still use the old "one static IP/server per line" approach.
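In ntp.conf terms the difference looks like this (hostnames illustrative, and only for ntpd versions that already understand the pool directive):

```
# classic: each line pins exactly one resolved address for the
# daemon's lifetime
server 0.pool.ntp.org iburst

# newer: mobilizes every address the lookup returns and re-resolves
# the name to replace unreachable ones
pool pool.ntp.org iburst
```

This is exactly why pool rotation doesn't help stuck clients: with the server directive the rotation happens only once, at daemon start-up.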
OK, that is bad; I didn't know that.