Server abuse in the Philippines

I came to check on my servers and found that my VPS in the Philippines is receiving far too many requests, even beyond its bandwidth allocation (30 Mbps). What’s worse, the incoming requests aren’t subsiding after I lowered the netspeed, or even after dropping to monitoring-only mode, which usually works elsewhere. Stopping chrony makes it even worse. The network graph was suspiciously flat before that too; usually there are some spikes of traffic at certain whole minutes, but not here. Did someone hardcode my server’s IP into the config of a whole ISP? But it’s not just one ISP; a quick check of the most abusive source IPs shows several. Look at it:

chronyc clients -p 10000000
Hostname                      NTP   Drop Int IntL Last     Cmd   Drop Int  Last
===============================================================================
dsl.49.148.156.90.pldt.n>  173041258  12413  -5  -5     0       0      0   -     -
customer.mnlaphl1.pop.st>  26012634  51889  -5  -5     0       0      0   -     -
152.32.90.151.convergeic>  11079607  24249  -5  -5     0       0      0   -     -
143.44.164.103-rev.conve>  610531294  35896  -5  -5     0       0      0   -     -
221.121.96.12              585883306  39976  -5  -5     0       0      0   -     -
222.127.248.227            48055551   6812  -5  -5     0       0      0   -     -
dsl.49.148.143.232.pldt.>  178630730  62955  -5  -5     0       0      0   -     -
216.247.24.180             30503932  35313  -5  -5     0       0      0   -     -
112.208.68.234.pldt.net    89931592  45527  -5  -5     0       0      0   -     -
119.94.108.190.static.pl>  245991730  35968  -5  -5     0       0      0   -     -
dsl.49.149.109.74.pldt.n>  26105255  26588  -5  -5     0       0      0   -     -
138.84.126.219             36496398  51731  -5  -5     0       0      0   -     -
126.209.18.242             33968244  21940  -5  -5     0       0      0   -     -
customer.mnlaphl1.pop.st>  12873982  42708  -5  -5     0       0      0   -     -
150.228.189.18             19686200  36567  -5  -5     0       0      0   -     -
165.99.250.25              87288148  19445  -5  -5     0       0      0   -     -
173.87.29.120-rev.conver>  21062944  36029  -5  -5     0       0      0   -     -
58.69.163.116.pldt.net     35736402  33794  -5  -5     0       0      0   -     -
138.84.115.193             56171978  35733  -5  -5     0       0      0   -     -
dsl.49.144.194.107.pldt.>  46044192  34481  -5  -5     0       0      0   -     -
138.84.114.216             10480342  57686  -5  -5     0       0      0   -     -
120.28.194.245             11423169  22707  -5  -5     0       0      0   -     -
180.190.7.125              40073423  17689  -5  -5     0       0      0   -     -
dsl.49.148.227.91.pldt.n>  18104657  35843  -5  -5     0       0      0   -     -
196.79.158.136.convergei>  20587174  54135  -5  -5     0       0      0   -     -
32.83.158.136.convergeic>  11273016  31335  -5  -5     0       0      0   -     -
120.28.194.14              62105389  21907  -5  -5     0       0      0   -     -
143.44.196.16-rev.conver>  15534873  58877  -5  -5     0       0      0   -     -
139.135.192.168.converge>  101897193  10607  -5  -5     0       0      0   -     -
dsl.49.145.208.242.pldt.>  25932534  24984  -5  -5     0       0      0   -     -
dsl.49.149.205.220.pldt.>  180861011  57512  -5  -5     0       0      0   -     -
120.28.199.153             14405156  28541  -5  -5     0       0      0   -     -
112.200.160.90.pldt.net    225447547   3134  -5  -5     0       0      0   -     -
103.161.61.77              376803247  10680  -5  -5     0       0      0   -     -
150.228.189.242            15093627  22217  -5  -5     0       0      0   -     -
139.135.77.238             50047575  20703  -5  -5     0       0      0   -     -
dsl.49.149.99.232.pldt.n>  12527978  47221  -5  -4     0       0      0   -     -
203.177.193.196            38676294  22509  -5  -5     0       0      0   -     -
103.91.142.72              24573736  39882  -5  -5     0       0      0   -     -
126.209.13.45              89343040  16204  -5  -5     0       0      0   -     -
180.195.158.118            28361202  18941  -5  -5     0       0      0   -     -
222.127.169.33             27864001  31288  -5  -5     0       0      0   -     -
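
(A variant of the command above that may be easier to read: the -n flag makes chronyc print raw client addresses instead of truncated reverse-DNS names, and the packet threshold here is arbitrary.)

# list the busiest clients by NTP request count
chronyc -n clients -p 1000000 | tail -n +3 | sort -k2 -rn | head -20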

Normal traffic:

[graph]

After turning down the netspeed:

[graph]

After putting the server in monitoring-only mode and stopping chrony:

[graph]

It would help if you showed the pool server page to all of us.

I bet the served percentage is way high.

It’s this one, but it’s already in monitoring-only mode: pool.ntp.org: Statistics for 165.154.233.197


Your answer is here:

Client distribution

Global points: 3.66 ‱
Top countries:
ph 355.22 ‱
id 3.40 ‱
br 0.39 ‱
cn 0.29 ‱
vn 1.41 ‱

And your speed setting isn’t the lowest one. Zones: asia ph

As you can see, you serve Asia as well.

Then see here: pool.ntp.org: NTP Servers in Philippines, ph.pool.ntp.org. There are just a few servers there, so YES, your server is getting a lot of requests.

Those are high numbers. It’s a bug in the pool DNS distribution, reported many times, including by me for Belgium.

@ask We hope it will be fixed soon, or else servers will stop joining the pool.

I noticed the same, as have many others.

Set your speed to 512 kbit for now; it may help.

As I wrote, the server isn’t currently participating in the pool’s DNS responses; it’s set to monitoring only. The stats are irrelevant at this moment, yet the incoming traffic isn’t getting any smaller.

Was it ever in the pool? It takes days to weeks to get rid of all clients.

It won’t change overnight.

Sure, it takes time, but in my experience clients usually start to drop off soon after lowering the netspeed. Not all of them, but a noticeable portion, even in high-demand zones. Too bad many or most clients don’t respect KoD or dropped packets.

Hi @jnd

Yes, having a server in the Philippines can be an “interesting” experience. I have had one for a number of months that has always been on the lowest speed setting, and it gets hammered with incoming requests. The vnstat monthly figures:

[vnstat monthly traffic summary]

The hosting company does not seem to mind all this incoming traffic, so I have just left it running. The server replies to about 140 million requests per day and drops millions of requests each hour. Some IP addresses are sending thousands of requests per second from the same IP and source port; no idea why, but they hit the firewall and get dropped.

A solution to the “problem”? Not sure I have one. The best thing I have found, as you are doing, is to keep the speed setting low so your incoming traffic does not completely fill the port speed and therefore block the monitor tests. Apart from that I am not sure there is much else that can be done.
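
A per-source cutoff in front of the NTP daemon can be expressed, for example, with iptables’ hashlimit match. This is only an illustrative sketch, not the actual rule used on this server, and the thresholds are made up:

# Drop NTP requests from any single source IP exceeding ~50 packets/second
iptables -A INPUT -p udp --dport 123 \
    -m hashlimit --hashlimit-name ntpflood --hashlimit-mode srcip \
    --hashlimit-above 50/second --hashlimit-burst 100 \
    --hashlimit-htable-expire 60000 \
    -j DROP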

@ask, the solution to the problem is that the pool needs to divide the load across other servers that serve the region or the whole world.
But it doesn’t. Because of this BUG, countries with few servers are hit hard, very hard.
In fact, I wish you could also select the countries you want to serve. I have a lot of servers, but I can’t make them join Belgium, where the same problem happens.

In my opinion there should be a threshold above which servers from other zones are included.
So far this only happens for the country and global zones, as far as I can see; all the others are ignored.

I got hammered too; I know the feeling. Ask should fix this soon. Servers should not be overloaded with requests, and YES, he knows, as the numbers SHOW IT. Please fix this soon.

The pool is flawed in this respect. I want several servers to join BE, and have mailed about it a lot, but they are not serving BE.

It’s simply not cool to keep hammering servers.

Something that may be of interest to anyone reading this thread -

My server in Manila does a DNS lookup of pool.ntp.org against various resolvers every 15 minutes and stores the results. Looking through the output, I can see that a number of the IP addresses returned are outside the Philippines (based on the round-trip times recorded).

For example, here are the last 40 results from the lookups; the fields are:

Unix time, server doing the DNS lookup, IP address the lookup was sent to, DNS name looked up, IP address returned, round-trip time to that IP address (-1 = no response)

$ cat 2025-12*ph-l1-pool-dns.csv | grep -v asia.pool | tail -n 40
1765495404,ph-mnl-l1,127.0.0.1,pool.ntp.org,222.127.1.24,-1
1765495404,ph-mnl-l1,127.0.0.1,pool.ntp.org,162.159.200.123,1
1765495404,ph-mnl-l1,127.0.0.1,pool.ntp.org,222.127.1.21,-1
1765495404,ph-mnl-l1,127.0.0.1,pool.ntp.org,162.159.200.1,1
1765495404,ph-mnl-l1,1.1.1.1,pool.ntp.org,222.127.1.24,-1
1765495404,ph-mnl-l1,1.1.1.1,pool.ntp.org,162.159.200.123,1
1765495404,ph-mnl-l1,1.1.1.1,pool.ntp.org,222.127.1.21,-1
1765495404,ph-mnl-l1,1.1.1.1,pool.ntp.org,162.159.200.1,1
1765495404,ph-mnl-l1,8.8.8.8,pool.ntp.org,160.119.216.202,312
1765495404,ph-mnl-l1,8.8.8.8,pool.ntp.org,196.10.54.57,238
1765495404,ph-mnl-l1,8.8.8.8,pool.ntp.org,102.64.113.152,379
1765495404,ph-mnl-l1,8.8.8.8,pool.ntp.org,102.130.49.195,145
1765495404,ph-mnl-l1,9.9.9.9,pool.ntp.org,58.71.12.13,24
1765495404,ph-mnl-l1,9.9.9.9,pool.ntp.org,222.127.1.18,-1
1765495404,ph-mnl-l1,9.9.9.9,pool.ntp.org,162.159.200.1,1
1765495404,ph-mnl-l1,9.9.9.9,pool.ntp.org,162.159.200.123,1
1765495404,ph-mnl-l1,104.248.145.172,pool.ntp.org,162.159.200.1,1
1765495404,ph-mnl-l1,104.248.145.172,pool.ntp.org,162.159.200.123,1
1765495404,ph-mnl-l1,104.248.145.172,pool.ntp.org,58.71.12.13,24
1765495404,ph-mnl-l1,104.248.145.172,pool.ntp.org,165.154.233.197,18
1765496637,ph-mnl-l1,127.0.0.1,pool.ntp.org,58.71.12.13,23
1765496637,ph-mnl-l1,127.0.0.1,pool.ntp.org,162.159.200.123,1
1765496637,ph-mnl-l1,127.0.0.1,pool.ntp.org,162.159.200.1,1
1765496637,ph-mnl-l1,127.0.0.1,pool.ntp.org,222.127.1.22,-1
1765496637,ph-mnl-l1,1.1.1.1,pool.ntp.org,162.159.200.1,1
1765496637,ph-mnl-l1,1.1.1.1,pool.ntp.org,58.71.12.13,23
1765496637,ph-mnl-l1,1.1.1.1,pool.ntp.org,222.127.1.23,-1
1765496637,ph-mnl-l1,1.1.1.1,pool.ntp.org,162.159.200.123,1
1765496637,ph-mnl-l1,8.8.8.8,pool.ntp.org,198.137.202.32,184
1765496637,ph-mnl-l1,8.8.8.8,pool.ntp.org,162.159.200.123,1
1765496637,ph-mnl-l1,8.8.8.8,pool.ntp.org,66.187.4.132,195
1765496637,ph-mnl-l1,8.8.8.8,pool.ntp.org,138.89.14.60,-1
1765496637,ph-mnl-l1,9.9.9.9,pool.ntp.org,58.71.12.13,23
1765496637,ph-mnl-l1,9.9.9.9,pool.ntp.org,222.127.4.114,19
1765496637,ph-mnl-l1,9.9.9.9,pool.ntp.org,162.159.200.1,1
1765496637,ph-mnl-l1,9.9.9.9,pool.ntp.org,162.159.200.123,1
1765496637,ph-mnl-l1,104.248.145.172,pool.ntp.org,162.159.200.123,1
1765496637,ph-mnl-l1,104.248.145.172,pool.ntp.org,162.159.200.1,1
1765496637,ph-mnl-l1,104.248.145.172,pool.ntp.org,58.71.12.13,23
1765496637,ph-mnl-l1,104.248.145.172,pool.ntp.org,165.154.233.197,17

You will notice that Google (8.8.8.8) tends to return the most non-Philippines servers, so I guess it depends on whom you ask for a pool IP.

And to @jnd: your server appears in the top 10, but not at the top:

$ cat 2025-12*ph-l1-pool-dns.csv | grep -v asia.pool | awk -F, '{print $5}' | sort | uniq -c | sort -rn
    286 162.159.200.1
    281 162.159.200.123
    182 58.71.12.13
    103 222.127.4.114
     53 165.154.233.197
     28 222.127.1.24
     27 222.127.1.23
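
For anyone wanting to collect similar data, a minimal sketch of such a periodic lookup (run from cron every 15 minutes) could look like the one below. The resolver list, the file naming and the use of ping for the round-trip measurement are assumptions for illustration, not necessarily what produced the data above.

#!/bin/sh
# Query several resolvers for pool.ntp.org and append one CSV line per
# returned address: unix time, host, resolver, name, IP, RTT in ms (-1 = no reply)
NOW=$(date +%s)
HOST=ph-mnl-l1
OUT="$(date +%Y-%m-%d)-ph-l1-pool-dns.csv"
for RES in 127.0.0.1 1.1.1.1 8.8.8.8 9.9.9.9; do
    for NAME in pool.ntp.org asia.pool.ntp.org; do
        for IP in $(dig +short @"$RES" "$NAME" A); do
            RTT=$(ping -n -c 1 -W 1 "$IP" | sed -n 's/.*time=\([0-9.]*\).*/\1/p')
            echo "$NOW,$HOST,$RES,$NAME,$IP,${RTT:--1}" >> "$OUT"
        done
    done
done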

I think you might be confusing this with the global @ zone.

This can happen in any zone, but I have found that, for some reason, the PH zone hosts a certain number of especially persistent misbehaving clients. I’ve previously removed servers from (non-PH) zones in Asia because they would still get bombarded by misbehaving clients from the PH zone that were hard to get rid of.

At some point, the PH zone also had the problem that some servers in it would respond to a sufficient number of monitors to stay in the pool, but would not actually serve large swaths of the client population. There have been various discussions on that in this forum; I don’t know whether that particular issue still exists. As clients might decide to pick other servers when their currently assigned ones aren’t reachable, that might to some extent indirectly drive up the load on the other servers in the zone.

First, I would suggest waiting a few days to see if the situation improves. Yes, in many regions the traffic will drop very quickly when the netspeed setting is reduced, but not all the regions are the same.

If the number of requests slows down with your current “monitoring only” mode you could try to raise the netspeed to 512 Kbit/s again to see if the network traffic is tolerable at that setting.

There have been rare cases in the past where the pool has “stuck” to serving the same set of NTP server IP addresses for a few hours, due to backend database problems or similar. However, I don’t see an indication of this in my own statistics right now, so this is probably not the case here.

Another source of non-pool NTP traffic might be that someone has added your NTP server’s IP address to some other DNS name, like ntp.example.com. It’s hard to figure out whether this is the case, but as you seem to be running an HTTP server, you could check your web server’s logs for bot requests to unexpected hostnames (i.e. apart from *.pool.ntp.org, which should have stopped around the time you switched to “monitoring only” mode). Sure, NTP and HTTP are entirely different protocols, but sometimes the HTTP server logs can give hints about this kind of oddity.
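
If the web server is nginx, one way to make such hostnames visible is to log the Host header and then summarize it. The log format name and paths below are just examples:

# nginx.conf: include the requested hostname in the access log
log_format vhost '$host $remote_addr [$time_local] "$request" $status';
access_log /var/log/nginx/access.log vhost;

# then count which hostnames clients actually asked for
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head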

I set up a server in PH so I wouldn’t miss the fun. Various stats can be found on its own web page. I may adjust the netspeed setting up and down over the next few days, so take all the stats with a grain of salt. I’m aiming for something like 4 TB/month of sent traffic, and as of now it looks like I could increase the netspeed significantly. I’m doing the increases gradually as a precaution.

As for received traffic, it’s indeed fairly wild, and I can confirm some of the observations. Dropping out of the pool (by setting monitoring-only mode or otherwise) does not reduce the incoming traffic, at least not immediately. During the warmup period my server reached the score 10 threshold and got added to the pool. Then the pool shuffled some monitors between the Testing and Candidate lists (so that the Testing monitors had lower RTT values), causing the recent median score to momentarily drop slightly below 10. This is all fine; it’s how the algorithm works. As of now the situation has stabilized and my server’s score should stay consistently above 10.

The above gave me an opportunity to observe what happens when the server gets removed from the pool. I have some graphs for you, from the time just prior to getting re-added to the pool:


The above graph shows when the server got dropped from the pool at around 06:15 UTC. Note how the incoming traffic did not drop.


The above graph shows the wild percentage of dropped requests, which even increased when the server was not in the pool.

I’m currently using a “ratelimit interval 3 burst 8 leak 4” configuration in chrony to limit the request rate.
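
For reference, this is how I read those numbers against chrony’s documentation of the ratelimit directive:

# chrony.conf excerpt (the directive quoted above)
# interval 3 -> clients are expected to average no more than one request
#               per 2^3 = 8 seconds
# burst 8    -> up to 8 back-to-back requests are still answered
# leak 4     -> roughly 1 in 2^4 = 16 rate-limited requests still gets a
#               response, so limited (or spoofed) clients aren't cut off entirely
ratelimit interval 3 burst 8 leak 4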

Note that I have my usual “if an ICMP unreachable is received, don’t send any responses to that IP address for 100 seconds” firewall rule in place, so not all the requests sent to my server are recorded in chrony’s NTP request graphs. The effect of this firewall rule does not seem to be particularly significant in PH. Maybe around 1.5% of requests get dropped because of this.
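
A rule of that kind can be approximated with ipset plus iptables, roughly like the following. This is only a sketch, not necessarily the exact rule in use here:

# remember, for 100 seconds, anyone who sent us an ICMP unreachable
ipset create ntp_unreach hash:ip timeout 100
iptables -A INPUT -p icmp --icmp-type destination-unreachable \
    -j SET --add-set ntp_unreach src
# and stop answering their NTP queries while they are in the set
iptables -A OUTPUT -p udp --sport 123 \
    -m set --match-set ntp_unreach dst -j DROP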

I can also confirm the observation of some clients sending zillions of requests from the same source port. I captured 1M packets to/from UDP port 123 while the server was out of the pool; here are the top IP/port combinations:

cut -d" " -f3 ph_idle.txt | sort | uniq -c | sort -rn | head
 238085 49.145.240.186.28380
 235615 120.28.252.60.21366
 214157 103.235.92.193.17744
  98625 120.28.200.206.53751
  93355 45.115.225.48.123 (this server, this row was expected)
  79616 124.217.19.122.9939
     34 136.158.10.187.31883
     34 126.209.21.163.39952
     34 112.210.150.62.1461
     34 112.209.74.13.21897
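
(For reference, a capture like this can be produced roughly as follows; the interface name and file names are assumptions:)

# capture 1M NTP packets to a pcap, then render it as text for the cut above
tcpdump -ni eth0 -c 1000000 -w ph_idle.pcap udp port 123
tcpdump -nr ph_idle.pcap > ph_idle.txt   # field 3 is the source address.port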

If you want to have a look at this pcap file, it’s available at https://miuku.net/tmp/ph_idle.pcap.gz
I don’t know if these oddities are the result of a bug or just maliciously forged source IP addresses.

Edit: For comparison, here’s a similar 1M packet capture when the server was in the pool: https://miuku.net/tmp/ph_active.pcap.gz

I must say I’m a little bit concerned about the percentage of dropped packets, but on the other hand, the pool monitors seem to be happy so I suppose it’s okay.


Could it be that this is due to “sane” traffic (in the sense of more legitimate, or less likely to violate the limits you set) having a lower share of the overall traffic when the server is not in the pool, so that the “insane” traffic gets a higher share overall, and thus also dominates the (relative) packet drop ratio?

Hmm, but the monitors’ traffic would not hit the rate limits you set, so isn’t that to be expected? And the question is whether the pool should take the treatment of “insane” traffic into account in its metric. I.e., should there be an expectation, reflected in the scoring, that a server is penalized (its score dropped) when it protects itself against traffic that arguably could be considered abusive? And if so, how would that be measured, given that the monitors’ traffic is designed to be sane, e.g. not to violate ntpd’s default of rejecting traffic with less than 2 seconds of spacing between packets?
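
(For context, that spacing limit comes from ntpd’s discard directive, and the rate limiting only applies to clients matched by a restrict entry carrying the limited flag. A typical configuration, using the documented defaults, looks roughly like this:)

# ntp.conf sketch
# average 3 -> minimum average spacing of 2^3 = 8 s per client
# minimum 2 -> packets arriving less than 2 s apart are discarded
discard average 3 minimum 2
# send a RATE KoD and drop packets for clients that exceed the limits
restrict default kod limited nomodify nopeer noquery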

Actually, in a related case, the pool already penalizes servers that arguably protect themselves against “too much” traffic: responding with a RATE kiss code leads to 3.5 points being deducted from the score. Though the server would need to have a particularly sensitive rate-limit configuration, or there would need to be other special circumstances, e.g. the monitor sitting behind CGNAT with a bunch of regular clients also hitting the same server, for that to be triggered.

Here is an example for a server on the East Coast of the USA:

1765709567,2025-12-14 10:52:47,,-3.5,15.499140739,123,cayyz1-3strqkc,21.302,3,RATE
1765679273,2025-12-14 02:27:53,,-3.5,15.499871254,129,usmci1-3strqkc,49.692,3,RATE
1765656659,2025-12-13 20:10:59,,-3.5,13.595982552,277,usdca1-3grrbhg,1.464,3,RATE
1765653547,2025-12-13 19:19:07,,-3.5,14.63672924,87,usewr3-1a6a7hp,14.397,3,RATE
1765642727,2025-12-13 16:18:47,,-3.5,15.49015522,123,cayyz1-3strqkc,21.749,3,RATE
1765642393,2025-12-13 16:13:13,,-3.5,15.308969498,87,usewr3-1a6a7hp,14.556,3,RATE
1765619362,2025-12-13 09:49:22,,-3.5,13.749756813,87,usewr3-1a6a7hp,13.225,3,RATE
1765608061,2025-12-13 06:41:01,,-3.5,9.988577843,87,usewr3-1a6a7hp,15.306,3,RATE
1765606758,2025-12-13 06:19:18,,-3.5,13.233420372,87,usewr3-1a6a7hp,17.184,3,RATE
1765598659,2025-12-13 04:04:19,,-3.5,11.834592819,174,usmdw1-2trgvm8,22.14,3,RATE
1765596354,2025-12-13 03:25:54,,-3.5,9.44002533,87,usewr3-1a6a7hp,14.915,3,RATE
1765594866,2025-12-13 03:01:06,,-3.5,15.361688614,123,cayyz1-3strqkc,21.464,3,RATE
1765593907,2025-12-13 02:45:07,,-3.5,11.322280884,87,usewr3-1a6a7hp,16.464,3,RATE
1765589791,2025-12-13 01:36:31,,-3.5,12.268638611,87,usewr3-1a6a7hp,13.373,3,RATE
1765588944,2025-12-13 01:22:24,,-3.5,15.49984169,174,usmdw1-2trgvm8,25.117,3,RATE
1765585667,2025-12-13 00:27:47,,-3.5,14.019984245,87,usewr3-1a6a7hp,15.503,3,RATE
1765577303,2025-12-12 22:08:23,,-3.5,14.664486885,87,usewr3-1a6a7hp,15.308,3,RATE
1765568048,2025-12-12 19:34:08,,-3.5,15.474635124,123,cayyz1-3strqkc,20.748,3,RATE
1765565125,2025-12-12 18:45:25,,-3.5,14.969315529,87,usewr3-1a6a7hp,17.23,3,RATE
1765546231,2025-12-12 13:30:31,,-3.5,11.086711884,87,usewr3-1a6a7hp,16.507,3,RATE
1765545620,2025-12-12 13:20:20,,-3.5,15.109930038,87,usewr3-1a6a7hp,13.508,3,RATE
1765528055,2025-12-12 08:27:35,,-3.5,14.663721085,87,usewr3-1a6a7hp,14.339,3,RATE
1765525417,2025-12-12 07:43:37,,-3.5,15.490222931,123,cayyz1-3strqkc,21.823,3,RATE
1765516765,2025-12-12 05:19:25,,-3.5,15.455646515,87,usewr3-1a6a7hp,16.849,3,RATE
1765484497,2025-12-11 20:21:37,,-3.5,14.493034363,87,usewr3-1a6a7hp,14.264,3,RATE
1765475560,2025-12-11 17:52:40,,-3.5,15.393410683,123,cayyz1-3strqkc,21.765,3,RATE
1765473973,2025-12-11 17:26:13,,-3.5,15.061638832,87,usewr3-1a6a7hp,14.056,3,RATE
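
(A quick way to see which monitors trigger these, assuming the columns are as in the excerpt above, i.e. the seventh field is the monitor name and the last one the error code; the file name is a placeholder:)

awk -F, '$NF=="RATE" {print $7}' log_scores.csv | sort | uniq -c | sort -rn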

Operating one of the monitors that this server guards against, I think the penalty is OK in this case, knowing that my monitor certainly is not abusive in the sense discussed in this thread. In my case, I suspect it’s the monitor host itself syncing with this upstream server*, in combination with the monitor additionally polling the server. I.e., given today’s reality of CGNAT deployments, this server’s rate limits are in my view unrealistically tight and limit its usefulness as a server in a realistic environment, hence the penalty (though it’s rare enough not to actually affect the server’s inclusion in the pool).

* It could be argued that monitors should not get time from servers they monitor. But given that, with a few exceptions, all monitors monitor all servers, I think it would unfortunately be unrealistic in many cases to require that monitors not get time from servers that are in the pool, however desirable that might be.

All true. I don’t think the pool needs any changes because of this situation. Getting rid of the abusive clients (or forged source IP addresses) would be nice, but that’s not directly in our control.


I don’t think it’s unrealistic to ask that monitors not use the pool to find sources (e.g. by using pool DNS names in their configuration). That’s not to say they should never use any servers that happen to be in the pool; rather, pool monitors should manually maintain a configuration of good upstream sources, which may include, for example, Cloudflare’s anycast servers.
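
For instance, a monitor’s chrony configuration with hand-picked upstream sources instead of pool DNS names might look roughly like this (the example.net names are placeholders; time.cloudflare.com is one of the anycast services mentioned above and supports NTS):

server time.cloudflare.com iburst nts
server ntp1.example.net iburst
server ntp2.example.net iburst
server ntp3.example.net iburst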


Yes, indeed. But my point was not about using pool names; the pool already discourages that for servers.

Rather, my point was about monitors, which have something like the role of referees in the system, and so should have some degree of impartiality and independence from the pool. That impartiality/independence could be questioned when the monitors themselves get their own time, as one basis for judging servers, from (some of) the very servers they are supposed to judge impartially, even when those are hand-picked and configured statically. :slightly_smiling_face:

Indeed, in my opinion the monitors should get their time directly from multiple stratum 0 sources, backed up by very reliable sources with a long holdover time (a good OCXO, Rb, or Cs source), and the monitors should check their time against the other monitors.

But the monitor (agent) software continuously checks its time against 12 IPv4 and 8 IPv6 NTP servers and pauses its operation when its time seems off, so does it really matter?

Also, your own monitor does not “judge” your own NTP servers, so you can at least use those?

They do. Like mine, via GPS+PPS, being stratum 1.

Monitors being stratum 0 is impossible, as the load on them would be immense.

Monitors are checked against reliable servers. As such the monitors are not the problem.

The problem is the pool, @ask: the pool should make big servers serve the smaller zones as well, yet it doesn’t.

It takes global servers, it takes country servers, but anything in between (meaning EU/Asia/etc.) seems to be ignored.

As such, servers in under-served countries are being hit hard, very hard. This has NOTHING to do with the monitors; they do their job.

It has everything to do with a bug in the ntppool system that doesn’t distribute load fairly over servers that are not local/global; everything in between seems to be ignored.

Ergo, an EU server won’t serve Belgium, but a global one does; that is the problem.
Plenty of EU servers, yet Belgium is under-served.

The same problem hits others. This is a POOL-BUG!