A (preliminary) list of client IPs constantly requesting time millions of times

Since the inclusion of my server (http://www.pool.ntp.org/scores/161.53.131.133) in the China zone, at the time of the zone bootstrap, I have compiled a list of clients with an avgint of 0 seconds, which generally run to millions upon millions of consecutive NTP requests each.

I opened a new thread for this, as I think it is relevant not only to the general maintenance of the NTP pool, but to the health of the whole Internet.

In another post (Adding servers to the China zone) I commented on the problems with IoT cameras and other equipment, so this list of IP addresses may be connected with that; however, it may also be that a specific vendor/software… misuses the pool.

It would probably be worth investigating the DNS lookup records from these clients, since dropping out of the pool (see the China zone thread) lets the NTP servers recover. That could indicate that these clients misuse not only the NTP protocol but also DNS, constantly re-querying the pool.ntp.org address(es).

This is the list of clients, in the form of ipf.conf rules (the e1000g0 interface is for Solaris 10 on a SunFire X4100; if you use this list, change it to your own Ethernet device). I found ipf on Solaris and FreeBSD (DragonFly BSD uses ipfw, and I did not check whether the rules are the same). Sorry, I did not investigate other OSes, as I have no need for packet filtering on those machines. In any case, the addresses are the important part :slight_smile::

#
# ipf.conf
#
# Solaris 10
#
# See ipf(4) manpage for more information on
# IP Filter rules syntax.

#... ... ...

# Zorislav Shoyat, 18/3/2017 16:05
#

# The list of "overpersistent" NTP clients
#
# I did not block whole networks, as only these clients
# showed misbehaviour. However, they all come from
# class C subnets, so disabling the whole subnet would
# probably also be OK.
#
# This list is updated from the China zone bootstrap
# by adding new clients. It is possible that some of those
# in the list no longer misbehave.
#
# However, the list covers a very specific set of subnets!
#
# With these IP filtering rules in place, the "nastiest"
# remaining clients are those which ask hundreds of thousands
# of times for time, but not more often than every 4 to 5 seconds.
#

block in quick on e1000g0 from 1.82.184.3 to e1000g0/32
block in quick on e1000g0 from 1.82.184.4 to e1000g0/32
block in quick on e1000g0 from 1.82.184.17 to e1000g0/32
block in quick on e1000g0 from 1.82.184.22 to e1000g0/32
block in quick on e1000g0 from 1.82.184.23 to e1000g0/32
block in quick on e1000g0 from 1.82.184.24 to e1000g0/32
block in quick on e1000g0 from 1.82.184.30 to e1000g0/32

block in quick on e1000g0 from 36.111.130.16 to e1000g0/32
block in quick on e1000g0 from 36.111.130.17 to e1000g0/32
block in quick on e1000g0 from 36.111.130.19 to e1000g0/32
block in quick on e1000g0 from 36.111.130.52 to e1000g0/32
block in quick on e1000g0 from 36.111.130.62 to e1000g0/32

block in quick on e1000g0 from 36.42.35.186 to e1000g0/32
block in quick on e1000g0 from 36.42.35.184 to e1000g0/32
block in quick on e1000g0 from 36.42.35.185 to e1000g0/32

block in quick on e1000g0 from 106.2.230.128 to e1000g0/32
block in quick on e1000g0 from 106.2.230.147 to e1000g0/32
block in quick on e1000g0 from 106.2.230.163 to e1000g0/32
block in quick on e1000g0 from 106.2.230.189 to e1000g0/32

block in quick on e1000g0 from 106.2.233.41 to e1000g0/32
block in quick on e1000g0 from 106.2.238.167 to e1000g0/32
block in quick on e1000g0 from 106.2.233.43 to e1000g0/32

block in quick on e1000g0 from 112.85.42.28 to e1000g0/32

block in quick on e1000g0 from 113.113.121.164 to e1000g0/32
block in quick on e1000g0 from 113.113.121.166 to e1000g0/32

block in quick on e1000g0 from 113.140.51.32 to e1000g0/32
block in quick on e1000g0 from 113.140.51.33 to e1000g0/32
block in quick on e1000g0 from 113.140.51.35 to e1000g0/32
block in quick on e1000g0 from 113.140.51.36 to e1000g0/32
block in quick on e1000g0 from 113.140.51.37 to e1000g0/32
block in quick on e1000g0 from 113.140.51.38 to e1000g0/32
block in quick on e1000g0 from 113.140.51.39 to e1000g0/32

block in quick on e1000g0 from 117.190.234.26 to e1000g0/32
block in quick on e1000g0 from 117.190.234.17 to e1000g0/32

block in quick on e1000g0 from 119.147.159.3 to e1000g0/32
block in quick on e1000g0 from 119.147.159.2 to e1000g0/32
block in quick on e1000g0 from 119.147.159.5 to e1000g0/32
block in quick on e1000g0 from 119.147.159.6 to e1000g0/32

block in quick on e1000g0 from 124.116.244.112 to e1000g0/32
block in quick on e1000g0 from 124.116.244.113 to e1000g0/32
block in quick on e1000g0 from 124.116.244.114 to e1000g0/32
block in quick on e1000g0 from 124.116.244.115 to e1000g0/32
block in quick on e1000g0 from 124.116.244.116 to e1000g0/32
block in quick on e1000g0 from 124.116.244.117 to e1000g0/32
block in quick on e1000g0 from 124.116.244.118 to e1000g0/32
block in quick on e1000g0 from 124.116.244.119 to e1000g0/32


block in quick on e1000g0 from 124.116.245.3 to e1000g0/32
block in quick on e1000g0 from 124.116.245.7 to e1000g0/32
block in quick on e1000g0 from 124.116.245.8 to e1000g0/32
block in quick on e1000g0 from 124.116.245.11 to e1000g0/32
block in quick on e1000g0 from 124.116.245.12 to e1000g0/32
block in quick on e1000g0 from 124.116.245.13 to e1000g0/32
block in quick on e1000g0 from 124.116.245.14 to e1000g0/32

block in quick on e1000g0 from 221.181.34.205 to e1000g0/32
block in quick on e1000g0 from 222.190.109.202 to e1000g0/32
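For anyone who wants to adapt the list to their own setup, here is a minimal sketch (assuming Python is available; the interface name `e1000g0` and the rule format are simply copied from the listing above) that generates the ipf rules from a plain list of addresses:

```python
# Generate ipf "block in quick" rules from a plain list of client IPs.
# The interface name is an assumption -- change it to your own device.
INTERFACE = "e1000g0"

def ipf_rules(addresses, interface=INTERFACE):
    """Return one 'block in quick' rule per offending client address."""
    return [
        f"block in quick on {interface} from {addr} to {interface}/32"
        for addr in addresses
    ]

if __name__ == "__main__":
    # A few addresses from the list above, as an example.
    clients = ["1.82.184.3", "36.111.130.16", "221.181.34.205"]
    print("\n".join(ipf_rules(clients)))
```

This keeps the blocklist itself in one flat file, so updating it does not mean hand-editing dozens of rules.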

Is that the complete list?

If it’s a widespread broken-software thing (which exists, too) I’d expect many more IPs.

I wonder if those IPs are really “carrier grade” NAT gateways?

That would be a plausible explanation; however, from a NAT I would expect the requests not to all come from the same port (TCP and UDP are normally address- and port-translated).

Just a few minutes ago I had a dropout in the availability of my China zone server (http://www.pool.ntp.org/scores/161.53.131.133/log?limit=50, “2017-03-19 12:00:24”). The following is the ntpdc monlist output for the NTP “clients” flooding my server:

1.82.184.1             24304 161.53.131.133  14404076 3 4    1d0      0     382
1.82.184.6             47660 161.53.131.133   8989689 3 4    1d0      0     642
1.82.184.2             55464 161.53.131.133   7908662 3 3    1d0      0    1477
1.82.184.27            58928 161.53.131.133   4409090 3 3    1d0      1    1487

(The last one has an avgint of 1, but still a series of 4,409,090 requests!)
These were not yet included in the IP filtering list.

Generally, judging from snooping (tcpdump, etherfind), NTP requests from such addresses seem to come in a constant stream, or at least several times a second. The originate timestamps, though, differ wildly (i.e. any date whatsoever within the epoch), which would suggest different clients behind each address. Or just floods, which seems more probable to me, given the facts.
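Picking the flooders out of monlist output can be automated. A small sketch, assuming the whitespace-separated column layout shown in the sample above (address, port, local address, count, mode, version, restriction flags, avgint, lstint — an assumption from the sample, not a guaranteed format):

```python
# Flag abusive clients in `ntpdc -c monlist` output: a huge request count
# combined with an average interval (avgint) of at most 1 second.
# Column layout assumed from the sample output above:
#   address  port  local-address  count  mode  version  restr  avgint  lstint

def flag_flooders(lines, min_count=1_000_000, max_avgint=1):
    """Return addresses whose request count and avgint look like a flood."""
    flooders = []
    for line in lines:
        fields = line.split()
        if len(fields) < 9:
            continue  # skip headers, blanks, or truncated lines
        addr, count, avgint = fields[0], int(fields[3]), int(fields[7])
        if count >= min_count and avgint <= max_avgint:
            flooders.append(addr)
    return flooders

# The two extreme lines from the monlist sample above.
sample = [
    "1.82.184.1             24304 161.53.131.133  14404076 3 4    1d0      0     382",
    "1.82.184.27            58928 161.53.131.133   4409090 3 3    1d0      1    1487",
]
```

The thresholds are arbitrary starting points; tune them to your own traffic before feeding the result into a filter list.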

The address 1.82.184.1 above traceroutes (on the 29th hop!; after that it is just *) to:
29 202.97.65.114 (202.97.65.114) 514.333 ms * 496.873 ms

A flood over such an Internet distance from just a few addresses is a really heavy burden on the Internet itself, quite apart from the NTP server problems.

It seems that the floods start when the server is assigned, through DNS, to serve the China zone. On my other servers (in the hr/europe zones) I did not notice a significant problem of this type.

I presently have no idea whether those IPs expect an NTP answer at all, as the IP filter evidently has to block a lot of requests. What I mean is: I do not know whether they stop sending requests as soon as the server stops responding, but I suspect they keep sending them as long as the server's address is active in the DNS.

So, as I mentioned in my previous post, it would be interesting, if possible, to check whether they also query the DNS in a flood-like way, i.e. each time before sending an NTP request, or how often. From the fact that they seem to flood only servers that are currently in the DNS (otherwise I would expect the floods to continue indefinitely), it follows that they must consult the DNS regularly.

I have a strong feeling that if all of us in the China zone block those addresses (and any we find later with the same behaviour), the zone could be stabilised quite well.

PS: @ask Would it be possible to add to the server status web page (and/or the CSV log!) an indication of whether the server is currently in the DNS? That could significantly help server operators track anomalous behaviour.

Did you filter martian packets at the external interface? I have seen many NAT private IPs trying to connect to my NTP server. Maybe that can help you solve this problem.

Here is my pf.conf:

# --
# macros
# --
# interface (external and internal)
if_ext = "igb0"
.......

# --
# table
# --
# https://en.wikipedia.org/wiki/Reserved_IP_addresses
# https://en.wikipedia.org/wiki/Martian_packet
table <martian> const {
    0.0.0.0/8 10.0.0.0/8 100.64.0.0/10 127.0.0.0/8 127.0.53.53 169.254.0.0/16 \
    172.16.0.0/12 192.0.0.0/24 192.0.2.0/24 192.168.0.0/16 198.18.0.0/15 \
    198.51.100.0/24 203.0.113.0/24 224.0.0.0/4 240.0.0.0/4 255.255.255.255/32 \
    ::/128 ::1/128 ::ffff:0:0/96 ::/96 100::/64 2001:10::/28 2001:db8::/32 \
    fc00::/7 fe80::/10 fec0::/10 ff00::/8
}
.......

# --
# filter rules
# --
#-----------# external ipv4 & ipv6 inbound traffic #-------#
# block and log incoming packets from martian table
block in log quick on $if_ext from <martian>

.......
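The same martian test can be sketched outside pf, e.g. for log analysis, with Python's `ipaddress` module (the IPv4 prefixes mirror the `<martian>` table above; this is an illustration, not a replacement for filtering in the kernel):

```python
import ipaddress

# IPv4 martian prefixes, mirroring the <martian> pf table above.
MARTIANS = [ipaddress.ip_network(n) for n in (
    "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
    "169.254.0.0/16", "172.16.0.0/12", "192.0.0.0/24", "192.0.2.0/24",
    "192.168.0.0/16", "198.18.0.0/15", "198.51.100.0/24", "203.0.113.0/24",
    "224.0.0.0/4", "240.0.0.0/4",
)]

def is_martian(addr):
    """True if addr should never appear as a source on an external interface."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in MARTIANS)
```

Useful for quickly classifying the source addresses in a tcpdump capture like the one below.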

A sample of the tcpdump filter output:

# tcpdump -ttt -n -i pflog0 rulenum 2 | head -30
tcpdump: WARNING: pflog0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pflog0, link-type PFLOG (OpenBSD pflog file), capture size 65535 bytes
00:00:00.000000 IP 100.91.0.170.36464 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000042 IP 100.71.31.19.42717 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000073 IP 100.71.236.12.42920 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000553 IP 100.69.99.189.54492 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000321 IP 100.74.129.131.40680 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000500 IP 100.79.80.90.45718 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000240 IP 100.69.54.196.59056 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000147 IP 100.108.146.179.35971 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000043 IP 100.112.233.64.47159 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000483 IP 100.95.206.34.44350 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000024 IP 100.106.227.106.36176 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000232 IP 100.68.157.220.36075 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000120 IP 100.75.34.3.44938 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000004 IP 100.70.250.225.57591 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000385 IP 100.120.216.106.40667 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000018 IP 100.75.34.3.39927 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000062 IP 100.102.27.167.58262 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000276 IP 100.78.109.107.36947 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000013 IP 100.71.209.181.46389 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000021 IP 100.71.209.181.47973 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000025 IP 100.81.69.94.41007 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000136 IP 100.78.216.102.57835 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000296 IP 100.104.35.227.38243 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000124 IP 100.102.39.92.38203 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000315 IP 100.80.241.136.57601 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000460 IP 100.82.85.168.44245 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000190 IP 100.69.16.169.55995 > 61.216.153.107.123: NTPv3, Client, length 48
00:00:00.000027 IP 100.75.183.113.52258 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000028 IP 100.125.2.9.53999 > 61.216.153.104.123: NTPv3, Client, length 48
00:00:00.000302 IP 100.102.143.5.54875 > 61.216.153.104.123: NTPv3, Client, length 48
95 packets captured
3106 packets received by filter
0 packets dropped by kernel

Anti-spoofing should take care of that last lot: even the most basic firewalling will recognise such packets arriving at the external interface and ignore them (mine ignores all but my own /24 in the 192.168 range, even on the internal interface), and ISPs shouldn't be routing them at all. Mine (Zen) drops them where they belong, in /dev/null (at least, I believe they do, as I've never seen any on the Internet-facing interfaces of any of my devices).
But the previous two lots could form the foundation of a list that can be firewalled, ideally grouped into subnets where appropriate (I notice that 1.82.184.0/24 and 124.116.244.0/23 seem to be problem networks, along with a few others).
Between us we could quite quickly generate a list of known problem addresses/networks that could be recommended for filtering to anyone wanting to help kick-start zones that are under-served by the pool. Any Chinese user in those subnets who is NOT an abuser could then lean on their ISP to block the problem users, as would be the case in most countries with decent ISPs.
By generating such a list, we could even be doing an additional service to the Internet community: once it became known and used more widely than just by members of the NTP pool, it might create pressure on careless or complicit ISPs in China to put their houses in order, as has mostly happened elsewhere.
Worth considering?
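The "grouped into subnets" step can be sketched quite simply, assuming /24 is the right granularity (which the per-network counters in this thread suggest but do not prove):

```python
import ipaddress

def to_slash24s(addresses):
    """Collapse individual client IPs into the set of /24 networks they occupy."""
    nets = {
        # strict=False lets us pass a host address and get its enclosing /24
        ipaddress.ip_network(addr + "/24", strict=False)
        for addr in addresses
    }
    return sorted(str(n) for n in nets)
```

For adjacent /24s (like the 124.116.244.0/23 case mentioned above), `ipaddress.collapse_addresses` can merge the result further.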

I agree that this amounts to abuse. I firewalled them; my resources
are better used elsewhere. This is how much has been blocked in 17 hours:

 pkts bytes target     prot opt in     out     source               destination
49142 3735K BadIPsDrop  all  --  *      *       1.82.184.0/24        0.0.0.0/0 
 563K   43M BadIPsDrop  all  --  *      *       14.204.86.0/24       0.0.0.0/0 
  12M  891M BadIPsDrop  all  --  *      *       36.111.130.0/24      0.0.0.0/0 
27862 2118K BadIPsDrop  all  --  *      *       36.42.35.0/24        0.0.0.0/0 
 185K   14M BadIPsDrop  all  --  *      *       106.2.230.0/24       0.0.0.0/0 
 1125 85534 BadIPsDrop  all  --  *      *       106.2.233.0/24       0.0.0.0/0 
37559 2855K BadIPsDrop  all  --  *      *       106.2.238.0/24       0.0.0.0/0 
  127  9652 BadIPsDrop  all  --  *      *       112.85.42.0/24       0.0.0.0/0 
 289K   22M BadIPsDrop  all  --  *      *       113.113.121.0/24     0.0.0.0/0 
30427 2313K BadIPsDrop  all  --  *      *       113.140.51.0/24      0.0.0.0/0 
3288K  250M BadIPsDrop  all  --  *      *       117.190.234.0/24     0.0.0.0/0 
  161 12611 BadIPsDrop  all  --  *      *       119.147.159.0/24     0.0.0.0/0 
 1145 87407 BadIPsDrop  all  --  *      *       124.116.244.0/24     0.0.0.0/0 
 1669  127K BadIPsDrop  all  --  *      *       124.116.245.0/24     0.0.0.0/0 
3476K  264M BadIPsDrop  all  --  *      *       221.181.34.0/24      0.0.0.0/0 
 9663  734K BadIPsDrop  all  --  *      *       222.190.109.0/24     0.0.0.0/0 
 
18 GB of traffic from 100.64.0.0/10 in 17 hours...

5740K  436M BadIPsDrop  all  --  *      *       10.0.0.0/8           0.0.0.0/0
2210K  168M BadIPsDrop  all  --  *      *       172.16.0.0/12        0.0.0.0/0
 239M   18G BadIPsDrop  all  --  *      *       100.64.0.0/10        0.0.0.0/0 
  142 10792 BadIPsDrop  all  --  *      *       169.254.0.0/16       0.0.0.0/0

Clearly they don't query the cn.pool.ntp.org hostname, because at another site
I have NO hits at all from these ranges. So something is just hitting the
IP or IPs that it already got.

 pkts bytes target     prot opt in     out     source               destination
    0     0 BadIPsDrop  all  --  *      *       1.82.184.0/24        0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       14.204.86.0/24       0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       36.111.130.0/24      0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       36.42.35.0/24        0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       106.2.230.0/24       0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       106.2.233.0/24       0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       106.2.238.0/24       0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       112.85.42.0/24       0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       113.113.121.0/24     0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       113.140.51.0/24      0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       117.190.234.0/24     0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       119.147.159.0/24     0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       124.116.244.0/24     0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       124.116.245.0/24     0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       221.181.34.0/24      0.0.0.0/0
    0     0 BadIPsDrop  all  --  *      *       222.190.109.0/24     0.0.0.0/0

This sort of thing could be another use case for the general system for sampling what the servers see that I mentioned in another thread.

Right now it's nearly impossible for each operator to get data that's meaningful enough to justify the (often significant) work to correct a bad client, but if we had more data from across the thousands of servers it might be possible to target just the truly awful top 0.001% of clients, which likely account for many percent of the dumb queries.
