Getting beyond 10k qps?


#1

I’ve been experimenting with higher-traffic servers running on VMs. I’ve tried FreeBSD and Ubuntu, and I’ve played with ntpsec, legacy ntp, and chrony. In all cases I max out around 60% CPU and 10-12k qps and then start dropping packets. All the NTP implementations seem to be single-threaded, so adding cores doesn’t help (I tried it just in case the network stack might end up on a different thread; it didn’t). Digital Ocean seems to behave a bit better than Linode (plus they don’t actually meter the bandwidth), but neither does that well.

If the load were relatively stable I could tune the bandwidth settings, but it’s very spiky: I saw a 40k qps 10-minute spike on my Bangalore server yesterday, and my Singapore server has +/- 80% traffic swings within an hour.

Has anybody managed to get beyond the 10-12k qps range on a VM, and if so, what setup are you using?

Has anybody done any work on a multithreaded server?


#2

In my tests about 6 months ago I didn’t have much success pushing multicore servers to handle higher NTP traffic loads. In the end I built a Docker NTP container, and by running multiple containers on a physical box I was able to load it up fairly well.

You need to run a load balancer or use multiple public IPs to distribute the queries across the containers.


#3

My CentOS 7 server in Singapore handles an average of 30k qps, with 5-minute peaks of 50+k qps, on standard ntpd. http://www.pool.ntp.org/scores/94.237.64.20 and http://makaki.miuku.net/stats/ntppackets.html have some stats, and http://makaki.miuku.net/stats/cpuload.html has the CPU load graphs (one core until week 12, two cores thereafter, so ntpd takes around 35-40% of a single CPU core at 30k qps). At this rate I would consume some 6TB/month, so I guess I’ll need to change to a lower netspeed setting to keep my bandwidth consumption below my 4TB/month limit. Of course they have plans with higher transfer limits, but those cost more; the 4TB/month package is $20.

The VM is from UpCloud. If you want to try, see https://www.upcloud.com/register/?promo=YH2F6G
They have servers in Frankfurt, Helsinki, Amsterdam, Singapore, London and Chicago.

(full disclosure: “You can earn bonus credits to your UpCloud account by sharing your unique referral link or code with your friends. For every new user who signs up using your referral code and makes at least the minimum one time payment, you will receive $50 worth of free credits. Every new user also receives a bonus worth of $25 credits when signing up through the referral program.”)


#4

There is a multithreaded NTP server written in Rust I was playing with some time ago: https://github.com/mlichvar/rsntp

On a VMware guest I have in the pool, the maximum throughput improved from about 55 kpps to 110 kpps. The average rate is about 2 kpps (with the maximum netspeed setting), so it only made a difference for spikes happening at midnight, like this one: https://i.imgur.com/MxsJt5F.png (rsntp wasn’t running here, so it responded to only about 55 kpps).

Please note that CPU utilization doesn’t scale linearly with traffic. With interrupt coalescing the kernel can process multiple packets per interrupt, so an NTP server should be able to handle more than double what it handles at 50% utilization, even if it is single-threaded.


#5

I wondered who the other operators in Singapore were; the traffic volume is stupendous. I’m staring at 2.5TB/month inbound (5TB/month inbound+outbound). I’m running FreeBSD 11 with two cores and stock ntpd. I did see dropped packets during spikes when I had one vCPU, but I upgraded to two and haven’t seen them since.


#6

I just upgraded my Singapore Digital Ocean droplet to three cores and 1GB ($15/month). I also switched it to run rsntp from @mlichvar (https://github.com/mlichvar/rsntp).

What’s interesting about Singapore is how spiky the usage is:

image

I have the same three-core setup in Bangalore, and that one really sees some traffic: I’ve seen 75k qps peaks, and it’s handling about 2 billion requests a day (that was at the 500M netspeed setting on IPv4; I just turned it up to the full 1G).

image

Neither of these machines is doing much beyond NTP. rsntp seems to work really well. Here is the CPU usage for the Bangalore machine over the same 24 hours.

image

The only weirdness I saw was that rsntp would not play nice with the floating-IP setup Digital Ocean uses; I’m not sure why. Regular ntpd and chrony both handle it fine.


#7

My guess is that the high-traffic parts correspond to times when your address was returned by DNS, and the low-traffic parts are clients that remember the address (as NTP clients are supposed to). There is a small number of servers in the zone, so the duty cycle is large.

What sampling interval do you use? I use collectd with the iptables plugin at a 1-second interval, which shows all spikes very clearly.

One difference is that rsntp may not respond from the same address the request was received on; it relies on the system routing configuration. Is it possible that the default route used a different address in the “floating” setup? I think a new option to bind rsntp’s sockets to specified addresses would help with that.


#8

My guess is that the high-traffic parts correspond to times when your address was returned by DNS, and the low-traffic parts are clients that remember the address (as NTP clients are supposed to). There is a small number of servers in the zone, so the duty cycle is large.

Probably true. I’m guessing that with the Bangalore server I’m in the cycle pretty much 100% of the time, which is why it looks more even. Those graphs are screen grabs from the Digital Ocean console; I’m not sure what the sampling is, but if I had to guess I’d say 1 minute. Putting real monitoring on the boxes is one of my tasks for this weekend.

Regarding rsntp, yeah, I think that’s it: the wrong IP in the reply. I don’t think binding a specific address will help here, since the address in question isn’t aliased to an interface. I’m not sure what legacy ntp and chrony are doing that’s different, and Rust isn’t one of my languages, so I’ll probably punt for now.


#9

In order to respond with the correct source address, ntpd binds a socket to every local address, while chronyd sets the address for each packet using the PKTINFO control message. I’m not sure how I would do the latter in Rust, so I added an option to rsntp to bind its sockets to a specified address if you want to try it. I think it should help.


#10

Quick update: my Bangalore server (three-core 1GB Digital Ocean droplet for $15/month) is now handling peaks of 80k qps with the netspeed set to gigabit on both IPv4 and IPv6. The @mlichvar rsntp server works very well at these loads. Traffic is around 12TB/month.


#11

Using rsntp, has anybody seen EINVAL (os error 22) errors? I get anywhere from 4-12 per hour, e.g.:

Jun 14 21:32:11 minime rsntp[6490]: Thread #2 failed to send packet: Invalid argument (os error 22)
Jun 14 21:32:31 minime rsntp[6490]: Thread #2 failed to send packet: Invalid argument (os error 22)
Jun 14 21:52:20 minime rsntp[6490]: Thread #2 failed to send packet: Invalid argument (os error 22)
Jun 14 21:52:40 minime rsntp[6490]: Thread #2 failed to send packet: Invalid argument (os error 22)
Jun 14 22:22:12 minime rsntp[6490]: Thread #2 failed to send packet: Invalid argument (os error 22)
Jun 14 22:22:12 minime rsntp[6490]: Thread #2 failed to send packet: Invalid argument (os error 22)
Jun 14 22:54:01 minime rsntp[6490]: Thread #2 failed to send packet: Invalid argument (os error 22)
Jun 14 22:54:21 minime rsntp[6490]: Thread #2 failed to send packet: Invalid argument (os error 22)
Jun 14 23:05:21 minime rsntp[6490]: Thread #1 failed to send packet: Invalid argument (os error 22)
Jun 15 00:10:39 minime rsntp[6490]: Thread #3 failed to send packet: Invalid argument (os error 22)

Everything seems to be working okay, so I’m trying to figure out whether it’s a bad/bogus source IP in a packet or whether for some reason some of the time data in the buffer is wrong… I’m guessing IP…


#12

@mlichvar cc the post above:


#13

I get that too, and I think your guess is correct: it’s probably trying to respond to an invalid IP address. If you pull from git, the error message will include the remote address.


#14

Awesome, thanks! That’ll definitely help debug the errors!