I just added a couple of servers that have both IPv4 and IPv6 addresses.
Am I right in thinking that the bandwidth numbers are independent, so setting 100mbit for IPv4 and for IPv6 is effectively setting 200mbit for the server as a whole?
I noticed that the IPv6 scores climb considerably faster than the IPv4 ones. What is the reasoning behind that?
I think the scores of IPv6 addresses change faster because there are fewer of them and they are monitored separately from IPv4 addresses, so the script which makes the measurements and updates the scores needs less time to finish a round of IPv6 addresses than of IPv4 addresses, and therefore runs more frequently.
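A rough back-of-the-envelope sketch of that effect (purely illustrative numbers, not the pool's actual code or server counts): if a monitor walks through all addresses of one protocol sequentially, the round time, and so how often each score can move, scales with how many addresses there are.

```python
def rounds_per_hour(num_addresses, seconds_per_check=1.0, min_interval=600):
    """Rough estimate: a round takes num_addresses * seconds_per_check,
    but never less than some minimum interval (e.g. 10 minutes)."""
    round_time = max(num_addresses * seconds_per_check, min_interval)
    return 3600 / round_time

# Made-up counts just to show the shape of the effect.
print(rounds_per_hour(4000))   # many IPv4 addresses -> ~0.9 rounds/hour
print(rounds_per_hour(600))    # fewer IPv6 addresses -> 6 rounds/hour
```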
The speed setting is independent for IPv4 and IPv6 addresses (the monitoring system doesn’t actually know they are the same server), but setting the same speed for an IPv4 and IPv6 address in the same zone doesn’t mean the amount of traffic over IPv4 and IPv6 will be the same. This depends on the zone, but typically the IPv6 traffic is much smaller.
For example, on one of my servers I get about 340 packets per second on IPv4 with the 1gbit/s speed setting and only about 20 packets per second on IPv6 with the same setting.
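For a sense of scale, those packet rates translate to very little actual bandwidth. A rough estimate, assuming a plain 48-byte NTP packet plus UDP/IP headers and ignoring Ethernet framing:

```python
NTP_PAYLOAD = 48
IPV4_PACKET = NTP_PAYLOAD + 8 + 20   # UDP + IPv4 headers = 76 bytes
IPV6_PACKET = NTP_PAYLOAD + 8 + 40   # UDP + IPv6 headers = 96 bytes

def kbit_per_s(pps, packet_bytes):
    return pps * packet_bytes * 8 / 1000

print(kbit_per_s(340, IPV4_PACKET))  # ~207 kbit/s of IPv4 requests
print(kbit_per_s(20, IPV6_PACKET))   # ~15 kbit/s of IPv6 requests
```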
Interesting. I have a similar situation. For a host with IPv6 I get about 20 packets per second; for a dual-stack host it’s 430 packets/s. Assuming a similar amount of IPv6 requests, that leaves a little more than 400 p/s for IPv4. I am in the Austrian pool. Which pool is your server in?
I have two dual stack servers in the US / North America zone. They are currently seeing ~2000 PPS IPv4 and 300 PPS IPv6 (both 4 and 6 set to gigabit). The IPv6 numbers show a fairly strong daily cycle with lows around 10:00 UTC and peaks between 15:00 and 04:00 UTC. IPv4 is much flatter with no clear pattern.
Incidentally these are $5/month Linode VMs that come with 1TB/month of outbound transfer and unlimited inbound bandwidth. I’m using about 16.5 GB/day of outbound. Both seem to be keeping time pretty well (+/- 3ms, unlike the DigitalOcean VMs, which wandered +/- 50ms).
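Those numbers hang together reasonably well. A quick sanity check using the same assumed packet sizes as above (responses are the same size as requests, and billing is taken at the IP layer):

```python
SECONDS_PER_DAY = 86400
ipv4_bytes = 2000 * 76 * SECONDS_PER_DAY   # ~2000 PPS IPv4 responses
ipv6_bytes = 300 * 96 * SECONDS_PER_DAY    # ~300 PPS IPv6 responses
print((ipv4_bytes + ipv6_bytes) / 1e9)     # ~15.6 GB/day, close to the observed 16.5
```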
Yeah, they are independent. The “megabit number” is really just relative to everyone else. I phrased it that way to give everyone a comparable measuring unit for how to set it.
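In other words (my understanding, as a hypothetical illustration rather than the pool’s exact algorithm), what matters is your setting relative to the total configured in the zone, since the DNS hands out servers roughly in proportion to it:

```python
def expected_share(my_speed, zone_speeds):
    """Rough share of a zone's queries for one server's netspeed setting."""
    return my_speed / sum(zone_speeds)

zone = [1000, 1000, 100, 100, 10]     # made-up zone: settings in "mbit"
print(expected_share(100, zone))       # ~4.5% of the zone's queries
print(expected_share(1000, zone))      # ~45%, i.e. ten times the share
```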
@mlichvar is right – there are fewer IPv6 servers, so the monitoring system can more or less keep up with the desired interval (10 minutes), whereas on IPv4 it’s currently a bit slower. It’s on the todo list to monitor everything more often, but there are other things to fix first to make sure the extra measurement data doesn’t muck with the system.
(If you look carefully you can see that the IPv6 servers are deliberately being monitored more often than the IPv4 ones, on top of the difference caused by the system limitation – I don’t remember why I did that; that change was from 2011 …)