I noticed a few days ago that the maximum monitored offset has become much smaller. This happens for all my IPv6 servers but not for IPv4 (same servers, dual stack). The IPv6 servers are now within ±1 ms; IPv4 is still within ±4 ms. I am in Vienna, Austria. Since nothing changed on my side, something must have changed elsewhere.
As far as I understand, many factors affect the performance of IPv4 and IPv6 connectivity, starting with the routing protocols.
Because of dynamic routing protocols, the path between the monitoring station in Los Angeles, California and a server may change from time to time, so the stability of the link varies and sometimes produces higher timing jitter.
In the last few days I have noticed that the IPv6 offsets are more stable. This could be due to an improvement of IPv6 routing in the Los Angeles area, in the data center of the monitoring station, or by one of the NTP Pool project's contributors.
# traceroute 2607:f238:2::ff:5
traceroute: Warning: Multiple interfaces found; using 2001:628:21f0:99::99:154 @ net0:1
traceroute to 2607:f238:2::ff:5, 30 hops max, 60 byte packets
1 2001:628:21f0:99::99:1 0.725 ms 0.291 ms 0.236 ms
2 fxy.iiasa.ac.at (2001:628:21f0:125::125:90) 0.260 ms 0.226 ms 0.247 ms
3 2001:628:21f0:124::124:90 0.513 ms 0.482 ms 0.390 ms
4 2001:628:1101:1010::1 1.575 ms 1.604 ms 1.517 ms
5 bundle-ether-21-73.core21.aco.net (2001:628:1101:1::1) 2.108 ms * *
6 2001:7f8:30:0:2:1:0:6939 11.481 ms 12.382 ms 9.538 ms
7 100ge13-1.core1.par2.he.net (2001:470:0:3f4::1) 18.869 ms 23.020 ms 17.516 ms
8 * 100ge14-1.core1.nyc4.he.net (2001:470:0:33b::1) 95.432 ms 98.036 ms
9 100ge9-1.core2.chi1.he.net (2001:470:0:298::1) 104.312 ms 122.257 ms 104.291 ms
10 100ge12-1.core1.mci3.he.net (2001:470:0:36c::1) 116.673 ms 116.792 ms 116.735 ms
11 100ge12-1.core1.den1.he.net (2001:470:0:204::1) 129.119 ms 165.944 ms 128.910 ms
12 10ge13-1.core1.lax2.he.net (2001:470:0:15d::2) 161.383 ms 153.982 ms 164.839 ms
13 2001:504:13::210:50 156.379 ms 156.606 ms 157.774 ms
14 2607:f238:0:8::2 157.055 ms 156.801 ms 156.769 ms
15 manage.ntppool.org (2607:f238:2::ff:5) 156.795 ms !X 157.040 ms !X 156.875 ms !X
Here is the situation where the offset currently drops to between -1 and -2 ms:
# traceroute 2607:f238:2::ff:5
traceroute: Warning: Multiple interfaces found; using 2001:628:21f0:99::99:154 @ net0:1
traceroute to 2607:f238:2::ff:5, 30 hops max, 60 byte packets
1 * 2001:628:21f0:99::99:1 0.528 ms 0.424 ms
2 fxy.iiasa.ac.at (2001:628:21f0:125::125:90) 0.371 ms 0.249 ms 0.255 ms
3 2001:628:21f0:124::124:90 0.645 ms 0.463 ms 0.410 ms
4 2001:628:1101:1010::1 1.559 ms 1.538 ms 1.460 ms
5 * * *
6 2001:7f8:30:0:2:1:0:6939 1.992 ms 2.014 ms 1.795 ms
7 100ge13-1.core1.par2.he.net (2001:470:0:3f4::1) 23.980 ms 17.178 ms 18.706 ms
8 100ge14-1.core1.nyc4.he.net (2001:470:0:33b::1) 96.043 ms 98.451 ms 87.712 ms
9 100ge9-1.core2.chi1.he.net (2001:470:0:298::1) 104.211 ms 104.283 ms 112.097 ms
10 100ge12-1.core1.mci3.he.net (2001:470:0:36c::1) 116.595 ms 121.502 ms 125.929 ms
11 100ge12-1.core1.den1.he.net (2001:470:0:204::1) 131.160 ms 128.929 ms 139.509 ms
12 100ge12-1.core1.lax2.he.net (2001:470:0:3fa::1) 159.062 ms 154.440 ms 154.368 ms
13 2001:504:13::210:50 155.723 ms 155.466 ms 155.860 ms
14 2607:f238:0:8::2 154.901 ms 155.176 ms 154.923 ms
15 manage.ntppool.org (2607:f238:2::ff:5) 155.020 ms !X 155.144 ms !X 155.039 ms !X
The two routes from the monitoring station to the NTP server are different, and in my experience we cannot guarantee that each route provides the same timing jitter.
Timing jitter is chaotic, unpredictable and unavoidable. One can only adapt to it and, sometimes, develop techniques to reduce it.
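One well-known technique for reducing the impact of jitter, used by the NTP reference implementation's clock filter, is to query a server several times and trust the offset from the sample with the lowest round-trip delay, since low-delay packets are the least affected by queueing. A minimal sketch (sample values are illustrative, not measured):

```python
def best_offset(samples):
    """Pick the offset from the sample with the lowest round-trip delay.

    samples: list of (delay_ms, offset_ms) pairs from repeated queries.
    Low-delay samples spent the least time in router queues, so their
    offsets are the most trustworthy (the idea behind NTP's clock filter).
    """
    delay, offset = min(samples, key=lambda s: s[0])
    return offset

# One congested sample (310 ms round trip) carries a skewed offset;
# picking the minimum-delay sample discards it.
samples = [(155.0, -1.5), (310.4, 5.2), (156.1, -1.4), (158.9, -1.9)]
print(best_offset(samples))
```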
Each routing station has its own fingerprint. Many factors affect the performance of each station: the load on the routers, the difference in cable length between two possible routes, temperature, software, hardware, and so on.
Accurate time measurement is rather complicated. For example, a stratum 1 server located in Sydney, Australia might show high timing jitter to the monitoring station in Los Angeles and therefore report larger offsets, even though both machines may in fact agree on the time almost perfectly.
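This follows from the standard NTP offset calculation (RFC 5905), which assumes the outbound and return delays are equal. On a long, jittery path that assumption fails per-sample, so an apparent offset appears even between perfectly synchronized clocks. A sketch with hypothetical timestamps:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP offset and round-trip delay (RFC 5905).

    t1: client transmit, t2: server receive,
    t3: server transmit, t4: client receive (all in seconds).
    The offset formula assumes outbound and return delays are equal.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Hypothetical: both clocks agree perfectly, but the outbound leg takes
# 100 ms while the return leg takes only 60 ms.
offset, delay = ntp_offset_delay(0.0, 0.100, 0.100, 0.160)
print(offset, delay)  # offset ≈ 0.020 s: a 20 ms apparent error
```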
Maybe it would be worthwhile to have a pool of servers monitoring other servers. For example, my servers 1a.ncomputers.org, 1b.ncomputers.org and 1c.ncomputers.org are located in Germany, and one of them could monitor other German IPv4/IPv6 servers, then send the results to the monitoring station in Los Angeles, which could use statistics to compute more accurate scores.
In general terms, after researching randomness, I can add that all hardware exhibits some degree of randomness, since there is a direct relationship between hardware (physical matter) and quantum zero-point energy.
Information that could be useful for understanding more about the randomness of timing jitter (in addition to what you can find on Wikipedia):
Now the offset for IPv6 is consistently about -1.5 ms (for me in Austria). This could also be my fault. I am running a DCF77, a GPS and a rubidium atomic stratum-1 server, and in theory all three could be wrong. So I compared against the "official" time providers in Austria and Germany, but I see only about ±10 µs offset to them.
@ Oliver ( ncomputers.org )
I completely agree with you. There are many situations that could cause a higher offset. Allow me to add one comment: in my experience, different upstream and downstream speeds cause exactly this kind of offset issue. This can be verified very easily on an ADSL line. I did some experiments, worked through the theory, and posted it on my blog.
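The asymmetry effect can be quantified: since NTP assumes the one-way delay is half the round trip, a link whose upstream is slower than its downstream biases the measured offset by half the delay difference. A tiny worked example with assumed ADSL-like numbers:

```python
# Hypothetical one-way delays on an asymmetric ADSL line (milliseconds).
up_ms = 25.0    # slow upstream
down_ms = 5.0   # fast downstream

# NTP's symmetric-delay assumption turns the asymmetry into an
# apparent clock offset of (upstream - downstream) / 2.
bias_ms = (up_ms - down_ms) / 2
print(bias_ms)  # the clock appears 10 ms off even if it is exact
```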