Please see the screenshot above showing three servers (US, NL).
I didn’t know that the first two letters in the server name indicate the country!
How do you get the offset and the RTT?
Unsure. This is just standard functionality on the NTP Pool server management page.
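For reference, the offset and RTT are the standard values a client derives from the four timestamps of a single NTP exchange; nothing pool-specific is needed:

offset = ((T2 - T1) + (T3 - T4)) / 2
RTT (delay) = (T4 - T1) - (T3 - T2)

where T1 and T4 are the client’s transmit and receive times, and T2 and T3 are the server’s receive and transmit times. Presumably the management page just displays these per monitor.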
I didn’t understand exactly.
My NTP server is configured with GPS+PPS and gets its time from PSM.
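For anyone curious what a GPS+PPS setup looks like in practice, a typical chrony arrangement is sketched below (purely illustrative, assuming gpsd feeds coarse NMEA time via shared memory and the kernel exposes the pulse at /dev/pps0; the actual configuration on this server may differ):

# chrony.conf (illustrative sketch, not this server's real config)
refclock SHM 0 refid NMEA offset 0.200 noselect    # coarse time-of-day from gpsd; offset is board-specific
refclock PPS /dev/pps0 refid PPS lock NMEA         # pulse-per-second, paired with the NMEA source

The SHM source provides the seconds numbering, and the PPS source supplies the precise edge that chrony actually disciplines to.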
For the past 30 hours time.ravnus.com has shown very low jitter, typical of what I see from other GPS-based NTP servers. The offset is probably low, though we can’t be certain. There were a few brief instances where the time was off by hundreds of milliseconds, possibly related to an NTP server reset.
==> Server accuracy isn’t a likely problem.
The RTTs I see from my monitors are consistent with those reported by Kets_One:
United Kingdom 236 msec RTT
Germany 220 msec
India 181 msec
New York 181 msec
San Francisco 138 msec
Chicago 176 msec
Ohio (US) 170 msec
==> Latency isn’t a likely problem. [Note: my monitors are not part of the NTP Pool]
There are significant NTP failures (request sent, but no response seen). For example, my monitor in Germany saw 10% failures between 2024-10-21 14:00 and 2024-10-22 11:00.
==> These failures are a problem.
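A crude way for anyone to reproduce that kind of loss figure (only a sketch of the idea, not how the monitors actually work) is to send periodic client queries and count the ones that get no reply, e.g. with ntpdate, which exits non-zero when it receives no usable response:

# ~40 minutes of gentle probing: 600 queries, 4 s apart
fail=0
for i in $(seq 1 600); do
  ntpdate -q -t 2 time.ravnus.com >/dev/null 2>&1 || fail=$((fail+1))
  sleep 4
done
echo "failures: $fail / 600"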
Troubleshooting is complicated by the intermittent use of geoblocking.
I don’t see how geoblocking can be robustly implemented in the current NTP Pool.
There are techniques to isolate the NTP loss. Running 24-hour tests from clients within Korea would be a start:
https://atlas.ripe.net/measurements/80431128/
…as well as checking whether ICMP echo requests are being blocked:
https://atlas.ripe.net/measurements/80368493/
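For anyone who wants to run their own 24-hour check, a similar NTP measurement from probes inside Korea can be scheduled through the RIPE Atlas API, roughly like this (field values are illustrative; you need your own API key, and the measurement runs until you stop it or a stop time you set passes):

curl -s -X POST "https://atlas.ripe.net/api/v2/measurements/" \
  -H "Authorization: Key YOUR_ATLAS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "definitions": [
          { "type": "ntp", "af": 4, "target": "time.ravnus.com",
            "description": "NTP loss check from KR", "interval": 300 }
        ],
        "probes": [
          { "type": "country", "value": "KR", "requested": 10 }
        ],
        "is_oneoff": false
      }'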
We still don’t know what hardware and software the NTP server is built from, or how much traffic it sees during its brief stints in the pool.
I used traceroute-style probes to further investigate the paths.
These showed losses very close to the server.
UK
Resp%  Hop  Probe     Responder
100%   11   ICMP:TTL  10.44.255.211
100%   12   ICMP:TTL  100.71.53.182 / 100.72.74.226
 95%   13   ICMP:TTL  175.117.50.8 (time.ravnus.com)
 95%   14   NTP       175.117.50.8 (time.ravnus.com)

New York
Resp%  Hop  Probe     Responder
100%   18   ICMP:TTL  10.44.255.157
100%   19   ICMP:TTL  100.71.53.182 / 100.72.74.226
 93%   20   ICMP:TTL  175.117.50.8 (time.ravnus.com)
 95%   21   NTP       175.117.50.8 (time.ravnus.com)
The loss seems to be at time.ravnus.com, or possibly one hop upstream.
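The probes behind these numbers were custom, but a rough approximation with stock tools is mtr in UDP mode aimed at the NTP port (note that mtr’s UDP probes are not valid NTP packets, so the final hop may stay silent even when the server is healthy; the intermediate hops are still informative):

mtr --udp --port 123 --report --report-cycles 100 175.117.50.8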
As PoolMUC suggests, more information about the NTP server’s environment is needed to explore further.
Do you still have a publicly available stratum 1 server in South Korea?
I’ve set up an instance in a datacenter in Seoul, and am looking for good upstream servers.
I note that time.ravnus.com still serves time, but it is currently at stratum 4.
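(A one-shot query from any client shows the advertised stratum, e.g.:

ntpdate -q time.ravnus.com

The reply line includes the stratum along with offset and delay.)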
Thanks!
Yes, it’s still running. I removed it from the NTP Pool for a while because I’m changing ISPs on the 14th.
I just checked, and I think it’s because the PPS signal is not working properly and the NTP server is using another source. I may have touched something while upgrading the equipment yesterday.
I’ll check again tomorrow and let you know.
I forgot to set up the GPIO pin yesterday while upgrading the hardware to a Raspberry Pi 5 with an NVMe SSD.
The PPS signal is now working normally.
The server should report as Stratum 1 again.
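For reference, on a Raspberry Pi the PPS GPIO setup is usually just a device-tree overlay plus a sanity check; the pin number below is only an example and must match where the GPS module’s PPS line is wired:

# /boot/firmware/config.txt
dtoverlay=pps-gpio,gpiopin=18

# after a reboot, verify pulses are arriving (ppstest is in the pps-tools package)
sudo ppstest /dev/pps0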
Thanks!! Looking very good with chronyd after immediately replacing the cloud-provided NTP server with yours as upstream:
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* time.ravnus.com               1   6   377    37   +116us[ +121us] +/-  750us

Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
time.ravnus.com            10   6   581     +0.000      0.201     +8ns    26us
Let’s see what it looks like once chronyd has settled in and, e.g., ramped up the polling interval…
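For anyone else trying the same swap, it is a one-line change in chrony.conf (the minpoll/maxpoll values here are just an example):

server time.ravnus.com iburst minpoll 6 maxpoll 10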
Not long ago, I changed my ISP.
I duplicated the NTP server, so there is now a Stratum 1 and a Stratum 2.
The Stratum 1 is accessible only on our local network; the Stratum 2 server synchronizes to it over that same network and provides NTP service to external users, which should make it more stable.
The Stratum 2 server also has better RAM and CPU performance than the existing Stratum 1.
For those of you in Asia who use our server, please check whether it is running reliably!
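For context, a plausible shape for that kind of split (a guess at the configuration, not the actual one; the Stratum 1 address is a placeholder) would be, on the public Stratum 2 box:

# chrony.conf on the public Stratum 2 server (illustrative)
server 192.168.0.10 iburst prefer   # the LAN-only Stratum 1 (placeholder address)
allow 0.0.0.0/0                     # serve external IPv4 clients
allow ::/0                          # and IPv6

Keeping the GPS-disciplined Stratum 1 off the public Internet and letting a beefier Stratum 2 absorb the pool traffic is a reasonable way to get the stability described above.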