So I am happily running a Garmin 18X 1 Hz with the FreeBSD PPS_SYNC kernel option and running NTPD. I have reduced the 18X to just one sentence, running at 4800 baud. This works great and I get good performance…
But… I see this 5 Hz 18X out there… It sounds, ummm, better, but the 1 Hz is pretty damn good.
My question though is how to get 5 Hz working in my setup. Well, and is 5 Hz any better? haha…
But I am not seeing any config help online for 5 Hz with FreeBSD + NTPD…
Should I move my 18X 1 Hz to 9600 baud? I don’t think it will really matter.
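For anyone landing here later, a minimal ntp.conf for this kind of setup might look like the sketch below. The device paths, symlinks, and the time2 value are assumptions for illustration, not taken from the poster’s actual config; check the ntpd type 20 (NMEA) driver docs for your version.

```
# ntpd NMEA refclock (driver type 20), unit 0.
# ntpd opens /dev/gps0 for NMEA (and, depending on version, /dev/gpspps0
# or the same device for PPS), so symlink the real FreeBSD serial port:
#   ln -s /dev/cuau0 /dev/gps0

server 127.127.20.0 mode 1 prefer   # mode 1 = $GPRMC only, 4800 baud
fudge  127.127.20.0 flag1 1         # enable PPS processing via the PPS API
fudge  127.127.20.0 time2 0.350     # NMEA delay vs PPS; example value, must be measured

# a couple of network servers as sanity check / fallback
pool pool.ntp.org iburst
```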
I don’t believe NTPD supports 5 Hz PPS signaling. Others have used 5 Hz PPS with Chrony/GPSD, with debatable benefit. Ordered me a new GPS - #8 by Bas
I am able to achieve approximately +/- 1.5 usec offset with the 1Hz PPS signal connected to my pfSense firewall. I increased the baud rate to 38400 to reduce fudge time2. 5Hz/10Hz PPS doesn’t seem to be worth the additional time and expense in my application.
Hmm… Faster baud rate reduces fudge time2. Hmmm… Well then, I will change to 38.4… Easy enough. When I was doing config on the Garmin I considered it. I see how to do the NTPD side of this - I think…
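One detail worth noting here: to my reading of the type 20 driver documentation, ntpd encodes the serial speed in bits 4-6 of the refclock mode word, so switching the Garmin to 38400 also means changing the mode value on the ntpd side. A sketch (verify the bit layout against your ntpd version’s docs):

```
# Speed codes in mode bits 4-6: 0=4800, 1=9600, 2=19200, 3=38400, ...
# Sentence selection in bits 0-3: bit 0 = $GPRMC.
# $GPRMC at 38400 baud: mode = 1 + (3 << 4) = 49
server 127.127.20.0 mode 49 prefer
fudge  127.127.20.0 flag1 1
```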
Thank you…
Yes, that is what I thought. NTPD does not have 5 Hz support. I like the simplicity of just running NTPD, and it does what I need.
If you use the PPS signal, the baud rate and setting time2 will not matter. The PPS signal will cause an interrupt, which gets timestamped and that is what will be used.
Yes, this was my understanding. So the baud rate does not matter other than that it fills a buffer quicker. Best practice is to keep to one sentence to avoid buffer overflow. But I’m not sure that a faster baud rate hurts anything. Well, I suppose the computer/bus has to interrupt to read the data, and this might, in a very slight way, affect it sensing the PPS on pin 1 of the serial port? So baud rate might affect something in a really small way? or not…
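To put some rough numbers on why the baud rate matters for the NMEA timestamp (but not for the PPS edge, which is timestamped at the interrupt): the serial transfer time of one sentence scales inversely with the baud rate. A small back-of-the-envelope sketch, with the sentence length assumed for illustration:

```python
# Rough illustration (not a measurement): time to clock one NMEA
# sentence out of the UART at different baud rates.
# 8N1 framing = 10 bits on the wire per byte.

def sentence_delay(sentence_len: int, baud: int, bits_per_char: int = 10) -> float:
    """Seconds to transfer a sentence of sentence_len bytes."""
    return sentence_len * bits_per_char / baud

GPRMC_LEN = 72  # typical $GPRMC length in bytes, assumed for illustration

for baud in (4800, 9600, 38400):
    print(f"{baud:>6} baud: {sentence_delay(GPRMC_LEN, baud) * 1000:6.1f} ms")
```

At 4800 baud the sentence alone eats roughly 150 ms of the second; at 38400 it is under 20 ms, which is why a higher baud rate shrinks the delay that time2 has to paper over.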
Might be cool though to figure out how to feed 5 Hz sentences to a car GPS unit. It would be VERY responsive, hahaha…
Not sure it really is. The point of the NTP protocol (as opposed to, e.g., PTP) is to “tune” (aka “discipline”) the local clock so that ideally, it would itself keep proper time. For that, it takes samples over a longer period of time (see, e.g., the “Span” column in the chronyc sourcestats output), and “averages” those to smooth out the noise (it is actually mathematically more sophisticated filtering rather than mere averaging), and based on that tunes the local clock to the “right” frequency.
Synchronizing to remote clocks obviously operates at yet other timescales than local reference clocks, but consider that the time daemons all adjust the poll rate autonomously (with certain knobs available to the operator to tune that mechanism) and, e.g., increase the poll interval as much as possible if circumstances permit (e.g., stable temperature/load, stable network conditions, …). Point being, apart from reducing load on remote servers, that the target is a stable local clock that only needs infrequent adjusting.
Getting more samples at much shorter intervals into the filters on the other hand would tend to prevent the clock reaching such a steady state (which itself is however kind of fluid, as, e.g., environmental conditions change over time), and rather track more short-term variations and the inherent noise, e.g., short-term fluctuations in network traffic, to the detriment of actual clock accuracy. It would be more “wobbly” in the short-term, rather than having the desired smoothness over longer (in comparison to the sample rate) periods of time. Kind of like a short vehicle on a highway oscillating in response to all the bumps in the road (or at least more of them), while a longer vehicle gives more of a smooth ride.
Note this is not a “strict” definition/description of what is going on behind the scenes, not sure that could be captured in a short post. Rather, it should only give some intuition as to what is going on, and more in-depth study would be needed to get a more exact understanding. And it would take more knowledgeable people than myself to thoroughly explain the fine print…
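In the same hand-wavy spirit, the averaging intuition can be made concrete with a toy simulation. This is nothing like chrony’s actual filtering, and all the numbers are invented; it only shows that estimates built from longer spans of noisy samples scatter less than short-span ones:

```python
# Toy illustration (assumed numbers, not chrony's real algorithm):
# estimating a clock's frequency offset from noisy measurements.
import random
import statistics

random.seed(42)
TRUE_FREQ_PPM = 12.0   # pretend the local clock runs 12 ppm fast
NOISE_PPM = 5.0        # per-sample measurement noise

def freq_estimate(n_samples: int) -> float:
    """Average n noisy frequency samples into one estimate (ppm)."""
    samples = [random.gauss(TRUE_FREQ_PPM, NOISE_PPM) for _ in range(n_samples)]
    return statistics.fmean(samples)

short = [freq_estimate(4) for _ in range(200)]    # short span, few samples
long_ = [freq_estimate(64) for _ in range(200)]   # long span, many samples

print(f"short-span estimates scatter: {statistics.stdev(short):.2f} ppm")
print(f"long-span estimates scatter:  {statistics.stdev(long_):.2f} ppm")
```

The long-span estimates cluster much more tightly around the true frequency, which is the “smooth ride of the longer vehicle” from the analogy above.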
You’re right… Faster is often not as fun as slower, with slower requiring more discipline.
I have a lot of electronic test equipment, and my clock over there is 10 MHz that everything is locked to. But that application is VERY different, in every way. So my brain just went “more Hz is better”… hahaha… fail…
I tell you though. Hooking that 1 Hz wire from the Garmin 18x LVC to a freq counter that has a rubidium clock was amazing. That signal coming from the Garmin / NIST is just insanely good. A GPS is an amazingly cheap freq reference. I’m pretty new to this, so I was impressed when the freq counter read 1 and then ALL zeros… It was a good low-freq test of the freq counter. 1 GHz is easy vs 1 Hz, hahaha…
I was just putting Chrony on my server to play with it… I then realized it needed GPSD to work with serial port 1 PPS… Running all that and doing all that config is kinda annoying vs just running NTPD + server 127.127.20.1. I am on FreeBSD, and it seems I would need Linux and specialized NICs to get a meaningful performance increase…
And I am getting 1 µs now… That’s plenty for me… So I won’t be doing Chrony+GPSD, even though it is tempting…
It’s been a while since I’ve used it myself, and I am not familiar with FreeBSD. But chronyd should in my understanding be able to work with a PPS input without needing gpsd or a specialized NIC. A PPS device node seems all that is needed. And it’s more flexible than ntpd classic in that it can be given a filename under which to access the device as configuration option, vs. the filename to be accessed being hard-coded.
And I understood from other discussions here on the forum that *BSD is actually more flexible with respect to the PPS input, in the sense that it has some configuration option as to what serial control line to derive the PPS signal from, while Linux only supports a single, hard-coded control line (DCD).
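To make the gpsd-free option concrete, a chrony.conf sketch might look like the following. This is untested and from my reading of the chrony documentation; the device path is an assumption, and note that a bare PPS refclock needs some other source (here, network servers) to supply the whole-second numbering:

```
# Sketch only: chronyd reading the PPS device node directly, no gpsd.
refclock PPS /dev/pps0 refid PPS prefer
pool pool.ntp.org iburst
```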
I believe fudge time2 is necessary if you want NTPD to use GPS time as the source of time data in addition to the PPS signal, as opposed to other servers/pools added to NTPD. It represents the NMEA sentence offset from the PPS signal. It is a function of baud, message length, and manufacturer specific timing when the messages are sent relative to PPS signal.
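One way a time2 value can be derived in practice (a sketch of a common approach, not something from this thread): let the NMEA clock run marked noselect next to your other sources, read off its raw offset, then feed that back in:

```
# Step 1: monitor the GPS clock without letting ntpd use it
server 127.127.20.0 mode 1 noselect
# Step 2: once ntpd has settled against other sources, read the
#         NMEA clock's 'offset' column (milliseconds) in: ntpq -p
# Step 3: set time2 (seconds) to roughly cancel that offset,
#         then remove noselect
fudge 127.127.20.0 time2 0.545   # example value only
```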
Example:
GPS is PPS peer and time source, as indicated by all other sources marked as candidates
Four use cases come to mind (but not meant to be an exhaustive list):
Time from the NMEA stream is ambiguous with respect to the PPS signal. The time daemon will associate the NMEA timestamp with the “closest” PPS pulse. Typically, but not always (see next point), the NMEA sentence comes after the pulse. But if it is more than 500 ms away from the preceding pulse (and considering some gray area between pulses where it may just be noise whether the current sample is closer to the preceding or the following edge), the time daemon assumes it belongs with the following PPS pulse. Thus if the delay of the NMEA timestamp is “too far” away from the preceding edge (like I see for example with an MTK receiver), time2 helps put the NMEA timestamp clearly within the 500 ms after the corresponding pulse.
Or the other way round, if the pulse rightly is after the NMEA timestamp, ensure again it is “close enough” to the relevant pulse.
I believe at least ntpd classic does not use the PPS signal until it has been in the synchronized state at least once. If NMEA data is your only other source, you may want the initial sync off it to be as close as possible to real time so as to minimize time until the clock is subsequently synced with the PPS pulse.
Some GNSS receivers may output a valid-looking NMEA timestamp, but no PPS signal if they are not really locked to the satellites. Again, to minimize the clock deviating from real time while the time daemon is syncing to the NMEA timestamps without PPS until PPS is regained, use time2 to make the difference as small as possible. Though in my (in this respect limited) experience, “consumer-grade” GNSS receivers’ clocks start drifting quite rapidly when they aren’t locked to satellites, and that part quickly dominates the overall drifting. More advanced receivers backed by more stable oscillators may not have that issue, e.g., there are receivers combined with atomic clocks, or at least some high end crystal oscillator system (e.g., oven-controlled crystal oscillator) with impressive holdover times.
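The pulse-association logic in the first point can be sketched as a toy model. This is assumed behavior for illustration (the real daemons are more careful; see their docs), treating the matching as “round to the nearest whole second after subtracting time2”:

```python
# Toy model of NMEA-timestamp-to-PPS-pulse association.
# Pulses fire on whole seconds; the NMEA sentence arrives some delay
# after "its" pulse. More than ~500 ms of delay flips the match to the
# *next* pulse, which is exactly the failure mode time2 corrects.

def associate_pulse(nmea_arrival: float, time2: float) -> int:
    """Return the whole second (pulse) the NMEA timestamp is matched to."""
    corrected = nmea_arrival - time2
    return round(corrected)   # nearest pulse

# NMEA sentence arrives 700 ms after the pulse for second 100:
print(associate_pulse(100.700, 0.0))    # without time2: wrongly matched to 101
print(associate_pulse(100.700, 0.700))  # with time2: correctly matched to 100
```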
Hmm, not sure how this example highlights the use of time2, as the offset seems to indicate the reference clock is based off the PPS input. At least the very small offset shown is in my understanding not typically achievable with NMEA-only time stamps. So the application of time2 would not be directly visible.
Just noting that another interesting aspect is missing from the screenshot, just to highlight its relevance. Namely the jitter. With NMEA-only timestamps, that should be quite significant. Which also makes it somewhat difficult to get a good value for time2 in the first place, at least if one wants to use NMEA without PPS.
Wouldn’t you like to write a little howto on what you accomplished, focused on the details that matter…?
Since the recent NTP disaster which shut down a military base in The Netherlands, I was reminded of the importance of accurate timestamps in RRSIGs when signing TLDs with DNSSEC.
I discovered Chrony, and see public NTS servers in Switzerland with extremely simple add-ons.
And so I’m now planning to play around with hardware as well.
I wish to be able to receive GNSS separately, and sync that to a rubidium atomic clock.
For receiving I’m looking at the NEO-M8T or similar.
For the atomic clock I thought the PRS-10M would be a simple and very affordable start.
Obviously you must have a custom kernel with options PPS_SYNC.
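(For anyone following along, enabling that looks roughly like the sketch below. The kernel config name is made up, and the build steps are the generic FreeBSD procedure; consult the FreeBSD Handbook’s custom kernel chapter for the details.)

```
# /usr/src/sys/amd64/conf/NTPSERVER   (hypothetical config name)
include GENERIC
ident   NTPSERVER
options PPS_SYNC

# then, roughly:
#   cd /usr/src
#   make buildkernel KERNCONF=NTPSERVER
#   make installkernel KERNCONF=NTPSERVER && shutdown -r now
```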
What else did you do to accomplish what you did?