Is AT&T killing off their landline internet in favor of 5G, and what are the options for keeping public time servers running?

I have a friend who still works for AT&T, and he says he has seen plans for areas where AT&T doesn't offer fiber but does have copper VDSL: they may retire the DSLAMs that support the copper-based VDSL service and replace it with 5G wireless internet. I have been lucky with the VDSL service. I always keep the same IPv4 and IPv6 addresses, latency is very low, and the service has been very reliable.

If they terminate DSL service and go to 5G, it would be the end of easy hosting of my time servers along with other home lab projects, since AT&T's CGNAT doesn't accommodate normal port forwarding. I have been working for about a month on using a VPS with an outbound VPN connection to it to get around CGNAT, and I have been successful in closing ports on my router and opening up inbound TCP connections from the VPS through the VPN.

The issue is the time servers. There isn't an easy way to forward UDP traffic over the VPN from the VPS, so I installed the distribution's chrony on the VPS and have it sync with my two time servers at home. That way I can open port 123 and provide NTP service from the VPS.
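
For reference, a minimal sketch of what the relevant part of chrony.conf on the VPS can look like for this pattern (the options and file paths here are illustrative, not my exact configuration):

# Sync the VPS from the two stratum 1 servers at home, reached over the VPN
server pi1.vargofamily.com iburst
server pi3.vargofamily.com iburst

# Answer NTP clients (the pool) on port 123
allow

# Standard housekeeping
driftfile /var/lib/chrony/drift
makestep 1.0 3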

The question I have is: what are your thoughts on this approach? Latency is much higher, since the VPN goes from my home to a cloud vendor in New York. Here are the stats from the chrony NTP server on that VPS:

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^+ pi1.vargofamily.com           1  10   377   129  -9592ns[-9592ns] +/-   23ms
^* pi3.vargofamily.com           1  10   377   939   +517us[ +559us] +/-   23ms

Do you think it makes sense to publish this time server with the higher latency for the NTP Pool?

I am not aware of that situation in particular, but I am aware of a precedent and an incentive that make it seem believable to me.

Verizon (not Verizon Wireless, but the local telco element) in the Eastern US resulted from the merger of a number of former Baby Bell operating companies, including NYNEX serving New York City and C&P serving Maryland, DC, and Virginia. After Hurricane Sandy flooded a key southern Manhattan central office, where many copper loops were served via conduits that filled with saltwater, Verizon declared a catastrophic failure and terminated all of that copper service, offering each subscriber a cell-connected POTS emulator while promoting fiber as the long-term replacement for all the copper-loop customers. One downside of these telephone line emulators is the need for subscriber-provided power: simple wired phones can operate entirely on central office power provided via the loop. Modem and fax communications also may not work as well, which was an issue even for some then-current devices, such as alarm systems and remote heart monitors.

In the US, traditional copper telephone service is heavily regulated, with providers required to service every customer (universal service) and the maintenance is typically performed by organized labor (unions). On the other hand, cellular service doesn’t have a universal service mandate and typically doesn’t use union labor.

We held on to our landline phone service, connected to a CO about a mile away, until about two years ago, in part because it was likely to work in a natural disaster even when our pole-provided power was out. We switched from $45/month service with no long distance to $10/month VoIP with 3000 minutes of voice per month to the US and Canada. Annoyingly, we can receive text messages but cannot send them, as sending requires registration as a business and this is a residential line.

I forgot to include this link:

The way I understand your setup, this should be no problem at all. In fact, the basic pattern is a very common and very typical one:

The chronyd instance on the VPS is a stratum 2 server that serves pool clients. It gets its own time from your stratum 1 servers at home through the VPN.

So I’d venture to even say this is the basic pattern in NTP.

Latency as such is not a problem. Asymmetries in latency are, and jitter is. chronyd will kind of “filter out” especially the jitter before passing on the time to its clients.

With respect to asymmetries: they cause an offset of the client's time from the server's time. If the asymmetry is constant, the offset will also be constant. In that case you can temporarily add a few more external servers that appear to have good, in particular symmetric, connectivity, figure out what offset the asymmetry on the link between your servers causes, and then configure chronyd to compensate for it when passing on the time it gets from your home servers. Even if the asymmetries are not constant, chronyd will to some degree average them out, so you can still compensate to some extent, depending on the specific characteristics of the asymmetry.
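
As a sketch of what such a compensation can look like in chrony (the value and its sign are purely illustrative; you would have to determine them from your own calibration): recent chrony versions accept an offset option on the server directive that applies a fixed correction to the measurements from that source.

# Apply a fixed correction (in seconds) to measurements from this source,
# compensating for an asymmetry-induced bias determined beforehand
server pi1.vargofamily.com iburst offset -0.00025
server pi3.vargofamily.com iburst offset -0.00025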

chronyd and the ntpd variants both have logging capabilities to facilitate the aforementioned calibration. I have not done something like that myself yet, so if you aim for that, others would have to chime in with guidance. But for the time being, you could start by just adding another external server (or two or three) as a reference to get an initial feel for the characteristics of the link between your stratum 1 and stratum 2 servers and how it reflects on the timekeeping of your stratum 2 server. And/or add the VPS to the pool in monitoring-only mode and see what the distributed monitors think.
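
In chrony's case, enabling that logging amounts to a couple of directives in chrony.conf (the log directory shown is a common default and may differ on your system):

# Write per-measurement, statistics, and tracking logs usable for calibration
logdir /var/log/chrony
log measurements statistics tracking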

My opinions.

DSL service’s possible demise is a consequence of AT&T’s desire to eliminate the costs associated with maintaining the copper infrastructure. In my area (Illinois/USA) the voice-only POTS cost increases every couple of months, probably to move customers away from copper.

If you’re running NTP on a cloud server, that NTP daemon should ideally use the closest available stratum 1 servers (details omitted here). That may not be the home servers.

Is the VPN TCP-based? If so, that introduces delay variability.

Yes, you've been lucky. Here on the Northern California coast AT&T is absolutely horrible. The day my former housemates switched to Comcast and VoIP brought something like an order of magnitude improvement, not just for IP but for voice too. I believe AT&T basically just stopped maintaining their wires.

Interestingly, I am now on Verizon wireless for IP, and it is at least passable. I run the setup you're contemplating (with a Raspberry Pi serving time to my Linode over WireGuard, and the Linode in the pool), except that the Linode has other sources too.

I know that AT&T is trying to get into the wireless ISP game here too, but so far they have nowhere near the coverage of either Verizon or T-Mobile.


Ian

In this context, I was wondering about other people's views/experience regarding the "trustworthiness" (not a good term for what I mean, but the closest that came to mind) of NTP daemons when it comes to picking their optimal upstream.

While I have never done it thoroughly in the sense of, e.g., using the respective daemon's logging capabilities, I typically give a server a whole bunch of upstream servers initially (more than are going to be needed in the end). After a while, I prune that set down to the one or few from the original set that I had the impression were selected most often by the daemon itself (plus some backups), also taking perceived availability into account. Those might not be the "closest" ones (in terms of latency), or the lowest stratum, but I figured the daemon would know best, based on its intricate filtering and selection mechanisms.
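
Concretely, the "observe" part for me is mostly just watching the daemon's own view of its sources over time, e.g. with chrony (the selectdata subcommand needs a chrony 4.x release):

# Which source is currently selected (*) and which are combined (+)
chronyc sources -v

# Per-source offset, jitter, and frequency estimates accumulated over time
chronyc sourcestats -v

# Detailed per-source view of the selection algorithm's verdict (chrony 4.x)
chronyc selectdata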

Apart from the potential improvement on my side to do this more thoroughly by using the data that logging would produce, is that the “best” approach?

Or should one rather not "trust" the daemon's picks and instead "guide" the selection by manually picking upstream servers that are, by definition or declaration, assumed to be "best", e.g., based on stratum, delay, or operator reputation (national metrology institutes and similar institutions, …)?

I am running WireGuard on my pfSense router to the VPS at DigitalOcean, with a hand-configured instance (a pain in the butt). UDP is the protocol. Currently I see about 40 ms of overhead through the tunnel. My route goes from Elgin, Illinois to their New York City site 1 datacenter.
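
In case anyone wants to replicate it, the VPS end of the WireGuard tunnel is roughly the following (keys, addresses, and the tunnel subnet are placeholders, not my real values; the pfSense side is configured through its GUI):

# /etc/wireguard/wg0.conf on the DigitalOcean VPS (placeholders only)
[Interface]
Address = 10.10.10.1/24          # VPS address inside the tunnel
ListenPort = 51820
PrivateKey = <VPS private key>

[Peer]
# pfSense router at home; it dials out from behind CGNAT, so no Endpoint here,
# and PersistentKeepalive is set on the pfSense side instead
PublicKey = <pfSense public key>
AllowedIPs = 10.10.10.2/32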

CGNAT may be closer than I thought. I received a notice that AT&T 5G internet is now available at my location. I hope I have at least a year to get everything moved from my local IP address to the VPS IP address before AT&T kills copper here in my neighborhood.

The configuration notes here (pool.ntp.org: the internet cluster of ntp servers) are quite specific that you should hand-pick some servers :slight_smile: The worst case would be running a server in the pool that gets its time from the pool. That could run away and create a separate time island.
They link to a list of public servers: WebHome < Servers < Network Time Foundation's NTP Support Wiki

What you as an operator pick is your choice. For my servers I normally redistribute the time from the German national metrology institute, the PTB (with some backup servers for redundancy).
But "my" clients need official German time to provide consistency in log timestamps etc., so that guided my choice of the PTB as upstream.
As a courtesy, the clients in the pool get access to these servers, and I don't feel too bad about the quality of the time I distribute.
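
As a rough sketch (in chrony/ntpd server-line syntax; the PTB hostnames are their public ones, the rest is only an indication, not my actual config), the upstream part looks something like:

# Official German time from the PTB
server ptbtime1.ptb.de iburst
server ptbtime2.ptb.de iburst
server ptbtime3.ptb.de iburst
# plus one or two independently operated backup servers for redundancy,
# hand-picked rather than drawn from the pool itself (see the note above)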

That is precisely the scope for which I was looking for other people’s experience/view: How do I hand pick those servers?

Do I just go to the list and pick "what looks good", e.g., by name/operator/some other abstract criteria (e.g., same country, suggesting that it might be "closer" than servers from other countries), and that's it, they are then set in stone? Because they are supposed to be "good" kind of "by definition"?

Or do I pick from that list a few servers according to the aforementioned criteria, more than I eventually want to use, then observe how they perform with my NTP client, and subsequently prune those that don't seem to perform as well?

That is what I meant by the "trustworthiness" of an NTP instance's assessment of an upstream server's quality: should that be ranked higher than some abstract, "by definition"/"on paper" grade of quality?

Yeah, that is a common choice in Germany, guided by "reputation": the PTB is the official timekeeper in Germany (which is, as you mention, the reason why you picked them), so one could say it is "good by definition".

However, consider that ptbtime4.ptb.de is hosted in the German research network DFN. Thus, the latency from a DTAG customer line is about twice as high as to the other PTB instances hosted within the DTAG network. And the interconnect between DFN and DTAG is (or at least used to be) not very well provisioned. So one would see higher latencies to ptbtime4.ptb.de, and potentially higher jitter and/or packet loss, varying throughout the day.

That is something that doesn’t get factored in if one just picks servers according to some abstract criteria, vs. considering an NTP client instance’s assessment of an upstream server.

Traceability to official German time is not necessarily a requirement, or even up to the task, if the goal is consistency of timestamps across an ensemble of systems such as the one you hint at being in charge of. In fact, depending on the specific circumstances, if each system in a set syncs to a "far away" server, their timestamp consistency might be worse than if they all synced to a "closer" server. That could even be a local server of higher stratum that is not too well connected to its own upstream, but whose local clients are well connected (low latency and jitter) so they can follow it very closely, even if it "wobbles" a bit in absolute terms (because it is "too far" from its own upstream).

If it is actually traceability to official German time that is needed, then obviously, from a formal point of view, connecting to the PTB servers would ensure that (I guess). Whether that makes a difference from a practical point of view is another matter. E.g., the PTB is a large contributor to global UTC, so other sources have "a lot of PTB UTC" in them as well. And I doubt one would even be able to reach a sync accuracy via NTP at which the difference between "PTB UTC" and "other UTC" would be visible, especially when syncing across a public network. My hunch is that the difference would be drowned out by all the other error contributors in the system anyway.

Today I put my VPS up on IPv6 and registered it with the NTP Pool for monitoring. It will be interesting to see the results compared with connecting the GPS-enabled server directly.

I would guess so: pick some servers that work well from your vantage point/server. You bring up the ptbtime4 server hosted in DFN. I would try to pick servers that are geographically close and close in terms of network/transit delay.
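
As a simple sketch of how one might compare candidates from one's own vantage point (the hostnames are just taken from the examples above):

# Rough comparison of round-trip times
ping -c 20 ptbtime1.ptb.de
ping -c 20 ptbtime4.ptb.de

# Once a candidate is configured in chrony, inspect the measured delay and dispersion
chronyc ntpdata ptbtime1.ptb.de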

Yeah, we don't control all of the clients (kiosk terminals and hand-held terminals). But all the third-party gear should be synced to German time by contract, so if we receive a problem report with a timestamp printed on the remote end, we should be able to find it by timestamp (and location/make and model) in our logs. Sure, there is some millisecond deviation because of network delays etc., but we, the third-party terminals, and the revenue service ("Finanzamt") are all using German time to determine when a sale was made.
We don't need µs precision (imagine the old way: a bill of sale just stamped with a date and entered into the books later); we just need all the timestamps to be aligned and, e.g., daylight saving time observed in a consistent manner.

While a bit off-topic in this thread, still a quick/brief response, as it seems there might be some misconceptions here.

Ok, I understand the formal/legal point, even if it is mostly irrelevant from a technical point of view, or even counterproductive.

Depending on circumstances, this may actually be rendered more difficult by everyone syncing to PTB’s timeservers.

Again, this may be less accurate when syncing to PTB timeservers than otherwise technically possible.

As above.

NTP and UTC have nothing to do with daylight saving time whatsoever, or with local time, e.g., "German time", regardless of whether it is provided by the PTB or any other source. It is entirely a matter local to each individual system to "translate" UTC/NTP time into the local time/time zone for presentation, including considerations regarding DST.
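
Just to illustrate where the DST handling actually lives (a trivial example with GNU date; nothing NTP-specific happens here), the same UTC instant is rendered into local time by each system's own time zone database:

# The same UTC instant, rendered per the local system's tzdata
TZ=UTC date -d '2025-03-30 00:30:00 UTC'
TZ=Europe/Berlin date -d '2025-03-30 00:30:00 UTC'   # still CET, before the DST switch
TZ=Europe/Berlin date -d '2025-03-30 01:30:00 UTC'   # CEST, after the switch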

Yeah, I agree it's totally off topic, but I can get you the contact information of our legal counsel :slight_smile: I also argued that "it doesn't matter" which servers we exchange time information with; the whole world should be within nanoseconds of some sort of UTC, and whether they were right we only see after the fact in the circulars you posted. Also, on the issue of time zones and daylight saving time… I am aware that that is a correction applied after the fact and that NTP runs on UTC and UTC only (TAI, anyone?)…
But sometimes it is easier to just say (and monitor and document, very important) that we sync to the people in charge of "German" time, no matter that we essentially just need time to the nearest second or so.
I am just working DevOps there and I am aware of our needs (which is why we don't run DCF77/GNSS receivers on prem / in the datacenter), since we essentially just need to agree on when the day changes. An added benefit is that we can correlate log file timestamps to within some milliseconds.

Not totally off topic, as it still revolves around the question of synchronization performance, and upstream server selection aspects.

Ok, fully understood. This forum mostly goes a bit deeper, though, so I was triggered a bit by the repeated (at least implied/suggestive) reference to syncing to "German time" being required for timestamp correlation, not least since your descriptions already suggested what you have now explicitly confirmed, that "time to the nearest second or so" is good enough :smiley:

Thanks, but no thanks! :joy:

An added benefit of what? That is kind of what is triggering me: any even remote suggestion that "German time", or using the PTB's timeservers as such (i.e., because the PTB are the official German timekeepers), somehow improves timestamping compared with other good/stable sources of common time.

But I think that I’ve made that point, and I understand we basically agree, so, “over and out” :wink:

I don’t have a ready-made source of TAI at hand, but NIST have UT1 on offer

:smiley:

Yeah, but I didn't think this forum was on the level of time-nuts :slight_smile: But if you are still with me: I work as a programmer (sorry, "Senior IT Developer") in the B2B e-commerce world, where multiple requirements from different actors and decades clash.
The revenue office is interested in the sales bills, but according to the law essentially only the business day on which the sales were made and entered into the books matters.
But then the sales terminals have to timestamp and sign receipts to the millisecond (I am sure you have seen that in recent years on your supermarket receipts if you are German: look for the base64-encoded data at the bottom, including a timestamp and signature counter). That is where our requirement for more precise time sync comes from, just to keep the fraud prevention offices happy so nobody can alter transactions in the register after the fact. These terminals and kiosks are the driver for more precise time, and they need the most monitoring so that no one can enter a transaction for the wrong date, like putting in a refund for yesterday.
My job is just to keep the backend systems (delivery, logistics, and bookkeeping) in sync with the front-end terminals and registers etc.
That is why I glossed over the details with just: we need to sync to "German time", meaning that everything in the chain, from the card terminal, register, and hand-held ordering device to shipping, has to run on the same time.
Otherwise we could end up needing to explain why we sold and delivered some goods before they were ordered or paid for and entered into the general ledger/books.

I am old enough that I still remember the days of sysadmins IPL'ing the mainframe and being asked for the current time and date during a cold boot. Back then the operator would look at his watch or a wall clock and enter that date/time, and after that the computer clock was essentially free-running for the business day until the interactive system was shut down for EOD (End of Day) processing.

I in fact do. Here is an old photo with the components: an LPRO-101 rubidium frequency standard steering a Morion MV-89a DOCXO to provide my home reference 10 MHz signal.

Around 2008 I built the PLL and a divider down to 1 Hz and a TTL clock to display my own version of TAI (never steered it :slight_smile:)

True for probably the majority of topics, but it sometimes gets close :slight_smile: (though in any case mostly from a slightly different perspective on the topic). My point was more that I felt that, regardless of depth, the shorthand in this case might have been misleading to people without a somewhat deeper understanding of the topic.

Still here, not going anywhere :wink:

I hear you.

Many thanks for sharing your specific use case where timekeeping via NTP is relevant, and the practical aspects of it. Obviously, having good time is implicitly important in many respects, some mostly unnoticed until things go wrong. So it is very interesting to see a concrete example in a bit more depth: what the "problem" was and how you approached it. Thanks!

Cool! Thanks for sharing!

That is indeed a level of time-nuttery that I, for one, have not delved into myself so far. Partly that has to do with having less patience these days for delving into such things, and persisting until they work, than I used to have… Not long ago I salvaged an old Meinberg GPS clock with the idea of repurposing its internals, e.g., the oscillator, but never did. The firmware on the unit was severely outdated, and it required a special antenna that was missing, so I couldn't use the unit as such. But I never found/made the time to repurpose its parts, so I eventually put it back where I had found it…