Changes at Equinix Metal (ex Packet)

There is a good writeup from 2019, “NTP Pool servers on Kubernetes on Packet” on NTP Pool News, about the setup of NTP Pool servers on Packet. Three years later, Packet has been acquired by Equinix and rebranded as “Equinix Metal”; data centers are being updated and modernized, and some older server types are being decommissioned.

It’s going to be necessary to do work on this set of servers: some of them will go offline, and many systems will need to be rebuilt in new data centers. I think there’s time to do all of this over the summer if the effort begins relatively soon.

Equinix Metal provides a set of servers to the NTP Pool project, but we don’t run them ourselves. @ask and crew will need to lead this effort, and I am happy to help in any way.

3 Likes

I’d be willing to help, if I can. Remotely, that is.

1 Like

Who cares, just ask another big ISP to take over.
There are plenty that can do this task.

Why should this be a problem?

Heck, even a private server in Holland with a 500/500 Mbit fiber connection could take over.

Don’t forget, most of it is just DNS assignment, and it’s cached all over the world.

I see no problem moving it anywhere that has better servers and peering.

It’s not quite that simple, but I fail to see why you are getting so angry about this. All they are saying is that we need to migrate to new servers. It’s not like he just came in here and said it has to be done tonight or we’re charging you 10K. Your reply was really out of line and improper.

4 Likes

This was all done this fall, by the way. Equinix has been great for us. It was a bunch of work to migrate the cluster, but much, much easier than the server migrations done for the project in the past.

The cluster at Equinix runs:

- the website (of course)
- various databases (primarily MySQL and ClickHouse)
- the monitoring API and tools to manage the monitoring data
- a couple of Prometheus instances for monitoring the cluster itself and the ~40 DNS servers
- Grafana
- a couple of distributed Loki setups managing log files for the cluster itself and for all the GeoDNS servers
- various other DNS servers and DNS management
- Vector, Kafka, and some custom software to ingest the GeoDNS query logs
- Tempo for OpenTelemetry tracing of some of the software
- Request Tracker for the ticketing system
- various mail software
- Pomerium for authentication to our internal tools
- Ceph for distributed, redundant file systems and block storage
- a bit of infrastructure to distribute the GeoDNS configuration files
- and dozens of other things.

Oh, and much of this runs in both a “beta” and a “production” flavor. It’s a lot!
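
To give a rough flavor of the query-log ingest piece (just a sketch, not the project’s actual code: the topic name, broker address, and log field names below are made-up placeholders), a minimal Kafka consumer for the GeoDNS query logs using the kafka-python library could look something like this:

```python
import json

from kafka import KafkaConsumer  # kafka-python

# Hypothetical topic name and broker address; the real ones are internal to the cluster.
consumer = KafkaConsumer(
    "geodns-query-logs",
    bootstrap_servers=["kafka.example.internal:9092"],
    group_id="querylog-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    entry = message.value
    # Field names are assumptions about what Vector might ship per DNS query.
    print(entry.get("server"), entry.get("qname"), entry.get("client_subnet"))
```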

The Equinix APIs and powerful hardware make it easy to work with, and Kubernetes made it reasonably seamless to keep the services running there available during the move, but it was still a bunch of work!
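
As an illustration of the Kubernetes side (again a sketch of the general pattern, not the actual migration tooling; the node name is a placeholder), moving workloads off an old machine is roughly: cordon the node so nothing new is scheduled on it, then delete its pods so their Deployments and StatefulSets recreate them on the new hardware. With the official Python kubernetes client:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
core = client.CoreV1Api()

node = "old-worker-1"  # placeholder node name

# Cordon: mark the old node unschedulable (what `kubectl cordon` does).
core.patch_node(node, {"spec": {"unschedulable": True}})

# Delete the pods running on it; pods owned by Deployments/StatefulSets are
# recreated by their controllers and scheduled onto the remaining (new) nodes.
pods = core.list_pod_for_all_namespaces(field_selector=f"spec.nodeName={node}")
for pod in pods.items:
    if pod.metadata.owner_references:  # skip bare pods without a controller
        core.delete_namespaced_pod(pod.metadata.name, pod.metadata.namespace)
```

A real drain (`kubectl drain`) additionally respects PodDisruptionBudgets and skips DaemonSet pods, which matters for stateful pieces like Ceph.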

It was “just DNS-assignment” 20 years ago when the late Adrian von Bidder started the project, but it didn’t take long before that approach was completely unmanageable given how popular the project became.

3 Likes

Glad to hear it’s solved now :+1: :grin: