Is it just me or does anyone else cringe at the thought that NTP is being served from virtual machines? I get that in this day and age everyone is running servers in the cloud but still… It feels so very very wrong.
Nowadays, this is a baseless fear for all but the most stringent timekeeping requirements. The onus is on anyone saying NTP in VMs is inadequate to produce data demonstrating that this is the case.
I’ve published some data from my experiments which show that VMs are good enough for most use cases (apologies for the long URLs):
If you have the opportunity to run NTP on a dedicated bare metal host, great! But for a lot of shops, this is becoming less desirable for various reasons.
I think most hypervisors do some behind-the-scenes magic to keep NTP happy and have the guest clock track nicely with the host.
The bigger issue when it comes to time accuracy is the network, not the server. With network error you are talking about milliseconds, while at the OS level you are only talking about microseconds (when comparing against a non-VM). Of course, if either machine is overloaded then all bets are off…
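That milliseconds-vs-microseconds argument falls straight out of NTP's own offset/delay arithmetic (RFC 5905): the client can never distinguish a slow outbound path from a slow return path, so path asymmetry alone can bias the offset estimate by up to half the round-trip delay. A small illustrative sketch in Python, with made-up timestamps (the function name and numbers are mine, not from the thread):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP on-wire math (RFC 5905).

    t1: client transmit time, t2: server receive time,
    t3: server transmit time, t4: client receive time.
    All in seconds, each on its own machine's clock.
    """
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated clock offset
    return offset, delay

# Hypothetical exchange: ~30 ms round trip, server clock 5 ms ahead.
offset, delay = ntp_offset_delay(0.000, 0.020, 0.021, 0.031)
print(f"offset ≈ {offset*1000:.1f} ms, delay ≈ {delay*1000:.1f} ms")
```

The offset estimate assumes the outbound and return paths are symmetric; if they are not, the error can be as large as delay/2 (15 ms in this sketch), which dwarfs the microsecond-level error a VM's clock adds.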
I honestly feel that it’s perfectly fine to help the pool out with a VM. One of the main points of the pool is load distribution, after all.
I run NTPd at realtime priority on a Vultr VPS, and offsets usually test around 1–2 thousandths of a second off from GPS, government, military, and educational time servers. You can’t get much better than this over the Internet due to network jitter. It is a $5 VPS.
Just to note, my OpenVZ (of all things) VPS in Hong Kong, at $25 a year, just clocked in with an offset in the MILLIONTHS of a second. Don’t expect this, but it goes to show: a responsible host can have good offsets.
I once compared NTP server accuracy running on a $17 CPU (Raspberry Pi), a $170 CPU (desktop), and a $1700 CPU (a 64-core device designed for VPS hosting). I set them all up with the same Stratum 1s and 2s, and let some NTP clients pick which one they liked.
The $1700 CPU wins. Even when it’s running a virtualization layer. And heavily loaded with multiple VPS customers. And far away on a congested network.
OpenVZ (and to some extent VMware, KVM, Xen, etc.) has had many years’ worth of timekeeping bugs and complicated configuration in the past. I don’t know exactly what the status is these days, but I know there are still large configuration problems out there.
All but OpenVZ are probably OK for pool use. You don’t have control of the kernel with OpenVZ, so if your host does not want to play nicely with you, you’re screwed. Tasks like setting the clock, managing firewalls, and adjusting various network tunables within the kernel are typically out of reach when you’re on OpenVZ. Two of my servers run in KVM, one runs in Xen. The fourth one runs on bare metal.
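On Linux, whether a guest is even allowed to set the clock comes down to the CAP_SYS_TIME capability, which an OpenVZ host typically withholds from containers. A quick, read-only way to probe it from inside the guest — `can_set_clock` is a name I made up for this sketch, and the check is Linux-specific (it parses `/proc/self/status`):

```python
def can_set_clock():
    """Report whether this process holds CAP_SYS_TIME, which ntpd/chrony
    need in order to step or slew the system clock.

    Reads the effective-capability bitmask from /proc/self/status
    rather than actually trying to set the clock, so it is safe to run.
    Returns False if the answer cannot be determined (e.g. non-Linux).
    """
    CAP_SYS_TIME = 25  # capability bit number, from <linux/capability.h>
    try:
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("CapEff:"):
                    effective = int(line.split()[1], 16)
                    return bool(effective & (1 << CAP_SYS_TIME))
    except OSError:
        pass
    return False

print("can set clock:", can_set_clock())
```

If this reports False inside your container, no amount of ntpd configuration will help — only the host can fix it.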