The routes from the monitoring station to the different NTP servers are not the same, and in my experience we cannot guarantee that each route will exhibit the same timing jitter.
Timing jitter is chaotic, unpredictable, and unavoidable. One can only adapt to it and, sometimes, develop techniques to reduce it.
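One common technique for reducing the effect of jitter is to take several measurements and keep the median, which discards the outliers that jitter produces. This is only a sketch: the measurement function below is a simulated stand-in for a real NTP query, and the true offset and jitter magnitude are made-up values.

```python
import random
import statistics

def measure_offset():
    """Simulated clock-offset measurement with random jitter
    (a stand-in for a real NTP query); values in milliseconds."""
    true_offset = 1.5                 # assumed true offset (hypothetical)
    jitter = random.gauss(0, 2.0)     # chaotic network-delay variation
    return true_offset + jitter

def filtered_offset(n=15):
    """Take n measurements and return the median, so a few
    jitter-corrupted samples do not dominate the estimate."""
    samples = [measure_offset() for _ in range(n)]
    return statistics.median(samples)

print(round(filtered_offset(), 2))
```

With enough samples the median lands close to the assumed 1.5 ms true offset even though individual samples scatter by several milliseconds.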
Each routing station has its own fingerprint: many factors affect its performance, for example the load of the routers, the difference in cable length between two possible routes, temperature, software, hardware, etc.
Accurate time measurement is complicated. For example, a stratum 1 server located in Sydney, Australia might show high timing jitter to the monitoring station in Los Angeles and therefore report larger offsets, even though both may in fact agree on the time to 99.999999%.
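One way to see why a long, uneven route inflates the measured offset: the standard NTP offset formula (RFC 5905) assumes the forward and return paths take equal time. The sketch below uses made-up Sydney-to-Los-Angeles delays (90 ms out, 110 ms back) for a server whose clock is actually perfect; the asymmetry alone produces an apparent offset of about -10 ms.

```python
def ntp_offset(t1, t2, t3, t4):
    """Standard NTP clock-offset estimate (RFC 5905):
    t1 = client send, t2 = server receive,
    t3 = server send,  t4 = client receive."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

# Hypothetical exchange: the server's clock is perfect (true offset = 0),
# but the forward path takes 90 ms and the return path 110 ms (seconds).
t1 = 0.000
t2 = t1 + 0.090    # request arrives after 90 ms
t3 = t2 + 0.0005   # server processing time
t4 = t3 + 0.110    # reply arrives after 110 ms

print(ntp_offset(t1, t2, t3, t4) * 1000, "ms")  # roughly -10 ms
```

The bias equals half the difference between the two one-way delays, so a monitor on the far side of an asymmetric route will always see a nonzero offset from a perfectly synchronized server.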
Maybe it would be good to have a pool of servers monitoring other servers. For example, my servers 1a.ncomputers.org, 1b.ncomputers.org, and 1c.ncomputers.org are located in Germany, and one of them could monitor other German IPv4/IPv6 servers, then send the information to the monitoring station in Los Angeles, which could use statistics to produce more accurate scores.
In general terms, after researching randomness, I can contribute this: all hardware has some kind of randomness, since there is a direct relationship between hardware (physical matter) and quantum zero-point energy.
Information that could help in understanding the randomness of timing jitter (in addition to what you can find on Wikipedia):
Definition of ubit (unpredictable bit)
True random number generator: pandom