101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We allow posts of up to 2048 characters.

#ntp


Okay, it's time for the big #ntp and #ptp wrap-up post. My week-long timing project spiraled out of control and turned into a two month monster, complete with 7 (ish?) GPS timing devices, 14 different test NICs, and a dozen different test systems.

What'd I learn along the way? See scottstuff.net/posts/2025/06/1 for the full list (and links to measurements and experimental results), but the top few are:

1. It's *absolutely* possible to get single-digit nanosecond time syncing with NTP between a pair of Linux systems with Chrony in a carefully-constructed test environment. Outside of a lab, 100-500 ns is probably more reasonable with NTP on a real network, and even that requires carefully selected NICs. But single-digit nanoseconds *are* possible. NTP isn't just for millisecond-scale time syncing.
2. Generally, PTP on the same hardware shows similar performance to NTP in a lab setting, with a bit less jitter. I'd expect it to scale *much* better in a real network, though. However, PTP mostly requires higher-end hardware (especially switches) and a bit more engineering work. Plus many older NICs just aren't very good at PTP (especially ConnectX-3s).
3. Intel's NICs, *especially* the E810 and, to a lesser extent, the i210, are very good at time accuracy. Unfortunately, their X710 isn't as good, and the i226 is mixed. Mellanox is less accurate in my tests, with ~200 ns of skew, but still far better than Realtek and other consumer NICs.
4. GPS receivers aren't really *that* accurate. Even good receivers "wander" around 5-30 ns from second to second.
5. Antennas are critical. The cheap, flat window ones aren't a good choice for timing work. (Also, they're not actually supposed to be used in windows, they generally want a ground plane).
6. Your network probably has more paths with asymmetrical timing in it than you'd have expected. ECMP, LACP, and 2.5G/5G/10Gbase-T probably all negatively impact your ability to get extremely accurate time.
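For reference, enabling NIC hardware timestamping in Chrony is only a couple of directives. A minimal sketch of a client-side chrony.conf, assuming a NIC whose driver supports timestamping; the server name is a placeholder:

```
# /etc/chrony.conf (sketch; server name is a placeholder)

# Enable NIC hardware timestamping on every interface that supports it
# (check support with `ethtool -T <iface>`).
hwtimestamp *

# Poll a LAN time source once per second; `xleave` enables interleaved
# mode so both sides can exchange hardware transmit timestamps.
server ntp1.example.internal minpoll 0 maxpoll 0 xleave
```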

Anyway, it's been a fun journey. I had a good #time.

scottstuff.net · Timing Conclusions

This is the 13th article that I've written lately on NTP and PTP timing with Linux. I set out to answer a couple of questions for myself and ended up spending two months swimming in an ocean of nanosecond-scale measurements. When I started, I saw a lot of misinformation about NTP and PTP online.

Conventional wisdom said that NTP was good for millisecond-scale timing accuracy. I expected that to be rather pessimistic, and expected to see low-microsecond to high-nanosecond-range syncing with Chrony, at least under controlled circumstances. In a lab environment, it's possible to get single-digit nanosecond time skew out of Chrony. With a less-contrived setup, 500 ns is probably a better goal. In any case, "milliseconds" grossly undersells what's possible.

Conventional wisdom also said that PTP was better than NTP when you really cared about time, but that it was more difficult to use and made more demands on hardware. You know, conventional wisdom is actually right sometimes. PTP is somewhat more difficult to set up and really wants hardware support from every switch and every NIC, but once you have that it's pretty solid.

Along the way I tested NTP and PTP "in the wild" on my network, built a few new GPS-backed NTP (and PTP) servers, collected a list of all known NICs with timing features (specifically GNSS modules or PPS inputs), built a testing environment for measuring time-syncing accuracy to within a few nanoseconds, tested the impact of various Chrony polling settings, tested 14 different NICs for time accuracy, and tested how much latency PTP-aware switches add. I ran into weird problems with PTP on Mellanox/NVIDIA ConnectX-4 and Intel X710 NICs (the X710 doesn't seem to like PTP v2.1, and it doesn't like being asked to timestamp packets too frequently). I fought with Raspberry Pis. I tested NICs until my head hurt. I fought with statistics.
This little project that I’d expected to last most of a week has now dragged on for two months. It’s finally time to summarize what I’ve learned and celebrate The End Of Time.

Okay, hopefully that's it for #NTP for now:

scottstuff.net/posts/2025/05/1

I'm seeing up to 200 ns of difference between the various GPS devices on my desk (one is an outlier; they should really all be closer than that), plus 200-300 ns of network-induced variability on NTP clients, giving me somewhere between 200 and 500 ns of total error, depending on how I measure it.

So, it's higher than I'd really expected to see when I started, but *well* under my goal of 10 μs.

scottstuff.net · The Limits of NTP Accuracy on Linux

Lately I've been trying to find (and understand) the limits of time syncing between Linux systems. How accurate can you get? What does it take to get that? And what things can easily add measurable amounts of time error? After most of a month (!), I'm starting to understand things. This is kind of a follow-on to a previous post, where I walked through my setup and goals, plus another post where I discussed time syncing in general.

I'm trying to get the clocks on a bunch of Linux systems on my network synced as closely as possible so I can trust the timestamps on distributed-tracing records that occur on different systems. My local network round-trip times are in the 20-30 microsecond (μs) range, and I'd like clocks to be less than 1 RTT apart from each other. Ideally, they'd be within 1 μs, but 10 μs is fine.

It's easy to fire up Chrony against a local GPS-backed time source (technically GNSS, which covers multiple satellite-backed navigation systems, not just the US GPS system, but I'm going to keep saying "GPS" for short) and see it claim to be within X nanoseconds of GPS, but it's tricky to figure out whether Chrony is right or not. Especially once it's claiming to be more accurate than the network's round-trip time (20 μs or so), the amount of time needed for a single CPU cache miss (50-ish nanoseconds), or even the amount of time that light would take to span the gap between the server and the time source (about 5 ns per meter).

I've spent way too much time over the past month digging into time, and specifically the limits of what you can accomplish with Linux, Chrony, and GPS. I'll walk through all of that here eventually, but let me spoil the conclusion and give some limits:

- GPSes don't return perfect time. I routinely see up to 200 ns of difference between the 3 GPSes on my desk when viewing their output on an oscilloscope. The time gap between the 3 sources varies every second, and it's rare to see all three within 20 ns of each other. Even the best GPS timing modules that I've seen list ~5 ns of jitter on their datasheets. I'd be surprised if you could get 3-5 GPS receivers to agree within 50 ns or so without careful management of consistent antenna cable length, etc.
- Even small amounts of network complexity can easily add 200-300 ns of systemic error to your measurements.
- Different NICs and their drivers vary widely in how good they are for sub-microsecond timing. From what I've seen, Intel E810 NICs are great, Intel X710s are very good, Mellanox ConnectX-5s are okay, Mellanox ConnectX-3s and ConnectX-4s are borderline, and everything from Realtek is questionable.
- A lot of Linux systems are terrible at low-latency work. There are a lot of causes for this, but one of the biggest is random "stalls" due to the system's SMBIOS running to handle power management or other activities, "pausing" the observable computer for hundreds of microseconds or longer. In general, there's no good way to know whether a given system (especially a cheap one) will be good or bad for timing without testing it. I have two cheap mini-PC systems with inexplicably bad time syncing (1300-2000 ns) and two others with inexplicably good time syncing (20-50 ns). Dedicated server hardware is generally more consistent.

All in all, I'm able to sync clocks to within 500 ns or so on the bulk of the systems on my network. That's good enough for my purposes, but it's not as good as I'd expected to see.

Ah ha! Here we go, a reasonably fundamental limit to #NTP accuracy on my network.

I'm starting to think that ~300 ns is about the limit of time accuracy on my network, and even that's probably a bit optimistic.

Here's one solid example. I have 2 identical NTP servers (plus several non-identical ones that I'm ignoring here) with their own antennas connected at different points on my network. Then I have 8 identical servers syncing their time from NTP once per second using Chrony.

This is a graph of the delta between NTP1's 1-hour median offset and NTP2's 1-hour median offset, showing one line for each server.

Notice that half of them think that NTP1 is faster and half think that NTP2 is faster.

This is almost certainly due to ECMP; each server is attached to 2 L3 switches. Each NTP server is connected to a different L2 switch, and each of those L2 switches is connected to both L3 switches via MLAG.

For some reason, one ECMP path seems to be faster than the other, so server-NTP pairs that hash onto the fast path run 200-400 ns ahead of server-NTP pairs that take the other path.
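The per-server comparison above takes only a few lines to compute; a toy sketch in Python (the offset samples below are made up for illustration; in practice they'd be parsed from chrony's measurement logs for NTP1 and NTP2):

```python
from statistics import median

def median_delta(offsets_ntp1, offsets_ntp2):
    """Delta between the median offsets (in seconds) that one client
    measured against two NTP servers over the same 1-hour window."""
    return median(offsets_ntp1) - median(offsets_ntp2)

# Made-up 1-hour offset samples for one client, in seconds.
ntp1 = [310e-9, 295e-9, 305e-9]
ntp2 = [20e-9, 10e-9, 15e-9]

# The sign of the delta tells you which server this client "thinks"
# is faster; clients hashed onto the other ECMP path flip the sign.
print(f"{median_delta(ntp1, ntp2) * 1e9:.0f} ns")
```

Plotting one such delta line per client is what makes the two ECMP path groups visible.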

I've been providing public #NTP servers to the NTP Pool project for more than 10 years, and with the BoxyBSD locations I'm bringing them all into the NTP Pool too, starting now with #Milan (IT), Kansas (US) & Amsterdam (NL).

Let's take a moment to remember the guy who made sure we don't have to change Every Goddamn Clock today: David L. Mills, creator of the Network Time Protocol (NTP), who passed away last year.

My wristwatch is synced to my phone, which is synced to the internet, which knows what time it is right now thanks to David Mills. Cheers to his memory 🥃

cse.engin.umich.edu/stories/re

cse.engin.umich.edu · Computer Science and Engineering: Remembering alum David Mills, who brought the internet into perfect time. Mills created the Network Time Protocol, which enables any device online to know precisely what time it is.

How much do #Python clocks progress in "60 seconds"?

CLOCK_BOOTTIME 60.6
CLOCK_MONOTONIC 60.6
CLOCK_MONOTONIC_RAW 56.0
CLOCK_PROCESS_CPUTIME_ID 0.000952
CLOCK_REALTIME 60.6
CLOCK_TAI 60.6
CLOCK_THREAD_CPUTIME_ID 0.000952

Only CLOCK_MONOTONIC_RAW is tracking the hardware #clock correctly.

So if you actually want to measure #time accurately, use that one.
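The comparison above is easy to reproduce with `time.clock_gettime`; a small sketch using a handful of the clock IDs (Linux-only constants, and a much shorter interval than the original 60 seconds):

```python
import time

# Clock IDs to compare. CLOCK_MONOTONIC_RAW is the one that ignores
# NTP/adjtime frequency corrections, so it tracks the raw hardware tick;
# CLOCK_PROCESS_CPUTIME_ID only advances while this process runs on a CPU.
CLOCKS = ["CLOCK_MONOTONIC", "CLOCK_MONOTONIC_RAW", "CLOCK_PROCESS_CPUTIME_ID"]

start = {c: time.clock_gettime(getattr(time, c)) for c in CLOCKS}
time.sleep(0.1)  # the "60 seconds", scaled down
end = {c: time.clock_gettime(getattr(time, c)) for c in CLOCKS}

for c in CLOCKS:
    print(f"{c} {end[c] - start[c]:.6f}")
```

On a normally disciplined system the monotonic clocks agree closely; a large MONOTONIC vs MONOTONIC_RAW gap like the one above means adjtime is slewing the clock hard.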

I pressed the #turbo button on my laptop.

(edited for clarity)

$ sudo hwclock && date && sleep 60 && sudo hwclock && date

22:50:48.433491+01:00
02:29:53 CET 2025

22:51:44.397740+01:00
02:30:54 CET 2025

It can now finish a minute in 56 seconds!

Was scratching my head for days why my core NTP server, which is also the internally authoritative DNS provider, couldn't resolve hostnames from its ntp.conf on startup, only literal IP addresses.

Not actually listing itself in its own resolver configuration might just possibly have been at the root of it.
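One common workaround is to configure the time sources by IP literal, so time sync never depends on DNS being up at boot. A sketch of an ntp.conf fragment; the addresses are RFC 5737 documentation placeholders, not real servers:

```
# /etc/ntp.conf (sketch; placeholder addresses)

# IP literals instead of hostnames: no DNS lookup needed at startup,
# which matters when this host *is* the DNS server.
server 192.0.2.10 iburst
server 192.0.2.11 iburst
```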

D'oh!

#linux #ntp #dns

Set up a local #ntp server, because the damn Gyver LED garland hangs if it can't reach an external one.
Now the heart of the whole intranet beats in sync, and I can sync it almost every minute.
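Serving time to a gadget-filled intranet like that takes only a couple of lines in chrony. A minimal sketch of the server side; the subnet and stratum are placeholders to adapt:

```
# /etc/chrony.conf additions (sketch; placeholder subnet)

# Answer NTP queries from the local network.
allow 192.168.0.0/24

# If upstream time is unreachable, keep serving local time at a
# high stratum rather than going silent and hanging the clients.
local stratum 10
```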

Well, I've been doing some work on the NTP server's page: it now shows a graph of the GPS fix with the signal-to-noise ratio, the satellites in use, and a Java applet that displays universal and local time in the browser. A whole Metrology department, che. undernet.uy/ntp/ #ntp #stratum1 #undernet #metrología #metrology #time #tiempo #tiempouniversal #universaltime #astronomy #uruguay

undernet.uy · Undernet Uruguay: Self-Managed Community Server