Issues with clock_gettime(CLOCK_REALTIME, &wait) pre macOS 10.13

Fred Wright fw at fwright.net
Sat Oct 27 23:11:25 UTC 2018


On Tue, 23 Oct 2018, Chris Jones wrote:

> I've stumbled into the same issue twice in recent days, with two different 
> ports, which is the use of
>
> clock_gettime(CLOCK_REALTIME, &wait);
>
> which is only available in macOS 10.12 or newer. See for instance the issue I 
> found yesterday in xrootd.
>
> https://github.com/xrootd/xrootd/issues/846
>
> I am still waiting to see what upstream say, but I am hopeful they will 
> consider it a bug. ( It would seem quite extreme to reduce the supported OSX 
> releases from 10.7+ to 10.12+ in a minor patch revision...)
>
> But I was wondering, is this something anyone else has stumbled over, and do 
> we have a way of fixing this particular issue in the older OSes ?

Yes, in both GPSD and ntpsec.  Rather than trying to figure out which of 
the many messages in this thread to answer directly, I'll just throw out 
what I know about this issue.

There are three "global" timescales potentially provided by 
clock_gettime():  CLOCK_REALTIME, CLOCK_MONOTONIC, and 
CLOCK_MONOTONIC_RAW.  Only the first is eligible for clock_settime().

CLOCK_REALTIME is the same "Unix" timescale as the original time() and 
later gettimeofday(), but with (ostensibly) nanosecond resolution.  It's 
subject to both slewing and step adjustments, as needed to synchronize its 
value to some time source.

CLOCK_MONOTONIC was created to avoid problems (including crashes) 
sometimes caused by the backward step adjustments that may be applied to 
CLOCK_REALTIME.  Although the official documentation is woefully 
underspecified, it's typically implemented as a variation on 
CLOCK_REALTIME that excludes all step adjustments (including the initial 
one that "sets the clock"), but includes all slewing.  This makes it 
continuous as well as monotonic, but its rate accuracy is corrupted by the 
slewing adjustments.  In practice, it's almost never what you really want.

CLOCK_MONOTONIC_RAW is also woefully underspecified, but is usually just 
the raw hardware time source scaled to standard units based on the 
assumed clock rate, but not steered at all.  Since even the cheapest 
crystals are typically rated at +/- 100ppm or better, and since the 
slewing adjustments applied to CLOCK_MONOTONIC can easily be much larger 
than that, CLOCK_MONOTONIC_RAW is usually a more accurate timescale for 
rates, durations, and delays than CLOCK_MONOTONIC.


There are basically two fallback options for CLOCK_REALTIME: 
gettimeofday() and the Mach-specific clock_get_time() based on 
CALENDAR_CLOCK.  The latter ostensibly has nanosecond resolution, but in 
reality it's only the *representation* that has nanosecond resolution, 
while the values are all multiples of 1000 nanoseconds.  In addition, it's 
more than an order of magnitude slower than gettimeofday() even in the 
best case, and the commonly circulated example of its use is even slower, 
as well as having a "port leak" bug.  Thus, it's best to simply use 
gettimeofday() with a microseconds->nanoseconds conversion as a fallback.  This 
approach also works for substituting settimeofday() for 
clock_settime(CLOCK_REALTIME, ...).  IMO, the microsecond->nanosecond 
conversion should be done without "rounding", but the 
nanosecond->microsecond conversions should round by adding 500ns prior to 
the floored division by 1000.  The unrounded conversion is consistent with 
both clock_get_time() and the "official" clock_gettime() in 10.12+, which 
still only has microsecond actual resolution.
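
For concreteness, here's a minimal sketch of that gettimeofday()-based 
fallback (the code and function names are mine, not the actual GPSD/ntpsec 
code), with the unrounded usec->nsec conversion for reading and the rounded 
nsec->usec conversion for setting:

#include <sys/time.h>
#include <time.h>

/* Fallback for clock_gettime(CLOCK_REALTIME, ts): unrounded usec -> nsec. */
static int
realtime_get_fallback(struct timespec *ts)
{
    struct timeval tv;

    if (gettimeofday(&tv, NULL) != 0)
        return -1;
    ts->tv_sec = tv.tv_sec;
    ts->tv_nsec = tv.tv_usec * 1000;    /* no rounding going up */
    return 0;
}

/* Fallback for clock_settime(CLOCK_REALTIME, ts): round by adding 500ns
 * before the floored division by 1000, with carry into the seconds. */
static int
realtime_set_fallback(const struct timespec *ts)
{
    struct timeval tv;

    tv.tv_sec = ts->tv_sec;
    tv.tv_usec = (ts->tv_nsec + 500) / 1000;
    if (tv.tv_usec >= 1000000) {        /* rounding can carry */
        tv.tv_usec -= 1000000;
        tv.tv_sec++;
    }
    return settimeofday(&tv, NULL);
}

The carry check handles the corner case where rounding pushes tv_usec up to 
1000000.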

For CLOCK_MONOTONIC, I believe the only functionally correct fallback is 
to use clock_get_time() with SYSTEM_CLOCK.  As noted above, programs 
shouldn't really be using CLOCK_MONOTONIC anyway, but it's necessary to 
include it for compatibility with programs too dumb to know that.  The 
problem with clock_get_time() is that it requires messing with Mach ports. 
The most efficient way to do this is to obtain a SYSTEM_CLOCK port once 
initially, and then reuse it on each call.  Even with this, it takes over 
700ns on a 3.46GHz Mac Pro, as compared to ~40ns for gettimeofday(), but 
that's the price of correctness.  Since something intended to be a drop-in 
replacement for clock_gettime() can't rely on initialization or cleanup 
functions, the best it can do is to allocate the Mach port on first call, 
and then rely on exit cleanup to eventually deallocate it.
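
A rough sketch of that approach (again mine, not the actual code, and not 
thread-safe as written) looks something like this:

#include <mach/clock.h>
#include <mach/mach.h>
#include <time.h>

/* Fallback for clock_gettime(CLOCK_MONOTONIC, ts): obtain the SYSTEM_CLOCK
 * port on first call and reuse it; exit cleanup eventually reclaims it. */
static int
monotonic_fallback(struct timespec *ts)
{
    static clock_serv_t sysclock;
    static int have_port = 0;
    mach_timespec_t mts;

    if (!have_port) {
        mach_port_t host = mach_host_self();

        if (host_get_clock_service(host, SYSTEM_CLOCK, &sysclock)
            != KERN_SUCCESS)
            return -1;
        mach_port_deallocate(mach_task_self(), host);  /* don't leak the host port */
        have_port = 1;
    }
    if (clock_get_time(sysclock, &mts) != KERN_SUCCESS)
        return -1;
    ts->tv_sec = mts.tv_sec;
    ts->tv_nsec = mts.tv_nsec;
    return 0;
}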

For CLOCK_MONOTONIC_RAW, the straightforward approach is to use 
mach_absolute_time() with the proper scaling.  As long as the scale 
factors are constant, they can be obtained once initially and then cached 
for later use.  Unlike the clock_get_time() case, cleanup isn't even an 
issue.  However, I've seen some mention of the possibility that the rate 
of mach_absolute_time() may not be constant.  I'm not aware of any cases 
where this actually happens, and perhaps it's only theoretical, but if it 
did actually happen, it would complicate things significantly.  In order 
to convert a variable-rate clock to standard units, it's necessary to know 
not only the current scale factor, but also the last time that the factor 
changed and what the correspondence was at that time.  Since that 
information isn't provided, either the scale is actually constant or the 
API is deficient.  Hopefully it's the former.
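
Assuming the timebase really is constant, a minimal sketch of that fallback 
(function name mine) would be:

#include <mach/mach.h>
#include <mach/mach_time.h>
#include <stdint.h>
#include <time.h>

/* Fallback for clock_gettime(CLOCK_MONOTONIC_RAW, ts): scale
 * mach_absolute_time() to nanoseconds with cached timebase factors.
 * Production code would also guard the multiplication against overflow. */
static int
monotonic_raw_fallback(struct timespec *ts)
{
    static mach_timebase_info_data_t tb;    /* cached scale factors */
    uint64_t ns;

    if (tb.denom == 0 && mach_timebase_info(&tb) != KERN_SUCCESS)
        return -1;
    ns = mach_absolute_time() * tb.numer / tb.denom;
    ts->tv_sec = ns / 1000000000ULL;
    ts->tv_nsec = ns % 1000000000ULL;
    return 0;
}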


My clock_gettime() replacement for ntpsec is defined directly in a header 
file as an inline function.  Although it currently only supports 
CLOCK_REALTIME, it does include a switch() on the clock_id for 
extensibility.  A significant advantage of the inline approach is that 
whenever the clock_id is a compile-time constant (as it almost always is in 
real use), the optimizer can completely remove the switch() and reduce the 
call to just the inline code needed for the relevant case (quite simple for 
CLOCK_REALTIME).  And of course it also avoids 
adding new link-time dependencies.
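
To illustrate the shape of it (this is not the ntpsec header itself; the 
name and the pre-10.12 definitions are placeholders), an inline wrapper 
along those lines might look like:

#include <errno.h>
#include <sys/time.h>
#include <time.h>

#ifndef CLOCK_REALTIME            /* pre-10.12 SDKs lack these */
typedef int clockid_t;
#define CLOCK_REALTIME 0          /* placeholder value */
#endif

static inline int
compat_clock_gettime(clockid_t clock_id, struct timespec *ts)
{
    switch (clock_id) {           /* collapses when clock_id is a constant */
    case CLOCK_REALTIME: {
        struct timeval tv;

        if (gettimeofday(&tv, NULL) != 0)
            return -1;
        ts->tv_sec = tv.tv_sec;
        ts->tv_nsec = tv.tv_usec * 1000;
        return 0;
    }
    default:                      /* other clock_ids could hook in here */
        errno = EINVAL;
        return -1;
    }
}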

Fred Wright

