Port-alpha archive
Alpha timecounter
NetBSD/alpha uses a timecounter based on what I assume is the
`processor cycle counter' (PCC), implemented in the MI file
sys/kern/kern_cctr.c, which was derived from an old version of the
x86 TSC synchronization code. It seems to have some problems, as
noted in an earlier thread:
https://mail-index.netbsd.org/port-alpha/2017/07/13/msg000830.html
Judging by the cross-CPU calibration that kern_cctr.c does every
second, I assume alpha does not present a global view of a monotonic
clock to all CPUs.
The attached compile-tested patch attempts to fix it by enforcing a
global ordering, and calling tc_gonebad if the calibration fails to
keep it within 1sec. The patch is for code that is nominally MI,
kern_cctr.c, but only alpha uses it.
My alpha is out of commission right now, but if anyone would like to
give this a shot, let me know how it goes.
Some background:
The periodic cross-CPU calibration was discarded for the x86 TSC
timecounter in rev. 1.16 of sys/arch/x86/x86/tsc.c, because `it didn't
work properly, and it's near impossible to synchronize the CPUs in a
running system, because bus traffic will interfere with any
calibration attempt'.
Now the x86 TSC timecounter does all calibration at boot when nothing
else is running, and gets pretty well synchronized. But it too fails
to provide a global monotonic clock on MP x86 systems like my laptop,
although it's close enough that I had to run a clock_gettime loop
constantly all night to detect it a handful of times.
I'm not addressing the value of periodic vs boot-time cross-CPU
calibration. Maybe it's a good idea on alpha even if not on x86;
maybe it's a bad idea everywhere. I'm also not addressing whether we
should have an MI concept of a CPU-local timecounter -- it would be
nice to support calibrating the frequency of a CPU-local timecounter
for relative durations without requiring global synchronization of a
reference point, but that's outside the scope of timecounter(9).
Index: sys/kern/kern_cctr.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_cctr.c,v
retrieving revision 1.9
diff -p -u -r1.9 kern_cctr.c
--- sys/kern/kern_cctr.c 3 Jan 2009 03:31:23 -0000 1.9
+++ sys/kern/kern_cctr.c 1 Nov 2017 16:32:25 -0000
@@ -95,6 +95,12 @@ void cc_calibrate_cpu(struct cpu_info *)
static int64_t cc_cal_val; /* last calibrate time stamp */
+/*
+ * Highest cc witnessed globally. We assume the calibration happens
+ * once a second.
+ */
+static volatile u_int cc_global __cacheline_aligned;
+
static struct timecounter cc_timecounter = {
.tc_get_timecount = cc_get_timecount,
.tc_poll_pps = cc_calibrate,
@@ -132,8 +138,8 @@ cc_init(timecounter_get_t getcc, uint64_
/*
* pick up tick count scaled to reference tick count
*/
-u_int
-cc_get_timecount(struct timecounter *tc)
+static u_int
+cc_get_timecount_local(struct timecounter *tc)
{
struct cpu_info *ci;
int64_t rcc, cc, ncsw;
@@ -186,6 +192,36 @@ cc_get_timecount(struct timecounter *tc)
}
/*
+ * advance the global view of the clock by our tick count and return it
+ */
+u_int
+cc_get_timecount(struct timecounter *tc)
+{
+ u_int freq = tc->tc_frequency;
+ u_int local, global, result;
+
+ do {
+ local = cc_get_timecount_local(tc);
+ global = cc_global;
+ if ((local - global) < freq) {
+ /* We're ahead, so use our counter. */
+ result = local;
+ } else {
+ /* We're behind or desynced, so use the global. */
+ result = global + 1;
+ }
+ } while (atomic_cas_uint(&cc_global, global, result) != global);
+
+ if (__predict_false((global - local) > freq)) {
+ /* More than 1sec away from global: bad. */
+ printf("%s: local cc desynchronized\n", tc->tc_name);
+ tc_gonebad(tc);
+ }
+
+ return result;
+}
+
+/*
* called once per clock tick via the pps callback
* for the calibration of the TSC counters.
* it is called only for the PRIMARY cpu. all