Current-Users archive


Re: What to do about "WARNING: negative runtime; monotonic clock has gone backwards"



> Date: Tue, 01 Aug 2023 16:02:17 -0400
> From: Brad Spencer <brad%anduin.eldar.org@localhost>
> 
> Taylor R Campbell <riastradh%NetBSD.org@localhost> writes:
> 
> > So I just added a printf to the kernel in case this jump happens.  Can
> > you update to xen_clock.c 1.15 (and sys/arch/x86/include/cpu.h 1.135)
> > and try again?
> 
> Sure...

Correction: xen_clock.c 1.16 and sys/arch/x86/include/cpu.h 1.136
(missed a spot).

> >> If the dtrace does continue to run, sometimes, it is impossible to exit
> >> with CTRL-C.  The process seems stuck in this:
> >> 
> >> [ 4261.7158728] load: 2.64  cmd: dtrace 3295 [xclocv] 0.01u 0.02s 0% 7340k
> >
> > Interesting.  If this is reproducible, can you enter crash or ddb and
> > get a stack trace for the dtrace process, as well as output from ps,
> > ps/w, and `show all tstiles'?
> 
> It appears to be reproducible..  in the sense that I encountered it a
> couple of times doing exactly the same workload test.  I am more or less
> completely unsure as to what the trigger is, however.  I probably should
> have mentioned, but when this happened the last time, I did have other
> newly created processes hang in tstile (the one in particular that I
> noticed was 'fortune' from a ssh attempt .. it got stuck on login and
> when I did a CTRL-T tstile was shown).

`show all tstiles' output in crash or ddb would definitely be helpful
here.

> I also probably should have mentioned that the DOM0 (NOT the DOMU) that
> the target system is running under has HZ set to 1000.  This is mostly
> to help keep the ntpd and chronyd happy on the Xen guests.  If the DOM0
> is left at 100 the drift can be too much on the DOMU systems.  Been
> running like this for a long time...

Interesting.  Why would the dom0's HZ choice make a difference?
Nothing in the guest should depend substantively on the host's tick
rate.

A NetBSD XEN3_DOM0 kernel periodically updates the hypervisor with a
real-time clock derived from NTP (the `timepush' callout in
xen_clock.c), but the period between updates is 53 sec + 3 ticks, and
it's hard to imagine that setting the clock every 53.03 sec vs every
53.003 sec should make any difference for whether guests drift.

The resolution of the real-time clock sent to the hypervisor is 1/hz,
because resettodr uses getmicrotime instead of microtime.  While that
might introduce jitter from rounding, I'm not sure it should cause
persistent drift in one direction or the other, and I don't think
guests are likely to query the Xen wall clock time often enough for
this jitter to matter.

Does the dom0 have any substantive continuous influence on domU
scheduling and timing?  I always assumed the hypervisor would have all
the say in that.

As an aside, I wonder whether it's even worthwhile to run ntpd or
chronyd on the domU, instead of just letting the dom0 set the clock
and arranging to do the equivalent of inittodr periodically in the
domU.

Can you try the attached patch on a dom0 and see if you still observe
drift?
From 639672fef2a0a6959161b6e882e16128b1340c69 Mon Sep 17 00:00:00 2001
From: Taylor R Campbell <riastradh%NetBSD.org@localhost>
Date: Tue, 1 Aug 2023 21:58:45 +0000
Subject: [PATCH] WIP: push real-time clock directly from nanotime to
 hypervisor

Bypass resettodr's use of getmicrotime, and any overhead from
rtc_set_ymdhms.
---
 sys/arch/xen/xen/xen_clock.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/sys/arch/xen/xen/xen_clock.c b/sys/arch/xen/xen/xen_clock.c
index e36ec38ba758..fc2fa95490c4 100644
--- a/sys/arch/xen/xen/xen_clock.c
+++ b/sys/arch/xen/xen/xen_clock.c
@@ -988,8 +988,26 @@ fail:	sysctl_teardown(&log);
 static void
 xen_timepush_intr(void *cookie)
 {
+#if 1
+	struct timespec now;
+	int error;
 
+	nanotime(&now);
+	error = HYPERVISOR_platform_op(&(xen_platform_op_t) {
+		.cmd = XENPF_settime,
+		.u = {
+			.settime = {
+				.secs = now.tv_sec,
+				.nsecs = now.tv_nsec,
+				.system_time = xen_global_systime_ns(),
+			},
+		},
+	});
+	if (error)
+		printf("%s: XENPF_settime failed: %d\n", __func__, error);
+#else
 	resettodr();
+#endif
 	if (xen_timepush.ticks)
 		callout_schedule(&xen_timepush.ch, xen_timepush.ticks);
 }
@@ -1090,15 +1108,22 @@ xen_rtc_set(struct todr_chip_handle *todr, struct timeval *tvp)
 		clock_secs_to_ymdhms(tvp->tv_sec, &dt);
 		rtc_set_ymdhms(NULL, &dt);
 
+#if 1
+		__USE(op);
+		__USE(systime_ns);
+		return 0;
+#else
 		/* Get the global system time so we can preserve it.  */
 		systime_ns = xen_global_systime_ns();
 
 		/* Set the hypervisor wall clock time.  */
+		/* XXX zero it first? (`mbz' field probably must be zero) */
 		op.cmd = XENPF_settime;
 		op.u.settime.secs = tvp->tv_sec;
 		op.u.settime.nsecs = tvp->tv_usec * 1000;
 		op.u.settime.system_time = systime_ns;
 		return HYPERVISOR_platform_op(&op);
+#endif
 	}
 #endif
 

