tech-kern archive
Re: memcpy of struct buf, or similar?
Chris Gilbert wrote:
> Simon Burge wrote:
>> Andrew Doran wrote:
>>
>>> On Tue, Jun 10, 2008 at 01:58:43AM +1000, Simon Burge wrote:
>>>
>>>> Andrew Doran wrote:
>>>>
>>>>> Are you running with LOCKDEBUG? It's probably failure to release a spin
>>>>> mutex.
>>>> Not anymore - I used to run with it all the time, but not since enabling
>>>> it took a large performance hit some time post netbsd-4. I'll try
>>>> LOCKDEBUG (tomorrow).
>>> There have been some changes recently that reduce its CPU usage.
>>> I'm finding it a lot more usable than it was.
>> With a config that is just GENERIC + LOCKDEBUG+DEBUG+DIAGNOSTIC, a
>> kernel immediately before a change to condvars[1] can do two builds
>> in ~70 minutes. A kernel from immediately after that change hits
>> biowait/tstile problems in ~18 minutes.
>>
>> [1] http://mail-index.netbsd.org/source-changes/2008/05/31/msg206402.html
>
> Having come back to this issue fresh after not looking at it for a
> week, I think I've identified the problem.
>
> By adding code to sleepq_changepri/lendpri to save the last lwp that
> was removed and re-inserted, I found that the biowait thread was
> lendpri'd just before the lockup occurred, suggesting that it was
> briefly not on the sleepq.
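For anyone curious, the instrumentation was nothing fancy - roughly the
sketch below, i.e. a couple of globals updated next to the remove and
insert in the priority code, so the last moved lwp can be inspected from
ddb. This is a simplified version of sleepq_changepri(), not the exact
kernel source:

	/* debug aid: remember the last lwp taken off / put back on a sleepq */
	lwp_t	*sleepq_debug_removed;
	lwp_t	*sleepq_debug_inserted;

	void
	sleepq_changepri(lwp_t *l, pri_t pri)
	{
		sleepq_t *sq = l->l_sleepq;

		KASSERT(lwp_locked(l, NULL));

		TAILQ_REMOVE(sq, l, l_sleepchain);
		sleepq_debug_removed = l;	/* lwp is now off the queue */

		l->l_priority = pri;

		sleepq_insert(sq, l, l->l_syncobj);
		sleepq_debug_inserted = l;	/* lwp is back on the queue */
	}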
>
> Before the above change we tested for the cv being in use by looking
> at cv_waiters; now we do so by checking whether the sleepq is empty.
> However, all accesses to the sleepq look like they have to be
> protected by locking it via sleeptab_lookup.
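For reference, the check in cv_signal() went from a per-cv counter test
to an unlocked peek at the shared sleep queue - roughly the two forms
below (the old form is reconstructed from memory, so the exact code may
have differed slightly):

	/* before the change: counter kept in the cv itself */
	if (cv->cv_waiters != 0)
		/* ...do the wakeup... */;

	/* after the change (rev 1.22): unlocked look at the sleep queue */
	if (__predict_false(TAILQ_FIRST(CV_SLEEPQ(cv)) != NULL))
		cv_wakeup_one(cv);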
>
> This is important as sleepq_changepri and sleepq_lendpri remove and
> re-insert entries into the sleepq, and have only the sleepq mutex to
> lock/unlock, not the original lock (e.g. the mutex used with
> bp->b_done).
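To make the window concrete, the interleaving I believe we're hitting
looks roughly like this (a simplified sketch, not actual source; the
biodone side holds the buffer's mutex but not the sleepq lock, while
the priority-lending side holds only the sleepq lock):

	/* CPU 0: sleepq_lendpri(), sleepq lock held */
	TAILQ_REMOVE(sq, l, l_sleepchain);	/* biowait lwp briefly off the queue */
						/* <-- CPU 1's check can run here */
	sleepq_insert(sq, l, l->l_syncobj);

	/* CPU 1: cv_signal() from biodone(), sleepq lock NOT held */
	if (__predict_false(TAILQ_FIRST(CV_SLEEPQ(cv)) != NULL))	/* sees empty */
		cv_wakeup_one(cv);	/* skipped, so biowait never wakes */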
>
> Attached is a patch that removes the "quick" checks, and either takes
> the sleepq lock before looking at the sleepq or calls the functions
> that do lock the sleepq and handle it being empty.
>
> It's survived two build.sh runs for me (one with and one without
> DIAGNOSTICs)
>
> I'd like to check in the attached patch in the next 24 hours as a
> short-term fix, while we look for a more efficient way to do this,
> since it makes my quad-core usable.
Attached is a slightly revised patch that drops the whitespace changes
and no longer moves sleeptab_lookup.
I also added the KASSERT back to cv_unsleep, as the lwp lock should be
held at that point, and the lwp lock will be set to the sleepq lock, so
it's safe to look at the sleepq.
Note there is an assumption that cv_has_waiters will be called with the
mutex used to access the cv held, and so locking the sleepq lock will
make sure we get a consistent state.
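In other words the expected calling pattern is something like the sketch
below (check_waiters() is just an illustration, not code from the tree);
cv_has_waiters() takes and drops the sleepq lock internally, and the
caller holding the cv's mutex stops waiters appearing or leaving
underneath it:

	static bool
	check_waiters(kmutex_t *mp, kcondvar_t *cv)
	{
		bool waiters;

		mutex_enter(mp);			/* mutex paired with the cv */
		waiters = cv_has_waiters(cv);		/* locks the sleepq inside */
		mutex_exit(mp);

		return waiters;
	}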
Thanks,
Chris
Index: kern/kern_condvar.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_condvar.c,v
retrieving revision 1.22
diff -u -p -r1.22 kern_condvar.c
--- kern/kern_condvar.c 4 Jun 2008 11:22:55 -0000 1.22
+++ kern/kern_condvar.c 14 Jun 2008 15:54:21 -0000
@@ -279,8 +279,7 @@ cv_signal(kcondvar_t *cv)
 	/* LOCKDEBUG_WAKEUP(CV_DEBUG_P(cv), cv, CV_RA); */
 	KASSERT(cv_is_valid(cv));
 
-	if (__predict_false(TAILQ_FIRST(CV_SLEEPQ(cv)) != NULL))
-		cv_wakeup_one(cv);
+	cv_wakeup_one(cv);
 }
 
 static void __noinline
@@ -329,8 +328,7 @@ cv_broadcast(kcondvar_t *cv)
 	/* LOCKDEBUG_WAKEUP(CV_DEBUG_P(cv), cv, CV_RA); */
 	KASSERT(cv_is_valid(cv));
 
-	if (__predict_false(TAILQ_FIRST(CV_SLEEPQ(cv)) != NULL))
-		cv_wakeup_all(cv);
+	cv_wakeup_all(cv);
 }
 
 static void __noinline
@@ -389,9 +387,18 @@ cv_wakeup(kcondvar_t *cv)
 bool
 cv_has_waiters(kcondvar_t *cv)
 {
+	bool result;
+	kmutex_t *mp;
+	sleepq_t *sq;
+
+	sq = CV_SLEEPQ(cv);
+	(void)sleeptab_lookup(&sleeptab, cv, &mp);
 
-	/* No need to interlock here */
-	return !TAILQ_EMPTY(CV_SLEEPQ(cv));
+	/* we can only get a valid result with the sleepq locked */
+	result = !TAILQ_EMPTY(sq);
+
+	mutex_spin_exit(mp);
+	return result;
 }
 
 /*