tech-kern archive


Re: Fixing pool_cache_invalidate(9), final step



On Mon, Jun 4, 2012 at 11:34 AM, Jean-Yves Migeon
<jeanyves.migeon%free.fr@localhost> wrote:
> On Thu, 31 May 2012 12:55:42 +0100, Jean-Yves Migeon wrote:
>>>>>
>>>>> If my grepping is right, pool_reclaim/pool_cache_reclaim are only used
>>>>> in subr_pool.c (for _HARDKERNEL), so draining should indeed only happen
>>>>> from the pageout daemon and not from interrupt context at all. So we
>>>>> should (if I'm not mistaken) be able to simplify by integrating
>>>>> pool_drain_start/end and calling pool_cache_xcall from
>>>>> pool_cache_invalidate...
>>>>> Currently, with the split pool_drain_start/end, the time while the
>>>>> xcall runs can on SMP be used to drain the buffer pool; that is the only
>>>>> difference that quickly comes to my mind.
>>>>
>>>>
>>>> I came to the same conclusion while looking at the drain_start/_end
>>>> functions, but they were outside the scope of my patch (I expected
>>>> opposition, so I kept it as small as possible).
>>>>
>>> I think this comes as a package ;-)
>>>
>>> Just some thoughts,
>>
>>
>> Will do; however, I cannot make such an API change for -6, even if it
>> does not appear in the documentation (I can still ask releng@ and
>> core@, but I highly doubt that this is worth the hassle). I can do
>> this for -current though.
>>
>> I will update both patches accordingly, but the drain start/end split
>> will likely remain for -6.
>
>
> Here are the new proposals, taking Lars' suggestions into account.
>
> [General]
> - Dumped pool_cache_invalidate_cpu() in favor of pool_cache_xcall(), which
> transfers CPU-bound objects back to the global pool.
> - Adapted comments accordingly.
>
> [-Current]
> - pool_cache_invalidate() will now schedule an xcall(9) so that each CPU
> feeds its cached objects back to the global pool before invalidation. The
> function remains synchronous because we have to wait for all CPUs to finish
> executing the xcall before invalidation can start.
> - The pool_drain_start/_end functions have been merged now that
> pool_cache_invalidate() drains per-CPU caches synchronously. The new
> pool_drain() function is
>
> "bool pool_drain(struct pool **ppp)", with:
>  - the bool return value indicating whether reclaiming was fully done
> (true) or not (false),
>  - ppp optionally receiving a pointer to the pool that was drained.
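
For illustration only, here is a minimal sketch of how a pagedaemon-style
caller could use the merged interface; the helper name and surrounding logic
are made up, only the "bool pool_drain(struct pool **ppp)" signature and its
semantics come from the proposal above:

    #include <sys/param.h>
    #include <sys/pool.h>

    /*
     * Hypothetical helper: perform one reclaim pass over the pools.
     * With the merged interface, pool_drain() feeds per-CPU cached
     * objects back to the global pool synchronously before reclaiming,
     * so no separate _start/_end steps are needed.
     */
    static bool
    reclaim_one_pass(void)
    {
            struct pool *pp = NULL;
            bool done;

            done = pool_drain(&pp);  /* pp: pool that was drained (optional) */
            return done;             /* true if reclaiming was fully done */
    }

Whether a caller loops on a false return or simply retries on the next
pagedaemon wakeup is left to the caller; the sketch only shows the single
call that replaces the old start/end pair.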
>
> [NetBSD-6]
> - pool_cache_invalidate() will only feed back objects cached by the calling
> CPU. This does not invalidate _all_ CPUs' cached objects (the caller has to
> explicitly schedule an xcall(9) for the !curcpu() CPUs, as sketched after
> this list), but at least there is now a way to invalidate them.
> - Reflect this in the CAVEATS section of the pool_cache(9) documentation.
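
To make the caveat concrete, a hedged sketch of how a -6 caller that really
needs every CPU's cache emptied could broadcast the invalidation itself; the
wrapper and handler names are hypothetical, while pool_cache_invalidate(),
xc_broadcast() and xc_wait() are the existing pool_cache(9)/xcall(9)
interfaces:

    #include <sys/param.h>
    #include <sys/pool.h>
    #include <sys/xcall.h>

    /* xcall handler: runs on each CPU and flushes that CPU's cached objects. */
    static void
    pc_invalidate_xcall(void *arg1, void *arg2)
    {

            pool_cache_invalidate((pool_cache_t)arg1);
    }

    /*
     * Hypothetical wrapper: on -6, pool_cache_invalidate() only feeds
     * back the calling CPU's objects, so broadcast it to all CPUs and
     * wait for the cross calls to complete (must not be called from
     * interrupt context).
     */
    static void
    pc_invalidate_all(pool_cache_t pc)
    {
            uint64_t where;

            where = xc_broadcast(0, pc_invalidate_xcall, pc, NULL);
            xc_wait(where);
    }

This is roughly what the -current variant described above does internally
with its synchronous xcall.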
>
> [NetBSD-5]
> In case someone thinks that it would be reasonable to backport the -6 fix to
> -5 (or at least document the CAVEAT inside pool_cache(9)), ping me.
>
>
> Full release builds for -nb6 and -current have been done over the weekend,
> together with two ATF runs for each (one with 1GiB of memory, the other with
> "just" 64MiB to put a bit of pressure on the pgdaemon). No regressions were
> observed.
>
> I am planning to commit these at the end of the week, and submit the
> NetBSD-6 patch to releng today.
>
> Comments :) ?
>
>

Looks good to me, much cleaner than with the workaround (the split
pool_drain) for not being able to call pool_cache_invalidate_cpu at
elevated IPLs.

Lars

-- 
Mystical explanations:
Mystical explanations are considered deep;
the truth is that they are not even superficial.
   -- Friedrich Nietzsche
   [ Die Fröhliche Wissenschaft, Book 3, 126 ]

