Subject: Re: poolifying fileassoc
To: None <blymn@baesystems.com.au>
From: YAMAMOTO Takashi <yamt@mwd.biglobe.ne.jp>
List: tech-kern
Date: 01/07/2007 22:07:29
> > again, can you explain why it needs to be done with simple_locks held?
> > i really don't see any reason why it should.
> > 
> 
> When we enter uvm_io_done() we do so with locks held (apart from doing
> a swap i/o, but that does not matter here).  I chose to put the
> veriexec page check at the top of uvm_io_done() because it makes
> logical sense to me: if the veriexec check decides the pages have
> been modified, all I do is set the error variable so that the rest of
> uvm_io_done() behaves as though there was an i/o error and performs
> the uvm-related clean-up.  I don't believe it is safe to unlock before
> doing the veriexec check, because the uvm page structures have not
> been consistently updated.  Shifting the veriexec check lower and
> unlocking before doing it means that we would then have to remove the
> pages if the veriexec page check fails, and I worry about something
> actually using a "bad" page before it is detected and removed (a
> race?); maybe this is not an issue under the kernel big lock, though.

keeping pages locked (PG_BUSY) is enough for your check, isn't it?
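
i.e. something roughly like this sketch (veriexec_page_check() is a
hypothetical name; i'm only assuming the usual pgs[]/npages arguments
of the aiodone path):

	/*
	 * the pages are still PG_BUSY here, so dropping the object's
	 * simple_lock does not let anyone else touch them; their
	 * contents stay stable for the duration of the check.
	 */
	simple_unlock(&uobj->vmobjlock);
	for (i = 0; i < npages; i++) {
		KASSERT((pgs[i]->flags & PG_BUSY) != 0);
		if (veriexec_page_check(pgs[i]) != 0)
			error = EIO;	/* handled like an i/o error */
	}
	simple_lock(&uobj->vmobjlock);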

> > poolifying merely hides the bug, unless you know a strict bound on
> > the number of necessary reservations.
> > 
> 
> Assuming we have a count of the CPUs, I think we can be deterministic
> about this.  We need a pool entry for each CPU to handle one instance
> of the check.  The pool entry is taken and returned within a single
> function, so there should not be any growth in the storage
> requirements.
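
for the record, i read the proposal as roughly this sketch
(struct vex_ctx, the pool name, and ncpu are placeholders, and the
pool_init() arguments may differ with your kernel version;
pool_prime() is the call that pre-reserves items):

	/* at initialization: reserve one item per cpu up front. */
	pool_init(&vex_ctx_pool, sizeof(struct vex_ctx), 0, 0, 0,
	    "vexctxpl", NULL);
	if (pool_prime(&vex_ctx_pool, ncpu) != 0)
		panic("veriexec: could not reserve pool items");

	/* in the check: take an item, use it, and return it. */
	ctx = pool_get(&vex_ctx_pool, PR_NOWAIT);
	/* ... do the per-page check using ctx ... */
	pool_put(&vex_ctx_pool, ctx);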

i don't think we need such a hack in this case.

YAMAMOTO Takashi