Subject: Re: genfs_getpages and MAX_READ_AHEAD
To: YAMAMOTO Takashi <email@example.com>
From: Andrey Petrov <firstname.lastname@example.org>
Date: 06/11/2003 07:33:43
On Wed, Jun 11, 2003 at 02:02:47PM +0900, YAMAMOTO Takashi wrote:
> > > currently, genfs_getpages has a limit of number of pages.
> > > it's annoying because a caller should consider filesystem's block size
> > > to avoid assertion failure.
> > >
> > > how about following patch?
> > > although using alloca here is ...yucky,
> > > there're already variable-sized arrays around.
> > >
> > Could you give more details on why it's needed. Is that easy
> > to bump on that limitation?
> to bump it simply, we should know maximum size of filesystem block.
What exactly would I have to do to trigger that assert? It seems
there are plenty of argument checks, and there's also the MAX_READ_AHEAD
panic in there. So what situation does this patch actually resolve?
> > In this patch you're replacing fixed-size array and assert on
> > alloca, which is more dangerous. malloc/free would be safer
> > but they cost more.
> i don't think that (big) fixed-sized array is more safe than alloca.
There might be some confusion here, but
#define MAX_READ_AHEAD 16 /* XXXUBC 16 */
isn't a big number. That makes me wonder what the deal is -- why not
just make it larger?
> i agree that having a limit of size to alloca and using malloc for big ones
> (as Jason suggested) is reasonable.
If you're going to take that path, I'd suggest using a fixed-size
array up to that limit, and malloc/free for anything larger. alloca
would just be extra cycles.