
Re: Where is the component queue depth actually used in the raidframe system?



On Mar 14,  8:47am, Greg Oster wrote:
} Subject: Re: Where is the component queue depth actually used in the raidf
} On Thu, 14 Mar 2013 10:32:26 -0400
} Thor Lancelot Simon <tls%panix.com@localhost> wrote:
} 
} > On Wed, Mar 13, 2013 at 09:36:07PM -0400, Thor Lancelot Simon wrote:
} > > On Wed, Mar 13, 2013 at 03:32:02PM -0700, Brian Buhrow wrote:
} > > >         Hello.  What I'm seeing is that the underlying disks
} > > > under both a raid1 set and a raid5 set are not seeing any more
} > > > than 8 active requests at once across the entire bus of disks.
} > > > This leaves a lot of disk bandwidth unused, not to mention less
} > > > than stellar disk performance.  I see that RAIDOUTSTANDING
} > > > defaults to 6 if not otherwise defined, and this suggests that
} > > > it is the limiting factor, rather than the actual number of
} > > > requests allowed to be sent to a component's queue.
} > > 
} > > It should be the sum of the number of openings on the underlying
} > > components, divided by the number of data disks in the set.  Well,
} > > roughly.  Getting it just right is a little harder than that, but I
} > > think it's obvious how.
} > 
} > Actually, I think the simplest correct answer is that it should be the
} > minimum number of openings presented by any individual underlying
} > component. I cannot see any good reason why it should be either more
} > or less than that value.
} 
} Consider the case when a read spans two stripes...  Unfortunately,
} the two halves will be issued independently, requiring two IOs on a
} given disk, even though there is only one request.
} 
} The reason '6' was picked back in the day was that it seemed to offer
} reasonable performance while not requiring a huge amount of memory to
} be reserved for the kernel.  And part of the issue there was that
} RAIDframe had no way to stop new requests from coming in and consuming
} all kernel resources :(  '6' is probably a reasonable hack for older
} machines, but if we can come up with something self-tuning I'm all for
} it...  (Having this self-tuning is going to be even more critical when
} MAXPHYS gets sent to the bitbucket and the amount of memory needed for
} a given IO increases...)
} 
} Later...
} 
} Greg Oster

        Hello.  If I understand Thor's formula right, then for a raid5
set I have with 4 components, each on a wd(ata) disk, the correct
number of outstanding requests should be limited to 4, because it
looks like our ata drivers only present 1 opening per channel.
However, increasing the outstanding request limit on this box from 6,
which is already too high according to the formula as I understand it,
to 20 increases the disk throughput on this machine by almost 50% for
many of the work loads I put on it.  I imagine there is a point of
diminishing returns in how large a queue I should allow for the
outstanding request limit, but right now it's unclear to me how to
figure out the optimal setting for this number from any underlying
capacity indicators there may be.
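
        As a concrete illustration, here is a minimal sketch of the
sort of self-tuning limit Thor and Greg describe above.  The helper
and its arguments are hypothetical (RAIDframe has no per-component
openings query today), and the slack factor is a guess, not a
measured value:

/*
 * Hypothetical: derive the outstanding-request limit from the
 * openings the underlying components actually present, instead of
 * using the hard-coded RAIDOUTSTANDING default.  Assumes
 * ncomponents >= 1.
 */
static int
rf_tune_outstanding(int ncomponents, const int *openings)
{
	int i, min_openings;

	min_openings = openings[0];
	for (i = 1; i < ncomponents; i++) {
		if (openings[i] < min_openings)
			min_openings = openings[i];
	}

	/*
	 * Per Greg's point above: a read that spans two stripes is
	 * split into two independent I/Os on a given disk, so leave
	 * some slack beyond the bare minimum of openings.
	 */
	return (min_openings * 2);
}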
        It seems like a better heuristic might be to specify a maximum
amount of memory the raidframe driver is allowed to use, and then have
it set the outstanding request count accordingly.  In the case of the
machine I refer to above, I have 2 raid sets, and the stripe size is
set to 64 blocks (32K) with 4 stripes per raid set.  With one of the
raid sets running in degraded mode, the maximum amount of memory used
by the raidframe subsystem is 10.4MB.  That's not an insignificant
amount of memory, but it's certainly not a profligate amount.  Further
thoughts?
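
        A rough sketch of that memory-budget heuristic follows.  The
budget value and the per-I/O cost estimate are assumptions for
illustration, not measured RAIDframe internals:

#include <stddef.h>	/* size_t */

/*
 * Hypothetical: given a cap on how much memory the raidframe driver
 * may reserve, derive the outstanding-request limit from a
 * worst-case estimate of the memory cost of one I/O.
 */
#define RF_MEM_BUDGET	(10 * 1024 * 1024)	/* assumed ~10MB cap */

static int
rf_tune_from_budget(size_t stripe_unit_bytes, int ncols)
{
	size_t per_io;
	int limit;

	/*
	 * Worst case: a full-stripe write in degraded mode touches
	 * every column, so approximate the per-request cost as one
	 * stripe unit of buffer space per column.
	 */
	per_io = stripe_unit_bytes * (size_t)ncols;

	limit = (int)(RF_MEM_BUDGET / per_io);
	if (limit < 1)
		limit = 1;
	return (limit);
}

Under the assumptions above (32K stripe units, 4 columns), a 10MB
budget works out to roughly 80 outstanding requests, which at least
brackets the '20' that measurably helped here.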

-Brian

