tech-kern archive


Re: RAIDframe performance vs. stripe size



On Thu, May 10, 2012 at 11:47:36AM -0600, Greg Oster wrote:
> On Thu, 10 May 2012 13:23:24 -0400
> Thor Lancelot Simon <tls%panix.com@localhost> wrote:
> 
> > On Thu, May 10, 2012 at 11:15:09AM -0600, Greg Oster wrote:
> > > 
> > > What you're typically looking for in the parallelization is that a
> > > given IO will span all of the components.  In that way, if you have
> > > n
> > 
> > That's not what I'm typically looking for.  You're describing the
> > desideratum for a maximum-throughput application.  Edgar is describing
> > the desideratum for a minimum-latency application.  No?
> 
> I think what I describe still works for minimum-latency too...  where
> it doesn't work is when your IO is so small that the time to actually
> transfer the data is totally dominated by the time to seek to the data.

What if I have 8 simultaneous, unrelated streams of I/O, on a 9 data-disk
set?  Like, say, 8 CVS clients all at different points fetching a repository
that is too big to fit in RAM?

If the I/Os are all smaller than the stripe size, the heads should be able
to service them in parallel.

If they are stripe-sized or larger, each one will occupy every spindle, so
they will have to be serviced in sequence -- it will take 8 times as long.
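
To make that concrete, here is a minimal sketch of the striping
arithmetic -- not RAIDframe's actual mapping code.  It assumes 512-byte
sectors, simple left-to-right striping, and no parity rotation; the
function and parameter names are made up for the example.

#include <stdio.h>

/* How many distinct data disks does an I/O of "len" sectors starting
 * at logical sector "off" touch, given "su" sectors per stripe unit
 * and "ndata" data disks? */
static int
disks_touched(long off, long len, long su, int ndata)
{
	long nsu = (off + len - 1) / su - off / su + 1;

	return nsu >= ndata ? ndata : (int)nsu;
}

int
main(void)
{
	/* 9 data disks, 64-sector (32 KB) stripe units. */
	printf("16 KB read:  %d disk(s)\n", disks_touched(0, 32, 64, 9));
	printf("288 KB read: %d disk(s)\n", disks_touched(0, 576, 64, 9));
	return 0;
}

Eight concurrent 16 KB reads can therefore keep eight different spindles
busy at once, while eight full-stripe reads each tie up all nine data
disks and have to queue behind one another.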

In practice, this is why I often layer a ccd with a huge (and prime)
"stripe" size over RAIDframe.  It's also a good use case for LVMs.  But
it should be possible to achieve the same effect entirely at the RAID layer
through proper stripe size selection.  In this regard, RAIDframe seems to be
optimized for throughput alone.
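
For what it's worth, the ccd trick relies on the same arithmetic one layer
up: with an interleave much larger than any single request, every request
falls entirely within one component, so up to ndisks requests can be in
flight on different spindles at once.  Another hypothetical sketch (the
interleave, offsets, and helper are invented for illustration; ccd's real
bookkeeping is more involved):

#include <stdio.h>

/* Which component a logical sector lands on for a ccd-style interleave
 * of "ileave" sectors across "ncomp" components (illustration only). */
static int
component_of(long sector, long ileave, int ncomp)
{
	return (int)((sector / ileave) % ncomp);
}

int
main(void)
{
	/* Eight unrelated streams, each currently reading near a
	 * different offset (in sectors), over 9 components with a
	 * huge prime interleave (65537 sectors, roughly 32 MB). */
	long off[8] = { 1000, 250000, 510000, 770000,
			1030000, 1290000, 1550000, 1810000 };
	int i;

	for (i = 0; i < 8; i++)
		printf("stream %d -> component %d\n", i,
		    component_of(off[i], 65537, 9));
	return 0;
}

With a 64-sector interleave the same eight streams would each be smeared
across every component; here each one stays on a single spindle for tens
of megabytes at a time, and at worst two streams occasionally share a
component.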

-- 
Thor Lancelot Simon                                          
tls%panix.com@localhost
  "The liberties...lose much of their value whenever those who have greater
   private means are permitted to use their advantages to control the course
   of public debate."                                   -John Rawls

