
re: Improving RAIDframe Parity Handling: The Diff




hi!


i've been running with your patch for a day now and have tried pretty
hard to break it, without success.  my notes:


- overall, i'm very impressed.  the patch looks clean and i've not observed
  any problems i would consider showstoppers.  i didn't look really closely
  at the changes themselves, though.

- seems to deal fine with normal reboots and also with hard power failures

- newfs tends to dirty a huge portion of the zones.  for my 250GiB
  filesystem, newfs dirtied 1491 out of 4096 zones, which is a few more
  than the total number of cyl groups:

    using 1425 cylinder groups of 184.30MB, 11795 blks, 23296 inodes.

  these zones cleared up a few minutes later without actually syncing
  1491 * 64MB (~93GiB), so this will only be a problem if the machine
  crashes in the minutes right after a newfs

- with 10 pkgsrc extractions and one 'cvs co src xsrc' (plus rm -rf's of
  both) all running in parallel, i ended up with about 250 dirty zones
  out of 4096, which seems pretty high.  i haven't seen it go beyond 514
  except in the newfs case... but peaking at just over 1/8th of all the
  zones seems like a lot.

- (nit) "raidctl -s" output is confusing during parity reconstruction.
  the percentage done doesn't make sense to me now.  my guess is that
  regions which are in sync but beyond the current sync point are not
  counted as done, so the number grows at strange speeds: slowly while
  inside a dirty zone, and rapidly while clean zones are skipped.  (a
  sketch of the accounting i'd expect follows these notes.)

- have not done any performance measurements

- might be nice to add a comment at the RAIDFRAME_SET_COMPONENT_LABEL
  ioctl noting that the newly #if 0'ed code is not well tested?
  (suggested wording after these notes.)

- it would be nice to get answers from someone (hi greg!) to your
  XXXjld comments
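
re the "raidctl -s" percentage: here's a rough sketch of the accounting
i had in mind.  all of these names are made up, since i haven't looked
at how the rewrite loop actually reports progress -- the point is just
that a skipped clean zone gets credited as done immediately:

    #include <stdint.h>

    /*
     * hypothetical progress accounting for the parity rewrite; none of
     * these names exist in the patch, they are for illustration only.
     * work is counted in stripes, and a clean zone that gets skipped is
     * credited all of its stripes at once, so the percentage grows at a
     * steady rate instead of crawling through dirty zones and jumping
     * across clean ones.
     */
    unsigned int
    parity_rewrite_percent(uint64_t zones_total, uint64_t zones_done,
        uint64_t stripes_per_zone, uint64_t stripes_done_in_zone)
    {
        /* zones_done counts rewritten and skipped-clean zones alike */
        uint64_t total = zones_total * stripes_per_zone;
        uint64_t done = zones_done * stripes_per_zone +
            stripes_done_in_zone;

        if (total == 0)
            return 100;
        return (unsigned int)(done * 100 / total);
    }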

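and for the RAIDFRAME_SET_COMPONENT_LABEL nit, i was thinking of
something as small as this next to the #if 0 (wording up to you, of
course):

    /*
     * XXX the #if 0'ed code below has not been well tested since the
     * parity handling changes; re-enable with care.
     */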

great work!


.mrg.

