WAPL/RAIDframe performance problems



So apart from the WAPL panic (which I'm currently unable to reproduce), I seem 
to be facing two problems:

1. A certain svn update command is ridiculously slow on my to-be file server.
2. During the svn update, the machine partially locks up and fails to respond 
   to NFS requests.

There is little I feel I can do to analyze (2). I posted a bunch of crash(8) 
traces, but that doesn't seem to have helped.

There seem to be two routes I can pursue in analyzing (1):
A. Given that the svn update is fast on a single-disc setup and slow on the file 
   server (which has a RAID, larger blocks and whatnot), find out what the 
   significant difference between the two setups is.
B. Given that the only unusual things the svn update command does are creating a 
   bunch of .lock files, doing a lot of stat()s and then unlinking the .lock 
   files, find some simple commands that let others reproduce the problem.

Regarding (B), a simple command somewhat mimicking the troublesome svn update 
seems to be (after mkdir'ing the 3000 dirs):

        time sh -c 'for i in $(seq 1 3000); do touch $i/x; done;
                    for i in $(seq 1 3000); do rm $i/x; done'
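
A fully self-contained version of that substitute (the flat 1..3000 directory
layout under a scratch mount point is just my approximation of the svn working
copy; path names are placeholders) would be something like:

        #!/bin/sh
        # Create a scratch tree of 3000 directories, then time the
        # create-then-unlink dance that the svn .lock files amount to.
        mkdir -p /mnt/locktest && cd /mnt/locktest || exit 1
        for i in $(seq 1 3000); do mkdir -p $i; done
        time sh -c 'for i in $(seq 1 3000); do touch $i/x; done;
                    for i in $(seq 1 3000); do rm $i/x; done'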

Regarding (A), there seems to be no single difference explaining the whole 
performance degradation, so I tried to test intermediate steps.

We start with a single SATA disc on a recent 5.1 system.
With WAPL, the svn update takes 5s to 7s (depending on the FFS version and the 
fsbsize) while the touch/rm dance takes 4s.
Disabling WAPL makes the svn update take 5s (i.e. better or no worse than with 
WAPL enabled), while the touch/rm slows down to almost 14s.
With soft updates enabled, the svn update finishes in 1.25s and the touch/rm in 4s.
Write speed (dd) on the file system is 95MB/s.
- So the initial data point is 5s for svn and 4s for the substitute, on a file 
  system with 95MB/s write throughput.
- We also note that softdep outperforms WAPL by a factor of 4 for the svn 
  command and plain FFS performs no worse than WAPL.
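
For reference, toggling WAPL/softdep and measuring the raw write speed boils
down to the standard mount options plus a plain dd, roughly as follows (device,
mount point and transfer size are placeholders):

        mount -o log     /dev/wd0e /mnt     # WAPL on
        mount -o softdep /dev/wd0e /mnt     # soft updates instead (5.1 only)
        mount            /dev/wd0e /mnt     # plain FFS
        dd if=/dev/zero of=/mnt/zz bs=64k count=16384   # ~1GB sequential write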

We now move to a plain mpt(4) 7200rpm SAS disc (HUS723030ALS640, if anyone 
cares) on the 6.0 system.
Without WAPL, the svn update takes (on different FFS versions and fsbsizes) 
5s to 7s. The touch/rm takes 9.5 to 19s.
With WAPL, svn takes 9s to 13s and touch/rm 8 to 9.5s.
No softdeps on 6.0 to try.
Write speed to fs is 85MB/s.
So we have:
- without WAPL, both "the real thing" and the substitute are roughly as fast 
  as on the SATA system (which has slightly higher fs write throughput).
- with WAPL, both commands are significantly slower than on the SATA box.
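
The different FFS versions and fsbsizes above are simply different newfs
parameters; for illustration, the two extremes I mean (the raw device name is
a placeholder):

        newfs -O 1 -b 8192  -f 1024 /dev/rsd0e     # FFSv1, 8k/1k
        newfs -O 2 -b 16384 -f 2048 /dev/rsd0e     # FFSv2, 16k/2k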

Now to a two-component Level 1 RAID on two of these discs. We chose an SpSU 
value of 32 and a matching fsbsize of 16k.
The svn update takes 13s with WAPL and just under 6s without.
The touch..rm test takes 22s with WAPL and 19s without.
Write speed is 18MB/s, read speed 80MB/s.
So on the RAID 1:
- Without WAPL, things are roughly as fast as on the plain disc.
- With WAPL, both svn and the test are slower than without (with the real thing 
  worse than the substitute)!
- Read speed is as expected, while writing is four times slower than I would 
  expect, given the fsbsize-equals-stripe-size relation that is optimal for 
  writing.
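
For reference, a minimal RAIDframe configuration matching those parameters
would look roughly like this (component names are placeholders, and the initial
raidctl -I / -iv labelling and parity initialisation are elided):

        # raid1.conf -- two-component Level 1, 32 sectors (16k) per stripe unit
        START array
        # numRow numCol numSpare
        1 2 0

        START disks
        /dev/sd0e
        /dev/sd1e

        START layout
        # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
        32 1 1 1

        START queue
        fifo 100

        # configure and create the file system:
        raidctl -C raid1.conf raid0
        newfs -O 2 -b 16384 -f 2048 /dev/rraid0e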

Next a five-component Level 5 RAID. Again, the stripe size matches the fsbsize 
of 16k: with an SpSU of 8, the four data components per stripe hold 
4 x 8 x 512 bytes = 16k.
Here, the update takes 56s with WAPL and just 31s without.
The touch..rm test takes 1:50 with WAPL and 0:56 without.
Write speed on the fs is 25MB/s, read speed 190MB/s.
So on the RAID 5:
- Both the "real thing" and the substitute are significantly (about a factor of
  five) slower than on the RAID 1 although the RAID's stripe size matches the 
  file system block size and we should have no RMW cycles.
- OTOH, write and read speeds are faster than on RAID 1; still, writing is 
  much, much slower than reading (again, with an SpSU optimized for writing).
- Enabling WAPL _slows things down_ by a factor of two.
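
The corresponding Level 5 layout section, plus the arithmetic behind the
no-RMW expectation (again only a sketch, with the component list elided):

        START layout
        # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
        8 1 1 5
        # Data per full stripe: (5 - 1) data components x 8 sectors x 512 bytes
        # = 16384 bytes, i.e. exactly one 16k file system block per stripe,
        # so block-sized, block-aligned writes should be full-stripe writes
        # and never need a read-modify-write of the parity.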

Simultaneously quadrupling both SpSU and fsbsize (to 32 and 64k) doesn't change 
much.

But last, on a Level 5 RAID with an SpSU of 128 and a 64k fsbsize (i.e., one 
file system block per stripe unit, not per stripe):
The svn update takes 36s without WAPL and just 1.7s with WAPL, but seven 
seconds later, the discs are 100% busy for another 33s. So, in fact, it takes 
38s until the operation really finishes.
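
That last layout, for completeness (again only a sketch; the device name is a
placeholder):

        START layout
        # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
        128 1 1 5

        newfs -O 2 -b 65536 -f 8192 /dev/rraid0e    # 64k/8k file system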


Now, that's a lot of data (in fact, about one week of analysis).
Can anyone make sense out of it? Especially:
- Why is writing to the RAID so slow even with no RMWs?
- Why does WAPL slow things down?


If it weren't for the softdep-related panics I suffered on the old (active, 4.0) 
file server, which no one was able to fix, I would simply cry that I want my 
softdeps back. As it is, I need help.

