Subject: Some LFS vs. FFS+softdep test results
To: tech-kern@netbsd.org, tech-perform@netbsd.org
From: Frank van der Linden <fvdl@netbsd.org>
List: tech-perform
Date: 03/30/2003 03:48:23
I ran some tests comparing FFS and LFS performance. System description:

	SiS64x Asus P4 motherboard
	2.4GHz Pentium 4
	1G of DDR333 memory
	additional Adaptec 7892 controller
sd0 at scsibus0 target 0 lun 0: <QUANTUM, ATLAS10K3_18_WLS, 020W> disk fixed
sd0: 17537 MB, 31022 cyl, 2 head, 578 sec, 512 bytes/sect x 35916548 sectors
sd0: sync (25.0ns offset 127), 16-bit (80.000MB/s) transfers, tagged queueing

On sd0, I created a partition spanning the whole disk, using a 2k fragment
size and a 16k block size. This partition had its type changed and was
prepared using newfs(8) or newfs_lfs(8) as appropriate. Both were
run with the default parameters (i.e. no flags specified). The kernel
was -current as of March 29th.
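
For reference, the preparation amounts to something like the following
sketch (the partition letter 'e' is assumed here; fragment and block
size come from the disklabel, so newfs and newfs_lfs need no flags):

	disklabel -e sd0                  # partition e: whole disk, fsize 2048,
	                                  # bsize 16384, fstype 4.2BSD or LFS
	newfs /dev/rsd0e                  # FFS case, default parameters
	newfs_lfs /dev/rsd0e              # LFS case, default parameters
	mount -o softdep /dev/sd0e /mnt   # FFS+softdep: softdep is a mount option
	mount -t lfs /dev/sd0e /mnt       # LFS case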

I compared the metadata performance by running:

	cd /mnt ; time tar xf ~fvdl/pkgsrc.tar ; time rm -rf pkgsrc
	cd / ; time umount /mnt

In this test, pkgsrc.tar is an older pkgsrc tar file, located on a
different disk (IDE disk attached to pciide).
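
In the results below, the three lines under each heading are the tar, rm
and umount times, respectively. The fields are csh's time output: user
time, system time, elapsed time, CPU percentage, memory usage, block
input+output operations, and page faults plus swaps.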


FFS+softdep:
0.403u 8.270s 0:30.10 28.8%     0+0k 3488+23072io 0pf+0w
0.154u 0.840s 0:15.79 6.2%      0+0k 10751+13463io 6pf+0w
0.000u 0.524s 0:02.68 19.4%     0+0k 5+528io 8pf+0w

LFS:
0.565u 6.423s 0:10.37 67.3%     0+0k 3643+2326io 0pf+0w
0.193u 1.410s 0:03.75 42.6%     0+0k 11948+240io 6pf+0w
0.000u 0.074s 0:00.20 35.0%     0+0k 6+33io 8pf+0w

LFS beats FFS+softdep hands down, which is not surprising.
(For a laugh, here are the plain FFS numbers, without softdep:
0.564u 4.208s 9:08.21 0.8%      0+0k 4102+133119io 15pf+0w
0.117u 1.486s 7:40.36 0.3%      0+0k 7561+105979io 6pf+0w
0.000u 0.018s 0:00.10 10.0%     0+0k 3+7io 8pf+0w
).


The next test was a complete build of -current. Source, objects,
tools and destination directory all on the same filesystem.
This test showed no significant difference between the two; both took
about 43 minutes. With 1G of memory, a large part of the working set
stays in core, which probably kept the differences small.


Lastly, I ran bonnie -s 2000, i.e. with a 2000MB test file, twice the
machine's memory, so that caching doesn't dominate the results.

FFS:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         2000 49712 46.0 47548 19.3 11092  3.8 47817 49.0 48925  9.6 217.4  1.2


LFS:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         2000 34875 40.8 39680 26.9 22030 17.2 47937 49.1 46426  9.1 240.1  1.3


I don't quite understand why LFS is slower at sequential writes; this
needs to be looked at. Bumping the segment size to a much larger value
than the default 1M (I made it 1G, see the sketch below) didn't help
much. It's also interesting that LFS is twice as fast in the 'rewrite'
case. The 'rewrite' case of bonnie has suffered somewhat since UBC was
brought in.
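
The segment size experiment just means recreating the filesystem with a
non-default segment size. A sketch, assuming the same sd0e partition as
above, and assuming -B is the newfs_lfs(8) option that takes the segment
size in bytes:

	newfs_lfs /dev/rsd0e                 # default, 1M segments
	newfs_lfs -B 1073741824 /dev/rsd0e   # 1G segments (-B flag assumed)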

A preliminary conclusion is that LFS is now pretty stable for normal
usage (although I didn't create any nearly-full-disk situations,
which make LFS' life hard), and that it does quite well. The sequential
write case is worth some investigation.

- Frank

-- 
Frank van der Linden                                            fvdl@netbsd.org
===============================================================================
NetBSD. Free, Unix-like OS. > 45 different platforms.    http://www.netbsd.org/