Subject: Re: interactive responsiveness
To: current-users@NetBSD.org
From: Alan Barrett <email@example.com>
Date: 02/04/2004 09:42:32

On Mon, 02 Feb 2004, Steve Bellovin wrote:
> Using a kernel from Saturday, with NEW_BUFQ_STRATEGY set but otherwise
> default options (and in particular with default sysctl settings for
> vm.), I'm seeing *excellent* responsiveness. I don't know what the
> changes were, but from my perspective, they're working just fine.
> (Note: I do not use softdep)

I still get poor interactive responsiveness during heavy disk activity.
I have raidframe (two IDE disks in a RAID1 array), several FFS1
filesystems on the raid device, softdep enabled on most of the
filesystems, several nullfs mounts. When I do something disk-intensive
(rsync, cvs update, or cp a large tree), I find that interactive
performance in other windows often freezes -- sometimes for as long as
10 seconds at a time. This has been going on for months, and might even
be worse with NEW_BUFQ_STRATEGY than without it. I have tried various
sysctl vm.* settings, without ever finding good values.

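Concretely, the sort of vm.* tuning I mean can be set persistently in
/etc/sysctl.conf; here is a sketch of the kind of values I tried -- the
numbers are purely illustrative (percentages of physical memory), not a
recommendation:

```
# /etc/sysctl.conf fragment -- illustrative values only
# cap the share of memory that cached file data may hold,
# so a big copy cannot evict everything else
vm.filemax=30
# keep some minimum memory reserved for anonymous and executable pages
vm.anonmin=10
vm.execmin=5
# buffer cache size, as a percentage of RAM
vm.bufcache=10
```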
I suspect that configuring raidframe to use a queue strategy other than
the default "fifo 100" might help, but there seems to be no way of doing
that in conjunction with raidframe autoconfig.
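For comparison, when the set is configured from a file with raidctl -c
rather than autoconfigured, the queue strategy is chosen in the "START
queue" section of the raid(4) configuration file. A sketch follows; the
disk names and layout numbers are illustrative, and I am assuming from a
quick look at the RAIDframe sources that "cvscan" is one of the
alternative queueing policies:

```
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/wd0a
/dev/wd1a

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
# the default is "fifo 100"; cvscan may be one alternative
cvscan 100
```

With autoconfig there appears to be no equivalent place to say this,
which is exactly my problem.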
How are hints about relative time sensitivity or importance of different
operations passed through from the file system to raidframe to the
disk? Does this framework (whatever it is) work in the face of multiple
layers, such as nullfs on ffs on cgd on raid?

--apb (Alan Barrett)