Subject: Re: Smoother writing for LFS
To: Thor Lancelot Simon <tls@rek.tjls.com>
From: Konrad Schroder <perseant@hhhh.org>
List: tech-kern
Date: 10/23/2006 17:42:29
On Mon, 23 Oct 2006, Thor Lancelot Simon wrote:
> 1) Rather than a global estimate, maintain an estimate per-filesystem
> of the current number of dirty pages. I'm not sure how hard this
> would be, and would appreciate feedback.
>
> 2) Maintain, per filesystem, minimum and maximum "target write sizes".
>
> 3) Once per second, traverse the list of filesystems, and for any
> filesystem with more than the minimum outstanding, clean until there's
> nothing left or we hit the maximum.
For LFS, this is almost done already. We keep a per-filesystem page
count, though it may be somewhat inaccurate since it isn't kept by the VM
system itself. lfs_writerd wakes up every 0.1 seconds to see if anything
needs writing; it would be trivial to add a test against fs->lfs_pages >
lfs_fs_pagetrip there as well, and flush the whole filesystem at that
point.
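
Roughly what I have in mind is sketched below.  This is only a user-space
toy with made-up struct fields and a fake flush routine, not the real
lfs_writerd code, but it shows where the per-filesystem trip-point check
would sit in the 0.1-second wakeup loop:

/*
 * Illustrative sketch only: the struct fields and flush_whole_fs()
 * are invented names standing in for the real LFS machinery.
 */
#include <stdio.h>
#include <unistd.h>

struct fakefs {
	const char *name;
	long	dirty_pages;	/* per-filesystem dirty page count */
	long	pagetrip;	/* flush threshold, like lfs_fs_pagetrip */
};

static void
flush_whole_fs(struct fakefs *fs)
{
	/* stand-in for writing back everything dirty on this fs */
	printf("flushing %s (%ld dirty pages)\n", fs->name, fs->dirty_pages);
	fs->dirty_pages = 0;
}

int
main(void)
{
	struct fakefs fss[] = {
		{ "/home", 2000, 8192 },
		{ "/src",   500, 8192 },
	};
	int i, tick;

	for (tick = 0; tick < 10; tick++) {
		for (i = 0; i < 2; i++) {
			/* pretend some pages were dirtied this interval */
			fss[i].dirty_pages += 1000;

			/* the extra test proposed above */
			if (fss[i].pagetrip &&
			    fss[i].dirty_pages > fss[i].pagetrip)
				flush_whole_fs(&fss[i]);
		}
		usleep(100000);		/* wake up every 0.1 seconds */
	}
	return 0;
}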
In the past when I've tried doing something like what you're describing,
performance always degraded, so I didn't pursue it further. I wasn't, of
course, testing the specific case you're trying to address. It also sounded
at the time as if keeping track of the number of dirty pages per mount
point at the VM level would be an overall loss (especially if LFS is the
only fs that ever uses the data), so it's possible that the performance
loss was due to an inaccurate count of dirty pages.
> It is easy for LFS to track the current write bandwidth of the disk, so
> we could set the maximum size so that the disk is never more than X%
> busy with background writes in any given second.
How would you calculate X?
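Applying it is simple enough: if the disk is sustaining, say, 40 MB/s and
we picked X = 25, background writes would be capped at roughly 10 MB in any
one-second window.  It's choosing the 25 in the first place that I don't
see how to do.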
Take care,
Konrad Schroder
perseant@hhhh.org