tech-kern archive


Re: Reclaiming vnodes

On Monday, Sep 14, 2009, at 5:55 AM, matthew green wrote:

> i'm still not entirely sure what the point of this patch is.  i
> understand it helps zfs, but i don't understand why or how.  i'm
> also curious what sort of testing you've done.  i do not believe
> that testing in qemu is sufficient.  how does it affect systems
> that recycle vnodes a lot, such as older systems running a build?

I do not have such a system yet. What this patch does is use another thread to reclaim vnodes, so that vnodes are reclaimed in a different context from the one in which they are allocated.

Vnodes are allocated only if there are no vnodes on the free list. If there is a free vnode on the list, it will be recycled, which actually means that VOP_RECLAIM will be called on it.
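To illustrate, the recycling path looks roughly like this (a simplified sketch only, not the actual vfs_subr.c code; vnalloc() and vnode_free_list_lock are taken from my reading of the current sources):

/*
 * Simplified sketch of the current behaviour: when the free list is
 * non-empty, getnewvnode() recycles an existing vnode, so VOP_RECLAIM
 * runs in the context of whoever asked for the new vnode.
 */
vnode_t *
getnewvnode_sketch(void)
{
	vnode_t *vp;

	mutex_enter(&vnode_free_list_lock);
	vp = TAILQ_FIRST(&vnode_free_list);
	if (vp == NULL) {
		/* Free list empty: allocate a brand new vnode. */
		mutex_exit(&vnode_free_list_lock);
		return vnalloc(NULL);
	}
	/* Free vnode available: recycle it in the caller's context. */
	TAILQ_REMOVE(&vnode_free_list, vp, v_freelist);
	mutex_exit(&vnode_free_list_lock);
	VOP_RECLAIM(vp);	/* calls back into the fs that owned vp */
	return vp;
}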

In zfs there is a problem with calling getnewvnode from zfs_zget: in some cases getnewvnode picks a vnode from the free list and calls VOP_RECLAIM on it. This can lead to a deadlock, because VOP_RECLAIM can try to lock the same mutex that is already held by zfs_zget. This can't be easily fixed without touching and changing the whole zfs locking protocol.
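The pattern is simply a non-recursive mutex being re-entered by the same thread through the reclaim callback. A tiny userland analogy (illustrative only, with made-up names, not the real zfs code):

#include <pthread.h>
#include <stdio.h>

/* Stands in for the lock zfs_zget() holds while calling getnewvnode(). */
static pthread_mutex_t zget_lock;

static void
reclaim_like(void)
{
	/* Called back in the same thread that already holds zget_lock. */
	pthread_mutex_lock(&zget_lock);	/* blocks forever: self-deadlock */
	pthread_mutex_unlock(&zget_lock);
}

static void
getnewvnode_like(void)
{
	/* Pretend the free list was non-empty, so we have to reclaim. */
	reclaim_like();
}

int
main(void)
{
	pthread_mutexattr_t attr;

	/* Plain non-recursive mutex, like a kernel kmutex. */
	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);
	pthread_mutex_init(&zget_lock, &attr);

	pthread_mutex_lock(&zget_lock);		/* what zfs_zget() does */
	printf("calling getnewvnode_like(); this will hang\n");
	fflush(stdout);
	getnewvnode_like();			/* never returns */
	pthread_mutex_unlock(&zget_lock);
	return 0;
}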

With the patch:

Vnodes are only allocated and there is no vnode recycling. If the number of used vnodes in the system rises above kern.vnodes_num_hiwat (in percent of maxvnodes), the vrele thread is woken up and starts releasing free vnodes until the number of used vnodes drops below kern.vnodes_num_lowat.
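Roughly, the thread does something like this (a simplified sketch, not the actual patch code; vrele_cv, vnodes_num_hiwat and vnodes_num_lowat are placeholder names for the condvar and the sysctl-backed thresholds, and the real code goes through the usual vclean()/vnfree() path rather than calling VOP_RECLAIM() directly):

/*
 * Rough sketch of the reclaiming thread.  Assumed globals: vrele_cv,
 * vnodes_num_hiwat, vnodes_num_lowat (thresholds in percent of
 * desiredvnodes).
 */
static void
vrele_thread_sketch(void *cookie)
{
	vnode_t *vp;

	for (;;) {
		mutex_enter(&vnode_free_list_lock);

		/* Sleep until the high-water mark is crossed. */
		while (numvnodes < desiredvnodes * vnodes_num_hiwat / 100)
			cv_wait(&vrele_cv, &vnode_free_list_lock);

		/* Release free vnodes until we are below the low-water mark. */
		while (numvnodes > desiredvnodes * vnodes_num_lowat / 100 &&
		    (vp = TAILQ_FIRST(&vnode_free_list)) != NULL) {
			TAILQ_REMOVE(&vnode_free_list, vp, v_freelist);
			mutex_exit(&vnode_free_list_lock);
			/*
			 * No fs locks are held here, so reclaiming cannot
			 * deadlock against zfs_zget() and friends.
			 */
			VOP_RECLAIM(vp);
			vnfree(vp);
			mutex_enter(&vnode_free_list_lock);
		}

		mutex_exit(&vnode_free_list_lock);
	}
}

Allocation then only ever takes the "allocate a fresh vnode" path and just wakes this thread (e.g. with cv_broadcast(&vrele_cv)) when it pushes numvnodes over the high-water mark.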

If we really want to push FS locking down to the FS level, something like this patch will be needed in the future to simplify locking inside the filesystems.

> please get a bunch more testing done with this before committing,
> comparing loads that *would* have led to recycle with the old

OK, I will try to do some more tests tomorrow.

> some comments on the code itself:

Thanks, I have fixed them.


