Subject: Looking for advice on using pageable memory
To: None <tech-kern@netbsd.org>
From: Julio M. Merino Vidal <jmmv84@gmail.com>
List: tech-kern
Date: 11/13/2006 13:26:57
Hi,
[ please CC me any replies ]
I'm trying to make tmpfs's metadata pageable by tweaking its custom
pool allocator (tmpfs_pool.[ch]) but need some advice. So far I've
tried several approaches and one of them works, but I doubt it is
correct. They are explained below along with some comments/questions.
1) The first thing I tried was to create an anonymous object
(with uao_create) similar to how files are handled. Then I "mapped"
the pool pages over this object as if they were file offsets. This
failed, first because you cannot have more than one window active
for the aobj at a time (ubc_alloc), and second because accessing the
contents of the aobj is not a transparent operation the way accessing
pool objects through the returned pointers is (or at least I don't
know how to handle the windows properly). I think the second point
alone is enough to discard this possibility.
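For reference, here is roughly what I attempted, as a sketch only
(the helper names are mine, error handling is omitted, and the
ubc_alloc signature may differ slightly between trees; the advice
argument is recent):

/*
 * Sketch of approach 1: back pool pages with an anonymous UVM
 * object and touch them through temporary UBC windows.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <uvm/uvm.h>

static struct uvm_object *pool_aobj;

void
pool_aobj_init(vsize_t size)
{
	/* One anonymous object backs the whole pool. */
	pool_aobj = uao_create(size, 0);
}

void
pool_aobj_write(voff_t off, const void *src, size_t len)
{
	vsize_t winlen = len;
	void *win;

	/*
	 * Map a temporary window over the object, the way the file
	 * read/write paths do.  The pointer is only valid until
	 * ubc_release(), and only a limited number of windows can
	 * be active, which is what makes this non-transparent for
	 * pool consumers.  (A real version would loop when the
	 * window comes back shorter than len.)
	 */
	win = ubc_alloc(pool_aobj, off, &winlen, UVM_ADV_NORMAL,
	    UBC_WRITE);
	memcpy(win, src, winlen);
	ubc_release(win, 0);
}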
2) Then I tried to tweak the uvm_km_alloc_poolpage and other
related functions (their *_cached and *free* counterparts) to request
pageable kernel memory by using the UVM_KMF_PAGEABLE flag
(instead of the UVM_KMF_WIRED used currently). Similarly, I replaced
the pmap_kenter_pa calls with pmap_enter, which lets me omit the
wired flag.
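In rough terms, the change was along these lines (a sketch, not the
actual diff; the function names are placeholders):

/*
 * Sketch of approach 2: ask for pageable instead of wired memory,
 * and enter unwired mappings.
 */
#include <sys/param.h>
#include <uvm/uvm.h>

vaddr_t
poolpage_alloc_sketch(void)
{
	/* Before: uvm_km_alloc(kernel_map, PAGE_SIZE, 0, UVM_KMF_WIRED); */
	return uvm_km_alloc(kernel_map, PAGE_SIZE, 0, UVM_KMF_PAGEABLE);
}

void
poolpage_enter_sketch(vaddr_t va, paddr_t pa)
{
	/* Before: pmap_kenter_pa(va, pa, VM_PROT_READ | VM_PROT_WRITE); */

	/*
	 * After: a regular pmap_enter(); leaving PMAP_WIRED out of
	 * the flags is what avoids wiring the page.
	 */
	pmap_enter(pmap_kernel(), va, pa,
	    VM_PROT_READ | VM_PROT_WRITE, 0);
	pmap_update(pmap_kernel());
}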
This worked (the system didn't crash, that is), but the system could
not send the pages to swap at all. It also reached a limit where it
could not allocate more pages (even when there was plenty of free
swap space), so it started thrashing like crazy.

Does setting a page UVM_KMF_PAGEABLE in the main kernel map have any
effect? It doesn't seem so...
3) Finally, I mimicked what exec_map does: I allocated a submap of
kernel memory with uvm_km_suballoc, setting VM_MAP_PAGEABLE on it,
and then allocated memory from that submap with uvm_km_alloc,
passing the UVM_KMF_PAGEABLE flag.
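Concretely, the setup looks like this (a sketch; the map size and
names are placeholders, and the last argument of uvm_km_suballoc may
differ between trees):

/*
 * Sketch of approach 3: a pageable submap of kernel_map, set up
 * the way exec_map is.
 */
#include <sys/param.h>
#include <uvm/uvm.h>

#define TMPFS_POOL_MAP_SIZE	(8 * 1024 * 1024)	/* arbitrary */

static struct vm_map *tmpfs_pool_map;

void
tmpfs_pool_map_init(void)
{
	vaddr_t minaddr, maxaddr;

	/* Carve a pageable submap out of kernel_map, like exec_map. */
	tmpfs_pool_map = uvm_km_suballoc(kernel_map, &minaddr, &maxaddr,
	    TMPFS_POOL_MAP_SIZE, VM_MAP_PAGEABLE, FALSE, NULL);
}

vaddr_t
tmpfs_pool_page_alloc(void)
{
	/*
	 * Pageable memory from the submap; the returned address
	 * stays valid across page-outs, so the pool code can use
	 * it transparently.
	 */
	return uvm_km_alloc(tmpfs_pool_map, PAGE_SIZE, 0,
	    UVM_KMF_PAGEABLE | UVM_KMF_WAITVA);
}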
This worked and properly made the tmpfs metadata pageable. I was
able to create many more files than the RAM was supposed to allow
(running under QEMU), and the system used swap as required. This
also has the benefit of being transparent to the code (contrary to
1, AFAICT) because all addresses live in the same virtual address
space.
This feels like the proper way to go, but there is the problem of
how to allocate the submap, because it requires setting a maximum
size beforehand. How do I know what size to set?
Should each pool use its own submap? Or should there be a
single kernel submap for all pageable pools? The latter seems
neater, and it could easily be used to implement a generic pageable
pool allocator.
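For that single-submap option, I imagine a generic allocator along
these lines (again just a sketch, reusing the tmpfs_pool_map from
the previous sketch; the hook names are mine, per pool(9)):

/*
 * Sketch of a generic pageable pool allocator: pool(9) backend
 * hooks that take their pages from the shared pageable submap.
 */
#include <sys/param.h>
#include <sys/pool.h>
#include <uvm/uvm.h>

static void *
pageable_pool_page_alloc(struct pool *pp, int flags)
{
	/* Honor the pool's sleep/no-sleep request. */
	return (void *)uvm_km_alloc(tmpfs_pool_map, PAGE_SIZE, 0,
	    UVM_KMF_PAGEABLE |
	    ((flags & PR_WAITOK) ? UVM_KMF_WAITVA : UVM_KMF_NOWAIT));
}

static void
pageable_pool_page_free(struct pool *pp, void *v)
{
	uvm_km_free(tmpfs_pool_map, (vaddr_t)v, PAGE_SIZE,
	    UVM_KMF_PAGEABLE);
}

struct pool_allocator pool_allocator_pageable = {
	.pa_alloc = pageable_pool_page_alloc,
	.pa_free = pageable_pool_page_free,
	.pa_pagesz = PAGE_SIZE,
};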
(I'm aware that I've probably mentioned obvious details above. Also
some may be completely wrong. But I'm new to the UVM interface/code
and am just experimenting so far to gather some knowledge.)
So, do any of the above approaches make sense? If not, any advice on
what else I should look at?
Thank you.
--
Julio M. Merino Vidal <jmmv84@gmail.com>
The Julipedia - http://julipedia.blogspot.com/