Subject: Re: ubc_inactive() hackage, some analysis
To: None <root@ihack.net, tech-kern@netbsd.org>
From: None <eeh@netbsd.org>
List: tech-kern
Date: 01/28/2001 02:05:40
	So, if I understand this correctly:

	The main reason this will work -- if at all -- is that it causes the
	active page scan to terminate early because of an excess of inactive
	pages.  This effect will be most notable when the process working set
	is smaller than `memory - inactive_target'.  In order for *any* anon
	pages to get paged out, the number of active pages must become greater
	than this figure, and the number of active pages will never drop below
	it unless processes exit.
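To make that arithmetic concrete, here is a toy sketch (hypothetical
numbers, not actual UVM code) of the claimed threshold: anon pages only
become pageout candidates once the active count exceeds
`memory - inactive_target':

```python
# Toy illustration of the threshold described above (not UVM code).
# Assumes the simplified accounting: anon pages can be paged out only
# once active pages exceed memory - inactive_target.

def anon_pageout_possible(memory, inactive_target, active):
    """Return True when the active page count is high enough that
    the scanner will start deactivating (and thus paging out) anon
    pages, under the simplified model above."""
    return active > memory - inactive_target

# With a hypothetical 16384 pages of memory and an inactive target of
# 2048 pages, the working set must exceed 14336 pages before any anon
# page is touched -- below that, the cache just sits at the target.
print(anon_pageout_possible(16384, 2048, 14000))  # working set too small
print(anon_pageout_possible(16384, 2048, 15000))  # anon pageout begins
```

The point of the sketch is only the inequality; the real page daemon
has more state (free targets, wired pages) than this models.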

	What this means is that, on a machine with less memory than desired
	process memory (such as a multi-user system with lots of idle Emacsen
	and IRC clients), the UBC cache will generally stabilize at around
	inactive_target, and very rarely poke above it -- even though there
	may be a lot of idle pages that could be paged out.

	Note that this is only commentary; I'm not drawing any conclusion here
	about whether this effect is good or bad.  However, I do believe that
	the major effect of this change is primarily based on subtle side
	effects, and as such MUST be clearly documented.  Otherwise a user
	cannot reasonably be expected to understand the performance of their
	machine at all.

Another interesting side effect is that when the system is
generating lots of data (read "big file writes"), pages
are pulled off the free list, written to, and put on the
inactive list.  This empties the free list, which kicks the
page scanner, which starts cleaning the inactive list,
which is full of the data that is in the process of being
written out.  In that case you do not have to wait for
the syncer to get to the dirty vnodes, and you don't have to
worry about otherwise active application pages being
reclaimed to make up for a deficit of clean pages.
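That feedback loop can be sketched with a toy model (hypothetical page
counts and a deliberately simplified "scanner", not the real UVM
pagedaemon): the writer drains the free list, the low-free condition
triggers a scan, and the scan cleans exactly the dirty data the writer
just queued.

```python
from collections import deque

# Toy model of the big-write feedback loop described above.
# Free pages are allocated, dirtied, and deactivated; when the free
# list drops below a (made-up) low-water mark, the "scanner" runs and
# cleans every inactive page, returning it to the free list.

def big_write(npages, free_pages, freemin=4):
    free = deque(range(free_pages))    # clean, allocatable pages
    inactive = deque()                 # dirty pages queued for cleaning
    scans = 0
    for _ in range(npages):
        if len(free) < freemin:        # free list ran low: kick scanner
            scans += 1
            while inactive:            # "clean" (write out) dirty pages
                free.append(inactive.popleft())
        page = free.popleft()          # allocate from the free list
        inactive.append(page)          # dirty it, put on inactive list
    return scans

# Writing far more pages than there is free memory forces repeated
# scanner passes: the write-out happens as a side effect of scanning,
# without waiting for the syncer to reach the dirty vnodes.
print(big_write(100, free_pages=16))
```

The real scanner cleans pages incrementally toward a free target
rather than draining the whole inactive list, but the cycle --
allocate, dirty, deactivate, scan, clean -- is the one described
above.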

Eduardo