Subject: Re: tmpfs memory leak?
To: None <email@example.com>
From: Andrew Doran <firstname.lastname@example.org>
Date: 10/23/2007 20:49:22
On Mon, Oct 22, 2007 at 02:56:36PM -0500, David Young wrote:
> I have an 8M tmpfs filesystem whose available blocks are exhausted over
> time. There seems to be a memory leak, because I cannot account for more
> than 3M of the use with fstat(1) (for deleted files & dirs) and du(1):
> Filesystem Size Used Avail %Cap iUsed iAvail %iCap Mounted on
> /dev/wd0e 230M 18M 200M 8% 1088 116670 0% /
> tmpfs 496K 196K 300K 39% 1211 53 95% /dev
> tmpfs 8.0M 8.0M 0B 100% 1856 0 100% /mfs
IIRC the unused pages won't be released unless the pagedaemon asks for them
back; there should be some sort of high water mark. On HEAD, tmpfs doesn't
free tmpfs_nodes until unmount; they are recycled through a per-mount list.
> Does no one else see this? My application may be a bit unusual, both
> in that I use null mounts, and in that I have no swap activated.
> Could the cause of the leak be an interaction between my null mounts
> and tmpfs? Also, I am dimly aware of some reference-counting bug in
> tmpfs; it was mentioned in one of ad@'s commits to the vmlocking branch.
> (I do not run the vmlocking branch.)
On the vmlocking branch, tmpfs_nodes are freed when the linkcount drops to
zero. I'm seeing a bug where tmpfs_rmdir is called, and the directory nodes
very occasionally have an extra link, so they become orphaned. I haven't
found the cause yet.