NetBSD-Bugs archive


kern/47702: coredumping big programs freezes NFS during dump



>Number:         47702
>Category:       kern
>Synopsis:       coredumping big programs freezes NFS during dump
>Confidential:   no
>Severity:       serious
>Priority:       medium
>Responsible:    kern-bug-people
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Thu Mar 28 15:15:00 +0000 2013
>Originator:     Reinoud Zandijk
>Release:        NetBSD 6.1_RC2
>Organization:
NetBSD
>Environment:
Any NetBSD machine with home directories on NFS; not port-specific AFAICT. Seen
on Architecture: i386, Machine: i386.

Coredumping Firefox 17, compiled natively.

>Description:
My i386 work machine has its home directories on a NAS over NFS. Using Firefox
from multiple machines is no problem, since all Firefox data is stored on the
local hard disc.

When Firefox coredumps (strictly, its xulrunner process does), it can create
huge core files of 2.5 GB, more than the amount of physical memory the machine
has. The core dump is written to the home directory over NFS.

While xulrunner is coredumping, X keeps working fine, but every process that
attempts to reach the home directory blocks until the coredump is finished.
The NFS server is still reachable from other machines, and sshing into the
server showed that it was busy writing the coredump out to disc, though not
heavily enough to hinder the other clients.

My hypothesis is that the memory written out during the coredump is not
released immediately after being written, completely starving the machine of
memory, since the process being dumped has a core file bigger than physical
memory. It may well be that NFS is then blocking on memory allocation.
Regretfully, I haven't managed to gather evidence for this, since the machine
basically wedged until the coredump was finished.

Shouldn't it be possible to have the process write out the data beginning with
the pages that are resident in memory, freeing them ASAP, and then write out
the remaining pages/info using demand paging?

        
>How-To-Repeat:
Have Firefox dump a 2.5 GB core file on a 2 GB machine (1918 MB total memory,
1873 MB available).
        
>Fix:
As a workaround, make your home directory not accept core dump files.
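
Two sketches of that workaround (the local path below is an example, and the
sysctl write needs root). Either disable core dumps for the session entirely,
or keep them but send core files to a local disc via kern.defcorename, whose
%n expands to the program name (see core(5)):

```shell
# Option 1: no core dumps at all for this session:
ulimit -c 0

# Option 2: dump cores to a local disc instead of the NFS home directory:
if [ "$(uname)" = "NetBSD" ]; then
    sysctl -w kern.defcorename=/var/crash/%n.core
fi
```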

        

>Unformatted:
 Not really release-specific, AFAIK; -current is showing it too.

