Subject: Re: CVS commit: syssrc/sys/arch/vax/include
To: None <,>
From: Anders Magnusson <>
List: tech-kern
Date: 02/23/2002 00:15:09
> On Fri, Feb 22, 2002 at 10:34:33PM +0100, Anders Magnusson wrote:
> > 
> > Why cannot this be solved as it was in the 3BSD VM system then, by 
> > letting a process allocate more pte space when it grows? Because of
> > the way sbrk/mmap interacts; all mmap'ed memory is above MAXDSIZ and
> > therefore all space between the break and MAXDSIZ must have at least
> > system pte's associated with it.
> > 
> > The solution to this would be to remove the sbrk interface from kernel
> > and then let malloc use mmap instead. I have the phk malloc rewritten 
> > to use mmap (actually it got slightly faster) and routines that emulate
> > sbrk via mmap, I just wanted to test it some more before I write a 
> > proposal about it.
> I would strongly support the elimination of sbrk(), if we can manage to
> find a way to do it.
I think it isn't so difficult:
- make malloc et al. use mmap().
- userspace emulation of sbrk() (if something still calls it):
        mmap a contiguous area of some arbitrary size, like datasize/2 or
        so, and use that for the process. It's OK for brk() and
        sbrk() to return ENOMEM at any time once the process has allocated
        all of its space.
- kernel emulation in the same way as in userspace, #ifdef COMPAT_15.

According to Klaus Klein, sbrk() isn't in any standard (anymore).

Our end(3) and sbrk(2) man pages state that the break area starts at
the address of the end global. I can't think of anything that relies
on this; a program that uses sbrk() probably comes from the pdp11 era,
and on that arch it would be false anyway because it may have
split I/D.

Anyway, this discussion should go on tech-kern, and later.

-- Ragge