Port-amiga archive


Re: cross compiling on Amiga



On Fri, Jun 15, 2012 at 06:20:35PM +0100, David Brownlee wrote:
...
> So theoretically two machines with a 2G/2G userland/kernel virtual
> memory split, one with 64M and the other 2G, both with over 2GB of
> swap space should be able to allocate 2G of space to a (single)
> process. One will just be much slower to run as it pages memory in and
> out.
> 
> So what prevents that?
> 
> One issue is how the stack and heap are mapped into a process. Both
> grow over time, but if they both grow up (or down), then it's possible
> for one to use up all of its available space and fail with out of
> memory while the other still has plenty of space left.
> 
> The other (and more relevant issue here) is how large the MMU
> tables are. While a process may be able to grow up to 2G in size, if
> it's using 4K pages then that's data for 512 pages which have to be
> stored per process. MMUs also tend to map memory via multi-level
> tables: the top table has entries pointing to 2nd-level tables, which
> may then have entries pointing to 3rd-level tables, which then contain
> the data about the pages. Again there may be both physical constraints
> on the MMU, and design constraints in terms of how many of each table
> type are allocated per process.
> 
> So the short answer is that packing more physical memory into the
> machine may make it faster and better able to run more processes,
> but is not going to help in this case :)

The page tables for user processes are usually pageable.
A 2G process needs 2^(31-12), i.e. 2^19 or 512k pages (not 512).
A hardware page table entry will be 4 bytes, so 1k of them fit in a 4k page.
So each outer-level entry will refer to 4M of memory (1k entries x 4k) -
but can usually be marked to say the next-level page table is absent.
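
To put rough numbers on that (a minimal sketch, assuming 4K pages,
4-byte PTEs and a 2G user VA, not any particular MMU's real table
layout):

/* Page table sizing arithmetic for a hypothetical 2-level MMU. */
#include <stdio.h>

int main(void)
{
    unsigned long va_size   = 1UL << 31;   /* 2G of user address space */
    unsigned long page_size = 1UL << 12;   /* 4K pages */
    unsigned long pte_size  = 4;           /* bytes per hardware PTE */

    unsigned long npages      = va_size / page_size;    /* 2^19 = 512k */
    unsigned long ptes_per_pt = page_size / pte_size;   /* 1k per table */
    unsigned long n2nd_tables = npages / ptes_per_pt;
    unsigned long outer_cover = ptes_per_pt * page_size;

    printf("pages to map 2G:           %lu\n", npages);        /* 524288 */
    printf("2nd-level tables needed:   %lu\n", n2nd_tables);   /* 512 */
    printf("coverage per outer entry:  %luM\n", outer_cover >> 20);  /* 4 */
    printf("PTE memory if all present: %luK\n",
        (npages * pte_size) >> 10);                            /* 2048 */
    return 0;
}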

So you ought to be able to pile in a load of swap and run a large
process - if slowly.
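
If you want to test that empirically, a quick probe along these lines
will show how much anonymous VA a single process can actually map (a
rough sketch: the 64M chunk size and the 2G cap are arbitrary choices,
and whether swap is reserved at mmap() time depends on the system's
overcommit behaviour):

/* Map anonymous memory in 64M chunks until mmap() fails. */
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    const size_t chunk = 64UL << 20;    /* 64M per call */
    size_t total = 0;

    while (total < (2UL << 30)) {
        void *p = mmap(NULL, chunk, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            break;
        total += chunk;
    }
    printf("mapped %zuM before mmap() failed\n", total >> 20);
    return 0;
}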

There are issues on some systems where not enough kernel address
space is allocated for the physical memory tables (etc).
I think a recent change moved some kernel memory allocations between
two different kernel VA regions - that could cause grief.

There might be an issue with the stack hitting other stuff - but I
think NetBSD tends to reserve process address space for the entire
stack (to its ulimit value) at exec time. So that shouldn't happen.

Maybe there are some hard-coded limits on process VA sizes on top
of the ulimit ones.
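
The ulimit side is easy enough to check from within a process (a
minimal sketch using the standard getrlimit(2) interface; it only
shows the rlimit values, not any compile-time kernel limits, and
RLIMIT_AS is not present everywhere, hence the #ifdef):

/* Print the soft/hard limits that cap a process's VA usage. */
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

static void show(const char *name, int which)
{
    struct rlimit rl;

    if (getrlimit(which, &rl) == 0)
        printf("%-6s soft %llu hard %llu\n", name,
            (unsigned long long)rl.rlim_cur,
            (unsigned long long)rl.rlim_max);
}

int main(void)
{
    show("stack", RLIMIT_STACK);
    show("data", RLIMIT_DATA);
#ifdef RLIMIT_AS
    show("as", RLIMIT_AS);
#endif
    return 0;
}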

        David

-- 
David Laight: david%l8s.co.uk@localhost

