Re: cross compiling on Amiga
It may help to clarify virtual vs physical memory here (for most
people in the thread none of this will be new, so this is for anyone
else following along).
A typical 32 bit processor has a 4GB virtual address space, which may
be split between kernel & userspace (so the kernel and any given user
process could each access up to 2GB, or one 3GB and the other 1GB).
A given hardware implementation will have a certain physical RAM
capacity, and the kernel will use the MMU to map this in units of
pages into the kernel and userland processes. 4K is a common page
size, though the kernel may elect to always group pages together to
use larger memory more effectively (eg the vax groups four 1K hardware
pages into an effective 4K page).
Kernel memory is almost always wired - physical pages are mapped to
virtual memory when needed, and are returned to a pool when done
(kernel text is mapped from boot to shutdown, filesystem buffers are
created as needed).
Each userland process has a small amount of wired memory (usually
containing metadata about the process which the kernel uses), and the
remainder is pageable. The contents of pageable memory can be copied
from a physical memory page to swap space and then the page released
and used for other purposes.
There are extra treats such as copy-on-write: when a process forks, or
maps in shared libraries, those physical memory pages can be shared
read-only until one of the processes writes to them, at which point a
page fault is taken and a new physical page is used to hold the copy
of the modified data. There is also optimistic virtual memory
allocation, where when a process allocates memory, physical pages are
only used once writes occur (think of this as copy-on-write with
everything sharing a single zero-filled page :)
Back to the original case:
So theoretically two machines with a 2G/2G userland/kernel virtual
memory split, one with 64M of RAM and the other with 2G, both with
over 2GB of swap space, should each be able to allocate 2G of space to
a (single) process. One will just be much slower to run as it pages
memory in and out.
So what prevents that?
One issue is how the stack and heap are mapped into a process. Both
grow over time, but if they both grow up (or down) from fixed starting
regions, then it's possible for one to have used all of its available
space and return out of memory while the other still has much space
left.
The other (and more relevant issue here) is how large the MMU tables
are. While a process may be able to grow up to 2G in size, if it's
using 4K pages then that's data for 512K pages which has to be stored
per process. Plus MMUs tend to map memory via multi-level tables: the
top table has entries pointing to 2nd-level tables, which may then
have entries pointing to 3rd-level tables, which then contain the data
about the pages. Again there may be both physical constraints on the
MMU, and design constraints in terms of how many of each table type
are allocated per process.
So the short answer is that packing more physical memory into the
machine may make it faster and better able to run more processes, but
is not going to help in this case :)
On 15 June 2012 00:06, Al Zick <al%familysafeinternet.com@localhost> wrote:
>>>>> IIRC the amiga and some other m68k ports also have a quite limited
>>>>> virtual address space (to allow running SunOS/68k binaries?
>>>> This is not the case, if I recall right. The limitation is because the
>>>> 1st- and 2nd-level entries are limited to a fixed table - "just" an
>>>> implementation detail.
>>> So, then are you saying that the problem is not with SunOS/68k binaries?
>>> What would be the solution to fix this then?
>> Add a few hundred meg of ram. and increase swap to 2gig.
> There must be another solution besides adding more ram, although it would
> be nice to go up to 192 Megs. You wouldn't happen to know where I can get 64
> Meg fastpage simms?
> I could use dd to create a 2 gig swap file and swapctl, but would this give
> me enough address space?