tech-kern archive



I've put up the latest diff, together with the presentation and the
paper I submitted to EuroBSDCon 2010.  Please read the paper for the details.

On Mon, Oct 25, 2010 at 10:04:28PM +0900, Izumi Tsutsui wrote:
> wrote:
> > I think the uebayasi-xip branch is ready to be merged.
> > 
> > This branch implements a preliminary support of eXecute-In-Place;
> > execute programs directly from memory-mappable devices without
> > copying files into RAM.  This benefits mainly resource restricted
> > embedded systems to save RAM consumption.
> > 
> > My approach to achieve XIP is to implement a generic XIP vnode
> > pager which is neutral to underlying filesystem formats.  The result
> > is a minimal code impact + sharing the generic fault handler.
> Probably it's better to post more verbose summary that describes:
> - which MI kernel structures/APIs are modified or added

        block device:
                DIOCGPHYSSEG ioctl
        flash(4) (new)
        xmd(4) (new)

> - which sources are mainly affected


uvm_page.c extends "struct vm_physseg" to cover devices.  Previously
"vm_physseg" was allocated only for RAM segments; now it is also
allocated for device segments that are potentially user-mapped, and
"vm_page" structures are allocated to cover the pages of those
device segments as well.

genfs_io.c implements a filesystem-neutral vnode pager backend which
looks up the matching "vm_page" for the given pages.  It queries the
block offset of the given file from the filesystem by calling
VOP_BMAP(), then returns the "vm_page" back to the fault handler.

> - how many changes are/were required in MD pmap or each file system etc.

The pmap needs some adjustments, depending on how it handles PV entries.

> - which ports / devices are actually tested

arm (arm11), powerpc (ibm40x), x86 (i386)

> - related man page in branch


I can add xip.9 (analogous to wapbl.9) if it's considered appropriate.

> - benchmark results

 1) Memory consumption

XIP saves the memory used for the page cache, but consumes extra
memory for the "vm_page" array covering the device segment.  I
confirmed this by comparing the output of "cat * >/dev/null;
vmstat -s" (== read all pages).

This is after cat'ing ~9M of files, without/with XIP. [1]  These
lines in particular show that XIP works as expected:

-     2184 cached file pages
+       47 cached file pages

 2) Start time

XIP saves the time to read pages through slow I/O.  I confirmed this
by comparing the output of "time -l ksh -c :" (start ksh, do nothing,
then exit) on NFS (via a PIO driver) and on XIP (FlashROM). [2]
These lines in particular:

-        0.08 real         0.00 user         0.00 sys
+        0.03 real         0.00 user         0.02 sys

show that XIP starts a program faster.  Also:

-        19  page faults
+         0  page faults

XIP execution causes no major faults.

 3) Run time

The run-time difference between executing a program from the normal
page cache and via XIP is the cache fill time, i.e. the access speed
of the device.  XIP using RAM (xmd(4)) is as fast as the page cache.

The actual access-time penalty depends on many factors.  If the
program is well pipelined and the CPU can prefetch the next cache
line, the result will be the same; otherwise the CPU will stall
waiting for cache lines to be filled.

I'll post more interesting numbers once I implement a pmc(4)
backend for ARM11.

In short: performance is good.

> etc.
> > I asked Chuck Silvers to review the branch.  I believe I've addressed
> > most of his questions except a few ones.
> It's also better to post all his questions and your answers
> so that other guys can also see what's going on.

Another mail...


> ---
> Izumi Tsutsui

Masao Uebayashi / Tombi Inc. / Tel: +81-90-9141-4635
