tech-kern archive


storage-class memory (was: Re: state of XIP?)



On Tue, Oct 15, 2013 at 12:17:30AM -0700, Matt Thomas wrote:
 > On Oct 14, 2013, at 11:41 PM, David Holland 
 > <dholland-tech%netbsd.org@localhost> wrote:
 > > Did uebayasi@'s XIP work get finished/committed? Which things does it
 > > work with? And (other than UTSL) where am I supposed to look to find
 > > out more?
 > 
 > It was not committed since core felt that it needed too many kludges
 > to properly work.

That's kind of a shame. While I understand it was written with the
intent of supporting direct execution from NOR flash to save RAM on
small devices, there's another case where it would be useful.

"Storage-class memory" is a catchall term for several different
proposed or experimental storage media that share a number of common
properties:
   - they are nearly as fast as DRAM
   - they are much faster than flash
   - they are expected to be much larger per unit than DRAM
   - they have markedly better wear profiles than flash
   - they are expected to, eventually, become competitively priced
   - and they are persistent.

The type most commonly cited is phase-change memory; memristors are
another.

It is not clear yet whether these will take off or whether
(individually or collectively) they'll be the next magnetic bubble
memory. There are also a number of unresolved problems pertaining to
actually accessing these things usefully, and it remains unclear
whether these materials will really be memory-mapped or will end up
inside SSDs like flash. However, things have reached the stage where
hardware vendors are interested in addressing these issues, and it's
looking pretty likely that at least one of these schemes will make it
as far as shipping first-generation devices.

Because of the low access latencies of these materials, copying data
from them into RAM is more or less pointless and a substantial waste
of time. Instead, you'd like accesses to go straight to the
hardware, with the hardware's pages mapped directly into user process
memory.

And hence, execute-in-place. Or more accurately, mmap-in-place, since
one would want to be able to read as well as execute. There's even a
fairly substantial
amount of interest in some quarters in writing via mmap, despite all
the problems that entails. Furthermore, for non-mapped I/O one still
wants reads and writes to go directly to the hardware pages without
passing through cache pages in between.
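
To be concrete about what mmap-in-place means from the application's
point of view: nothing new, which is rather the point. Something like
the following (the /scm/somefile path is just a placeholder) should
simply end up with loads and stores hitting the device's own pages,
with no RAM copy in between:

#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        int fd = open("/scm/somefile", O_RDWR); /* placeholder path */
        size_t len = 4096;
        char *p;

        p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Loads and stores go straight to the persistent medium. */
        p[0] = 'x';

        /*
         * With writable mappings you still need some way to force
         * ordering/persistence; msync is the obvious portable
         * stand-in.
         */
        msync(p, len, MS_SYNC);

        munmap(p, len);
        close(fd);
        return 0;
}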

This is something I think NetBSD should be pursuing, because there's a
decent chance of at least some hardware materializing and it would be
unfortunate to be caught flat-footed.

So.

If the XIP code is not mergeable, what's entailed in doing a different
implementation that would be? Also, is the getpages/putpages interface
expressive enough to allow doing this without major UVM surgery? For
now I'm assuming a file system that knows about storage-class memory
and can fetch the device physical page that corresponds to any
particular file and offset. ISTM that at least in theory it ought to
be sufficient to implement getpages by doing this, and putpages by
doing nothing at all, but I don't know that much specifically about
UVM or the pager interface.
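
Very roughly, I'm imagining something like the following, where the
real VOP_GETPAGES argument list (centeridx, access type, advice,
flags, the page-locking protocol) has been elided and
scmfs_blk_to_paddr() is a made-up name for the fs-specific lookup
from (vnode, offset) to device physical address. Whether handing UVM
device-backed vm_pages this way actually works without major surgery
is exactly what I don't know:

#include <sys/param.h>
#include <sys/vnode.h>
#include <uvm/uvm.h>

/* Hypothetical fs hook: (vnode, offset) -> device physical address. */
int scmfs_blk_to_paddr(struct vnode *, voff_t, paddr_t *);

/*
 * Sketch only: hand back the device's own pages. Whether
 * PHYS_TO_VM_PAGE() yields usable managed pages for device memory
 * is one of the open questions.
 */
static int
scmfs_getpages(struct vnode *vp, voff_t offset, struct vm_page **pgs,
    int *npagesp)
{
        int npages = *npagesp;
        int i, error;
        paddr_t pa;

        for (i = 0; i < npages; i++) {
                error = scmfs_blk_to_paddr(vp, offset + i * PAGE_SIZE, &pa);
                if (error)
                        return error;
                pgs[i] = PHYS_TO_VM_PAGE(pa);
        }
        return 0;
}

/* putpages: nothing to write back; the pages *are* the storage. */
static int
scmfs_putpages(struct vnode *vp, voff_t lo, voff_t hi, int flags)
{
        return 0;
}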

(There is a second and more or less separate class of problems
pertaining to buffers and file system metadata, but at least to begin
with I think a SCM-aware file system can just ignore the buffer cache.
Running ffs on a memory-mapped SCM device is a thornier issue we can
probably defer.)
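
For the non-mapped read path mentioned above, "ignoring the buffer
cache" could be as simple as copying straight out of a kernel mapping
of the device pages, something like the sketch below. Here
scmfs_direct_kva() is again a made-up hook returning a KVA window
onto the SCM pages backing a given (vnode, offset) plus how many
contiguous bytes it covers; locking, EOF handling, and the write side
are omitted:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/vnode.h>

/* Hypothetical fs hook: KVA window onto the SCM pages backing
 * (vnode, offset), plus how many contiguous bytes it covers. */
int scmfs_direct_kva(struct vnode *, off_t, void **, size_t *);

/*
 * Sketch of a buffer-cache-free read path: copy directly from the
 * device mapping into the caller's buffer via uiomove().
 */
static int
scmfs_read(struct vnode *vp, struct uio *uio)
{
        void *kva;
        size_t len;
        int error;

        while (uio->uio_resid > 0) {
                error = scmfs_direct_kva(vp, uio->uio_offset, &kva, &len);
                if (error)
                        return error;
                if (len == 0)
                        break;          /* hit EOF */
                len = MIN(len, uio->uio_resid);
                error = uiomove(kva, len, uio);
                if (error)
                        return error;
        }
        return 0;
}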

-- 
David A. Holland
dholland%netbsd.org@localhost

