tech-kern archive


Re: Is it feasible to implement full writes of stripes to RAID using NVRAM memory in LFS?



On Sun, 21 Aug 2016 10:20:07 -0400
Thor Lancelot Simon <tls%panix.com@localhost> wrote:

> On Fri, Aug 19, 2016 at 10:01:43PM +0200, Jose Luis Rodriguez Garcia
> wrote:
> > On Fri, Aug 19, 2016 at 5:27 PM, Thor Lancelot Simon wrote:
> > >
> > > Perhaps, but I bet it'd be easier to implement a generic
> > > pseudodisk device that used NVRAM (fast SSD, etc -- just another
> > > disk device really) to buffer *all* writes to a given size and
> > > feed them out in that-size chunks.  Or to add support for that to
> > > RAIDframe.
> > >
> > > For bonus points, do what the better "hardware" RAID cards do,
> > > and if the inbound writes are already chunk-size or larger,
> > > bypass them around the buffering (it implies extra copies and
> > > waits after all).
> > >
> > > That would help LFS and much more.  And you can do it without
> > > having to touch the LFS code.  
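
A minimal sketch of the write path described above, just to make the
idea concrete (all names here are hypothetical and the chunk size is
assumed fixed; a real implementation would of course work on struct buf
from a strategy routine rather than on flat buffers):

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  #define CHUNK_SIZE (128 * 1024)     /* flush unit, e.g. one full stripe */

  struct chunk_cache {
          uint8_t  buf[CHUNK_SIZE];   /* staging area; would live in NVRAM */
          size_t   fill;              /* bytes currently buffered */
          uint64_t base;              /* device offset the buffer maps to */
  };

  /* Stand-in for issuing a write to the underlying "data disk". */
  static void
  datadisk_write(uint64_t off, const void *p, size_t len)
  {
          (void)off; (void)p; (void)len;
  }

  /* Push out whatever has accumulated as one large sequential write. */
  static void
  cache_flush(struct chunk_cache *cc)
  {
          if (cc->fill > 0) {
                  datadisk_write(cc->base, cc->buf, cc->fill);
                  cc->fill = 0;
          }
  }

  /*
   * Accept a write.  Chunk-aligned writes of at least CHUNK_SIZE bypass
   * the staging buffer entirely; everything else is accumulated and
   * written out in CHUNK_SIZE units.
   */
  static void
  cached_write(struct chunk_cache *cc, uint64_t off, const void *p,
      size_t len)
  {
          if (len >= CHUNK_SIZE && (off % CHUNK_SIZE) == 0) {
                  cache_flush(cc);              /* preserve write ordering */
                  datadisk_write(off, p, len);  /* bypass: no extra copy */
                  return;
          }

          /* Not contiguous with what is already buffered: flush first. */
          if (cc->fill > 0 && off != cc->base + cc->fill)
                  cache_flush(cc);

          while (len > 0) {
                  size_t n;

                  if (cc->fill == 0)
                          cc->base = off;   /* buffer now maps this offset */
                  n = CHUNK_SIZE - cc->fill;
                  if (n > len)
                          n = len;
                  memcpy(cc->buf + cc->fill, p, n);
                  cc->fill += n;
                  p = (const uint8_t *)p + n;
                  off += n;
                  len -= n;
                  if (cc->fill == CHUNK_SIZE)
                          cache_flush(cc);
          }
  }
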
> > 
> > Wouldn't it be easier to add a layer that does these tasks in
> > the LFS code? It has the disadvantage that it would be used only
> > by LFS.
> 
> I am guessing not.  The LFS code is very large and complex -- much
> more so than it needs to be.  It is many times the size of the
> original Sprite LFS code, which, frankly, worked better in almost all
> ways.  It represents (to me) a failed experiment at code and
> data structure sharing with FFS (it is also worse, and larger, because
> Sprite's buffer cache and driver APIs were simpler than ours and
> better suited to LFS' needs).
> 
> It is so large and so complex that truly screamingly funny bugs like
> writing the blocks of a segment out in backwards order went
> undetected for long periods of time!
> 
> It might be possible to build something like this inside RAIDframe or
> LVM but I think it would share little code with any other component
> of those subsystems.  I would suggest building it as a standalone
> driver which takes a "data disk" and "cache disk" below and provides
> a "cached disk" above.  I actually think relatively little code
> should be required, and avoiding interaction with other existing
> filesystems or pseudodisks should keep it quite a bit simpler and
> cleaner.
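
To make the shape of that a bit more concrete, the per-instance state
and read path of such a stand-alone driver might look roughly like the
following (all names are made up for illustration; the real thing would
hold devices opened through the usual kernel interfaces and do its I/O
via struct buf):

  #include <stddef.h>
  #include <stdint.h>

  struct disk_handle;                  /* opaque: an opened block device */
  struct cache_map;                    /* opaque: which blocks are cached */

  int cache_map_lookup(struct cache_map *, uint64_t blkno);
  int disk_read(struct disk_handle *, uint64_t blkno, void *, size_t);

  /* One configured "cached disk" instance. */
  struct cdisk_softc {
          struct disk_handle *sc_data;   /* slow "data disk" below */
          struct disk_handle *sc_cache;  /* NVRAM / fast SSD "cache disk" */
          struct cache_map   *sc_map;    /* tracks what lives on the cache */
  };

  /*
   * Read path: satisfy the request from the cache disk when the block
   * is resident there, otherwise fall through to the data disk.  Writes
   * would land on the cache disk first and be destaged to the data disk
   * in full-size chunks later.
   */
  static int
  cdisk_read(struct cdisk_softc *sc, uint64_t blkno, void *p, size_t len)
  {
          if (cache_map_lookup(sc->sc_map, blkno))
                  return disk_read(sc->sc_cache, blkno, p, len);
          return disk_read(sc->sc_data, blkno, p, len);
  }
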

Building this as a layer that allows arbitrary devices as either the
'main store' or the 'cache' would work well, and allow for all sorts of
flexibility.  What I don't know is how you'd glue that in to be a
device usable for /.  The RAIDframe code in that regard is already a
nightmare!

Perhaps something along the lines of the dk(4) driver, where one could
either use it as a stand-alone device, or hook into it to use the
caching features (e.g. 'register' the cache when raid0 is configured,
and then use/update the cache on reads/writes/etc. to raid0).
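
A registration interface along those lines might look something like
this (entirely hypothetical names; nothing of the sort exists today):

  #include <stddef.h>
  #include <stdint.h>
  #include <sys/types.h>

  /* Hooks a consumer such as raid(4) would route its I/O through. */
  struct cache_ops {
          int  (*co_read)(void *ctx, uint64_t blkno, void *p, size_t len);
          int  (*co_write)(void *ctx, uint64_t blkno, const void *p,
                    size_t len);
          void (*co_sync)(void *ctx);   /* flush dirty data to main store */
  };

  /*
   * Called when (say) raid0 is configured: associate a cache device and
   * its operations with that unit.  Reads and writes to raid0 would then
   * be passed through these hooks before touching the array itself.
   */
  int cache_register(dev_t unit, const struct cache_ops *ops, void *ctx);
  int cache_unregister(dev_t unit);
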

Obviously this needs to be fleshed out a significant amount...

Later...

Greg Oster

