Source-Changes-HG archive

[src/trunk]: src/doc/roadmaps update nvme entry to current reality



details:   https://anonhg.NetBSD.org/src/rev/64c1220b84a2
branches:  trunk
changeset: 818047:64c1220b84a2
user:      jdolecek <jdolecek%NetBSD.org@localhost>
date:      Wed Sep 21 20:32:47 2016 +0000

description:
update nvme entry to current reality

diffstat:

 doc/roadmaps/storage |  21 ++++++++++++++-------
 1 files changed, 14 insertions(+), 7 deletions(-)

diffs (43 lines):

diff -r 177f5f7ee9bd -r 64c1220b84a2 doc/roadmaps/storage
--- a/doc/roadmaps/storage      Wed Sep 21 20:31:31 2016 +0000
+++ b/doc/roadmaps/storage      Wed Sep 21 20:32:47 2016 +0000
@@ -1,4 +1,4 @@
-$NetBSD: storage,v 1.17 2016/09/16 15:02:23 jdolecek Exp $
+$NetBSD: storage,v 1.18 2016/09/21 20:32:47 jdolecek Exp $
 
 NetBSD Storage Roadmap
 ======================
@@ -211,13 +211,12 @@
 ----------------
 
 nvme ("NVM Express") is a hardware interface standard for PCI-attached
-SSDs. NetBSD now has a driver for these; however, it was ported from
-OpenBSD and is not (yet) MPSAFE. This is, unfortunately, a fairly
-serious limitation given the point and nature of nvme devices.
+SSDs. NetBSD now has a driver for these.
 
-Relatedly, the I/O path needs to be restructured to avoid software
-bottlenecks on the way to an nvme device: they are fast enough that
-things like disksort() do not make sense.
+The driver is now MPSAFE and uses bufq fcfs, i.e. no disksort() (see the
+bufq sketch after the diff), so the most obvious software bottlenecks
+have been addressed. It still needs more testing on real hardware, and
+further optimizations such as DragonFly's pbuf(9) are worth investigating.
 
 Semi-relatedly, it is also time for scsipi to become MPSAFE.
 
@@ -226,6 +225,14 @@
  - The nvme driver is a backend to ld(4) which is MPSAFE, but we still
    need to attend to I/O path bottlenecks. Better instrumentation
    is needed.
+ - Cache flushes via DIOCCACHESYNC are currently implemented with polled
+   commands for simplicity, costing about 10 milliseconds per flush due
+   to the use of delay(9); investigate whether switching to a cv is
+   worthwhile, especially for journalled/heavy-fsync loads (cv sketch below)
+ - NVMe controllers support write cache administration via GET/SET FEATURES,
+   but the driver doesn't implement the cache ioctls yet, leading to
+   somewhat ugly dkctl(1) output; adding this would be fairly simple but
+   requires small changes to the ld(4) attachment code (features sketch below)
  - There is no clear timeframe or release target for these points.
  - Contact msaitoh or agc for further information.
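
For reference, a minimal sketch of how a driver selects the fcfs strategy
through bufq(9), as the roadmap text above describes. The softc layout and
function name are hypothetical; this is not the actual nvme(4)/ld(4)
attachment code.

/*
 * Hypothetical sketch: attach-time selection of the bufq(9) "fcfs"
 * strategy, which queues buffers in arrival order instead of doing
 * disksort()-style seek sorting.  struct xsc and xsc_attach_queue()
 * are illustrative names, not real driver code.
 */
#include <sys/param.h>
#include <sys/bufq.h>

struct xsc {
        struct bufq_state *sc_bufq;
};

static int
xsc_attach_queue(struct xsc *sc)
{
        /* "fcfs": no sorting; a good fit when seeks cost nearly nothing. */
        return bufq_alloc(&sc->sc_bufq, "fcfs", BUFQ_SORT_RAWBLOCK);
}

I/O then flows through bufq_put()/bufq_get() as usual; the only change
relative to a disksort driver is the strategy name.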
 
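Next, a minimal sketch of the cv-based flush wait suggested in the first
new roadmap item. struct flush_ctx and both helpers are assumptions for
illustration; the real driver would hang this state off its own command
and queue structures.

/*
 * Hypothetical sketch: wait for command completion on a condvar
 * instead of polling with delay(9).  struct flush_ctx and these
 * functions are illustrative, not actual nvme(4) internals.
 */
#include <sys/param.h>
#include <sys/mutex.h>
#include <sys/condvar.h>

struct flush_ctx {
        kmutex_t        fc_lock;
        kcondvar_t      fc_cv;
        bool            fc_done;
};

static void
flush_ctx_init(struct flush_ctx *fc)
{
        /* IPL_BIO so the completion side can run from the intr handler. */
        mutex_init(&fc->fc_lock, MUTEX_DEFAULT, IPL_BIO);
        cv_init(&fc->fc_cv, "nvmefls");
        fc->fc_done = false;
}

/* Completion path, e.g. called from the interrupt handler. */
static void
flush_done(struct flush_ctx *fc)
{
        mutex_enter(&fc->fc_lock);
        fc->fc_done = true;
        cv_broadcast(&fc->fc_cv);
        mutex_exit(&fc->fc_lock);
}

/* Submission path: sleep instead of burning ~10ms in delay(9). */
static void
flush_wait(struct flush_ctx *fc)
{
        mutex_enter(&fc->fc_lock);
        while (!fc->fc_done)
                cv_wait(&fc->fc_cv, &fc->fc_lock);
        mutex_exit(&fc->fc_lock);
}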

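Finally, for the second new item, a sketch of what the GET/SET FEATURES
plumbing behind the dkio(4) cache ioctls (DIOCGCACHE/DIOCSCACHE) could
look like. The opcodes and feature identifier are from the NVMe
specification; struct nvme_admin_cmd and the helper names are made up
for illustration and are not the driver's real admin-queue interface.

/*
 * Hypothetical sketch: building GET/SET FEATURES admin commands for
 * the Volatile Write Cache feature.  Opcodes 0x09/0x0a and feature
 * id 0x06 are per the NVMe spec; everything else is illustrative.
 */
#include <stdint.h>
#include <stdbool.h>

#define NVME_ADM_SET_FEATURES   0x09
#define NVME_ADM_GET_FEATURES   0x0a
#define NVME_FEAT_VOLATILE_WC   0x06    /* bit 0 of cdw11: WCE */

struct nvme_admin_cmd {
        uint8_t         opcode;
        uint32_t        cdw10;  /* feature identifier */
        uint32_t        cdw11;  /* feature value (SET) */
};

/* Backend for DIOCSCACHE: enable/disable the volatile write cache. */
static void
build_set_wc(struct nvme_admin_cmd *cmd, bool enable)
{
        cmd->opcode = NVME_ADM_SET_FEATURES;
        cmd->cdw10 = NVME_FEAT_VOLATILE_WC;
        cmd->cdw11 = enable ? 1 : 0;
}

/* Backend for DIOCGCACHE: the current WCE bit comes back in cqe dw0. */
static void
build_get_wc(struct nvme_admin_cmd *cmd)
{
        cmd->opcode = NVME_ADM_GET_FEATURES;
        cmd->cdw10 = NVME_FEAT_VOLATILE_WC;
}

Wiring this up would match the roadmap note: ld(4) needs a small hook so
its cache ioctl handling can call into the backend driver.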

