Subject: Re: Corrupt data when reading filesystems under Linux guest
To: None <port-xen@NetBSD.org>
From: Jed Davis <jdev@panix.com>
List: port-xen
Date: 06/12/2005 03:29:45
In article <20050611003941.GA29403@panix.com>,
Thor Lancelot Simon  <tls@rek.tjls.com> wrote:
> 
> 2) Not doing it *wrecks* performance by doubling the number of IOPS
>    needed to handle a client OS doing the perfectly reasonable thing
>    and sending us 64K writes on the assumption that, just like a Linux
>    domain0, we will merge them.

And it wrecks performance even more when the transfers come in at
5.5K per request, as they do in the case that started this thread: a
64K write then costs a dozen or so I/Os instead of one.  Linux should
of course Not Do That, but nonetheless it does.

> 3) If you only merge forward in the ring, you can't break filesystem
>    ordering constraints, but you _will_ fix the problem where 64K from
>    the client turns into 44K + 20K.

And forward-only merging is what I had in mind.  But more to the
point, if the client was expecting its ordering constraints to be
honored, it was already in for an unpleasant surprise: as I understand
it, we were just tossing a bunch of bufs into the queue at once and
handling the completions in whatever order they came back, to say
nothing of whatever the Linux backend is doing.
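
To make the forward-only idea concrete, here is a rough sketch in C.
The structure and names are invented for illustration and are not
taken from the actual backend code; the point is just that, starting
from one request, you keep folding in later ring entries as long as
they are sector-contiguous and in the same direction, and you never
reach back to requests already queued.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct fake_req {
	uint64_t sector;	/* starting sector of the transfer */
	uint32_t nsectors;	/* length in sectors */
	int	 write;		/* nonzero for a write */
};

/*
 * Fold req[first + 1], req[first + 2], ... into req[first] for as
 * long as they are contiguous and in the same direction.  Returns the
 * index just past the last request consumed; the caller issues one
 * I/O covering everything now described by req[first].
 */
static size_t
merge_forward(struct fake_req *req, size_t first, size_t nreq)
{
	size_t i;

	for (i = first + 1; i < nreq; i++) {
		if (req[i].write != req[first].write)
			break;
		if (req[i].sector !=
		    req[first].sector + req[first].nsectors)
			break;
		req[first].nsectors += req[i].nsectors;
	}
	return i;
}

int
main(void)
{
	/*
	 * The 64K-turned-into-44K+20K case from the thread, in
	 * 512-byte sectors: 88 + 40 = 128.
	 */
	struct fake_req ring[2] = {
		{ .sector = 0,  .nsectors = 88, .write = 1 },
		{ .sector = 88, .nsectors = 40, .write = 1 },
	};
	size_t next = merge_forward(ring, 0, 2);

	printf("consumed %zu requests, first now covers %u sectors\n",
	    next, (unsigned)ring[0].nsectors);
	return 0;
}

Run on that 44K + 20K split, the two requests collapse into a single
128-sector (64K) transfer, which is all the merging I'm after here.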

-- 
(let ((C call-with-current-continuation)) (apply (lambda (x y) (x y)) (map
((lambda (r) ((C C) (lambda (s) (r (lambda l (apply (s s) l))))))  (lambda
(f) (lambda (l) (if (null? l) C (lambda (k) (display (car l)) ((f (cdr l))
(C k)))))))    '((#\J #\d #\D #\v #\s) (#\e #\space #\a #\i #\newline)))))