tech-kern archive


Re: Proposal: B_ARRIER (addresses wapbl performance?)



On Sat, Nov 01, 2008 at 11:46:49AM -0400, Thor Lancelot Simon wrote:
> On Fri, Oct 31, 2008 at 10:16:41PM -0700, Bill Stouder-Studenmund wrote:
> > On Thu, Oct 30, 2008 at 08:17:04PM -0400, Thor Lancelot Simon wrote:
> > > On Thu, Oct 30, 2008 at 01:28:21PM -0700, Bill Stouder-Studenmund wrote:
> > > 
> > > But SCSI disks don't lie like this unless explicitly configured to, and
> > > with proper use of tags, there is no need to configure them that way;
> > > there is no performance benefit.
> > > 
> > > Never mind that if they have power protection, it's still safe to do so.
> > > 
> > > So from my point of view, this does precisely what WAPBL needs -- render
> > > it just as safe as a non-journalled filesystem, or safer, while radically
> > > improving performance.
> > 
> > Until someone turns off the disk caches. Or more accurately forgets to 
> > turn them off. My understanding is that all disks come with caches enabled 
> > these days.
> 
> I'd like an example of a SCSI (including FC or SAS) disk which shipped with
> the write cache enabled.  I have never encountered this.  The entire point
> of SCSI-style tagged queueing is to make it unnecessary for performance
> reasons.

This actually isn't the point of tagged queueing. Tagged queueing was 
developed to make tape drives work correctly: you fire off a burst of 
tagged writes to the tape drive and you know the right things will 
happen, especially if there's an error.

So Thor, why are you so entrenched in this? If you're going to add a bit, 
just add FUA. It does exactly what you want; it was designed to do 
exactly what you want.

The other problem I see with this is that your barrier will wait for ALL 
previous i/o to complete. We only care that the journal writes are done. 
So say I just spewed 4 MB of data to a disk, and I want to write 256k to 
the journal. With FUA, I have to wait for the 256k to finish. With your 
proposal, I have to wait for all 4.25 MB to finish. That can't be higher 
performance.

Thinking about it, how is a barrier that waits for all i/o to be done in 
the no-cache case any different from a cache flush at the same place?

Oh lord. Your proposal will be worse than cache flushing! With a cache 
flush, we have to write out all dirty data. With tagging, we have to 
complete all outstanding operations. Change my example above: say I'm 
READING 4 MB of data. With a cache flush, I only have to wait for the 
256k write to happen. With tagging, I have to wait for ALL operations to 
complete, including the 4 MB of reads!

Please just add FUA support. It does exactly what you want.

Take care,

Bill



