
Re: Scsipi guru needed...



On Sun, Nov 30, 2008 at 03:58:35PM -0500, Thor Lancelot Simon wrote:
> On Sun, Nov 30, 2008 at 09:53:16PM +0100, Anders Magnusson wrote:
> > No, I don't know, but it is far more than 64k.  But that doesn't matter;
> > using bs=64k when dd:ing to the raw device gives much better performance
> > anyway.  Besides,  MAXPHYS should be fixed anyway.
> 
> When you say "MAXPHYS should be fixed anyway" what exactly do you have in
> mind?  Both endpoint devices and the buses they're on can have maximum
> transfer size constraints, so some kind of inheritance scheme is needed;
> and that looks to me (and to others who've looked at it, I believe) like
> a considerable amount of work.

Considerable indeed. What I envisioned some time ago was a buffer system with
scatter/gather integrated and support for arbitrary transfer lengths. That
would simplify genfs code and the like a lot.

Maybe replace the buffer code with something analogous to uiomove(9)/mbuf(9)?
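To make that a bit more concrete, here is a minimal userland sketch of such an
mbuf(9)-style chain; the names (xbuf, XBUF_SEGSZ) are made up, and a kernel
version would of course also have to deal with mapping, ownership and
completion:

/*
 * Hypothetical sketch only -- no such code exists in the tree.  An
 * mbuf(9)-like chain of fixed-size segments describes one logical I/O
 * of arbitrary length; a driver walks the chain to build its
 * scatter/gather list instead of demanding one contiguous buffer.
 */
#include <stdio.h>
#include <stdlib.h>

#define XBUF_SEGSZ	(64 * 1024)	/* per-segment payload size */

struct xbuf {
	struct xbuf	*xb_next;	/* next segment, or NULL */
	size_t		 xb_len;	/* valid bytes in xb_data */
	char		 xb_data[XBUF_SEGSZ];
};

/* Build a chain long enough to describe 'total' bytes. */
static struct xbuf *
xbuf_alloc(size_t total)
{
	struct xbuf *head = NULL, **tailp = &head;

	while (total > 0) {
		struct xbuf *xb = calloc(1, sizeof(*xb));

		if (xb == NULL)
			abort();
		xb->xb_len = total < XBUF_SEGSZ ? total : XBUF_SEGSZ;
		total -= xb->xb_len;
		*tailp = xb;
		tailp = &xb->xb_next;
	}
	return head;
}

/* What a driver would do: emit one scatter/gather entry per segment. */
static void
xbuf_print_sglist(const struct xbuf *xb)
{
	int i;

	for (i = 0; xb != NULL; xb = xb->xb_next, i++)
		printf("sg[%d]: %p len %zu\n", i,
		    (const void *)xb->xb_data, xb->xb_len);
}

int
main(void)
{
	xbuf_print_sglist(xbuf_alloc(224 * 1024));
	return 0;
}

With something like this, a driver never needs one physically contiguous
buffer; whatever the FS hands down is already a segment list.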

One could (also) create a system where the FS asks the destination drive for
the maximum size it can transfer. By making it a vnode op, this size could be
trimmed down by whatever bus or drive code lies on the path.
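Roughly like this; no such vnode op exists, so take the sketch below as shape
only, with invented names and made-up limits:

#include <stdio.h>

/*
 * Invented names throughout.  Each layer asks the layer below it and
 * then trims the answer to its own limit, so the FS ends up with the
 * minimum over the whole path to the hardware.  The vnode-op version
 * would be something like VOP_MAXTRANSFER(vp, &size) recursing down
 * the device stack.
 */
struct layer {
	const char		*l_name;
	size_t			 l_limit;	/* own cap; 0 = no opinion */
	const struct layer	*l_below;	/* next layer toward the disk */
};

static size_t
layer_max_transfer(const struct layer *l, size_t wanted)
{
	size_t max;

	/* Ask downward first; the drive itself answers last. */
	max = l->l_below != NULL ?
	    layer_max_transfer(l->l_below, wanted) : wanted;
	if (l->l_limit != 0 && l->l_limit < max)
		max = l->l_limit;
	return max;
}

int
main(void)
{
	const struct layer drive = { "sd0", 8 * 1024 * 1024, NULL };
	const struct layer bus = { "ahc0", 256 * 1024, &drive };
	const struct layer fs = { "ffs", 0, &bus };

	/* Made-up limits; the adapter wins here and 262144 is printed. */
	printf("max transfer: %zu bytes\n",
	    layer_max_transfer(&fs, 16 * 1024 * 1024));
	return 0;
}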

> Of course SCSI disks can take *very* large transfers.  But most systems
> you can put SCSI disks on have other peripherals that are much, much
> less accommodating; and some impose arbitrary restrictions on DMA transfer
> size etc which will cause problems for _some_ ways you can attach SCSI
> disks but not others.  Ugh.

The specific SCSI driver could then, when the call on the device vnode reaches
it, adjust the size to the device's recommended size and/or let scsipi limit
it to what the bus it is connected to can handle, say via the bus_space_* code.
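And when a caller still hands down more than the bus can move at once, the
driver can simply issue the request in pieces. A toy userland sketch of that
loop (BUS_DMA_MAX is an invented number, not a real adapter limit):

#include <stdio.h>

#define BUS_DMA_MAX	(64 * 1024)	/* invented adapter DMA ceiling */

/* Break one large request into pieces the bus can actually move. */
static void
issue_chunked(size_t offset, size_t resid)
{
	while (resid > 0) {
		size_t chunk = resid < BUS_DMA_MAX ? resid : BUS_DMA_MAX;

		printf("cmd: offset %zu len %zu\n", offset, chunk);
		offset += chunk;
		resid -= chunk;
	}
}

int
main(void)
{
	issue_chunked(0, 224 * 1024);	/* e.g. a 224 KB request */
	return 0;
}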

> RAID controllers like CISS are a particular problem (as is RAIDframe) as
> they really want single transfers large enough that they can split them
> into individual requests of reasonable size for each element in the array;
> so if the disks don't perform well until you get to, say, a 32K I/O size,
> and you have 7 data disks in the array, you can want a 224K transfer size;
> but it can be quite difficult to persuade the filesystem to issue reads
> and writes that large.  LFS does this much better (sigh).

UDF is currently limited only by MAXPHYS but could in theory issue far larger
reads and writes on, say, sequential media; on RMW media it's limited by other
issues, but transfers could be glued :)
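By gluing I mean coalescing physically adjacent queued requests into one
bigger transfer before it hits the device. A toy sketch of the idea (plain
userland, not the actual UDF code):

#include <stdio.h>

struct req {
	size_t	r_off;	/* byte offset on the medium */
	size_t	r_len;
};

int
main(void)
{
	/* Three requests; the first two are adjacent and get glued. */
	struct req q[] = { { 0, 65536 }, { 65536, 65536 }, { 262144, 65536 } };
	size_t n = sizeof(q) / sizeof(q[0]);
	size_t i, j;

	for (i = 0; i < n; i = j) {
		size_t off = q[i].r_off, len = q[i].r_len;

		/* Glue on every neighbour that starts where we end. */
		for (j = i + 1; j < n && q[j].r_off == off + len; j++)
			len += q[j].r_len;
		printf("transfer: offset %zu len %zu\n", off, len);
	}
	return 0;
}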

With regards,
Reinoud
