tech-kern archive
Re: Scsipi guru needed...
On Sun, Nov 30, 2008 at 09:53:16PM +0100, Anders Magnusson wrote:
>
> No, I don't know, but it is far more than 64k. But that doesn't matter;
> using bs=64k when dd'ing to the raw device gives much better performance
> anyway. Besides, MAXPHYS should be fixed regardless.
When you say "MAXPHYS should be fixed anyway" what exactly do you have in
mind? Both endpoint devices and the buses they're on can have maximum
transfer size constraints, so some kind of inheritance scheme is needed;
and that looks to me (and to others who've looked at it, I believe) like
a considerable amount of work.
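To make the inheritance idea concrete, here is a minimal sketch of one way it
could work: walk the attachment path and take the minimum of every constraint
along it. The structure and field names below are invented for illustration;
nothing like them exists in the tree today.

    /*
     * Hypothetical sketch only: each device or bus node records its own
     * maximum transfer size (0 meaning "no constraint of its own"), and
     * the effective limit for a disk is the minimum over its attachment
     * path.  None of these names are existing NetBSD interfaces.
     */
    struct xfer_node {
            struct xfer_node *xn_parent;    /* bus/controller above us */
            size_t            xn_maxxfer;   /* 0 = unconstrained */
    };

    static size_t
    effective_maxxfer(const struct xfer_node *xn, size_t fallback)
    {
            size_t max = fallback;          /* e.g. the old global MAXPHYS */

            for (; xn != NULL; xn = xn->xn_parent)
                    if (xn->xn_maxxfer != 0 && xn->xn_maxxfer < max)
                            max = xn->xn_maxxfer;
            return max;
    }

The hard part is not the minimum itself but plumbing the result through every
layer that currently assumes a single global MAXPHYS.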
Of course SCSI disks can take *very* large transfers. But most systems
you can put SCSI disks on have other peripherals that are much, much
less accommodating; and some impose arbitrary restrictions on DMA transfer
size, etc., which will cause problems for _some_ ways you can attach SCSI
disks but not others. Ugh.
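For a single device with an arbitrary controller limit, the existing minphys
hook is roughly where such a clamp lives today; a sketch, with the 128k limit
and the driver name made up for the example:

    #include <sys/param.h>
    #include <sys/buf.h>

    /* Hypothetical controller DMA limit; the value is illustrative. */
    #define XX_MAX_DMA      (128 * 1024)

    static void
    xx_minphys(struct buf *bp)
    {
            /* Clamp the request to what the hardware can DMA... */
            if (bp->b_bcount > XX_MAX_DMA)
                    bp->b_bcount = XX_MAX_DMA;
            /* ...and still honour the global MAXPHYS limit. */
            minphys(bp);
    }

That works for clamping down, but it gives no way to advertise that a device
could usefully take transfers *larger* than MAXPHYS.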
RAID controllers like CISS are a particular problem (as is RAIDframe), since
they really want single transfers large enough that they can split them
into individual requests of a reasonable size for each element in the array.
So if the disks don't perform well until you get to, say, a 32K I/O size,
and you have 7 data disks in the array, you want a 224K transfer size;
but it can be quite difficult to persuade the filesystem to issue reads
and writes that large. LFS does this much better (sigh).
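Written out, the arithmetic in that example looks like this (the figures are
just the ones above, not real driver constants):

    /* Per-disk I/O size below which the members perform poorly. */
    #define STRIPE_UNIT     (32 * 1024)
    /* Data disks in the array (parity excluded). */
    #define DATA_DISKS      7

    /* A full-stripe transfer: 7 * 32K = 224K. */
    #define FULL_STRIPE     (STRIPE_UNIT * DATA_DISKS)

    /*
     * With MAXPHYS at the traditional 64k, one transfer covers at most
     * two of the seven members, so the controller (or RAIDframe) never
     * sees a full-stripe write it could pass through in one go.
     */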
--
Thor Lancelot Simon
tls%rek.tjls.com@localhost
"Even experienced UNIX users occasionally enter rm *.* at the UNIX
prompt only to realize too late that they have removed the wrong
segment of the directory structure." - Microsoft WSS whitepaper