Subject: Re: Ultra ATA support?
To: Curt Sampson <cjs@portal.ca>
From: Brian C. Grayson <bgrayson@ece.utexas.edu>
List: port-i386
Date: 09/26/1997 10:47:13
Curt Sampson wrote:
> 
> On Thu, 25 Sep 1997, Brian C. Grayson wrote:
> 
> >   We are planning on purchasing some IDE hard drives which
> > support the relatively new Ultra ATA 33MB/sec mode.
> 
> Is there even that much point to this? I've not seen good 7200 rpm
> IDE hard drives, yet, and even the high-end 5400 RPM drives (SCSI
> or IDE) can barely saturate an old 10 MBps Fast Narrow SCSI-II bus
> when used in pairs.

  Well, as I said, I'm a bit clueless about this stuff.  Here are
some of the specs on the drives we are looking at
(http://www.maxtor.com/ftp/pub/ide/dm1750d.txt, the DiamondMax
drives).  Judging from their web docs, and the mod dates on those
docs, these drives first became available in 3Q97.

  buffer size:  256K
  media transfer rate:	up to 14.0 MB/s
  IDE bus transfer rate:  up to 16.6 MB/s with PIO mode 4, up to
    33 MB/s with Ultra DMA.

  If every transfer really stages completely through the buffer
before the other side touches it, then I think the higher bus
bandwidth will make a difference.  However, if the IDE drive
signals an interrupt as soon as it has read the first of several
sectors into the buffer, so that the processor can be pulling
that sector out while the drive works on the next one
(overlapping the operations), then the advantage won't be as
great -- in that case the media is the real bottleneck.  But
this latter scheme requires the buffer to allow simultaneous
access by the processor and the drive, which is ``hard'', so I'd
_guess_ that the buffer doesn't work like that, i.e., it signals
an interrupt only once all N sectors are in the buffer, or once
it has written all N sectors to disk.

  In other words, do multisector transfers look like:
[cpu sec1->buff][cpu sec2->buff]...[drive sec1->disk][drive sec2->disk] ...

or is it pipelined:
[cpu sec1 -> buff][drive sec1 -> disk]
                  [cpu sec2 -> buff]  [drive sec2 -> disk]
                                      [cpu sec3 -> buff]  [drive sec3 -> disk]

  In the first case, improving the cpu/mem bandwidth directly
affects overall running time, while in the second case it only
shaves off the initial latency of the first sector access.
Since the code in wdc.c is structured as one big string-move, it
looks like the earlier non-pipelined case.
                          
  Brian
-- 
Brian Grayson (bgrayson@ece.utexas.edu)
Graduate Student, Electrical and Computer Engineering
The University of Texas at Austin
Office:  ENS 406       (512) 471-8011
Finger bgrayson@orac.ece.utexas.edu for PGP key.