Subject: Re: NCR Driver Problems
To: Don Lewis <gdonl@gv.ssi1.com>
From: Dave Rand <dlr@daver.bungi.com>
List: current-users
Date: 02/02/1996 20:25:30
[In the message entitled "Re: NCR Driver Problems" on Feb  1, 17:00, Don Lewis writes:]
> On Jan 31,  3:34pm, proprietor - Foo Bar And Grill wrote:
> } 
> } So, in other words, disksort() is now obsolete and should be replaced.
> 
> If the drive you're talking to only has a single actuator, then you still
> want to preprocess the queue with disksort() so that all the commands
> sent to the drive are for sectors that are close together.  The drive
> can then do the final optimization based on its real-time head and
> rotational position status.

disksort(), on modern drives, is pretty obsolete.

What is important is to pre-read data, preserve locality, and read/write
as much data in one command as is reasonable.
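
As a rough sketch (not any particular driver's code - the raw device
name below is made up), the difference between one large command and
a string of small ones looks like this:

    /*
     * Minimal sketch: one large sequential read instead of many
     * small ones.  /dev/rsd0c is a hypothetical raw device; use
     * whatever your system calls the disk's character device.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define SECTOR  512
    #define CHUNK   (128 * 1024)    /* 256 sectors in one command */

    int
    main(void)
    {
        char *buf = malloc(CHUNK);
        int fd = open("/dev/rsd0c", O_RDONLY);
        ssize_t n;

        if (fd < 0 || buf == NULL)
            return (1);

        /*
         * One 128K read costs one command overhead and at most one
         * seek.  Reading the same data as 256 separate 512-byte
         * commands costs 256 command overheads and gives the drive
         * no chance to stream the data off the platter.
         */
        n = read(fd, buf, CHUNK);
        printf("read %ld bytes in one command\n", (long)n);

        close(fd);
        free(buf);
        return (0);
    }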

One reason for this is that the 'geometry' of modern drives is
meaningless.  What you think is a perfectly reasonable read request,
not spanning tracks, may end up causing a seek anyway.  With variable
bit rates and sector packing, the number of sectors per track is no
longer anything that can be counted on.
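
A toy illustration (the zone table is invented - real drives keep the
equivalent in firmware and don't export it): under zoned recording,
the sectors-per-track figure depends on where the block lands, so any
host-side notion of 'track boundaries' is fiction:

    /*
     * Invented zone table, in the shape a zoned drive might use.
     * The arithmetic a host would do with one fixed sectors-per-
     * track value gives a different answer in every zone.
     */
    #include <stdio.h>

    struct zone {
        long first_lba;     /* first logical block in the zone */
        long tracks;        /* tracks in the zone */
        int  spt;           /* sectors per track in this zone */
    };

    static struct zone zones[] = {  /* outer zones pack more sectors */
        {     0, 400, 120 },
        { 48000, 400,  96 },
        { 86400, 400,  72 },
    };

    int
    main(void)
    {
        long samples[] = { 5000, 50000, 90000 };
        int i, j;

        for (j = 0; j < 3; j++) {
            long lba = samples[j];

            for (i = 0; i < 3; i++) {
                struct zone *z = &zones[i];
                long off = lba - z->first_lba;

                if (off >= 0 && off < z->tracks * (long)z->spt) {
                    printf("lba %ld: zone %d (%d sec/trk), "
                        "track %ld, sector %ld\n", lba, i,
                        z->spt, off / z->spt, off % z->spt);
                    break;
                }
            }
        }
        return (0);
    }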

The second problem is seeks themselves.  Seek time has not been linear
with the number of tracks crossed for about 15 years.  There are
several algorithms employed, depending on 'how far', 'which
direction', 'what temperature', and other more bizarre factors.  A 5
track seek may take as long as (or longer than) a 50 track seek.
Certain seek patterns, like a seek to track zero, may show even more
bizarre behavior (faster or slower than an equivalent seek in the
same direction).  To examine this, turn your drive's cache off (and
any second-level cache in the controller), and do several hundred
seeks at each distance, from 1 to 'many' tracks.  Now do some
random-direction seeks and plot the results.  Depending on the drive,
you will see 3 or more distinct 'humps' in the seek time when plotted
against the number of tracks.  Usually, seeks are broken into 'near',
'medium' and 'far', with special cases for the boundaries.
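
A rough sketch of such a test (the raw device name is made up, the
sectors-per-track constant is a guess per the above, and turning the
caches off is drive-specific, so it isn't shown):

    /*
     * Crude seek timing: for each track distance, seek home, then
     * seek out 'dist' tracks and time a one-sector read.  Repeating
     * each distance many times averages out rotational latency.
     * Output is "distance microseconds", ready to plot.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define SECTOR  512
    #define SPT     72          /* assumed sectors per track */
    #define MAXDIST 2000        /* largest distance to test */
    #define TRIALS  100

    int
    main(void)
    {
        int fd = open("/dev/rsd0c", O_RDONLY);  /* hypothetical */
        char buf[SECTOR];
        struct timeval t0, t1;
        long dist, total;
        int trial;

        if (fd < 0)
            return (1);

        for (dist = 1; dist <= MAXDIST; dist += 10) {
            total = 0;
            for (trial = 0; trial < TRIALS; trial++) {
                lseek(fd, (off_t)0, SEEK_SET);      /* home */
                read(fd, buf, SECTOR);

                gettimeofday(&t0, NULL);
                lseek(fd, (off_t)dist * SPT * SECTOR, SEEK_SET);
                read(fd, buf, SECTOR);
                gettimeofday(&t1, NULL);

                total += (t1.tv_sec - t0.tv_sec) * 1000000L +
                    (t1.tv_usec - t0.tv_usec);
            }
            printf("%ld %ld\n", dist, total / TRIALS);
        }
        close(fd);
        return (0);
    }

Plotted, the humps fall roughly where the firmware switches from one
seek algorithm to the next.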

The third problem is sector aliasing, or sector re-mapping.  Hopefully
a problem that can be ignored, it may cause an otherwise reasonable
drive to perform poorly under certain conditions.  A string of bad
sectors under a test directory can force bizarre, unexpected seeks to
the spare area at the top of the drive to read the remapped 'good'
sectors...  impossible to predict.
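
To illustrate (all numbers invented - the real table lives inside the
drive), what the host sees as a sequential read can turn into a trip
to the spare area and back:

    /*
     * Toy remap table.  Three "bad" blocks in the middle of the
     * disk have been re-mapped to spares near the end; a sequential
     * read through them becomes two long, invisible seeks.
     */
    #include <stdio.h>

    struct remap {
        long bad_lba;       /* block the host asked for */
        long spare_lba;     /* where the drive actually put it */
    };

    static struct remap table[] = {
        { 10000, 4190000 },
        { 10001, 4190001 },
        { 10002, 4190002 },
    };

    static long
    resolve(long lba)
    {
        int i;

        for (i = 0; i < 3; i++)
            if (table[i].bad_lba == lba)
                return (table[i].spare_lba);
        return (lba);
    }

    int
    main(void)
    {
        long lba;

        for (lba = 9999; lba <= 10003; lba++)
            printf("host lba %ld -> physical lba %ld\n",
                lba, resolve(lba));
        return (0);
    }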

The last problem is the drive 'helping' with its first-level cache.
Under DOS, these caches do good things.  Under UNIX - well, if it
doesn't really hurt the performance *too* badly, the drive's software
engineer did a spectacular job.

These factors change with every new drive - sometimes even with new
models of the same drive.  They are somewhere between hard and
impossible to model (unless you have inside knowledge of the drive at
hand).
So the best course, with modern drives, is to read and write as
much data as is possible in one operation, sequentially, and
hope for the best.

Yes, I did write drive firmware.

-- 
Dave Rand
Internet: dlr@daver.bungi.com