Subject: Re: RAW access to files
To: Chuck Silvers <chuq@chuq.com>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 12/12/2001 12:12:53
In message <20011211235842.A7183@spathi.chuq.com>, Chuck Silvers writes:

[  access patterns with poor (or negatively correlated) locality
  where readahead is obviously not a win, and may do positive harm]
  
> [...] for applications that do large runs of sequential i/o, the same logic
>applies as well, if the application isn't going to access the data multiple
>times and it uses i/os of at least 64k.  read ahead doesn't gain you all
>that much when you're doing large i/os, especially on modern disks that
>do read ahead into the cache in the disk.  

Yes, that is the conventional wisdom.

I have some experience with very large apps, processing much more
than 2^32 bits of data (think a linear pass over a terabyte or so,
updating a one-gigabyte array as it goes). There, we found a huge
performance win from doing reads in 1 or 2 Mbyte blocks -- the size of
the disk buffer -- and using POSIX aio() or real threads to schedule
read-ahead of those 2 Mbyte chunks. (At the time, this design forced a
non-BSD solution.)

mmap() was a nonstarter; the app didn't fit in physical memory anyway,
never mind the linear, once-only pass.

I didn't dare try an ffs with 1 Mbyte blocks, 128k fragments, and
2-block readahead: would that have worked?