Subject: Re: DEV_B_SIZE
To: None <firstname.lastname@example.org>
From: Steve Byan <email@example.com>
Date: 01/31/2003 12:03:44
On Friday, January 31, 2003, at 11:50 AM, firstname.lastname@example.org wrote:
> In message <4912E0FE-3539-11D7-B26B-00306548867E@maxtor.com>, Steve
> Byan writes
>> I'd appreciate hearing examples where hiding the underlying physical
>> block size would break a file system, database, transaction processing
>> monitor, or whatever. Please let me know if I may forward your reply
>> to the committee. Thanks.
> If by "hide" you mean that there will be no way to discover the
> smallest atomic unit of writes, then you are right: it would be bad.
The notion is that such a disk would be instantly compatible with
existing software, modulo performance issues. I suspect this is not the
case, and am searching for expert opinions on the matter.
> Provided we can get the size of the smallest atomic unit of writes
> in a standardized, documented, mandatory way, we will have no problem
> coping with it: Using a 4k size is no problem for our current
> filesystem technologies and device sizes.
Yes, I understand recompiling the world for 4K is possible. My question
is whether not doing so poses a data-integrity or failure-recovery risk.
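To make the risk concrete, here is a small sketch (hypothetical sizes: 512-byte emulated logical blocks on 4 KiB physical sectors) showing which writes force the drive into a read-modify-write cycle. Any write that only partially covers a physical sector must be merged with data read back from the media, which is exactly where a power failure can do damage.

```python
# Hypothetical illustration: map 512-byte logical writes onto 4 KiB
# physical sectors to see which writes force a read-modify-write (RMW).
# Sizes are assumptions for the sketch, not properties of any real drive.

LOGICAL = 512        # emulated logical block size (assumed)
PHYSICAL = 4096      # underlying physical sector size (assumed)

def physical_sectors_touched(lba, count):
    """Physical sectors spanned by a write of `count` logical blocks at `lba`."""
    start_byte = lba * LOGICAL
    end_byte = start_byte + count * LOGICAL
    first = start_byte // PHYSICAL
    last = (end_byte - 1) // PHYSICAL
    return list(range(first, last + 1))

def needs_rmw(lba, count):
    """A write needs RMW if it leaves some physical sector partially covered."""
    start_byte = lba * LOGICAL
    end_byte = start_byte + count * LOGICAL
    return start_byte % PHYSICAL != 0 or end_byte % PHYSICAL != 0

# A single 512-byte write always under-fills a 4 KiB sector -> RMW.
print(physical_sectors_touched(0, 1), needs_rmw(0, 1))    # [0] True
# An aligned 8-block (4 KiB) write covers exactly one sector -> no RMW.
print(physical_sectors_touched(8, 8), needs_rmw(8, 8))    # [1] False
# A misaligned 8-block write straddles two sectors -> RMW at both ends.
print(physical_sectors_touched(4, 8), needs_rmw(4, 8))    # [0, 1] True
```

This is also why "recompiling the world for 4K" helps: once every write is a multiple of the physical sector size and aligned to it, `needs_rmw` is never true.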
> It was my impression that already many drives write entire tracks
> as atomic units, at least we have had plenty of anecdotal evidence
> to this effect?
I'm not aware of any SCSI or ATA disks which do this; certainly no
Maxtor disk does. Count-key-data mainframe disks can be formatted to do
so, but such disks probably don't run Unix. Caching in ATA disks might
lead one to believe that the disk could corrupt an entire track, in the
sense that a panic (aka blue screen) or a power failure would cause all
pending writes in its buffer to be lost. But even in ATA-land, I don't
believe a power failure would result in more than one disk block
returning an uncorrectable read error.
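The single-block guarantee above is what an emulated-sector drive would weaken. A quick hypothetical model (again assuming 512-byte logical blocks on 4 KiB physical sectors): because the physical sector is rewritten as a unit during a read-modify-write, a torn write can take out neighboring logical blocks the host never wrote.

```python
# Hypothetical failure model for an emulated-512B drive: a power loss
# during the write-back phase of a read-modify-write rewrites a whole
# 4 KiB physical sector, so logical blocks the host never touched can
# come back unreadable. Sizes are assumptions for the sketch.

LOGICAL, PHYSICAL = 512, 4096
PER_SECTOR = PHYSICAL // LOGICAL   # 8 logical blocks per physical sector

def blocks_at_risk(lba):
    """All logical blocks sharing the physical sector containing `lba`.
    On a torn RMW, any of these may be lost, not just the block written."""
    sector = lba // PER_SECTOR
    return list(range(sector * PER_SECTOR, (sector + 1) * PER_SECTOR))

# Writing only logical block 10 puts blocks 8..15 at risk during the RMW.
print(blocks_at_risk(10))   # [8, 9, 10, 11, 12, 13, 14, 15]
```

Under this model, failure-recovery code that assumes "at most the block being written is torn" could be surprised by collateral damage within the same physical sector.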
Steve Byan <email@example.com>
333 South Street
Shrewsbury, MA 01545