Subject: Re: DEV_B_SIZE
To: None <email@example.com>
From: Steve Byan <firstname.lastname@example.org>
Date: 01/31/2003 13:41:35
On Friday, January 31, 2003, at 12:18 PM, email@example.com wrote:
> In message <F4D99E08-353D-11D7-B26B-00306548867E@maxtor.com>, Steve
> Byan writes
>> On Friday, January 31, 2003, at 11:50 AM, firstname.lastname@example.org wrote:
>>> In message <4912E0FE-3539-11D7-B26B-00306548867E@maxtor.com>, Steve
>>> Byan writes
>>>> I'd appreciate hearing examples where hiding the underlying physical
>>>> block size would break a file system, database, transaction
>>>> monitor, or whatever. Please let me know if I may forward your
>>>> comments to the committee. Thanks.
>>> If by "hide" you mean that there will be no way to discover the
>>> smallest atomic unit of writes, then you are right: it would be bad.
>> The notion is that such a disk would be instantly compatible with
>> existing software, modulo performance issues. I suspect this is not
>> the case, and am searching for expert opinions in this matter.
> I'm fine with that, as long as the disk somewhere in a data field
> we can query (if need be with a new request) exposes the smallest
> atomically writable unit.
> The only thing that exposes us to risk is we don't know the risk
> exists, so as long as the fact that a 4k physical sector size is
> used is not hidden from us, we can adapt.
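The queryable field asked for above exists on modern Linux as a pair of block-device ioctls. A minimal sketch, assuming a Linux host where the `BLKSSZGET`/`BLKPBSZGET` request numbers below match `<linux/fs.h>`; the device path and the helper name `sector_sizes` are illustrative:

```python
# Sketch: ask a Linux block device for its logical (addressable) and
# physical (smallest atomically writable) sector sizes via ioctl.
import fcntl
import struct

BLKSSZGET = 0x1268   # logical sector size, from <linux/fs.h>
BLKPBSZGET = 0x127b  # physical sector size, from <linux/fs.h>

def sector_sizes(path):
    """Return (logical, physical) sector sizes for a block device."""
    with open(path, "rb") as dev:
        logical = struct.unpack("I",
            fcntl.ioctl(dev, BLKSSZGET, struct.pack("I", 0)))[0]
        physical = struct.unpack("I",
            fcntl.ioctl(dev, BLKPBSZGET, struct.pack("I", 0)))[0]
    return logical, physical

# e.g. sector_sizes("/dev/sda") on a 512-byte-emulated 4K drive
# would report (512, 4096) -- the fact the poster wants exposed.
```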
But would existing code be functionally broken (perhaps with respect to
failure recovery) if it were not modified to adapt to a different
physical block size?
>> Yes, I understand recompiling the world for 4K is possible. My
>> question is whether not doing so poses a data-integrity /
>> failure-recovery risk.
Really? fsck can recover from losing 4K bytes surrounding the last
metadata block written?
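The failure mode behind that question can be sketched with a toy model (entirely hypothetical firmware behaviour, not a claim about any real drive): a disk with 4K physical sectors emulating 512-byte logical sectors must read-modify-write, so a power loss mid-write can tear logical sectors the host never touched in that request.

```python
# Toy model of 512-byte emulation on 4K physical sectors. A power loss
# during the read-modify-write tears the whole physical sector,
# destroying seven neighbouring logical sectors written long before.
PHYS = 4096  # physical (atomic) sector size
LOG = 512    # logical sector size presented to the host

class EmulatedDisk:
    def __init__(self, nphys):
        self.media = [bytearray(PHYS) for _ in range(nphys)]

    def write_logical(self, lba, data, power_loss=False):
        assert len(data) == LOG
        phys, off = divmod(lba * LOG, PHYS)
        sector = bytearray(self.media[phys])  # read
        sector[off:off + LOG] = data          # modify
        if power_loss:
            # interrupted media write: the whole 4K sector is torn
            self.media[phys] = bytearray(b"\xff" * PHYS)
            return
        self.media[phys] = sector             # write

disk = EmulatedDisk(4)
disk.write_logical(0, b"A" * LOG)                   # committed earlier
disk.write_logical(1, b"B" * LOG, power_loss=True)  # crash mid-RMW
# Logical sector 0 was never part of the failed request, yet it is
# gone too -- the collateral damage fsck does not expect.
```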
Steve Byan <email@example.com>
333 South Street
Shrewsbury, MA 01545