Current-Users archive


Re: cgd on dk on 4k block size disk



On Mon, May 27, 2013 at 09:05:20AM +0200, Thomas Klausner wrote:

> I put a plain ffs on it.

How?

>         128  732566272      1  GPT part - NetBSD FFSv1/FFSv2

That's in 4k sector units.

> CANNOT WRITE: BLK 5398972736
> CONTINUE? [yn] y

That's in 512 byte units.

> I get this message for every 8th BLK now. Looks like a bug?

Apparently fsck thinks the disk is using 512-byte sectors
and tries to write 4k blocks (== 8 sectors) in 512-byte chunks,
which doesn't work.
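
For the concrete numbers above (just arithmetic, for illustration):
BLK 5398972736 in 512-byte units is 5398972736 / 8 = 674871592 in
4k units, which is well inside the 732566272-sector partition. So
the block number itself is plausible; it's the 512-byte-sized I/O
to a 4k-sector device that gets rejected.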


That's a consequence of how fsck determines how to do I/O.


It starts by asking the device driver for the physical sector
size, falling back to DEV_BSIZE (512) if that query fails. The
fallback is also used when checking filesystem images with the
-F option.
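
In rough terms (a minimal sketch, not the actual fsck source; the
DIOCGDINFO ioctl used here is just one plausible way to express
the query):

    #include <sys/param.h>      /* DEV_BSIZE */
    #include <sys/ioctl.h>
    #include <sys/disklabel.h>  /* DIOCGDINFO, struct disklabel */

    /* Ask the driver for the sector size; fall back to DEV_BSIZE
     * (512) when the query fails, e.g. for a plain image file
     * checked with -F. */
    static long
    query_sector_size(int fd)
    {
            struct disklabel lab;

            if (ioctl(fd, DIOCGDINFO, &lab) == 0 && lab.d_secsize > 0)
                    return (long)lab.d_secsize;
            return DEV_BSIZE;
    }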

Then it reads the superblock and recalculates the sector size
according to the superblock parameters. This is because fsck
also works with "traditional" disks that cannot be queried
for the sector size and where the superblock is the only
source of information.
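
The recalculation boils down to something like this (a sketch
using the classic FFS superblock fields; the real code may differ
in detail):

    #include <ufs/ffs/fs.h>     /* struct fs, fsbtodb() */

    /* fs_fsize is the fragment size in bytes and fsbtodb(fs, 1) is
     * one fragment expressed in "disk blocks", so the quotient is
     * the sector size the superblock was written for.  A superblock
     * written for 512-byte sectors yields 512 here even when the
     * underlying disk really has 4k sectors. */
    static long
    sector_size_from_superblock(const struct fs *fs)
    {
            return fs->fs_fsize / fsbtodb(fs, 1);
    }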

But that's where things go wrong if the superblock doesn't reflect
reality. The kernel, on the other hand, ignores the superblock
parameters because it doesn't need them: the disk driver handles
the transformation of block coordinates.
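
That driver-side transformation looks, in simplified form, like
this (a sketch of the usual pattern, not any particular driver):

    #include <sys/param.h>      /* DEV_BSIZE */

    /* Buffers carry block numbers in DEV_BSIZE (512-byte) units;
     * the driver scales them to native sectors before issuing the
     * transfer (assuming secsize >= DEV_BSIZE).  E.g. blkno
     * 5398972736 on a 4096-byte-sector disk becomes sector
     * 674871592. */
    static long long
    blkno_to_sector(long long b_blkno, unsigned int secsize)
    {
            return b_blkno / (secsize / DEV_BSIZE);
    }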


newfs should take care of this when writing the superblock, but
only if you run it directly on the disk. That breaks down when you
first create an image and then copy it to the disk.
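
In that case you'd have to tell newfs the target sector size
yourself, e.g. something along these lines (check newfs(8) for the
exact options on your system):

    newfs -F -S 4096 image.ffs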



Greetings,
-- 
                                Michael van Elst
Internet: mlelstv%serpens.de@localhost
                                "A potential Snark may lurk in every tree."


