Re: cgd on dk on 4k block size disk
On Wed, Jun 05, 2013 at 11:10:58PM +0200, Thomas Klausner wrote:
> I've restarted from scratch (gpt destroy) and created the file system
> with "newfs ... -S 4096 ..." which gave me a working file system which
> survived a forced fsck.
What sector size did newfs see when you didn't use -S 4096?
Did you just add -S 4096 and everything seemed to work? Or did you
also specify the filesystem size with -s?
newfs gets the information for the disk in terms of
- sector size
- number of sectors
For an image file (-F option), or when stat() returns a size
for your block/char device (which ours regularly do not do),
the number of sectors is computed from that size.
Otherwise, the number of sectors is taken from the geometry.
-S overrides only the sector size;
-s overrides only the number of sectors.
So, just specifying -S for a disk device keeps the number of sectors
but changes the sector size and thus the size of the disk.
Please check the output of newfs, it will tell you the assumed size
of the disk in bytes and in sectors.
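To illustrate the point above, here is a hedged back-of-the-envelope sketch
(the sector count 1953525168 is a made-up example for a nominal 1 TB disk,
not taken from the original poster's setup): keeping the geometry's sector
count while overriding only the sector size inflates the disk size newfs
assumes by the same factor.

```shell
# Hypothetical geometry: 1953525168 sectors of 512 bytes (~1 TB).
sectors=1953525168

# Size newfs would compute from the native 512-byte geometry:
echo "512-byte sectors:  $((sectors * 512)) bytes"

# With only -S 4096, the sector count is unchanged, so the
# assumed disk size grows by a factor of 8:
echo "-S 4096 only:      $((sectors * 4096)) bytes"

# To keep the real disk size, the sector count must shrink by
# the same factor, e.g. by also passing -s:
echo "-S 4096 -s $((sectors / 8)): $((sectors / 8 * 4096)) bytes"
```

Comparing the first and last lines of output against the byte size printed
by newfs is a quick way to see whether it assumed the size you intended.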
> After an accidental reboot, I ran fsck again and got lots of warnings
> about 'can't write block SOMEHIGHNUMBER' (or similar) but fsck -y
> worked and the file system seems ok.
> Do you think it's probable that there are remaining issues where the
> block numbers are not written correctly, or do you think it was just
> random corruption from the reboot?
Were the high numbers in the range of 8 times the expected block
number or something really huge?
If something really huge, it could be random content (like some old cgd
data) in place of a metadata block that wasn't written out before the
reboot.
If something only somewhat too large, it could also be badly computed
block numbers because of a wrong geometry used when creating the
file system.
Michael van Elst
"A potential Snark may lurk in every tree."