Current-Users archive


Re: fsck failing during setinodebuf on medium sized raid



    Date:        Sun, 4 Sep 2022 09:54:37 -0700
    From:        "Stephen M. Jones" <cirr%sdf.org@localhost>
    Message-ID:  <5668C5A7-471A-4C20-A802-C5CF17A9256D%sdf.org@localhost>

  | amanda#newfs -02 -F -s 105468549120 /dev/rld1d

I assume the -02 is really -O2, but the problem here is that the
filesystem is being generated with so many inodes (2152008960) that,
when interpreted as 32-bit signed numbers, some of them (the
final 4525312) end up with what appear to be negative inode numbers.
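
To make the arithmetic concrete, a minimal sketch (2152008960 is
65165 cylinder groups times 33024 inodes per group, and everything
past INT32_MAX looks negative when squeezed through a signed 32-bit
type):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint32_t total = 65165u * 33024u;   /* 2152008960 inodes */

        printf("total inodes:   %" PRIu32 "\n", total);
        /* how many lie past INT32_MAX, i.e. look negative when signed */
        printf("past INT32_MAX: %" PRIu32 "\n",
            total - (uint32_t)INT32_MAX - 1);

        /* the last inode number, viewed both ways */
        printf("as unsigned: %" PRIu32 "\n", total - 1);
        printf("as signed:   %" PRId32 "\n", (int32_t)(total - 1));
        return 0;
    }

That second printf produces exactly the 4525312 you're seeing.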

The quick fix (since it is hard to imagine that you really need
2 billion inodes) would be to reduce the number of cylinder groups
(from 65165) or the number of inodes per group (from 33024).
The -g, -i, or -n options to newfs can accomplish the latter (to a
degree; I still always end up with way more inodes than I need).
I don't know if there is an easy way (or any way) to change the
cylinder group size.
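
For a rough feel for what -i (bytes of data per inode) buys,
illustrative arithmetic only, assuming the -s figure counts 512-byte
sectors (so roughly a 54 TB filesystem; newfs's real allocation
policy rounds things its own way):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* -s 105468549120 taken as 512-byte sectors: ~54 TB */
        uint64_t bytes = 105468549120ULL * 512;

        /* the default density that produced 2152008960 inodes */
        printf("bytes per inode now:  %" PRIu64 "\n",
            bytes / 2152008960ULL);

        /* newfs -i 1048576: one inode per MiB of data */
        printf("inodes at -i 1048576: %" PRIu64 "\n",
            bytes / 1048576);
        return 0;
    }

That drops the inode count to around 51 million, comfortably below
2^31.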

There are a whole bunch of bugs being exposed here. Inode numbers in
the filesystem should never be negative, and something is broken in
fsck that is printing inode numbers as 64 bits (probably the %jd with
an (intmax_t) cast trick, which is probably not appropriate here: the
number of bits in an inode number depends upon the filesystem
definition, and is unrelated to the size of integers on the host
system).
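
Something like this, with made-up variable names, is what I suspect
is happening in the printing path: once the value has passed through
a signed 32-bit type, the (intmax_t) cast sign-extends it and %jd
dutifully prints a negative number:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint32_t ino = 2152008959u;   /* a valid inode past 2^31 */

        /* broken: the value passes through a signed 32-bit type,
         * the (intmax_t) cast sign-extends, %jd prints garbage */
        printf("broken: %jd\n", (intmax_t)(int32_t)ino);

        /* the inode number's width comes from the filesystem;
         * print it unsigned at that width */
        printf("fixed:  %" PRIu32 "\n", ino);
        return 0;
    }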

And of course, the system probably should not panic when it encounters
a negative inode number (which it never would if inode numbers were
treated as unsigned).
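
Sketched with hypothetical names, this is the kind of sanity check
that would do it (the panic() here is just a stand-in, not the
kernel's):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical stand-in for the kernel's panic() */
    static void
    panic(const char *msg)
    {
        fprintf(stderr, "panic: %s\n", msg);
        exit(1);
    }

    static void
    check_inode(int32_t ino)   /* signed 32-bit: the bug */
    {
        if (ino < 0)
            panic("negative inode number");
    }

    int
    main(void)
    {
        uint32_t ino = 2152008959u;   /* valid, in the final groups */

        check_inode((int32_t)ino);    /* panics: looks negative */

        /* with an unsigned ino_t-style type, ino < 0 can never be
         * true and this failure mode simply does not exist */
        return 0;
    }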

Your earlier panic was quite likely related: the system spreads files
out over cylinder groups, so eventually the final one (or two) will
be used, and then the panic that you saw fsck cause would happen.

kre


