Current-Users archive
Re: pathological cvs recursion ?
In Message <20151218022717.GA3932@slave.private>,
Paul Ripke <stix%stix.id.au@localhost> wrote:
=>On Thu, Dec 17, 2015 at 08:24:44PM -0500, gary%duzan.org@localhost wrote:
=>> =>
=>> => bch <brad.harder%gmail.com@localhost> writes:
=>> =>
=>> =>> I've run into this a few times:
=>> =>>
=>> =>> U
=>> =>>
=>> external/bsd/libc++/dist/libcxx/test/libcxx/experimental/containers/sequences/src/external/gpl3/binutils/dist/opcodes/aarch64-tbl.h
=>> =>>
=>> =>> where sub-trees seem to be recursively re-added (see
=>> =>> .../src/external/gpl3... as part of ./src/external/bsd/...).
=>> =>
=>> => I would unmount the fs and run fsck. I have seen some strange things
=>> => which were due to filesystem damage.
=>> =>
=>> => Then, I'd remove the subtree and update again.
=>>
=>> I've seen this quite a bit on the Xen DOMU that I use for building
=>> NetBSD. So often that I ended up doing umount/newfs/mount/checkout on my
=>> src LV instead of just updating. Every once in a while I try an update;
=>> sometimes it is fine, but other times not. I also just saw it in a
=>> pkgsrc tree I updated on another box recently. After an rm -rf on the
=>> broken tree a subsequent update succeeded, but I expect it could happen
=>> again. In case it matters, I'm using an rsync repo clone and accessing
=>> it over ssh.
=>>
=>> Gary Duzan
=>
=>I've seen filesystem corruption, which I now believe to be caused by
=>"rsync --del" access patterns, a number of times over the last year.
=>For now, I've switched to "rsync --delete-delay", and have yet to see a
=>recurrence.
=>
=>Ref, long thread over the last year:
=>https://mail-index.netbsd.org/tech-kern/2014/08/29/msg017597.html
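For reference, the two rsync modes being compared look roughly like
this (the mirror URL and local path are placeholders, not necessarily
what either of us actually runs):

  # --del (an alias for --delete-during) removes vanished files while
  # the transfer is still in flight, the access pattern suspected above
  rsync -az --del rsync://anoncvs.NetBSD.org/cvsroot/ /usr/netbsd-cvs/

  # --delete-delay defers all deletions until the transfer has finished
  rsync -az --delete-delay rsync://anoncvs.NetBSD.org/cvsroot/ /usr/netbsd-cvs/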
My repo filesystem seems ok.
# fsck -f /usr/netbsd-cvs
** /dev/rxbd3d
** File system is already clean
** Last Mounted on /usr/netbsd-cvs
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
660135 files, 10936349 used, 9707225 free (49689 frags, 1207192 blocks, 0.2% fragmentation)
I ran a level 0 dump on it (roughly as sketched below), and it did
not complain. FWIW, the
filesystem is WAPBL, the DOM0 is NetBSD 6.1_STABLE amd64, xbd3 is
an LV in vg0, and vg0 has a single PV on a RAID1 raidframe. Simple.
:-) The DOMU is 7.0_RC3 amd64.
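That level 0 dump was roughly of this form; the /dev/null destination
here is just for the read test, and note that dump reads the raw
device rather than going through the mounted filesystem:

  dump -0 -f /dev/null /dev/rxbd3d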
I'll try tar on the mounted filesystem (roughly as sketched below) in
case it is a kernel filesystem issue. It feels more like a client
issue, though. My src filesystem is even more layered:
ffs+wapbl>lv>vg>2*pv>dk>gpt>xbd>DOM0-lv>vg1>pv>dk>gpt>raid0>2*raid1>2*dk>gpt.
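The tar pass I have in mind is roughly the following, reading every
file back through the mounted filesystem (the path is the repo mount
fsck'd above; adjust as needed):

  tar cf /dev/null /usr/netbsd-cvs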
Gary Duzan