Subject: Adding new disk; partitions not empty
To: port-mac68k@netbsd.org
From: Christopher P. Gill <cpg@scs.howard.edu>
List: port-mac68k
Date: 06/28/1999 04:21:57
Greetings, all.
I've been following the discussion about adding a disk, and have recently
added a second NetBSD disk to my Quadra 800 (hitherto 40/500, GENERIC
kernel). Unfortunately, I think something has gone awry, and I'd like
some help. Essentially, my brand-new partitions are mountable but don't
appear to be empty; they look progressively fuller in the order of their
creation/indexing. A full (i.e., very long) description follows.
I formatted and partitioned the disk (SCSI ID 1) into six partitions using
Apple's Drive Setup 1.7.3. I started with HFS partitions, which seemed fine
under MacOS, then unmounted them and ran the Mkfs utility to convert the
partitions to NetBSD as follows:
root
swap
usr
usr
usr
usr
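(For reference, the eventual idea is fstab entries along these lines, using
the test mount points I describe further down. This is only a sketch of my
intent; nothing is in /etc/fstab yet:
/dev/sd1a  /bkproot  ffs  rw  1  2
/dev/sd1e  /tmp2     ffs  rw  1  2
/dev/sd1f  /pkg      ffs  rw  1  2
/dev/sd1g  /work     ffs  rw  1  2
/dev/sd1h  /home2    ffs  rw  1  2
)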
I didn't run the installer, since I already boot from sd0, and I'll use
dump/restore to make a backup of my root partition from sd0 onto sd1
(roughly the pipeline sketched after the probe messages below). Booting
up in NetBSD (from sd0) shows that all my disks are recognized:
==>>
sd0 at scsibus0 targ 0 lun 0: <SEAGATE, ST3600N, 8674> SCSI1 0/direct fixed
sd0: 500MB, 1872 cyl, 7 head, 78 sec, 512 bytes/sect x 1025920 sectors
sd1 at scsibus0 targ 1 lun 0: <IBM, CP30540 545MB !Q, ADB7> SCSI2 0/direct fixed
sd1: 520MB, 2242 cyl, 6 head, 79 sec, 512 bytes/sect x 1065912 sectors
sd2 at scsibus0 targ 2 lun 0: <QUANTUM, LP80S 980809404, 2.9> SCSI2 0/direct fixed
sd2: 80MB, 921 cyl, 4 head, 44 sec, 512 bytes/sect x 164139 sectors
cd0 at scsibus0 targ 3 lun 0: <MATSHITA, CD-ROM CR-8004A, 2.0a> SCSI2 5/cdrom removable
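(For the backup step, the plan is a pipeline roughly like the following;
/bkproot is just the test directory I use further down, and I haven't
actually run this yet:
# mount /dev/sd1a /bkproot
# dump -0 -f - / | (cd /bkproot && restore -rf -)
)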
disklabel reports that my partitions are present:
==>>
# disklabel sd1
# /dev/rsd1c:
type: SCSI
disk: CP30540 545MB !
label: fictitious
flags:
bytes/sector: 512
sectors/track: 79
tracks/cylinder: 6
sectors/cylinder: 474
cylinders: 2242
total sectors: 1065912
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0 # milliseconds
track-to-track seek: 0 # milliseconds
drivedata: 0
8 partitions:
#        size   offset    fstype   [fsize bsize   cpg]
  a:    82655      704    4.2BSD        0     0     0   # (Cyl.    1*-  175*)
  b:    82655    83359      swap                        # (Cyl.  175*-  350*)
  c:  1065912        0    unused        0     0         # (Cyl.    0 - 2248*)
  d:      512      192   unknown                        # (Cyl.    0*-    1*)
  e:   166406   326789    4.2BSD        0     0     0   # (Cyl.  689*- 1040*)
  f:   411881   493195    4.2BSD        0     0     0   # (Cyl. 1040*- 1909*)
  g:   160775   166014    4.2BSD        0     0     0   # (Cyl.  350*-  689*)
  h:   160826   905076    4.2BSD        0     0     0   # (Cyl. 1909*- 2248*)
disklabel: boot block size 0
disklabel: super block size 0
I ran fsck on each partition (redundant lines omitted):
==>>
** /dev/rsd1a
2 files, 9 used, 39861 free (21 frags, 4980 blocks, 0.1% fragmentation)
** /dev/rsd1b
2 files, 9 used, 39861 free (21 frags, 4980 blocks, 0.0% fragmentation)
** /dev/rsd1e
2 files, 9 used, 80297 free (17 frags, 10035 blocks, 0.0% fragmentation)
** /dev/rsd1f
2 files, 9 used, 199002 free (18 frags, 24873 blocks, 0.0% fragmentation)
** /dev/rsd1g
2 files, 9 used, 77625 free (17 frags, 9701 blocks, 0.0% fragmentation)
** /dev/rsd1h
2 files, 9 used, 77651 free (19 frags, 9704 blocks, 0.0% fragmentation)
I answered "n" to each question about whether or not to mark that
filesystem clean.
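(In each case that was just plain fsck on the raw slice, e.g.:
# fsck /dev/rsd1a
with no special options.)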
I mounted all the partitions on some test directories to check things out:
==>>
# mount -vr /dev/sd1a /bkproot
exec: mount_ffs -o ro /dev/sd1a /bkproot
/dev/sd1a on /bkproot type ffs (local, read-only)
# mount -vr /dev/sd1f /pkg
exec: mount_ffs -o ro /dev/sd1f /pkg
/dev/sd1f on /pkg type ffs (local, read-only)
# mount -vr /dev/sd1e /tmp2
exec: mount_ffs -o ro /dev/sd1e /tmp2
/dev/sd1e on /tmp2 type ffs (local, read-only)
# mount -vr /dev/sd1h /home2
exec: mount_ffs -o ro /dev/sd1h /home2
/dev/sd1h on /home2 type ffs (local, read-only)
# mount -vr /dev/sd1g /work
exec: mount_ffs -o ro /dev/sd1g /work
/dev/sd1g on /work type ffs (local, read-only)
So far so good. The problem is this: each of the new filesystems should
be empty, but df reports that almost all of them contain data, with the
used space climbing suspiciously from one partition to the next:
==>>
# df -k /bkproot /pkg /tmp2 /home2 /work
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/sd1a 39870 9 35874 0% /bkproot
/dev/sd1f 436691 237689 155332 60% /pkg
/dev/sd1e 237680 157383 56529 73% /tmp2
/dev/sd1h 514351 436700 26215 94% /home2
/dev/sd1g 157374 79749 61887 56% /work
I find the behaviour a little odd, but I won't rule out that I might be
doing something wrong. Do I need some incantation with disklabel or
newfs? Should I have done this in an entirely different manner? Is this
a bug? Is there some weirdness with this model drive?
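(By "incantation" I mean something along the lines of re-running newfs by
hand on each slice, e.g.:
# newfs /dev/rsd1e
but that's just a guess on my part; I haven't tried it yet.)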
/*======================================================================
"Don't die wondering..." http://www.cldc.howard.edu/~cpg
email: cpg@scs.howard.edu
chris out- Christopher P. Gill
peace. C.L.D.C. Senior System Operator (Ret.)
======================================================================*/