NetBSD-Users archive


Re: Zfs on NetBSD



On second thought, 'zpool scrub' worked as expected; the amount of data
initially copied was simply not enough for the scrub to take noticeable time.

It does take a lot of memory, though - as expected, during the tar copy:
...
Memory: 8387M Act, 4101M Inact, 40K Wired, 38M Exec, 12G File, 31M Free
... (on a 20GB laptop).
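(A quick way to confirm a scrub actually ran is to look at the completion time stamp on the `scan:` line of `zpool status` before and after. A minimal sketch of extracting it with sed; the sample text here is pasted from the status output quoted below, whereas on a live system you would pipe `zpool status tank` in instead:)

```shell
# Sample `zpool status` output, copied from the thread.
status='  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jul 28 13:32:08 2019'

# Strip everything up to and including "errors on ", leaving the time stamp.
scrub_time=$(printf '%s\n' "$status" | sed -n 's/.*errors on //p')
echo "last scrub finished: $scrub_time"
```

If the time stamp advances after each `zpool scrub`, the scrub did run; it just completes almost instantly on a nearly empty pool.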

On Sun, 28 Jul 2019 at 12:35, Chavdar Ivanov <ci4ic4%gmail.com@localhost> wrote:
>
> For quite a while I hadn't tried ZFS under NetBSD; it used to crash
> for me under load years ago and, it is fair to say, didn't seem to be
> much of a focus of development. Following this thread, I decided to
> give it a go. There was a presently unused 32GB mSATA card in one of
> my laptops, which I unmounted; there was no need to clean the labels
> at all.
> On -current 8.99.51 from a few days ago everything seems to be working
> fine for me:
> ...
> modload zfs
> modstat zfs
> zpool create tank /dev/wd2d
> zpool status
> df -k
> ls -la /tank
> zfs create tank/t1
> zfs create tank/t2
> zfs create tank/t3
> df -k
> zpool status
> zpool scrub tank
> zpool status
> .....
> Some 13GB worth of packages were also tarred over with reasonable speed.
>
> I am not sure if 'zpool scrub' actually does something, though - even
> when there is some data on the disk, the subsequent 'zpool status'
> claims the scrub has finished straight away:
> ...
> # zpool status tank
>   pool: tank
>  state: ONLINE
>   scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jul 28 13:23:47 2019
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         tank        ONLINE       0     0     0
>           wd2d      ONLINE       0     0     0
>
> errors: No known data errors
> # zpool scrub tank
> # zpool status tank
>   pool: tank
>  state: ONLINE
>   scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jul 28 13:32:08 2019
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         tank        ONLINE       0     0     0
>           wd2d      ONLINE       0     0     0
>
> errors: No known data errors
> ......
>
> Chavdar
>
> On Sun, 28 Jul 2019 at 07:54, Greg Troxel <gdt%s1.lexort.com@localhost> wrote:
> >
> > Ron Georgia <netverbs%gmail.com@localhost> writes:
> >
> > > Yes, I do have /dev/zfs.
> > > $ ll /dev/zfs
> > > crw-------  1 root  wheel  190, 0 Jul 21 15:23 /dev/zfs
> > >
> > > I did find the zfs.mod, but get this error when trying to load it.
> > > $ sudo modload /stand/amd64/8.1/modules/zfs/zfs.kmod
> > > modload: /stand/amd64/8.1/modules/zfs/zfs.kmod: Program version wrong
> > >
> > > Which makes sense since I am booting kernel NetBSD 8.99.51 (GENERIC).
> > > I did pull down the sets for NetBSD 8.99.51 (GENERIC) and unpacked base.tar.xz and modules.tar.xz. Then I copied the contents of stand to /stand/amd64/8.99.51. (Should I remove /stand/amd64/8.1?)
> >
> > Basically, you need consistent kernel and modules.  So if you have moved
> > to current permanently, yes, delete the 8.1 modules.
> >
> > Also, to run current zfs, it seems overwhelmingly likely that you want
> > to run the zfs userland binaries from current, not from 8.1.
> >
> > You may want to look at the various schemes for in-place updating, such
> > as INSTALL-NetBSD from pkgsrc/sysutils/etcmanage (my take on how to do
> > it), and sysupgrade (somebody else's take).
> >
> > I unpack all the sets except etc/xetc, unpack etc/xetc into
> > /usr/netbsd-etc, and then merge the etc changes.
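(The in-place update Greg describes could be sketched roughly as below. The set list, download directory, and dry-run wrapper are assumptions for illustration, not his actual script:)

```shell
# Hypothetical sketch of unpacking all sets except etc/xetc, which are
# extracted to /usr/netbsd-etc for a manual merge instead.
SETS="base comp games man misc modules rescue text"
SRC=/tmp/sets        # assumed location of the downloaded .tar.xz sets
DRYRUN=echo          # remove (set to empty) to actually extract

for s in $SETS; do
    # Extract each set over the live system, preserving permissions.
    $DRYRUN tar -C / -xJpf "$SRC/$s.tar.xz"
done

# etc goes to a staging directory so local /etc changes can be merged by hand.
$DRYRUN mkdir -p /usr/netbsd-etc
$DRYRUN tar -C /usr/netbsd-etc -xJpf "$SRC/etc.tar.xz"
```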
> >
> > > [ 11951.1654531] WARNING: module error: module `zfs' built for `801000000', system `899005100'
> > > [ 12310.9533135] WARNING: module error: module `zfs' built for `801000000', system `899005100'
> > > [ 12509.3029082] WARNING: module error: recursive load failed for `zfs' (`solaris' required), error 2
> > > [  30.9168505] WARNING: module error: incompatible module class for `zfs' (3 != 2)
> > > [  30.9769426] WARNING: module error: incompatible module class for `zfs' (3 != 2)
> >
> > That really looks like you are loading 8.1 modules.  rm them, and maybe
> > you will get a different error.
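(The `built for`/`system` numbers in those warnings are `__NetBSD_Version__` values, encoded as major * 100000000 + minor * 1000000 + patchlevel * 100, so they can be decoded to confirm the mismatch Greg points out:)

```shell
# Decode a __NetBSD_Version__ value into major.minor.patch form.
decode_ver() {
    v=$1
    echo "$(( v / 100000000 )).$(( (v / 1000000) % 100 )).$(( (v / 100) % 10000 ))"
}

decode_ver 801000000   # what the module was built for -> 8.1.0
decode_ver 899005100   # the running kernel            -> 8.99.51
```

That is, the zfs module on disk is the 8.1 build, loaded on an 8.99.51 kernel.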
> >
>
>
> --
> ----



-- 
----

