NetBSD-Users archive


Re: zfs pool behavior - is it ever freed?



I have a bit of data, merged with some off-list comments:

  People say that a 16G machine is ok with zfs, and I have seen no
  reports of real trouble.

  When I run my box with 4G, it locks up.

  When I run my box with 8G, I end up with pool usage in the 3G to
  3.5G range.  It feels like there's a limit, as I've never seen it
  above 3.5G.  vmstat -m says (after a lot of things happening; see
  the parsing sketch after the data):
    In use 1975994K, total allocated 3110132K; utilization 63.5%

  On machines I have handy to check without zfs (amd64 if not labeled):
    In use 198214K, total allocated 217912K; utilization 91.0%
       (1G, n9 rpi3, operating near RAM capacity)
    In use 67140K, total allocated 71664K; utilization 93.7%
       (1G, n9 rpi3, doing very little)
    In use 813025K, total allocated 864324K; utilization 94.1%
       (4G, n9, operates a backup disk (ufs2) and little else)
    In use 901729K, total allocated 975280K; utilization 92.5%
       (4G, n9, router and various home servers)
    In use 574035K, total allocated 652188K; utilization 88.0%
       (5G, n9, no building, mail+everything_else server)
    In use 2841803K, total allocated 3120148K; utilization 91.1%
       (24G, n9, 14G tmpfs, has built a lot of packages)
  
  On the zfs box, the big users are:
    zio_buf_512 dnode_t dmu_buf_impl zio_buf_16384 zfs_znode_cache
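
For anyone who wants to reproduce or extend these numbers, here is a
minimal parsing sketch.  It assumes vmstat -m's usual layout (pool
name followed by Size/Requests/Fail/Releases columns, with the
"In use ..." summary at the end) and estimates per-pool in-use memory
as (Requests - Releases) * Size; adjust the field positions if your
output differs.

  #!/usr/bin/env python3
  """Rank pools from vmstat -m by estimated in-use memory.

  Assumptions (may vary across NetBSD versions): per-pool lines
  start with the pool name followed by numeric columns
    Size Requests Fail Releases Pgreq Pgrel Npage ...
  and in-use memory is roughly (Requests - Releases) * Size.
  """
  import re
  import subprocess

  out = subprocess.run(["vmstat", "-m"], capture_output=True,
                       text=True, check=True).stdout

  pools = []
  for line in out.splitlines():
      f = line.split()
      # Heuristic: a pool name plus at least the four numeric
      # columns Size/Requests/Fail/Releases.
      if len(f) >= 5 and all(c.isdigit() for c in f[1:5]):
          size, reqs, _fail, rels = (int(c) for c in f[1:5])
          pools.append((f[0], (reqs - rels) * size))

  # Print the ten biggest consumers, in kilobytes.
  for name, in_use in sorted(pools, key=lambda p: p[1],
                             reverse=True)[:10]:
      print(f"{name:24s} {in_use // 1024:>10}K")

  # The summary line vmstat -m prints at the end.
  m = re.search(r"In use (\d+)K, total allocated (\d+)K", out)
  if m:
      used, alloc = (int(g) for g in m.groups())
      print(f"utilization: {100 * used / alloc:.1f}%")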


My conclusions:

  Generally in NetBSD, pool usage for caching scales appropriately with
  RAM and/or responds to pressure.  That's why we see almost no reports
  of trouble except for zfs.

  A machine without zfs that is in the 4G class will use 0.5-1G for pools.

  With zfs, a 4G machine and an 8G machine both tend to end up around
  3.5G for pools.  It seems that zfs uses 2.5-3G, regardless of what's
  available.

  Thus it seems there is a limit on zfs pool usage, but that limit is
  sometimes simply too high for the available RAM.

  Utilization is particularly poor on the zfs machine, 64% vs 88-94% for
  the rest.

  Our howto should say:

    32G is pretty clearly enough.  Nobody thinks there will be trouble.
    16G is highly likely enough; we have no reports of trouble.
    8G will probably work, but is ill-advised for production use.
    4G will not work; we have no reports of successful long-term
    operation.

    When you run out, it's ugly.  An external watchdog tickle gated on
    sync(8) works to reboot a wedged box (see the sketch below); other
    watchdog approaches are unclear.
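
    A minimal sketch of the external-tickle idea.  The tickle command
    here is a placeholder for whatever refreshes your particular
    watchdog (a GPIO poke, a wdogctl(8) invocation, a serial write);
    the point is that the tickle happens only after sync(8) returns,
    so a wedged box stops tickling and the watchdog reboots it.

      #!/usr/bin/env python3
      """Tickle an external watchdog, gated on sync(8).

      TICKLE_CMD is hypothetical: replace it with whatever
      refreshes your watchdog.  If the machine wedges, sync(8)
      never returns, the tickle stops, and the watchdog fires.
      """
      import subprocess
      import time

      TICKLE_CMD = ["/usr/local/bin/tickle-watchdog"]  # site-specific
      INTERVAL = 10  # seconds; keep well under the wdog timeout

      while True:
          # sync(8) blocks while dirty data is flushed; on a wedged
          # kernel it may never return, which is exactly the failure
          # we want the watchdog to catch.
          subprocess.run(["/bin/sync"], check=True)
          subprocess.run(TICKLE_CMD, check=True)
          time.sleep(INTERVAL)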


Additional data is welcome, of course.

