Port-xen archive


Re: one remaining mystery about the FreeBSD domU failure on NetBSD XEN3_DOM0



At Sun, 11 Apr 2021 13:55:36 -0700, "Greg A. Woods" <woods%planix.ca@localhost> wrote:
Subject: Re: one remaining mystery about the FreeBSD domU failure on NetBSD XEN3_DOM0
>
> Definitely writing to a FreeBSD domU filesystem, i.e. to a FreeBSD
> xbd(4) with a new filesystem created on it, is impossible.


So, having run out of "easy" ideas, and working under the assumption
that this must be a problem in the NetBSD-current dom0 (i.e. not likely
in Xen or the Xen tools), I've been scanning through recent changes,
and this one is, so far, the one that seems to me to have at least some
tiny possibility of being the root cause.


    RCS file: /cvs/master/m-NetBSD/main/src/sys/arch/xen/xen/xbdback_xenbus.c,v
    ----------------------------
    revision 1.86
    date: 2020-04-21 06:56:18 -0700;  author: jdolecek;  state: Exp;  lines: +175 -47;  commitid: 26JkIx2V3sGnZf5C;
    add support for indirect segments, which makes it possible to pass
    up to MAXPHYS (implementation limit, interface allows more) using
    single request

    request using indirect segment requires 1 extra copy hypercall per
    request, but saves 2 shared memory hypercalls (map_grant/unmap_grant),
    so should be net performance boost due to less TLB flushing

    this also effectively doubles disk queue size for xbd(4)
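
For reference, and from my own reading of Xen's public io/blkif.h (so
treat the details below as my paraphrase rather than gospel): with
indirect descriptors the ring request no longer embeds its segment
list (classically at most 11 entries); instead it carries grant
references for extra pages, each of which is simply an array of the
ordinary segment descriptors:

    /*
     * Segment descriptor roughly as defined in Xen's public io/blkif.h,
     * shown only to illustrate what an indirect page is filled with.
     */
    #include <stdint.h>

    typedef uint32_t grant_ref_t;

    struct blkif_request_segment {
        grant_ref_t gref;       /* grant reference for the I/O buffer frame */
        uint8_t     first_sect; /* first 512-byte sector of the frame used  */
        uint8_t     last_sect;  /* last 512-byte sector of the frame used   */
    };

Each 4 KiB indirect page can hold 4096 / sizeof(struct
blkif_request_segment) = 512 of those, so the limit of 17 advertised
below comes from MAXPHYS on the NetBSD side, not from the descriptor
page itself.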


I don't see anything obviously wrong with it, and of course this code
is working A-OK on these same machines with NetBSD-5 and NetBSD-current
(and originally somewhat older NetBSD-8.99) domUs.

However I'm really not very familiar with this code, or with the specs
for what it should be doing, so I'm unlikely to be able to spot
anything that's missing.  I did read the following, which mostly
reminded me to look in the xenstore db to see what
feature-max-indirect-segments is set to by default:

https://xenproject.org/2013/08/07/indirect-descriptors-for-xen-pv-disks/


Here's what is stored for a file-backed device:

backend = ""
 vbd = ""
  3 = ""
   768 = ""
    frontend = "/local/domain/3/device/vbd/768"
    params = "/build/images/FreeBSD-12.2-RELEASE-amd64-mini-memstick.img"
    script = "/etc/xen/scripts/block"
    frontend-id = "3"
    online = "1"
    removable = "0"
    bootable = "1"
    state = "4"
    dev = "hda"
    type = "phy"
    mode = "r"
    device-type = "disk"
    discard-enable = "0"
    vnd = "/dev/vnd0d"
    physical-device = "3587"
    hotplug-status = "connected"
    sectors = "792576"
    info = "4"
    sector-size = "512"
    feature-flush-cache = "1"
    feature-max-indirect-segments = "17"


Here's what's stored for an LVM-LV backed vbd:

162 = ""
 2048 = ""
  frontend = "/local/domain/162/device/vbd/2048"
  params = "/dev/mapper/vg1-fbsd--test.0"
  script = "/etc/xen/scripts/block"
  frontend-id = "162"
  online = "1"
  removable = "0"
  bootable = "1"
  state = "4"
  dev = "sda"
  type = "phy"
  mode = "r"
  device-type = "disk"
  discard-enable = "0"
  physical-device = "43285"
  hotplug-status = "connected"
  sectors = "83886080"
  info = "4"
  sector-size = "512"
  feature-flush-cache = "1"
  feature-max-indirect-segments = "17"


So "17" seems an odd number, but it is apparently because of "Need to
alloc one extra page to account for possible mapping offset".  It is
currently the maximum for indirect-segments, and it's hard-coded.
(Linux apparently has a max of 256, and the linux blkfront defaults to
only using 32.)  Maybe it should be "16", so matching max_request_size?
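
To spell the arithmetic out (my own back-of-the-envelope numbers,
using NetBSD's 64 KiB MAXPHYS and the 4 KiB x86 page size; the macro
names below are mine, not the kernel's):

    /*
     * Why "17": a MAXPHYS-sized transfer covers 16 whole pages, plus
     * one extra page when the buffer doesn't start page-aligned.
     */
    #include <assert.h>

    #define XEN_PAGE_SIZE   4096u           /* x86 page size             */
    #define NBSD_MAXPHYS    (64u * 1024u)   /* NetBSD per-transfer limit */

    int
    main(void)
    {
        unsigned whole_pages  = NBSD_MAXPHYS / XEN_PAGE_SIZE;   /* 16 */
        unsigned max_segments = whole_pages + 1;                /* 17 */

        assert(max_segments == 17);
        return 0;
    }

That also lines up with the FreeBSD frontend's
dev.xbd.0.max_request_size of 65536 shown further down.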



I did take a quick gander at the related code in FreeBSD (both the domU
code that's talking to this code in NetBSD, and the dom0 code that
would be used if dom0 were running FreeBSD), and besides seeing that it
is quite different, I also don't see anything obviously wrong or
incompatible there either.  (I do note that the FreeBSD equivalent of
xbdback(4) has the major advantage of being able to access files
directly, i.e. without the need for vnd(4).  Not quite as exciting as
full 9pfs mounts through to domUs would be, but still pretty neat!)

FreeBSD's equivalent to xbdback(4) (i.e. sys/dev/xen/blkback/blkback.c)
doesn't seem to mention "feature-max-indirect-segments", so apparently
they don't offer it yet, though it does mention "feature-flush-cache".

However their front-end code does detect it and seems to make use of
it, and has done so for some six years now according to "git blame"
(with no recent fixes beyond plugging a memory leak on their end).
Here it is live in the FreeBSD domU's sysctl output, hence my concern
that this feature may be the source of the problem:

hw.xbd.xbd_enable_indirect: 1
dev.xbd.0.max_request_size: 65536
dev.xbd.0.max_request_segments: 17
dev.xbd.0.max_requests: 32
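
(For completeness, the frontend-side negotiation presumably boils down
to something like the sketch below.  This is my own illustration: the
read_backend_uint() helper and the 256 limit are hypothetical
stand-ins, not the actual FreeBSD xbd(4) code.)

    /*
     * Hypothetical sketch of how a frontend might pick its segment
     * count; read_backend_uint() stands in for the real xenstore
     * accessors and just returns what this dom0 advertises.
     */
    #include <string.h>

    #define FRONTEND_MAX_INDIRECT_SEGMENTS  256 /* assumed frontend limit   */
    #define BLKIF_MAX_SEGMENTS_PER_REQUEST   11 /* classic non-indirect max */

    static unsigned
    read_backend_uint(const char *key, unsigned defval)
    {
        if (strcmp(key, "feature-max-indirect-segments") == 0)
            return 17;      /* value seen in xenstore above */
        return defval;
    }

    static unsigned
    negotiate_max_segments(void)
    {
        /* A missing key (0 here) means no indirect support in the backend. */
        unsigned backend_max =
            read_backend_uint("feature-max-indirect-segments", 0);

        if (backend_max == 0)
            return BLKIF_MAX_SEGMENTS_PER_REQUEST;

        /* Never use more segments than either end can handle. */
        return backend_max < FRONTEND_MAX_INDIRECT_SEGMENTS ?
            backend_max : FRONTEND_MAX_INDIRECT_SEGMENTS;
    }

The dev.xbd.0.max_request_segments value of 17 above does suggest the
frontend simply adopted whatever the backend advertised.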

--
					Greg A. Woods <gwoods%acm.org@localhost>

Kelowna, BC     +1 250 762-7675           RoboHack <woods%robohack.ca@localhost>
Planix, Inc. <woods%planix.com@localhost>     Avoncote Farms <woods%avoncote.ca@localhost>



