Re: one remaining mystery about the FreeBSD domU failure on NetBSD XEN3_DOM0



At Wed, 14 Apr 2021 19:53:47 +0200, Jaromír Doleček <jaromir.dolecek%gmail.com@localhost> wrote:
Subject: Re: one remaining mystery about the FreeBSD domU failure on NetBSD XEN3_DOM0
> 
> You can test if this is the problem by disabling the feature in
> negotiation in NetBSD xbdback.c - comment out the code which sets
> feature-max-indirect-segments in xbdback_backend_changed(). With the
> feature disabled, FreeBSD DomU should not use indirect segments.

OK, first off, the behaviour of the bug didn't change at all.  Also,
FreeBSD did not do quite what I expected -- apparently I hadn't read
their code as carefully as I'd thought.  It seems they don't directly
report the maximum number of indirect segments; somehow they hide that
part.
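
For reference, what I commented out on the dom0 side was (roughly) the
bit of xbdback's backend-changed routine that advertises the key, i.e.
something along these lines (a sketch from memory only, not verbatim
source; the variable and macro names may well differ):

#if 0	/* XXX testing: don't advertise indirect-segment support */
	error = xenbus_printf(xbt, xbusd->xbusd_path,
	    "feature-max-indirect-segments", "%u",
	    VBD_MAX_INDIRECT_SEGMENTS);	/* macro name approximate */
	if (error)
		goto abort;
#endif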

From the dom0 side things look as I think they should,
i.e. feature-max-indirect-segments no longer appears in xenstore:

# xl block-list fbsd-test
Vdev  BE  handle state evt-ch ring-ref BE-path                       
2048  0   2      4     47     8        /local/domain/0/backend/vbd/2/2048
2064  0   2      4     48     10       /local/domain/0/backend/vbd/2/2064
768   0   2      4     49     11       /local/domain/0/backend/vbd/2/768
# xenstore-ls /local/domain/0/backend/vbd/2
2048 = ""
 frontend = "/local/domain/2/device/vbd/2048"
 params = "/dev/mapper/scratch-fbsd--test.0"
 script = "/etc/xen/scripts/block"
 frontend-id = "2"
 online = "1"
 removable = "0"
 bootable = "1"
 state = "4"
 dev = "sda"
 type = "phy"
 mode = "w"
 device-type = "disk"
 discard-enable = "1"
 physical-device = "43266"
 sectors = "62914560"
 info = "0"
 sector-size = "512"
 feature-flush-cache = "1"
 hotplug-status = "connected"
2064 = ""
 frontend = "/local/domain/2/device/vbd/2064"
 params = "/dev/mapper/scratch-fbsd--test.1"
 script = "/etc/xen/scripts/block"
 frontend-id = "2"
 online = "1"
 removable = "0"
 bootable = "1"
 state = "4"
 dev = "sdb"
 type = "phy"
 mode = "w"
 device-type = "disk"
 discard-enable = "1"
 physical-device = "43267"
 hotplug-status = "connected"
 sectors = "62914560"
 info = "0"
 sector-size = "512"
 feature-flush-cache = "1"
768 = ""
 frontend = "/local/domain/2/device/vbd/768"
 params = "/build/images/FreeBSD-12.2-RELEASE-amd64-mini-memstick.img"
 script = "/etc/xen/scripts/block"
 frontend-id = "2"
 online = "1"
 removable = "0"
 bootable = "1"
 state = "4"
 dev = "hda"
 type = "phy"
 mode = "r"
 device-type = "disk"
 discard-enable = "0"
 vnd = "/dev/vnd0d"
 physical-device = "3587"
 sectors = "792576"
 info = "4"
 sector-size = "512"
 feature-flush-cache = "1"
 hotplug-status = "connected"


However, FreeBSD now says:

# sysctl dev.xbd
dev.xbd.2.xenstore_peer_path: /local/domain/0/backend/vbd/2/768
dev.xbd.2.xenbus_peer_domid: 0
dev.xbd.2.xenbus_connection_state: Connected
dev.xbd.2.xenbus_dev_type: vbd
dev.xbd.2.xenstore_path: device/vbd/768
dev.xbd.2.features: flush
dev.xbd.2.ring_pages: 1
dev.xbd.2.max_request_size: 40960
dev.xbd.2.max_request_segments: 11
dev.xbd.2.max_requests: 32
dev.xbd.2.%parent: xenbusb_front0
dev.xbd.2.%pnpinfo: 
dev.xbd.2.%location: 
dev.xbd.2.%driver: xbd
dev.xbd.2.%desc: Virtual Block Device
dev.xbd.1.xenstore_peer_path: /local/domain/0/backend/vbd/2/2064
dev.xbd.1.xenbus_peer_domid: 0
dev.xbd.1.xenbus_connection_state: Connected
dev.xbd.1.xenbus_dev_type: vbd
dev.xbd.1.xenstore_path: device/vbd/2064
dev.xbd.1.features: flush
dev.xbd.1.ring_pages: 1
dev.xbd.1.max_request_size: 40960
dev.xbd.1.max_request_segments: 11
dev.xbd.1.max_requests: 32
dev.xbd.1.%parent: xenbusb_front0
dev.xbd.1.%pnpinfo: 
dev.xbd.1.%location: 
dev.xbd.1.%driver: xbd
dev.xbd.1.%desc: Virtual Block Device
dev.xbd.0.xenstore_peer_path: /local/domain/0/backend/vbd/2/2048
dev.xbd.0.xenbus_peer_domid: 0
dev.xbd.0.xenbus_connection_state: Connected
dev.xbd.0.xenbus_dev_type: vbd
dev.xbd.0.xenstore_path: device/vbd/2048
dev.xbd.0.features: flush
dev.xbd.0.ring_pages: 1
dev.xbd.0.max_request_size: 40960
dev.xbd.0.max_request_segments: 11
dev.xbd.0.max_requests: 32
dev.xbd.0.%parent: xenbusb_front0
dev.xbd.0.%pnpinfo: 
dev.xbd.0.%location: 
dev.xbd.0.%driver: xbd
dev.xbd.0.%desc: Virtual Block Device
dev.xbd.%parent: 


For reference, it said this previously (e.g. for dev.xbd.0):

dev.xbd.0.xenstore_peer_path: /local/domain/0/backend/vbd/2/2048
dev.xbd.0.xenbus_peer_domid: 0
dev.xbd.0.xenbus_connection_state: Connected
dev.xbd.0.xenbus_dev_type: vbd
dev.xbd.0.xenstore_path: device/vbd/2048
dev.xbd.0.features: flush
dev.xbd.0.ring_pages: 1
dev.xbd.0.max_request_size: 65536
dev.xbd.0.max_request_segments: 17
dev.xbd.0.max_requests: 32
dev.xbd.0.%parent: xenbusb_front0
dev.xbd.0.%pnpinfo: 
dev.xbd.0.%location: 
dev.xbd.0.%driver: xbd
dev.xbd.0.%desc: Virtual Block Device
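
FWIW the new numbers are at least self-consistent with the feature
really being off now: 11 segments is the classic blkif per-request
limit (i.e. no indirect segments), and both the old and new
max_request_size values fall out of a (segments - 1) * PAGE_SIZE
calculation.  I'm assuming that's how their blkfront derives
max_request_size -- the macro below is just my guess at what's in
their block.h, not verified:

/* standalone arithmetic check, not FreeBSD source */
#include <stdio.h>

#define PAGE_SIZE		4096
#define SEGS_TO_SIZE(segs)	(((segs) - 1) * PAGE_SIZE)

int
main(void)
{
	printf("11 segments -> %d bytes\n", SEGS_TO_SIZE(11));	/* 40960 */
	printf("17 segments -> %d bytes\n", SEGS_TO_SIZE(17));	/* 65536 */
	return 0;
}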




For reference, the bug behaviour remains the same (at least for this
quickest and simplest of tests):

# newfs /dev/da0
/dev/da0: 30720.0MB (62914560 sectors) block size 32768, fragment size 4096
        using 50 cylinder groups of 626.09MB, 20035 blks, 80256 inodes.
super-block backups (for fsck_ffs -b #) at:
 192, 1282432, 2564672, 3846912, 5129152, 6411392, 7693632, 8975872, 10258112, 11540352, 12822592, 14104832, 15387072, 16669312,
 17951552, 19233792, 20516032, 21798272, 23080512, 24362752, 25644992, 26927232, 28209472, 29491712, 30773952, 32056192, 33338432,
 34620672, 35902912, 37185152, 38467392, 39749632, 41031872, 42314112, 43596352, 44878592, 46160832, 47443072, 48725312, 50007552,
 51289792, 52572032, 53854272, 55136512, 56418752, 57700992, 58983232, 60265472, 61547712, 62829952
# fsck /dev/da0
** /dev/da0
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
CG 0: BAD CHECK-HASH 0x49168424 vs 0xe610ac1b
SUMMARY INFORMATION BAD
SALVAGE? [yn] n

BLK(S) MISSING IN BIT MAPS
SALVAGE? [yn] n

CG 1: BAD CHECK-HASH 0xfa76fceb vs 0xb9e90a55
CG 2: BAD CHECK-HASH 0x41f444c vs 0x5efb290e
CG 3: BAD CHECK-HASH 0xad63fe7e vs 0x7ab3861f
CG 4: BAD CHECK-HASH 0xfd2043f3 vs 0xadb781f4
CG 5: BAD CHECK-HASH 0x545cf9c1 vs 0xcec5661e
CG 6: BAD CHECK-HASH 0xaa354166 vs 0x7dd269d3
CG 7: BAD CHECK-HASH 0x349fb54 vs 0x3078e065
CG 8: BAD CHECK-HASH 0xab23a7c vs 0xc8aa7e98
CG 9: BAD CHECK-HASH 0xa3ce804e vs 0x205a6b0d
CG 10: BAD CHECK-HASH 0x5da738e9 vs 0x604d5ecf
CG 11: BAD CHECK-HASH 0xf4db82db vs 0xfef11ffc
CG 12: BAD CHECK-HASH 0xa4983f56 vs 0xc7e701c8
CG 13: BAD CHECK-HASH 0xde48564 vs 0x42072fba
CG 14: BAD CHECK-HASH 0xf38d3dc3 vs 0xad98cf7b
CG 15: BAD CHECK-HASH 0x5af187f1 vs 0xbacadeb1
CG 16: BAD CHECK-HASH 0xe07abf93 vs 0xe4ca225
CG 17: BAD CHECK-HASH 0x490605a1 vs 0xe2917802
CG 18: BAD CHECK-HASH 0xb76fbd06 vs 0xa895abc
CG 19: BAD CHECK-HASH 0x1e130734 vs 0x6a8bc135
CG 20: BAD CHECK-HASH 0x4e50bab9 vs 0x44719a4a
CG 21: BAD CHECK-HASH 0xe72c008b vs 0xadb0c6e9
CG 22: BAD CHECK-HASH 0x1945b82c vs 0x3aeca102
CG 23: BAD CHECK-HASH 0xb039021e vs 0xb99f957d
CG 24: BAD CHECK-HASH 0xb9c2c336 vs 0xd384be85
CG 25: BAD CHECK-HASH 0x10be7904 vs 0x649e2abf
CG 26: BAD CHECK-HASH 0xeed7c1a3 vs 0x95f79999
CG 27: BAD CHECK-HASH 0x47ab7b91 vs 0x3fb02d8b
CG 28: BAD CHECK-HASH 0x17e8c61c vs 0xa2b4ca67
CG 29: BAD CHECK-HASH 0xbe947c2e vs 0x65972e04
CG 30: BAD CHECK-HASH 0x40fdc489 vs 0x4219223f
CG 31: BAD CHECK-HASH 0xe9817ebb vs 0x36eb9a37
CG 32: BAD CHECK-HASH 0x3007c2bc vs 0xd1916e1d
CG 33: BAD CHECK-HASH 0x997b788e vs 0x5204f64d
CG 34: BAD CHECK-HASH 0x6712c029 vs 0xe291bcf0
CG 35: BAD CHECK-HASH 0xce6e7a1b vs 0x136ff032
CG 36: BAD CHECK-HASH 0x9e2dc796 vs 0x78ea85c8
CG 37: BAD CHECK-HASH 0x37517da4 vs 0x40c2cf31
CG 38: BAD CHECK-HASH 0xc938c503 vs 0x9b844ab6
CG 39: BAD CHECK-HASH 0x60447f31 vs 0x23129481
CG 40: BAD CHECK-HASH 0x69bfbe19 vs 0xa81f5e9
CG 41: BAD CHECK-HASH 0xc0c3042b vs 0xbd37ebd1
CG 42: BAD CHECK-HASH 0x3eaabc8c vs 0xfadfd8d1
CG 43: BAD CHECK-HASH 0x97d606be vs 0xf41513bc
CG 44: BAD CHECK-HASH 0xc795bb33 vs 0xad4e6069
CG 45: BAD CHECK-HASH 0x6ee90101 vs 0xbeab94a9
CG 46: BAD CHECK-HASH 0x9080b9a6 vs 0x2688acd1
CG 47: BAD CHECK-HASH 0x39fc0394 vs 0xb5a37e85
CG 48: BAD CHECK-HASH 0x83773bf6 vs 0xd779cc90
CG 49: BAD CHECK-HASH 0xe0d3fd3c vs 0xb8083ca
2 files, 2 used, 7612693 free (21 frags, 951584 blocks, 0.0% fragmentation)

***** FILE SYSTEM MARKED DIRTY *****

***** PLEASE RERUN FSCK *****

-- 
					Greg A. Woods <gwoods%acm.org@localhost>

Kelowna, BC     +1 250 762-7675           RoboHack <woods%robohack.ca@localhost>
Planix, Inc. <woods%planix.com@localhost>     Avoncote Farms <woods%avoncote.ca@localhost>



