RAIDframe (mirror) crash reconstruction higher priority than user ops?
NetBSD slave 9.0_STABLE NetBSD 9.0_STABLE (SLAVE) #1: Sat Apr 25 11:26:56 AEST 2020 stix@slave:/home/netbsd/netbsd-9/obj.amd64/home/netbsd/netbsd-9/src/sys/arch/amd64/compile/SLAVE amd64
Noticed this after a couple of power losses due to circuit breaker trips,
during the "parity" rewrite scan after boot; raid0 is a mirror on wd{0,1}.
Note the asvc_t spiking on raid0:
slave:ksh$ iostat -xy raid0 wd0 wd1 1
device read KB/t   r/s   MB/s write KB/t   w/s   MB/s   wait   actv   wsvc_t   asvc_t   wtime   time
wd0        64.00   972  60.72       9.21    20   0.18    0.0    1.2     0.00     1.22    0.00   0.54
wd1        62.20  1007  61.17       9.20    20   0.18    0.0    3.6     0.00     3.47    0.00   0.91
raid0      13.18    35   0.45       9.35    19   0.18    2.7    3.6    49.87    65.87    0.20   0.93
wd0        64.00  1072  66.98       0.00     0   0.00    0.0    0.9     0.00     0.86    0.00   0.92
wd1        64.00  1071  66.91       0.00     0   0.00    0.0    6.4     0.00     6.00    0.00   1.00
raid0       0.00     0   0.00       0.00     0   0.00    1.0    6.0     0.00     0.00    1.00   1.00
wd0        64.00   181  11.30      13.49   134   1.77    0.0    0.1     0.00     0.37    0.00   0.09
wd1        56.62   208  11.49      13.45   134   1.77    0.0    6.6     0.00    19.28    0.00   1.00
raid0      15.83    32   0.49      13.45   134   1.77    1.0    5.6     6.07    33.84    0.33   1.00
wd0        64.00   283  17.68      14.73    46   0.66    0.0    0.2     0.00     0.57    0.00   0.18
wd1        56.77   342  18.94      14.73    48   0.68    0.0    3.6     0.00     9.13    0.00   0.60
raid0      14.12    49   0.68      16.00    44   0.68    8.6    5.3    92.01    56.90    0.65   1.00
wd0        64.00  1056  66.00       0.00     0   0.00    0.0    0.9     0.00     0.86    0.00   0.91
wd1        64.00  1057  66.06       0.00     0   0.00    0.0    6.4     0.00     6.07    0.00   1.00
raid0       0.00     0   0.00       0.00     0   0.00    2.0    6.0     0.00     0.00    1.00   1.00
wd0        64.00   917  57.34       0.00     0   0.00    0.0    0.6     0.00     0.63    0.00   0.57
wd1        63.16   931  57.45       0.00     0   0.00    0.0    6.6     0.00     7.08    0.00   1.00
raid0      11.33    24   0.26       0.00     0   0.00    1.6    5.8    68.03   245.42    0.83   1.00
wd0        64.00   671  41.94       0.50     1   0.00    0.0    0.2     0.00     0.35    0.00   0.24
wd1        62.03   697  42.21       0.50     1   0.00    0.0    6.9     0.00     9.85    0.00   1.00
raid0       8.56    20   0.17       0.00     0   0.00    0.0    6.0     0.00   300.11    0.00   1.00
wd0        64.00   761  47.57       0.00     0   0.00    0.0    0.5     0.00     0.70    0.00   0.53
wd1        61.77   797  48.06       0.00     0   0.00    0.0    6.4     0.00     7.99    0.00   1.00
raid0      17.67    33   0.57       0.00     0   0.00    0.0    5.8     0.00   174.51    0.00   1.00
wd0        64.00   608  37.98       0.50     1   0.00    0.0    0.3     0.00     0.42    0.00   0.26
wd1        61.97   636  38.51       0.50     1   0.00    0.0    5.2     0.00     8.11    0.00   0.87
raid0      15.63    25   0.39       0.00     0   0.00    0.0    5.8     0.00   231.57    0.00   0.99
wd0        64.00   781  48.81       0.50     1   0.00    0.0    0.7     0.00     0.86    0.00   0.67
wd1        62.19   810  49.17       0.00     0   0.00    0.0    5.4     0.00     6.70    0.00   1.00
raid0      14.73    30   0.43       0.00     0   0.00    0.1    4.9     2.00   165.73    0.02   1.00
wd0        64.00   377  23.55      12.90    24   0.30    0.0    0.4     0.00     0.89    0.00   0.34
wd1        56.31   389  21.38      12.34    22   0.27    0.0    6.0     0.00    14.60    0.00   1.00
raid0      15.77    62   0.95      12.34    22   0.27    1.8    5.2    21.79    61.81    0.38   1.00
wd0        64.00   600  37.49       6.60     4   0.03    0.0    0.4     0.00     0.74    0.00   0.45
wd1        62.04   705  42.69       6.60     5   0.03    0.0    5.0     0.00     6.98    0.00   1.00
raid0      14.07    28   0.38       6.60     5   0.03    0.3    4.3     7.88   129.14    0.11   1.00
wd0        64.00   363  22.66      10.44    25   0.25    0.0    0.3     0.00     0.84    0.00   0.31
wd1        57.71   417  23.53      10.14    21   0.20    0.0    6.5     0.02    14.93    0.01   1.00
raid0      15.27    53   0.80      10.14    20   0.20    1.5    5.7    20.14    77.00    0.61   1.00
wd0        64.00  1070  66.87       0.00     0   0.00    0.0    0.6     0.00     0.52    0.00   0.55
wd1        64.00  1069  66.81       0.00     0   0.00    0.0    6.7     0.00     6.26    0.00   1.00
raid0       0.00     0   0.00       0.00     0   0.00    2.0    6.0     0.00     0.00    1.00   1.00
wd0        64.00   868  54.25      14.00     8   0.11    0.0    0.3     0.00     0.40    0.00   0.35
wd1        63.27   881  54.42      12.75     8   0.10    0.0    6.8     0.00     7.68    0.00   1.00
raid0      13.69    13   0.17      12.75     8   0.10    1.8    6.0    85.75   288.20    0.95   1.00
wd0        64.00   926  57.90       0.00     0   0.00    0.0    0.6     0.00     0.61    0.00   0.56
wd1        63.69   932  57.99       0.00     0   0.00    0.0    6.6     0.01     7.12    0.01   1.00
raid0      16.00     6   0.09       0.00     0   0.00    1.4    6.0   233.42  1010.38    1.00   1.00
wd0        64.00  1075  67.18       0.50     2   0.00    0.0    0.2     0.00     0.21    0.00   0.22
wd1        64.00  1076  67.26       0.50     2   0.00    0.0    6.9     0.00     6.42    0.00   1.00
raid0       0.00     0   0.00       0.00     0   0.00    2.0    6.0     0.00     0.00    1.00   1.00
wd0        12.02    76   0.89      13.13   202   2.59    0.0    1.0     0.00     3.54    0.00   0.73
wd1        14.56    32   0.46      13.15   197   2.52    0.0    2.6     0.00    11.57    0.00   1.00
raid0      12.81   105   1.31      13.15   197   2.52    0.8    3.6     2.54    12.02    0.23   1.00
Needless to say, anything trying to do I/O during this time suffers
badly. I haven't looked into what priority RAIDframe gives its internal
ops; is this expected?
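(For anyone wanting to watch the same thing: the rewrite's progress can
be checked alongside the iostat above with raidctl's status flag, e.g.

slave:ksh$ raidctl -S raid0    # reports parity rewrite/reconstruction progress

with raid0 as above.)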
I appear to have BUFQ_PRIOCSCAN set; I wonder if that might be the cause.
I think I'll try switching back to BUFQ_READPRIO...
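To be concrete, a minimal sketch of the kernel config change I mean, in
the SLAVE config (followed by a rebuild and reboot):

#options 	BUFQ_PRIOCSCAN		# per-disk I/O queue: priority cyclical scan
options 	BUFQ_READPRIO		# per-disk I/O queue: reads ahead of writes

If I remember right, dkctl(8) can also switch a disk's buffer queue
strategy at runtime, without a rebuild, e.g.:

slave:ksh$ dkctl wd0 strategy readprio
slave:ksh$ dkctl wd1 strategy readprio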
--
Paul Ripke
"Great minds discuss ideas, average minds discuss events, small minds
discuss people."
-- Disputed: Often attributed to Eleanor Roosevelt. 1948.