NetBSD-Bugs archive
RE: kern/57404: Can't see NVMe drives on ASUS Rampage VI mb in DIMM slot
The following reply was made to PR kern/57404; it has been noted by GNATS.
From: Namdak Tonpa <maxim%synrc.com@localhost>
To: matthew green <mrg%eterna.com.au@localhost>, "gnats-bugs%netbsd.org@localhost"
<gnats-bugs%netbsd.org@localhost>
Cc: "kern-bug-people%netbsd.org@localhost" <kern-bug-people%netbsd.org@localhost>,
"gnats-admin%netbsd.org@localhost" <gnats-admin%netbsd.org@localhost>, "netbsd-bugs%netbsd.org@localhost"
<netbsd-bugs%netbsd.org@localhost>
Subject: RE: kern/57404: Can't see NVMe drives on ASUS Rampage VI mb in DIMM slot
Date: Tue, 23 May 2023 20:07:20 +0000
Sure, here is my dmesg: https://gist.github.com/5HT/d00e14b9fbf73f3fcb332d101a64feb9
From: matthew green <mrg%eterna.com.au@localhost>
Sent: Sunday, May 14, 2023 9:55 AM
To: gnats-bugs%netbsd.org@localhost; maxim%synrc.com@localhost
Cc: kern-bug-people%netbsd.org@localhost; gnats-admin%netbsd.org@localhost; netbsd-bugs%netbsd.org@localhost
Subject: re: kern/57404: Can't see NVMe drives on ASUS Rampage VI mb in DIMM slot
> NetBSD localhost 9.3 NetBSD 9.3 (GENERIC) #0: Thu Aug 4 15:30:37 UTC 2022 mkrepro%mkrepro.NetBSD.org@localhost:/usr/src/sys/arch/amd64/compile/GENERIC amd64
> >Description:
> [ 1.042790] nvme3 at pci15 dev 0 function 0: vendor 15b7 product 5011 (rev. 0x01)
> [ 1.042790] nvme3: NVMe 1.4
> [ 1.042790] nvme3: for admin queue interrupting at msix11 vec 0
> [ 1.042790] nvme3: WDS100T1X0E-00AFY0, firmware 614600WD, serial 2136HR449906
> [ 1.042790] nvme3: autoconfiguration error: unable to establish nvme3 ioq1 interrupt
> [ 1.042790] nvme3: autoconfiguration error: unable to create io queue
>
> [ 1.042790] nvme4 at pci17 dev 0 function 0: vendor 144d product a808 (rev. 0x00)
> [ 1.042790] nvme4: NVMe 1.3
> [ 1.042790] nvme4: for admin queue interrupting at msix11 vec 0
> [ 1.042790] nvme4: Samsung SSD 970 EVO Plus 1TB, firmware 2B2QEXM7, serial S4EWNX0R946108Y
> [ 1.042790] nvme4: autoconfiguration error: unable to establish nvme4 ioq1 interrupt
> [ 1.042790] nvme4: autoconfiguration error: unable to create io queue
>
> [ 1.042790] nvme5 at pci18 dev 0 function 0: vendor 144d product a808 (rev. 0x00)
> [ 1.042790] nvme5: NVMe 1.3
> [ 1.042790] nvme5: for admin queue interrupting at msix11 vec 0
> [ 1.042790] nvme5: Samsung SSD 970 EVO Plus 1TB, firmware 2B2QEXM7, serial S4EWNX0R946133P
> [ 1.042790] nvme5: autoconfiguration error: unable to establish nvme5 ioq1 interrupt
> [ 1.042790] nvme5: autoconfiguration error: unable to create io queue
> >How-To-Repeat:
> You need the ASUS Rampage VI motherboard I can provide access to.
can you show the full dmesg? or at least, the cpus, and all the
nvme lines?
there's a problem with many cpus and several nvme devices in netbsd-9
that is partly solved in netbsd-10, but i'm not sure that 6 devices
will work, nor that it's exactly the same problem, but it certainly
fails to attach all the per-cpu interrupts due to running out.  one
method to work around this would be to either turn on "force_intx" or
turn off "mq" in the kernel (unfortunately, this requires a
kernel build or early ddb to modify these variables):

sys/dev/pci/nvme_pci.c:67:int nvme_pci_force_intx = 0;
sys/dev/pci/nvme_pci.c:69:int nvme_pci_mq = 1;		/* INTx: ioq=1, MSI/MSI-X: ioq=ncpu */
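[for concreteness, the workaround above amounts to flipping one of those
two variables.  a minimal sketch for the source-patch route, assuming a
netbsd-9 tree (these are the exact lines quoted from
sys/dev/pci/nvme_pci.c; line numbers may drift between revisions):

    --- sys/dev/pci/nvme_pci.c
    +++ sys/dev/pci/nvme_pci.c
    -int nvme_pci_force_intx = 0;
    +int nvme_pci_force_intx = 1;	/* fall back to a single INTx interrupt */

setting nvme_pci_mq = 0 instead would likewise drop to one io queue.
without rebuilding, the same variable can in principle be patched from
early ddb before the nvme devices attach, along the lines of

    db> write/l nvme_pci_force_intx 1

though the write command's size modifier should be double-checked
against ddb(4) for the architecture in use.]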
.mrg.