tech-kern archive


Re: [GSoC 2026] Userland PCI drivers



Hello,

On Tue, 10 Mar 2026 00:23:41 -0300
Oliver Miyar Ugarte <olivermu%estudante.ufscar.br@localhost> wrote:

> > > I've been working on the Userland PCI Drivers project for GSoC 2026
> > > (https://wiki.netbsd.org/projects/project/userland_pci/) and have a
> > > draft implementation of the first milestone, achieved by mapping PCI
> > > BARs from userspace via a new ioctl.
> > > (https://github.com/NetBSD/src/pull/74)
> > >
> > > This adds PCI_IOC_MAP_BAR to /dev/pci/pci_usrreq.c, allowing userspace
> > > to safely map device registers without using /dev/mem. I've tested it
> > > with QEMU's edu device and it returns the correct BAR offset and size.  
> >
> > You can already map PCI resources by their bus addresses via /dev/pci*,
> > and access config space via ioctl(PCI_IOC_BDF_CFG{READ|WRITE}).
> > That's what the Xserver uses.
> > See
> > https://cvsweb.netbsd.org/bsdweb.cgi/xsrc/external/mit/libpciaccess/dist/src/netbsd_pci.c?rev=1.23
> > and https://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libpci/
> >
> > What's missing is stuff like DMA and interrupts from userland.
> >
> > No idea why the project proposal mentions /dev/mem at all - it's not
> > portable ( there's a lot of supported hardware where PCI bus addresses
> > do not map 1:1 to physical addresses in CPU space, and others where you
> > can only see actual RAM through /dev/mem, not PCI space ) and requires
> > knowledge of the underlying hardware other than the device you're
> > trying to talk to.
> >
> > So, why the additional ioctl? You can already access config space, find
> > devices and their BARs, and mmap() them at offset == bus address
> > without any kernel changes.
> >
> > have fun
> > Michael  
> 
> 
> Thanks a lot for the feedback!
> 
> I can't believe I missed that existing infrastructure, I had tunnel
> vision on doing what the project proposal mentioned and didn't check
> sufficiently if it already existed.
> 
> I will focus my project on adding DMA and interrupts to userland since
> that's what's needed.
> Do you have any advice on that?

We also need a bunch of kernel APIs so drivers can be compiled and run
in both userland and kernel space. The project description specifically
mentions bus_space, which should be easy enough, and that alone would
allow running a few simple drivers ( most framebuffer console drivers,
for example ). This would need *some* hardware knowledge ( like IO
space access, which is memory mapped on most non-x86 hardware ).
Also, we would need things like PCI bus attachment glue to be provided
by a host process / library, which would implement enough of autoconfig
to call our driver's match and attach functions, hand them appropriate
data structures, device properties, etc.
That's where I would start.
Then there's another problem - most drivers provide interfaces to talk
to other drivers or kernel subsystems. The framebuffer example above
would need to attach a wsdisplay in order to receive instructions on
what to draw where. I'm not sure how much of that is available in rump
- the project description mentions network drivers, so I would assume
that part is already there.
( full disclosure - I wrote a bunch of kernel drivers, many of them
graphics related, and a few Xorg drivers, but I have exactly zero
experience with rump )
Interrupts would be relatively easy - we'd need something in the kernel
to notify userland of interrupts ( kevent on /dev/pci? ), and to let
userland register, unregister and acknowledge interrupts, all hidden
from the driver, which would just call the host process's
pci_intr_establish(); that in turn would call the interrupt handler as
appropriate.

have fun
Michael

