Re: Implement mmap for PUD
I think mmap can work as follows:
- blktap(4) allocates the shared ring memory
- pud(4) is attached to a parent blktap(4)
- userland mmaps the buffer (the shared ring memory)
- UVM finds a VA range and attaches pud(4) there
- touching the buffer causes a fault: uvm_fault -> udv_fault -> pud_mmap
- pud_mmap in turn calls the parent blktap(4)'s mmap, which returns
  the ring buffer's physical address
- the ring buffer address is entered into the MMU (udv_fault -> pmap_enter)
- userland can now access the shared ring memory
On Sat, Sep 17, 2011 at 12:04 PM, Masao Uebayashi wrote:
> OK, I've re-read this topic. Your goal is to implement blktap on
> NetBSD/Xen, right?
> According to the Xen wiki, blktap provides an interface for userland
> to handle block device I/O. I guess blktap gets the inter-domain, shared
> ring memory from the hypervisor. Dom0 userland mmaps the ring memory and
> handles I/O requests.
> pud(4) is different; it pretends to be a device driver backed by real H/W.
> The kernel passes buffers to pud(4) so that pud(4) can read/write data
> from/to real H/W, using either PIO or DMA. PIO uses the kernel address
> space to access the passed buffers; DMA uses physical addresses.
> Here you want to mmap those buffers to userland, right? I don't think
> that's possible. The underlying pages of the given buffers are marked
> "busy doing I/O" (PG_BUSY). Users (either vnode or anon owners) are not
> allowed to map those pages until the I/O completes. If some pud(4)
> backend driver process suddenly tries to mmap those pages, the VM will
> surely get upset.
> So a possible blktap(4) would:
> - run only in Xen Dom0
> - allocate (map) shared ring memory (both I/O requests and buffers)
> from the hypervisor using the Xen machine-dependent API
> - convert blktap I/O requests to NetBSD's bdev/strategy/struct buf format
> - and call pud(4) in the kernel
>  http://wiki.xensource.com/xenwiki/blktap