Subject: Re: Rev 1.19 of busdma.doc
To: None <tech-kern@NetBSD.ORG, thorpej@nas.nasa.gov>
From: Matthew Jacob <mjacob@feral.com>
List: tech-kern
Date: 11/09/1996 18:10:50
>NAS $Id: busdma.doc,v 1.19 1996/11/09 01:41:35 thorpej Exp $
>..
>int bus_dmamap_create __P((bus_dma_tag_t tag, bus_size_t size,
> int nsegments, bus_size_t maxsegsz, int flags,
> bus_dma_handle_t *dmahp));
>...
> BUS_DMA_WAITOK It is safe to wait (sleep)
> for resources during this call.
>
> BUS_DMA_NOWAIT It is not safe to wait (sleep)
> for resources during this call.
It may be personal preference, but I only see the need for one flag
here- not two. Also, what about a callback mechanism for the case
where mappings that aren't available now can be made later?
>..
> RETURN VALUES
> Returns 0 on success or an error code to indicate mode of failure.
What kind of error code?
>...
>void bus_dmamap_unload __P((bus_dma_tag_t tag, bus_dma_handle_t dmah));
>...
> If given valid arguments, bus_dmamap_unload() always succeeds.
I'd argue for an error code return. That way you can do lazy hardware
cache flush evaluation with errors returned when things get boobed.
>...
>void bus_dmamap_sync __P((bus_dma_tag_t tag, bus_dma_handle_t dmah,
> bus_dmasync_op_t op));
>...
> BUS_DMASYNC_PREREAD Perform any pre-read DMA cache
> and/or bounce operations.
> BUS_DMASYNC_POSTREAD Perform any post-read DMA cache
> and/or bounce operations.
> BUS_DMASYNC_PREWRITE Perform any pre-write DMA cache
> and/or bounce operations.
> BUS_DMASYNC_POSTWRITE Perform any post-write DMA cache
> and/or bounce operations.
You might want to clarify what READs and WRITEs mean here, with
respect to the device. Hardware designers and software engineers
always seem to regard these differently: a programmer will say
"a READ (from the device) moves data from the device to memory",
while a hardware engineer will say "a WRITE (by the device) moves
data from the device to memory".
Also, since cache flushing can be expensive, I'd suggest that a length
and offset be added- that way you can maintain a large discontiguous
mapping set (e.g., ring buffers) but only flush what you need (the
just-arrived receive buffer, for instance).
>...
>caddr_t bus_dmamem_alloc __P((bus_dma_tag_t tag, size_t size,
> int nsegments, bus_size_t alignment, struct proc *p,
> int flags));
>....
In a system with multiple busses, alignment constraints can change
as you move from one nexus to another, so the initial alignment
argument probably reflects only what the leaf node's DMA engine
requires. That is fine as far as it goes, but if the leaf node then
further subdivides the allocated memory, it has to know that the
alignment it requested may be less restrictive than what is actually
necessary. How about making the alignment argument a pointer, so
the leaf node can learn what is actually required?
Presumably the return value is in units that can be plugged into
a DMA engine, right? If so, then you'd better make it a quad_t
(to allow physical addresses on large systems to be passed).
In general, it looks good, although I'm sure there will
be 'clever' architectures that can't fit into this framework.