NetBSD-Bugs archive

Re: kern/58643: bus_dma(9) fails to bounce misaligned inputs requiring extra segments



The attached patch attempts to address the issue for x86.

The idea is that if userland attempts a uio transfer with, say,

{.iov_base = 0xabc00800, .iov_len = 0x1000},
{.iov_base = 0xdef00800, .iov_len = 0x1000},
{.iov_base = 0x12300800, .iov_len = 0x1000},

and maxsegsz=PAGE_SIZE, then (unless we get amazingly lucky and the
user's pages are physically contiguous), this requires six DMA
segments -- two per iovec, because each iovec straddles a page
boundary -- for the ranges (a short sketch of the arithmetic follows
the list):

[0xabc00800,0xabc01000)
[0xabc01000,0xabc01800)
[0xdef00800,0xdef01000)
[0xdef01000,0xdef01800)
[0x12300800,0x12301000)
[0x12301000,0x12301800)
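
Here is a quick userland sketch of that arithmetic -- not the
bus_dma(9) code itself, just the example's numbers, with 4 KB pages
assumed -- which splits each iovec at the page boundary it straddles
and prints the six (virtual) ranges above:

#include <stdio.h>

#define PAGE_SIZE	0x1000UL
#define MIN(a, b)	((a) < (b) ? (a) : (b))

int
main(void)
{
	static const struct { unsigned long base, len; } iov[] = {
		{ 0xabc00800, 0x1000 },
		{ 0xdef00800, 0x1000 },
		{ 0x12300800, 0x1000 },
	};
	unsigned nseg = 0;

	for (unsigned i = 0; i < sizeof(iov)/sizeof(iov[0]); i++) {
		unsigned long off = iov[i].base;
		unsigned long end = iov[i].base + iov[i].len;

		/*
		 * One segment per page touched: the two halves of
		 * each iovec generally land on different physical
		 * pages, so they can't be coalesced.
		 */
		while (off < end) {
			unsigned long segend = MIN(end,
			    (off + PAGE_SIZE) & ~(PAGE_SIZE - 1));

			printf("[0x%lx,0x%lx)\n", off, segend);
			nseg++;
			off = segend;
		}
	}
	printf("%u segments\n", nseg);	/* prints `6 segments' */
	return 0;
}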

If I got my arithmetic right (which I almost certainly didn't --
probably made some kind of fencepost error), the new criterion always
allocates bounce buffers when this situation can happen, and doesn't
when it can't.
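
Concretely, the check in the patch below is

	nsegments/2 < howmany(size, MIN(PAGE_SIZE, maxsegsz))

For a map created with size = 0x3000 and maxsegsz = PAGE_SIZE =
0x1000 (the example above), howmany gives 3, so nsegments < 6 now
sets X86_DMA_MIGHT_NEED_BOUNCE.  A standalone sketch of just that
predicate, with MIN and howmany spelled out as in <sys/param.h> and
4 KB pages assumed:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	0x1000UL
#define MIN(a, b)	((a) < (b) ? (a) : (b))
#define howmany(x, y)	(((x) + ((y) - 1)) / (y))

/*
 * Mirror of the new criterion: every MIN(PAGE_SIZE, maxsegsz)-sized
 * chunk of the transfer may straddle a page boundary and so cost two
 * segments, so don't trust the map unless it can hold twice as many
 * segments as chunks.
 */
static bool
might_need_bounce(unsigned long size, int nsegments, unsigned long maxsegsz)
{

	return nsegments/2 < howmany(size, MIN(PAGE_SIZE, maxsegsz));
}

int
main(void)
{

	/* 3 x 0x1000-byte iovecs, each starting 0x800 into a page */
	printf("%d\n", might_need_bounce(0x3000, 6, PAGE_SIZE));	/* 0 */
	printf("%d\n", might_need_bounce(0x3000, 5, PAGE_SIZE));	/* 1 */
	return 0;
}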

However, this might be very costly for drivers where this isn't really
an issue.  Maybe there should be a flag to opt into this, like
BUS_DMA_BOUNCEMISALIGNED.

# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1724615712 0
#      Sun Aug 25 19:55:12 2024 +0000
# Branch trunk
# Node ID 4c129994ef155f8604d2b6acbd4493d8b73a084f
# Parent  cf7a8f9687ea781207542c43a006460dc134ea3b
# EXP-Topic riastradh-pr58643-busdmamisalignedbounce
x86/bus_dma(9): Prepare to bounce for misaligned inputs.

PR kern/58643: bus_dma(9) fails to bounce misaligned inputs requiring
extra segments

diff -r cf7a8f9687ea -r 4c129994ef15 sys/arch/x86/x86/bus_dma.c
--- a/sys/arch/x86/x86/bus_dma.c	Sat Aug 24 07:24:34 2024 +0000
+++ b/sys/arch/x86/x86/bus_dma.c	Sun Aug 25 19:55:12 2024 +0000
@@ -322,6 +322,18 @@ static int
 	if (map->_dm_bounce_thresh != 0)
 		cookieflags |= X86_DMA_MIGHT_NEED_BOUNCE;
 
+	/*
+	 * If we try to load a misaligned buffer, we may need to split
+	 * all of the pages of the buffer into two segments apiece --
+	 * one for the low part of the page, the next for the high part
+	 * of the page.
+	 *
+	 * If there's not enough segments to fit the maximum transfer
+	 * size split up this way, we may need to bounce.
+	 */
+	if (nsegments/2 < howmany(size, MIN(PAGE_SIZE, maxsegsz)))
+		cookieflags |= X86_DMA_MIGHT_NEED_BOUNCE;
+
 	if ((cookieflags & X86_DMA_MIGHT_NEED_BOUNCE) == 0) {
 		*dmamp = map;
 		return 0;

