Subject: kern/9868: raidframe misbehaves with MAXPHYS
To: None <gnats-bugs@gnats.netbsd.org>
From: None <Manuel.Bouyer@lip6.fr>
List: netbsd-bugs
Date: 04/13/2000 08:49:29
>Number:         9868
>Category:       kern
>Synopsis:       raidframe misbehaves with MAXPHYS
>Confidential:   no
>Severity:       non-critical
>Priority:       low
>Responsible:    kern-bug-people
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Thu Apr 13 08:50:01 PDT 2000
>Closed-Date:
>Last-Modified:
>Originator:     Manuel Bouyer
>Release:        -current as of last week
>Organization:
	LIP6/RP
>Environment:
	
System: NetBSD paris 1.4X NetBSD 1.4X (HERA) #8: Thu Apr 13 12:46:26 PDT 2000 root@paris:/usr/src/sys/arch/i386/compile/HERA i386

/etc/raid0.conf:
START array
1 4 0
START disks
/dev/sd1e
/dev/sd2e
/dev/sd4e
/dev/sd5e
START layout
512 1 1 5
START queue
fifo 100

>Description:
	Setting up a RAID5 array with SectPerSU > maxphys (currently 64k)
	causes various misbehavior. I didn't analyse the situation in
	depth, but it seems that I/Os with size > MAXPHYS are passed to
	the underlying device:
	ncr1: unable to load xfer DMA map, error = 22
	ncr1: unable to load xfer DMA map, error = 22
	ncr0: unable to load xfer DMA map, error = 22
	ncr0: unable to load xfer DMA map, error = 22
	(error 22 being EINVAL). I got these messages while issuing
	a 'raidctl -i'. Attempting to unconfigure the RAID will cause
	a panic.
>How-To-Repeat:
	set up a RAID with SectPerSU > 128 (=64k with 512-byte sectors)
	and try to use it.
>Fix:
	I'm not sure RAIDframe should support such configs (I tried this
	while benchmarking a RAID5 stripe, and I'm not sure it would have
	improved performance vs SectPerSU=128), but in this case the
	configuration of the RAID should definitely fail, instead of
	causing lossage later.
>Release-Note:
>Audit-Trail:
>Unformatted: