Subject: kern/1718: MAXBSIZE too low.
To: None <gnats-bugs@NetBSD.ORG>
From: Chris G. Demetriou <cgd@NetBSD.ORG>
List: netbsd-bugs
Date: 11/02/1995 20:55:42
>Number:         1718
>Category:       kern
>Synopsis:       MAXBSIZE changed from the Lite value, & performance suffered
>Confidential:   no
>Severity:       non-critical
>Priority:       low
>Responsible:    kern-bug-people (Kernel Bug People)
>State:          open
>Class:          change-request
>Submitter-Id:   net
>Arrival-Date:   Thu Nov  2 21:20:01 1995
>Originator:     Chris G. Demetriou
>Organization:
Kernel Hackers 'r' Us
>Release:        NetBSD-current 951102
>Environment:
Measured on NetBSD/Alpha, but a similar problem exists on all ports.

>Description:
	MAXBSIZE was #defined to be MAXPHYS in 4.4BSD-Lite (and Lite2).
	It was changed to 16k in NetBSD.  (If my memory serves, the
	reason given at the time was to save kernel page table space
	on the i386.)

	This drastically reduces the effectiveness of clustering (esp.
	write clustering), because the number of I/Os which can be combined
	is reduced.  (Coalesced I/Os must still fit into a buffer with
	a maximum size of MAXBSIZE.)

	For instance, on a NetBSD/Alpha system with a 53c810 SCSI
	controller, doing I/O to an rz25 (not too fast, but not horrible)
	disk, with a standard file system (i.e. no special options to newfs,
	8k block size, maxcontig of 8, etc.), I see the following:


	With MAXBSIZE = 16k (the current default):

	# time dd if=/dev/zero of=/mnt/bar bs=1024k count=86
	86+0 records in
	86+0 records out
	90177536 bytes transferred in 105 secs (858833 bytes/sec)
	0.0u 9.1s 1:44.95 8.6% 0+0k 3+5548io 0pf+0w


	With MAXBSIZE = MAXPHYS:

	# time dd if=/dev/zero of=/mnt/bar bs=1024k count=86
	86+0 records in
	86+0 records out
	90177536 bytes transferred in 52 secs (1734183 bytes/sec)
	0.0u 7.5s 0:52.70 14.4% 0+0k 11+9684io 0pf+0w

	that's an 18% reduction in CPU time (9.1s down to 7.5s), and a
	50% reduction in elapsed time.

	For simple large-read tests (e.g. dd with a 1MB block size),
	CPU time was reduced by between 20% and 40%, but elapsed
	time remained the same.  (The CPU time reduction is because the
	number of I/Os processed by the SCSI code was reduced.  Elapsed
	time probably stayed the same because of the effectiveness of
	block read-ahead and on-disk buffering.)

>How-To-Repeat:
	Time file system operations with a kernel with MAXBSIZE set to 16k
	(the default), then repeat them with a kernel with MAXBSIZE set
	to MAXPHYS.

	The only thing I changed in my tests was the kernel that I booted,
	and the only difference there was that MAXBSIZE had changed.

	Obviously, the exact results you get will depend on your system
	configuration, and will be sensitive to driver quality and disk speed.
	In particular, systems which have a large per-I/O cost should see
	the most benefit from increasing MAXBSIZE.

>Fix:
	(1) Change the #define of MAXBSIZE in <sys/param.h> to be
		MAXPHYS, as it was in releases from Berkeley.

	(2) Don't be so quick to hack system constants which
		have wide-ranging performance implications.