Subject: Re: Compiler timings on various MVII NetBSDs etc.
To: NetBSD Bob <nbsdbob@weedcon1.cropsci.ncsu.edu>
From: David Brownlee <abs@netbsd.org>
List: port-vax
Date: 01/23/2001 17:51:19
On Tue, 23 Jan 2001, NetBSD Bob wrote:

> I am trying to get an original Reno up on the thing with scsi mscp
> drives, but so far, no luck.  I did manage to get Tahoe up with
> esdi mscp drives.  That was the best I could do, so far.  I am
> still amazed at the 45 MINUTE Tahoe kernel compile using pcc.
> Why does gcc take over 24 hours?
>
	That is a rather unbalanced metric. NetBSD used to compile kernels
	on the vax with gcc -O, which AFAIK generates better code than pcc
	but takes longer to do it. It now defaults to -O2, which adds
	CPU- and memory-expensive optimisations for small performance
	gains, and so takes even longer. The assumption is that you'll
	spend far more time running a kernel than compiling it.

	I agree this becomes very painful on the slower machines, but
	if we want to compare like with like, we should get the same
	compiler compiling the same code on different kernels, or
	different compilers under the same kernel.

	Ideally you would want a computationally expensive benchmark
	that compiles under Ultrix (povray? some maths package?) and
	whose binary will also run under Ultrix emulation on NetBSD.

	Then benchmark it on the same hardware under Ultrix and the
	various NetBSD versions (each kernel compiled with the same set
	of options, i.e. no INET6 etc.). That gives a baseline for
	changes in kernel performance.
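
	Purely as a sketch of the shape of such a benchmark (povray or a
	real maths package would be far more representative -- the file
	name, the LIMIT constant and the prime-counting loop below are
	just placeholders), something like this builds with any ANSI C
	compiler on both systems and times itself with gettimeofday():

	/*
	 * bench.c -- hypothetical stand-in for a "real" benchmark.
	 * CPU-bound integer work (a naive prime count) timed with
	 * gettimeofday(), so the same source builds under Ultrix and
	 * NetBSD and the binaries can be compared on identical hardware.
	 */
	#include <stdio.h>
	#include <sys/time.h>

	#define LIMIT	200000L

	static long
	count_primes(long limit)
	{
		long n, d, count = 0;

		for (n = 2; n < limit; n++) {
			for (d = 2; d * d <= n; d++)
				if (n % d == 0)
					break;
			if (d * d > n)
				count++;	/* no divisor found */
		}
		return count;
	}

	int
	main(void)
	{
		struct timeval start, end;
		double elapsed;
		long primes;

		gettimeofday(&start, NULL);
		primes = count_primes(LIMIT);
		gettimeofday(&end, NULL);

		elapsed = (end.tv_sec - start.tv_sec)
		    + (end.tv_usec - start.tv_usec) / 1e6;
		printf("%ld primes below %ld in %.2f seconds\n",
		    primes, LIMIT, elapsed);
		return 0;
	}

	Timing inside the program with gettimeofday() rather than with
	time(1) keeps exec and start-up overhead out of the figure; the
	same source is also what you would rebuild statically and at the
	various -O levels below.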

	Next, compile the same program (with -static) under each version
	of NetBSD and time it on that and later versions; that should
	give you an idea of how the compiler/library combinations have
	improved (or not). It might also be worth trying gcc with -O and
	-O2 to see how much extra performance -O2 gives and how much
	extra compile time you pay for it... oh, and -Os :)

> I don't think I can strip anything else out of the kernel config and
> still have a runnable system.  Are there 300K of ifdef's I can strip
> out of the kernel code to lean it up some?  I would love to try that!
>
	I think Chuck meant that #ifdefs could be added to reduce
	the size :)
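
	That is, wrap the optional subsystems so a pared-down config
	compiles them out entirely -- schematically (not a quote from the
	actual source):

	#ifdef INET6
		ip6_init();	/* only built in with "options INET6" */
	#endif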

> If 1.5 is a huge advance, and for the sake of discussion, I will take
> that point of view, then it needs a lot of optimizing on slow machines.

	It is. It does.

	Which brings us to the recently added tech-perform list. I believe
	someone is looking at getting a set of the commercial benchmarking
	suites to establish baseline performance for the various NetBSD
	versions.

	On a related note: on i386, alpha, and I believe mips, Charles and
	Jason have reworked the syscall path to provide a 'lean and fast'
	path for the normal case, which has done no harm at all to some
	of the benchmark figures. I wonder if it might be worth looking
	at on the vax too?
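
	If anyone wants a quick before/after number there, a microbenchmark
	along these lines would do (hypothetical file, and it assumes
	getpid() really enters the kernel on every call rather than being
	cached in libc):

	/*
	 * syscall_bench.c -- time many iterations of a cheap syscall
	 * and report the rough per-call cost.
	 */
	#include <stdio.h>
	#include <sys/time.h>
	#include <unistd.h>

	#define ITERATIONS	100000L

	int
	main(void)
	{
		struct timeval start, end;
		double elapsed;
		long i;

		gettimeofday(&start, NULL);
		for (i = 0; i < ITERATIONS; i++)
			(void)getpid();	/* assumed to trap into the kernel */
		gettimeofday(&end, NULL);

		elapsed = (end.tv_sec - start.tv_sec)
		    + (end.tv_usec - start.tv_usec) / 1e6;
		printf("%ld calls in %.2f s, %.2f us/call\n",
		    ITERATIONS, elapsed, elapsed * 1e6 / ITERATIONS);
		return 0;
	}

	Comparing the us/call figure with and without the reworked path
	(and against the other ports) would show whether it is worth the
	effort on the vax.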

		David/absolute		-- www.netbsd.org: No hype required --