Subject: Re: libedit
To: Jonathan Stone <jonathan@DSG.Stanford.EDU>
From: Jukka Marin <firstname.lastname@example.org>
Date: 12/18/1996 12:37:20
> I hope it's clear that what I said was that the cost of the
> _extra copies_ Dennis referred to was going to be down in the noise.
> That says nothing about the costs of running that many pppds.
Yes, well, I should have mentioned that I wasn't talking about the extra
copy specifically, but about optimizing the code in general. Sorry.
> My claim is that, if the copies are done in large sizes, you're
> __not__ going to notice a significant difference between copying that
> much data once, and copying it twice, not on any reasonable machine.
> Is that clearer now?
Yes, and I agree. The function calls (one per byte) will cost much more
than one extra block copy.
> Unless someone has already paid attention to kernel profiling on the
> port you're using, you may run into problems (e.g. the kernel locore
> functions aren't instrumented) which make this harder than it need
> be. But J.T. is actively doing kernel profiling on at least one m68k
> port, so this shouldn't be too hard to get fixed.
Might be interesting to try this sometime - although it seems I'll dump
the last 68030 machine soon (unless I decide to make it a firewall).
Personally, I find our pentium machines fast enough without any further
optimizations, but it wouldn't hurt to get them going even faster ;-)
> But, _if_ you're running 1.2, _and_ your machine is exhausting
> physical memory (as shown by, e.g., a concurrent systat vmstat),
> then I'd suggest you try replacing the 1.2 sys/vm/vm_pageout.c with
> the -current version:
I'm already running that patch on one system and it seems to prevent the
complete 60-sec deadlocks. BTW, is there any description of how the
memory/VM system works? Our system with 64 MB of RAM sometimes uses
another 60 megs of swap and I can't see why - we were able to run the
same stuff on a machine with 16 MB of RAM and 32 MB of swap. Weird. ;-)
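For anyone else chasing the same question, these are the commands I'd
use to watch memory and swap on a 4.4BSD-derived system like NetBSD 1.2
(command names are the contemporary BSD ones - check your man pages):

```shell
# Live VM/paging overview (the one Jonathan mentioned above).
systat vmstat

# Paging and swap activity, one line every 5 seconds.
vmstat 5

# Swap devices and how much of each is actually in use.
pstat -s
```

Comparing pstat -s output under load against a quiet system should at
least show *when* the extra 60 MB of swap gets eaten, even if not why.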
---> http://www.jmp.fi/~jmarin/ <---