Subject: process starvation?
To: None <port-i386@netbsd.org>
From: Andrew Gillham <gillhaa@ghost.whirlpool.com>
List: port-i386
Date: 05/13/1999 22:29:33
Hello,
I have been testing an older mainboard and CPU, an ASUS SP3G with an Intel
486DX4/100. With an AMD DX4/100 I was having some problems with certain
PCI options in the BIOS; with the Intel everything appears to be fine.
My method of testing is to run about 10 copies of "eatmem", a small
utility someone posted a while back that grabs memory and randomly
writes to pages. This abuse would consistently panic the machine with
the AMD CPU unless I disabled several default options in the BIOS.
Anyway, my machine has been running 1.4 with 13 copies of eatmem, each
with 10MB allocated; a rough sketch of what eatmem does follows.
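
(I don't have the original posting handy, so the details below are my
own guesses: something like this that allocates a fixed region and then
dirties one random page per loop iteration, to keep the pager busy.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
            size_t size = 10 * 1024 * 1024;   /* 10MB, as in my test */
            size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);
            char *mem = malloc(size);

            if (mem == NULL) {
                    perror("malloc");
                    return 1;
            }
            srandom((unsigned int)getpid());
            for (;;) {
                    /* touch one random page to force constant paging */
                    size_t off =
                        ((size_t)random() % (size / pagesize)) * pagesize;
                    mem[off] = (char)random();
            }
            /* NOTREACHED */
    }

Thirteen copies of something like this, five of them niced to 20, is
what produced the numbers below.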
It has been running for about 24 hours now, and top shows me the
following interesting information. Note how PIDs 2372 and 2376 are
gobbling up the RAM and CPU while the rest appear starved. Is this
expected behavior? I would expect the scheduler to give these processes
roughly equal treatment (other than the five niced to 20). All 13
processes were started at about the same time.
load averages: 13.40, 13.11, 12.83                              14:09:03
32 processes: 8 running, 24 sleeping
CPU states: 83.4% user, 0.0% nice, 16.6% system, 0.0% interrupt, 0.0% idle
Memory: 20M Act 10M Inact 216K Wired 1144K Free 110M Swap 18M Swap free

  PID USERNAME PRI NICE   SIZE   RES STATE   TIME   WCPU    CPU COMMAND
 2372 root      63    0    10M 9616K run   668:54 44.14% 44.14% eatmem
 2376 root      62    0    10M   10M run   663:02 43.75% 43.75% eatmem
 4037 root      49    0   276K  232K run     0:00  0.05%  0.05% top
 2449 root       2    0   276K  400K sleep  13:33  0.00%  0.00% top
 2369 root      -5    0    10M   64K sleep   4:12  0.00%  0.00% eatmem
 2367 root      -5    0    10M   64K sleep   4:11  0.00%  0.00% eatmem
 2373 root      -5    0    10M   80K sleep   4:00  0.00%  0.00% eatmem
 2368 root      -5    0    10M   84K sleep   3:56  0.00%  0.00% eatmem
 2375 root      -5    0    10M   68K sleep   3:54  0.00%  0.00% eatmem
 2374 root      -5    0    10M    4K sleep   3:50  0.00%  0.00% eatmem
 2228 root       2    0    72K   96K sleep   1:31  0.00%  0.00% ypbind
 2360 root      68   20    10M    4K run     1:22  0.00%  0.00% eatmem
 2361 root      68   20    10M    4K run     1:10  0.00%  0.00% eatmem
 2329 root      68   20    10M    4K run     0:19  0.00%  0.00% eatmem
 2267 root      18    0    16K    4K sleep   0:18  0.00%  0.00% update
 2326 root      68   20    10M    4K run     0:17  0.00%  0.00% eatmem
 2328 root      68   20    10M    4K run     0:16  0.00%  0.00% eatmem
 2269 root      -5    0   268K   28K sleep   0:13  0.00%  0.00% cron
 2217 root      -5    0    96K   36K sleep   0:12  0.00%  0.00% syslogd
 2278 root      18    0   364K    4K sleep   0:01  0.00%  0.00% <csh>
 4032 root      18    0   352K    4K sleep   0:00  0.00%  0.00% <csh>
 2232 root      10    0  8648K    4K sleep   0:00  0.00%  0.00% <mount_mfs>
    1 root      10    0   252K    4K sleep   0:00  0.00%  0.00% <init>
 2240 root      10    0    16K    4K sleep   0:00  0.00%  0.00% nfsiod
Thanks.
-Andrew
--
-----------------------------------------------------------------
Andrew Gillham | This space left blank
gillham@whirlpool.com | inadvertently.
I speak for myself, not for my employer. | Contact the publisher.