Subject: Re: Is gcc slow? Or is our gcc slow?
To: None <current-users@NetBSD.ORG>
From: H. Jüngst, ISKP, Bonn <juengst@saph1.physik.uni-bonn.de>
List: current-users
Date: 04/11/1996 06:37:32
> >> [...] mount a MFS on /tmp [...]
> > MFS is really stupid. You are wasting resources for the most of
> > time.
>
> /tmp has the same problem, it's just that the "wasted" resource is disk
> space instead of RAM.
Yes.
/tmp splits the disk. MFS splits the virtual memory and increases the
need for more virtual memory (even if it allocates the space dynamically,
I know). The swap partition splits the disk. Splitting resources means
wasting resources.
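(For reference, the MFS /tmp under discussion is typically enabled with an fstab entry like the sketch below; the size is an arbitrary example, not a recommendation:)

```
# /etc/fstab -- mount a memory file system on /tmp
# -s gives the size in 512-byte sectors (16384 = ~8 MB here); pick to taste
swap  /tmp  mfs  rw,-s=16384  0  0
```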
>
> > Common temporary resources for all users are the best way to produce
> > problems. Some of them can not be solved, of course. But others
> > like common temporary scratch directories are not necessary.
>
> So, you would have ... what? It's not clear to me what you're
> proposing here. It sounds almost as though you're proposing to replace
> /tmp and /var/tmp and /usr/tmp with $HOME/tmp or some such.
First, I would try to avoid temporary files; in most cases they are not
necessary. cc -pipe shows that for cc. Printer spoolers also do not
really need a copy of a file in order to print it.
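To illustrate the point about avoiding temporary files, here is a small sketch using tr as a stand-in for a compiler pass (the file names are made up for the example):

```shell
# Two "passes" over some input: first the /tmp way, then the pipe way.
printf 'hello\n' > input.txt

# Temp-file style: the intermediate result lands in /tmp.
tr a-z A-Z < input.txt > /tmp/pass1.$$
tr L R < /tmp/pass1.$$ > out-file.txt
rm /tmp/pass1.$$

# Pipe style: both passes run concurrently, nothing touches /tmp.
tr a-z A-Z < input.txt | tr L R > out-pipe.txt

cmp out-file.txt out-pipe.txt && echo identical
```

The pipe version needs no shared scratch space at all, which is exactly the property argued for above.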
Others, like mail, need space to keep the files. I would use the users'
home directories (or any of their subdirectories) for that.
Then it is possible to keep the machine alive for most people, even if
someone makes a mistake. And a system manager can regulate it very nicely
with quotas (tested :-).
I have seen too often that jobs here (physics data evaluation) have
stopped just because /tmp was full (multiple users - multiple problems ;-).
>
> > If there is just one user who made a mistake and allocates the entire
> > space, then other users are not able to compile their program (and
> > most don't know why).
>
> "/tmp: file system is full" seems pretty clear to me, even to a novice.
>
> > MFS means thinking like a MSDOS user.
>
> No more so than /tmp - or /var/tmp - does already...unless you're
> alluding to the prevalence of ramdisks on MSDOS, which isn't what the
> context sounded like to me.
Yes.
>
> > A tuning guide might also be interesting, if the operating system can
> > not tune itself (like others).
>
> Hm, self-tuning, that sounds like (a) a really cool idea and (b) a
> maintenance nightmare when it goes wrong. When will you have a sample
> implementation ready for us to experiment with?
Peter (petersv@df.lth.se) has already sent you the answer. He was talking
about "AUTOGEN" on VMS, which adjusts the system parameters using feedback
data from the system (for the next reboot). System managers can control
AUTOGEN with minimum, maximum and additional contributions for specific
system parameters (SYSGEN). AUTOGEN also controls itself; for example, it
does not generate parameters based on the feedback data if the statistics
for those data are insufficient.
Additionally, there is a performance analyzer available for VMS which does
the adjustment dynamically on a running system. E.g. "quantum" (the time
between task switches) can be adjusted by the machine (and much more).
BTW, shared images (and libraries, of course) are well known in VMS.
But I do not want to start a VMS discussion here (it wouldn't help anyway).
Andrew's (gillhaa@ghost.whirlpool.com) measurement (make build with/without
-pipe and MFS) showed that -pipe decreases the real time needed to compile
the sources. It also showed that MFS decreases it more than -pipe does. I
think that transfers via pipes should be much simpler than transfers via a
file system. It might be interesting to measure the cost of task switches.
I can't believe that it really costs so much (it shouldn't!). I would
guess that there is a problem with the transfer via pipes (e.g. a bad
buffer implementation or something like that). It might be worth a closer
look.
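A crude way to probe that suspicion, assuming only dd and the shell's time keyword (the byte counts are arbitrary examples):

```shell
# Move 8 MB through a pipe: two processes, data crosses the kernel
# pipe buffer, with context switches between the two dd's.
time dd if=/dev/zero bs=64k count=128 2>/dev/null | dd of=/dev/null bs=64k 2>/dev/null

# Move the same 8 MB through a scratch file: write it, then read it back.
time dd if=/dev/zero of=scratch.tmp bs=64k count=128 2>/dev/null
time dd if=scratch.tmp of=/dev/null bs=64k 2>/dev/null
rm scratch.tmp
```

Which side wins will depend on the kernel's pipe buffer size and the scheduler; a small or badly handled pipe buffer is one place the suspected cost could hide.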
The previous small measurement did not show what happens when the hard
disk is busy (multiuser).
>
> der Mouse
>
> mouse@collatz.mcrcim.mcgill.edu
I am happy with the "/etc/mk.conf" solution now, but not entirely... ==:-)
Henry