Port-vax archive


Re: NetBSD/vax 11.0RC1



On Wed, Feb 18, 2026 at 09:03:01PM +0100, Johnny Billquist wrote:
> 
> And I seem to remember having seen some issues in the past when running with
> lots of memory, so I usually only run my NetBSD/vax with 128M.
> 
>   Johnny

Yesterday I went back and ran a tools build on 10.0 Release, just to be sure it
really does build to completion, as it did back when 10.0 was released.

===> Tools built to /usr/obj/tooldir.NetBSD-10.0-vax
===> build.sh ended:      Wed Feb 18 07:37:15 EST 2026
===> Summary of results:
         build.sh command:    ./build.sh -U -u -O /usr/obj -m vax tools
         build.sh started:    Tue Feb 17 08:42:16 EST 2026
         NetBSD version:      10.0
         MACHINE:             vax
         MACHINE_ARCH:        vax
         Build platform:      NetBSD 10.0 vax
         HOST_SH:             /bin/sh
         No $TOOLDIR/bin/nbmake, needs building.
         Bootstrapping nbmake
         MAKECONF file:       /etc/mk.conf (File not found)
         TOOLDIR path:        /usr/obj/tooldir.NetBSD-10.0-vax
         DESTDIR path:        /usr/obj/destdir.vax
         RELEASEDIR path:     /usr/obj/releasedir
         Created /usr/obj/tooldir.NetBSD-10.0-vax/bin/nbmake
         Updated makewrapper: /usr/obj/tooldir.NetBSD-10.0-vax/bin/nbmake-vax
         Tools built to /usr/obj/tooldir.NetBSD-10.0-vax
         build.sh ended:      Wed Feb 18 07:37:15 EST 2026
===> .

and then a "top" before I shut the session down:

load averages:  0.00,  0.22,  0.59;               up 0+23:03:55      07:44:43
8 processes: 7 sleeping, 1 on CPU
CPU states:  1.9% user,  0.0% nice,  2.8% system,  0.0% interrupt, 95.3% idle
Memory: 30M Act, 4444K Exec, 22M File, 439M Free
Swap: 1024M Total, 1024M Free / Pools: 19M Used / Network: 

  PID USERNAME PRI NICE   SIZE   RES STATE       TIME   WCPU    CPU COMMAND
    0 root     125    0     0K 2992K vdrain      0:12  0.00%  0.00% [system]
  746 root      85    0    11M    0K wait        0:01  0.00%  0.00% login
  742 root      83    0    11M 4232K wait        0:01  0.00%  0.00% login
11511 kwellsch  43    0  5036K 1308K CPU         0:00  0.00%  0.00% top
17964 kwellsch  85    0  4892K 1656K wait        0:00  0.00%  0.00% sh
    1 root      85    0  4804K  924K wait        0:00  0.00%  0.00% init
  741 root      85    0  4892K  900K ttyraw      0:00  0.00%  0.00% sh
18531 root      85    0  2884K  900K ttyraw      0:00  0.00%  0.00% getty

I got lucky and caught the stretch while the gcc 10.5 copy of gimple-match.cc was building:

  PID USERNAME PRI NICE   SIZE   RES STATE       TIME   WCPU    CPU COMMAND
20409 root      27    0   123M  108M RUN         7:54 99.02% 99.02% cc1plus
20409 root      26    0   125M  112M RUN         8:49 99.02% 99.02% cc1plus
20409 root      25    0   127M  114M RUN         9:49 99.02% 99.02% cc1plus
20409 root      25    0   131M  116M RUN        10:49 99.02% 99.02% cc1plus
20409 root      25    0   131M  117M RUN        11:49 99.02% 99.02% cc1plus
20409 root      25    0   133M  119M RUN        12:49 99.02% 99.02% cc1plus
20409 root      25    0   135M  121M RUN        13:49 99.02% 99.02% cc1plus
20409 root      25    0   137M  123M RUN        14:49 99.02% 99.02% cc1plus
20409 root      25    0   139M  126M RUN        15:48 99.02% 99.02% cc1plus
20409 root      25    0   143M  128M RUN        16:48 99.02% 99.02% cc1plus
20409 root      25    0   145M  131M RUN        17:48 99.02% 99.02% cc1plus
20409 root      25    0   177M  163M RUN        18:48 98.97% 98.97% cc1plus
20409 root      25    0   191M  176M RUN        19:48 99.02% 99.02% cc1plus
20409 root      25    0   198M  184M RUN        20:47 98.97% 98.97% cc1plus
20409 root      25    0   203M  190M RUN        21:47 99.02% 99.02% cc1plus
20409 root      25    0   205M  192M RUN        22:47 98.97% 98.97% cc1plus
20409 root      25    0   208M  194M RUN        23:46 98.97% 98.97% cc1plus
20409 root      25    0   213M  200M RUN        24:46 99.02% 99.02% cc1plus
20409 root      25    0   217M  205M RUN        25:46 98.97% 98.97% cc1plus
20409 root      25    0   223M  210M RUN        26:46 99.02% 99.02% cc1plus
20409 root      25    0   240M  228M RUN        33:58 99.02% 99.02% cc1plus
20409 root      25    0   239M  227M RUN        34:43 99.02% 99.02% cc1plus
20409 root      25    0   247M  234M RUN        35:53 98.93% 98.93% cc1plus
20409 root      25    0   239M  227M RUN        36:53 99.02% 99.02% cc1plus
20409 root      25    0   237M  225M RUN        37:52 99.02% 99.02% cc1plus
20409 root      25    0   237M  225M RUN        38:52 98.97% 98.97% cc1plus
20409 root      25    0   237M  225M RUN        39:52 98.97% 98.97% cc1plus
20409 root      25    0   239M  227M RUN        40:51 98.97% 98.97% cc1plus
20409 root      25    0   239M  227M RUN        41:51 98.97% 98.97% cc1plus
20409 root      25    0   239M  228M RUN        42:51 98.97% 98.97% cc1plus
20409 root      31    0   243M  232M RUN        43:51 98.97% 98.97% cc1plus
20409 root      30    0   240M  229M RUN        44:50 99.02% 99.02% cc1plus
20409 root      29    0   240M  229M RUN        45:50 98.88% 98.88% cc1plus
20409 root      28    0   242M  230M RUN        46:50 99.02% 99.02% cc1plus

... clean from start to finish.
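
(For anyone wanting to capture that kind of growth curve without sitting on
top, a loop like the one below should do it.  This is just an untested sketch
using ps/pgrep from the NetBSD base system, run as the same user as the build;
the interval and column list are whatever you fancy.)

  PID=$(pgrep -n cc1plus)                  # newest cc1plus, i.e. the compile in progress
  while kill -0 "$PID" 2>/dev/null; do     # loop until that process exits
      ps -o pid,vsz,rss,time,comm -p "$PID"
      sleep 60                             # one sample a minute, like the table above
  done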

What I did notice from the gcc 12.5 build on 11.0 RC1 was that when the
process RSS hit 256M, all hell broke loose.

So I think tomorrow I'm going to try a run with 256M of memory and add a swap
disk that SIMH backs with a file on a tmpfs file system, so I don't have to
fret about a real disk being thrashed... and see whether the compile actually
completes if the RSS stays under 256M...
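
(A rough sketch of that setup, for the record -- the unit number, disk type
and tmpfs path are only placeholders, and I haven't tried it yet.)

On the host, keep the backing file on a tmpfs mount and attach it as a second
RQ unit in the simh configuration:

  ; vax.ini fragment: the backing file lives on tmpfs, so guest swap I/O
  ; never touches a real disk
  set rq1 ra82
  attach rq1 /tmp/simh/swap-ra82.dsk

Then inside the guest, label it with a swap partition and switch it on:

  disklabel -i -I ra1        # interactively mark partition b as fstype "swap"
  swapctl -a /dev/ra1b       # add it as swap
  swapctl -l                 # confirm it shows up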

From the tools build I ran earlier on 11.0 RC1:

  PID USERNAME PRI NICE   SIZE   RES STATE       TIME   WCPU    CPU COMMAND
17661 root      30    0   188M  160M RUN        18:57 99.02% 99.02% cc1plus
17661 root      29    0   192M  164M RUN        19:57 99.02% 99.02% cc1plus
17661 root      28    0   189M  162M RUN        20:57 99.02% 99.02% cc1plus
17661 root      27    0   213M  186M RUN        21:57 99.02% 99.02% cc1plus
17661 root      26    0   254M  228M RUN        22:27 99.02% 99.02% cc1plus
17661 root      26    0   260M  233M RUN        22:32 99.02% 99.02% cc1plus
17661 root      26    0   274M  249M RUN        22:42 99.02% 99.02% cc1plus
17661 root      26    0   274M  248M RUN        22:57 99.02% 99.02% cc1plus
17661 root      25    0   282M  254M RUN        23:22 99.02% 99.02% cc1plus
17661 root      79    0   284M  256M RUN        23:30 91.16% 91.16% cc1plus
17661 root      79    0   284M  256M RUN        23:43 62.74% 62.74% cc1plus
17661 root      79    0   284M  256M RUN        23:58 52.49% 52.49% cc1plus
17661 root      79    0   284M  256M RUN        24:03 51.46% 51.46% cc1plus
17661 root      78    0   284M  256M RUN        24:16 50.78% 50.78% cc1plus
17661 root      79    0   284M  256M RUN        24:19 51.03% 51.03% cc1plus
	<hung>

It's as if there is an off-by-one in a shift somewhere, or a data type
overflow ... the machine has 512M of memory, but something thinks 256M is the "limit" ...
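
(If it really is a hard 256M ceiling rather than a paging pathology, one thing
worth ruling out first is the per-process data-size limit -- something like
the below, where the sysctl node name is from memory and worth double-checking.)

  ulimit -a                                # /bin/sh builtin; the data segment limit is in kbytes
  sysctl proc.curproc.rlimit.datasize      # the same limit via NetBSD's proc.* sysctl tree
  ulimit -d unlimited                      # raise the soft limit before starting the build

The compile-time defaults (DFLDSIZ/MAXDSIZ) should be in
sys/arch/vax/include/vmparam.h in the source tree.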

