tech-toolchain archive


Re: building modules by build.sh (Re: exec: /sbin/init: error 8)



At Sun, 10 May 2009 01:55:20 +0900, Izumi Tsutsui 
<tsutsui%ceres.dti.ne.jp@localhost> wrote:
> 
> david%l8s.co.uk@localhost wrote:
> 
> > On Sat, May 09, 2009 at 05:07:56PM +0200, Quentin Garnier wrote:
> > > 
> > > My idea about build.sh vs. modules was:
> > > 
> > >  - provide a modules.tgz set
> > >  - have a build.sh target to build and install modules (I don't think
> > >    there is much gain splitting it in two targets)
> > >  - add an option to build.sh's sets and install targets to only install
> > >    the sets listed, so  you'd use e.g. ./build.sh -s modules install=/.
> > 
> > If you do that, and the new kernel doesn't work, how do you regress
> > back to the old one ?
> 
> There is no problem if kernels are versioned properly.
> If they have the same version, the new modules _should_ still work.
> If they have different versions, modules are installed in different dirs.

That's a _huge_ pile of dangerous assumptions to make.

You're basically assuming that someone has already gone to all the
trouble of testing all the possible normal build options with every new
change and either fixed all the possible compatibility problems or, when
necessary, incremented the version numbers appropriately.

The person doing all that regression testing needs a way out too --
i.e. one that is easier than re-installing both a working kernel and
working modules from some other bootable media.

In my experience using systems with modular kernels in production
environments there are still far too many cases where things get so out
of whack that you end up re-installing a working set from bootable
installation media -- even after all the polishing has been done and
you're using a fully regression-tested release.  In other words,
modular dynamic-loading kernels are never really worth their
real-world troubles.

The only system I ever used that struck a half-way workable compromise
between providing a modular kernel and keeping all the safety features
of a single-blob kernel was AT&T's 3B2 running System V Release 3.2,
back in the early 1990's.  On that system the boot loader would either
read a manually specified system configuration file, load and link all
of the specified object modules, and then write out a new single-file
kernel image from the running set of modules that had just been loaded;
or it would boot an existing traditional single-blob binary kernel
image from a default (or optionally manually specified) file.

I never did much real kernel development on that system beyond
recompiling the very few objects for which source was provided for the
purpose of tuning the system (though IIRC most tuning was done through
the system configuration file), plus adding a couple of small device
drivers I had written.  Still, it was easy to see that if you built
some new modules that didn't work together, the system could easily be
recovered by booting the previously generated full kernel image file.

The key point here is that the system never normally booted in the
module-configuration mode -- you only did that manually when you wanted
to reconfigure the production kernel.  Normally the system always
booted (quickly) from the generated single kernel image file.
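The relink-then-snapshot flow described above can be sketched roughly
in shell.  To be clear, every path, file name, and the concatenation
step here are invented for illustration -- the real 3B2 mechanism lived
in the boot loader, not in a script -- but the shape of the recovery
property is the same:

```shell
#!/bin/sh
# Hypothetical sketch of the 3B2-style flow described above.  "Linking"
# is simulated by concatenating object files; all names are invented.
set -e

work=$(mktemp -d)
mkdir -p "$work/modules"
printf 'core\n'  > "$work/modules/kernel.o"   # base kernel object
printf 'mydev\n' > "$work/modules/mydev.o"    # a small device driver

KERNEL="$work/unix"                  # the generated single-file image
printf 'core\n' > "$KERNEL"          # pretend this is the old good image

# Manual reconfiguration path: keep the last known-good image, then
# "load and link" the configured modules and write them out as a new
# single-file kernel image.
cp "$KERNEL" "$KERNEL.old"
cat "$work/modules/kernel.o" "$work/modules/mydev.o" > "$KERNEL"

# Normal boots always use the pre-linked single image; recovering from
# a bad module set is just booting $KERNEL.old instead.
echo "boot image: $KERNEL"
echo "fallback:   $KERNEL.old"
```

The design choice worth noticing is that the expensive, fragile step
(linking modules) happens only on explicit request, and it always
leaves the previous known-good image behind as a bootable fallback.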

-- 
                                                Greg A. Woods
                                                Planix, Inc.

<woods%planix.com@localhost>       +1 416 218-0099        http://www.planix.com/



