Re: Parent "device" selection in kernel configuration
[apologies to Iain for previous "reply"... "premature send" <:-( ]
--- On Sun, 9/23/12, Iain Hibbert <plunky%rya-online.net@localhost> wrote:
> > I'm looking for some guidance in deciding criteria that
> > affect your (i.e., *my*) decision as to which of (for
> > example):
> > lpt* at acpi?
> > lpt* at isa?
> > lpt* at pnpbios?
> > to use in a particular (5.1.2/i386, in this case) kernel
> > config (and, of course, any other drivers that have a
> > "choice" of where they "attach" in the kernel config).
> Usually, there is not much difference.
> The lpt driver is actually implemented in src/sys/dev/ic/lpt.c;
> the attachment wrappers are found in e.g. src/sys/dev/isa/lpt_isa.c
> and src/sys/arch/i386/pnpbios/lpt_pnpbios.c. If you examine the
> code, the wrappers only discover and map the IO port and IRQ used
> by the appropriate method, then call lpt_attach_subr() to hand
> over the information to the lpt driver itself.
OK. So the driver doesn't change -- nor do the features available
to the "device" itself. E.g., it doesn't *add* (or remove)
any capabilities that aren't always present, regardless.
(Consider something like a network interface... would *how*
it is attached affect whether or not "wake on LAN" was
supported?)
Though I suspect there *could* be differences in which resources
are assigned to the device based on the algorithms implemented
in each of those wrappers?
> If you decided that all of your hardware could be found adequately
> through isa drivers and decided that you didn't care about ACPI for
> instance, then you could compile the kernel without it.. and it would
> end up smaller of course, but I doubt it would boot measurably faster.
> And, since the code is unused then it will not really affect performance.
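[For concreteness, that trimming happens in the kernel config file --
something like the following, based on the stock i386 GENERIC lines
(the exact legacy port/IRQ values are the conventional LPT1 defaults;
check your own baseline config before copying):]

```
# Keep only the ISA attachment for lpt; comment out the ACPI and
# pnpbios variants so their wrapper code is not compiled in.
lpt0    at isa? port 0x378 irq 7        # parallel port
#lpt*   at acpi?
#lpt*   at pnpbios? index ?
```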
OK. I posed the questions only to elicit some clarification of what
*differs* in the instantiation of the driver based on those different
attachments. If, for example, there were extra levels of indirection
in the ISRs one way or another. Or some forced sharing of resources
in one approach that was not present in others...
Lots of this information sits in folks' heads instead of in any
formal documentation :<
> However, with modern x86 hardware (I have a ~6yr old laptop with 1GB RAM
> which is not in any way notable) I doubt that trimming my kernel down in
> this way from its current 13MB would make any significant difference, so
> I don't bother any longer. If you are working with more limited hardware
> though, it might be worth your time..
I'm trying to trim down the kernel for deployment on a cluster of
1GHz/1GB *diskless* machines. So, anything I can purge from the memory
footprint is space that I can use for something else. Code that is
effectively *dead* doesn't buy me anything at runtime. And, if one
approach tied up extra data structures that weren't necessary with
a different approach...
Eventually, I want to move the system onto "proprietary" (custom)
hardware in which case there is a real cost to all that RAM and
MIPS. So, my questions are intended to give me a feel for where
it's *easy* to "cut"... and where I'll have to work harder! :-)