Subject: RE: Any resolution for LKM issues?
To: 'Bill Studenmund' <ws@tools.de>
From: Greg Kritsch <greg@evertz.com>
List: tech-kern
Date: 03/16/2001 13:42:34
I'm going to reiterate my point of view for fun, mainly because it doesn't
involve changing the compiler, just all the .h files from the kernel.

If you added a macro, say, LKMCALL, to the end of every kernel function
prototype, defined it to be "__attribute__((longcall))" when compiling an
LKM, and otherwise left it as nothing, calls from LKMs to the real kernel
would be made as inefficient, 3-instruction long calls, and everything
else would keep using efficient, 1-instruction branches.
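
Roughly, the header side of it might look something like this (the LKMCALL
name, the _LKM guard, and the use of copyin as the example prototype are
all just mine for illustration, not existing code):

    /* Illustrative sketch only; LKMCALL and the _LKM test are made up here. */
    #include <sys/types.h>              /* size_t for the example prototype */

    #ifdef _LKM
    /* Building an LKM: calls into the main kernel get the long-call sequence. */
    #define LKMCALL __attribute__((longcall))
    #else
    /* Building the kernel proper: ordinary relative branches are fine. */
    #define LKMCALL
    #endif

    /* A prototype with the macro appended at the end, as described above: */
    int copyin(const void *uaddr, void *kaddr, size_t len) LKMCALL;

Intra-kernel and intra-module calls keep the cheap branch either way; only
the LKM-to-kernel direction pays for the long call.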

This would fix the problem for all architectures that have LKM trouble due
to limited branch range.

Now, someone has reported a problem with compiler builtin functions.  I
haven't looked into that problem, so maybe some additional fixing is
required there.

The second-best solution, in my opinion, hasn't been expressed yet.  What if
some piece of code, say the linker, took any branch target that was too far
away and redirected the branch to a little stub of code that does a long
call?  Perhaps this could even be integrated with the module loading tool
rather than the linker, for now.
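
To put a number on "too far away": the PowerPC b/bl instruction only has a
signed 24-bit word displacement, so it reaches roughly +/-32 MB from the
branch itself.  Whatever does the fixup (linker or module loading tool)
would need a range check along these lines; the function and macro names
below are mine for illustration, not existing code:

    /*
     * Sketch only.  A PowerPC b/bl encodes a signed 24-bit word
     * displacement, giving a window of about +/-32 MB around the branch.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define PPC_BRANCH_MIN  (-0x02000000L)   /* -32 MB */
    #define PPC_BRANCH_MAX   (0x01fffffcL)   /* +32 MB - 4 */

    static bool
    ppc_branch_reaches(uintptr_t site, uintptr_t target)
    {
            intptr_t disp = (intptr_t)target - (intptr_t)site;

            return disp >= PPC_BRANCH_MIN && disp <= PPC_BRANCH_MAX;
    }

    /*
     * When this check fails, the fixup pass would allocate a small stub
     * next to the module (load the absolute address into a register,
     * mtctr, bctr) and point the original branch at the stub instead.
     */

The common case, branches that are already in range, would be left alone.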

I know NetBSD is all about machine independence and portability, but I
believe in efficiency as well.  This approach prevents us from having to
deal with using inefficient long calls all over the place unless we
explicitly ask the compiler not to use them (the shortcall attribute).  But
all the .h files do need to change.
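
For contrast, the blanket alternative, compiling the whole LKM with
-mlongcall, turns every call into a long call unless a function is
explicitly marked otherwise; the helper name here is made up:

    /* Illustrative only: what the -mlongcall route would look like. */
    static int lkm_internal_helper(int) __attribute__((shortcall));

    static int
    lkm_internal_helper(int x)
    {
            /* shortcall keeps this a cheap relative bl within the module. */
            return x + 1;
    }

That is the "all over the place" cost the LKMCALL approach avoids.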

No, I'm not volunteering.  I don't believe in LKMs.  Just making the
suggestions.

Gregory


> -----Original Message-----
> From: Bill Studenmund [mailto:wrstuden@zembu.com]
> Sent: Friday, March 16, 2001 1:20 PM
> To: Wolfgang Solfrank
> Cc: gr@eclipsed.net; port-macppc@netbsd.org; tech-kern@netbsd.org
> Subject: Re: Any resolution for LKM issues?
> 
> 
> On Fri, 16 Mar 2001, Wolfgang Solfrank wrote:
> 
> > Minor nit: it's segment 0 for the kernel code and segment E for LKMs.
> 
> Ahhh..  Thanks. I forgot exactly which ones they were. I knew they were
> more than 32 MB apart.
> 
> > Well, yes, that's an unfortunate side effect of the ppc kernel layout:
> > 
> > While running kernel code, all but segments D & E are mapped 1:1 between
> > virtual and real addresses.  This mapping is totally invisible to uvm.
> > 
> > The kernel text/data/bss isn't part of what memory management calls
> > "kernel virtual memory", i.e. the address range accessible by kernel
> > virtual addresses known to uvm (which happens to be in segment E).
> > 
> > The kernel itself just happens to live somewhere in low core and due to
> > the above 1:1 mapping can be accessed easily.  Any real memory that
> > isn't occupied by the kernel is managed by uvm, so we cannot easily
> > allocate contiguous real memory that is close enough to the kernel.
> > 
> > Therefore (and in order to avoid implementing a totally different LKM
> > loading scheme) LKMs are allocated in the E segment (just as any other
> > memory that is allocated by the kernel).
> 
> Unfortunately that means that we MUST change the toolchain to get them to
> work. :-(
> 
> > Hope this explains things a bit.
> > 
> > PS: If you wondered about segment D in the above, it is used to
> > access user virtual memory during copyin/copyout and friends.
> 
> Thanks!
> 
> Bill
>