Subject: Re: Why not track our xsrc with X11R6.6 from
To: NetBSD-current Discussion List <current-users@NetBSD.ORG>
From: Greg A. Woods <>
List: current-users
Date: 07/18/2001 19:35:37
[ On Wednesday, July 18, 2001 at 17:09:52 (-0400), Charles Shannon Hendrix wrote: ]
> Subject: Re: Why not track our xsrc with X11R6.6 from
> This is not true. You are talking about a kernel driver.

Of course I'm talking about a kernel driver -- that's exactly what I
said I was talking about!  In fact I'm *only* talking about kernel
drivers.  I care not how the rest of the application is structured or
what its components are called.  All I care is that there be a proper
kernel device driver programming interface that provides *all* of the
necessary hooks, but *only* to the necessary hardware.

If I want to run super-fast monster graphics applications then I'll
figure out how to run them on the bare hardware -- I do not ever want my
Xserver to be able to do anything but graphics, even if it is compromised.
It MUST NOT EVER have to run as root!

> > Or rather I absolutely do not ever want the Xserver to have to know anything
> > about the wiring of my system, or if possible even the type of the CPU(s)!  
> Then why are you opposed to XFree 4.x? It is all about removing such
> code from the X server.

I don't know 100% that I am.  I'm just stating my position so that
someone who knows better about 4.x can tell me whether or not it's going
in the same direction I want to go.

> One of the biggest reasons why so many graphics card creators never
> write drivers for UNIX is precisely because you have to build a custom
> X server for each card or family of cards. 

That can't be the reason.  If that's the reason then there are some
significantly ignorant graphics card creators.  From cursory examination
of the X11 Xserver source it seems that the amount of unique code
necessary to support any of Sun, HP, IBM, Dec, etc., workstations is a
very tiny amount of the total set of all code they share.  I can't
imagine it's all that hard to write such support code, especially given
the examples already present.  Given what I've heard from NetBSD
developers it seems that the only major stumbling block is having
detailed hardware documentation, something graphics card creators must be
in possession of.  The kernel drivers for their cards are also rarely
very large and apparently quite easy to write too.

I suspect the problem is more to do with that "PC-centric" view of
things that everyone keeps talking about.....

Perhaps "PC-centric" is not exactly right -- maybe "non-unix" would be a
more applicable description....

> Your suggestion of putting the drivers in the kernel is even worse, both
> technically and from a manufacturer's point of view.

You misunderstand.  I don't think I'm talking about the same kind of
"drivers" you're talking about.  I'm talking (only) about the kind of
drivers Simon Burge spoke about and is (thankfully) working on,
i.e. kernel drivers that mediate the interface between an application
and the hardware _and_ which prevent an application from doing things it
has no right to do.  NetBSD is not a single-user operating system --
it's a multiuser system and I don't think anyone really wants to
unnecessarily compromise its security just to make it easier to
integrate some third-party graphics system.

I don't particularly care how all of this is accomplished just so long
as the result is a kernel device driver interface which can guarantee
that a rogue process with access to /dev/fb (or whatever) can't also get
at my SCSI host adapter or anything else for that matter.  I'm under the
impression that this is what I have now, today, with the likes of Xsun.
I see no obstacles other than politics and philosophy to achieving the
same elegance and security on modern consumer-variety PCs.

You can't now, today, run an XFree86 Xserver on NetBSD without building
an explicitly insecure kernel.  That's what concerns me.  If tracking
XFree-4.x more closely means forcing even more platforms to become
insecure as X11 workstations then that would be very very bad, at least
in my opinion.

If on the other hand XFree-4.x is going to provide/use/require a cleaner
and more secure kernel driver interface then that would be good and
someone needs to show clearly how this works so that those of us leery
of existing XFree86 servers will have something to look forward to
instead of continuing to avoid them.

> However, I think a proper driver layer between X (or whatever) and the
> hardware could very well help encourage better abstractions and better
> hardware designs.

Yes, that's exactly what I'm talking about!

> You have to start somewhere, and XFree 4.x looks like a very good
> effort.

We already have a good start in the examples of Xservers and graphics
card drivers written by Sun, HP, DEC, IBM, et al (not to mention the
other already available non-XFree86 Xservers specific to NetBSD); and it
seems Simon's working on making them even better and making them work on
the new wscons framework in NetBSD.  It would seem, at least from
what I can see on the surface, that XFree86 is ignoring the well-earned
lessons of history and re-inventing things in what's at least a non-unix
way, if not exactly a PC-centric way.

> We've also seen how stupid it is to put graphics drivers in the kernel.

We have?  Where?  Certainly not in the Unix workstation world we haven't
(at least not in any of the ones who survived!).

> All we need is an interface to some basic card operations, just to be
> able to bootstrap the driver, and do so in a portable way.

Hmmm.... that seems to be exactly what Sun's bwtwo, cg*, tcx, etc. are.

> But that's you personally, he's talking about the project as a whole.

No, that's anyone and everyone who uses open source.  We no longer
live in a world of object-only code here!  We do not need, or I
believe even want, to follow the one-binary-does-all philosophy. It's
possible to arrange things such that even an untrained child can build a
new Xserver and/or new kernel if that's the way we want to go.

> There is no way you'd be able to get all that done in the kernel.

Huh?  It works just fine for NetBSD/{sparc,alpha,pmax(?),atari,amiga,etc.}

> Besides that, a lot of the driver code is binary only. Even if all
> manufacturers agreed to source releases,

Well, personally I'd only ever use cards supported by freely available
source code....  I guess if you're XFree86 and you are only just
barely adhering to the "free" in your name then you might want to
support interfaces that allow vendors to plug object-only code into your
Xserver, but that doesn't necessarily mean you can't have a clean and
safe kernel driver API too.

> is the core team going to let
> nVidia and ATI start doing CVS commits?

What does that have to do with anything?

> Let's say that happens... does that mean every time nVidia releases a
> new card or just faster driver code I'm going to have to build a new
> kernel, possibly even a non-stable release?

Are you assuming everyone's a gamer, or are we talking about people who
want to have a good secure and stable workstation platform?

Obviously if you want to try out a bleeding edge card then you're going
to have to get the right software into the right places for it to work.
If that means building a new Xserver, then do so.  If that means
building a new driver and linking it into your kernel, then do so.  If
at the same time you want to build with the latest Xserver or kernel
code, then do so.  If you want to keep everything else stable though
then you should obviously just pull in only the necessary new code.

Note too that if the new code (or new hardware, for that matter) is less
stable than what you've got now then you'll probably end up with a less
stable platform all around and so you'd better lay a trail of crumbs so
that you can back down to your previous more stable configuration!  ;-)

> > However even for the most complex card I can imagine that's still code
> > that I would find very valuable to have in my kernel.  I.e. putting it
> > in the kernel is well worth whatever resources it takes!  I _want_ that
> > code in my kernel!
> No you don't. If you knew how a modern GPU like the nVidia worked, there
> is no way you would say this. Every single GPU is totally different, far
> more different than CPUs are from one another.

I think you're still confusing what I want in the kernel with what
you're calling a "driver".  I'm only talking about kernel device
drivers.  If some hardware is at all worth using in a unix-like system
then it should be relatively easy to define a kernel device driver
interface that makes it possible to both cleanly access the device from
userland while at the same time protecting the rest of the system's
hardware from any rogue process that might have access to the graphics
device driver file (eg. /dev/fb), such as a compromised Xserver process,
from affecting anything but what you see on the screen.

The fun part comes when you try to define kernel device driver
interfaces that are applicable to an entire class of devices.  I don't
know whether this can be done for a wide variety of modern PC graphics
cards or not; or even if it's sane to attempt to do so.

> Well, you can't expect that of the average user. I don't want to have to
> rebuild my fscking X server every time there is a bug fix or new feature
> added to the driver either.

Why not?  unix != windows

> You really cannot do that. It only works for places like Sun because
> they have such limited graphics hardware.
> Not all graphics cards will even support writing on the framebuffer.


> I'm fairly certain you can't display anything at all with an nVidia
> GPU without actually programming their GPU. You can't just draw on the
> framebuffer to get the basics.

Oh well.  I'm sure there's a simple way to provide controlled access to
the GPU though.  You sure as heck don't have to access the PCI bus and
wander around on it just to find the GPU.

> The only standard for that sort of generic X server was VGA, and lots
> of cards no longer support VGA graphics.

XGA?  (I'm not even sure I have any VGA cards left....)

> > In fact I'd rather bring the entire Xserver into the kernel (and of
> > course do all the work implied to make it secure enough to do so) than I
> > would to allow a user-land process have that kind of control over my
> > machine.  
> Eeewww...

That seems to be more or less what folks like NCD etc. have done to
build custom X11 terminals, and I'm quite happy with my NCDs, but then
they aren't general-purpose multi-user workstations and they don't have
the same kinds of security requirements.....

> Besides, most of what you want, is also a goal of the XFree project.

I've not seen that yet....  I see and hear instead about ever more code
in XFree that needs to peek and poke about in arbitrary places in my
machine and even to wander about figuring out what my PCI bus wiring
looks like so that it can find out where I've plugged in my graphics
card.

> They want to keep the hardware specific code out of the server and the
> kernel, which is the way it should be.

The hardware specific code has got to go somewhere.  In this case
there's only the kernel and the Xserver process.  Which is it going to
be in?  I want to split it up in some logical way between the kernel and
the Xserver process so that I don't have to give the Xserver free run of
all my hardware and DMA transfers, etc.

> I use software that won't run well on that kind of system daily. I also
> don't like my CPU cycles wasted on slow X servers.  That's the whole point
> in having a GPU in the first place.

You've missed my point entirely it seems.  I run my Xserver on a
dedicated processor that does almost nothing but run the Xserver
process and talk to the network.  If I were doing things that involved
graphics rendering operations that could best be done in a specialised
processor on the graphics card then that still wouldn't change anything
(other than I'd have an accelerated graphics card instead of just a
dumb framebuffer).

> There is more to this than human perception. Every cycle saved in the
> graphics pipline is a cycle that can be used for non-graphics work.

Well, yes, sort of.  Presumably you'd move to using an accelerated
graphics card because either your main CPU is not powerful enough to
move 24 or 32-bit pixels fast enough across the PCI bus, or maybe
because the main and only CPU needs to do some other tasks (like talk to
the network).  Maybe you want to pass 3D operations through your Xserver
to that 3D accelerator, etc.

Ultimately though if what happens on the screen happens at a rate fast
enough that you can't perceive the individual raw operations, then
that's fast enough.

Certainly you can optimise other parts of the Xserver so that they waste
fewer cycles, but the point of all this was to understand why the
Xserver needs to touch the bare hardware and it still seems obvious that
it really doesn't -- a GPU actually helps as it reduces your need to
move massive numbers of bits and bytes through an mmap()ed framebuffer
space.  In other words the imposition of a kernel device driver to
mediate DMA operations and such won't actually impair the Xserver's
ability to go fast enough such that you don't perceive the individual
drawing operations.  As it is I can just barely perceive the individual
operations on my piddly little 25MHz SS1+ with a plain dumb framebuffer!
Modern hardware is at least an order of magnitude faster in every
respect and that leaves an awful lot of room for whatever overhead a
decent kernel driver interface might still impose.

> No it isn't.  I don't want to watch X draw!

I'm not asking you to watch X draw things -- I'm telling you that once
you cannot perceive it draw something then you don't have to go any
faster.  Anything any faster is a waste of effort in the wrong direction!

> OK, this is true, but misses the point. If I have a server that can do 1
> million xstones, I also have a server that will use less CPU for those
> times when I'm doing exactly this, like you want it to.

That's only partly true too.  I almost never run any X11 application on
the same CPU as the Xserver -- even just the latency of context switches
annoys me.  Once upon a time on even slower hardware I even ran the
window manager on a third system just to try to smooth things out.

However as I say any modern consumer-grade PC hardware is already at
least an order of magnitude faster in every way than what I'm typing on
now which means that even with the imposition of a proper unix kernel
device driver interface there's still going to be one heck of a lot of
headroom for handling more bits per pixel and doing more intensive
main-CPU work without even adding any specialised graphics processor.

> You'll never make X smooth if you don't also cut down on CPU usage, and
> get the server latency down as much as possible.  

No, I don't have to "cut it down as much as possible" -- I only need to
cut it down to the point where I can no longer visually perceive any lag
in the drawing operations.  There are well known and very hard limits to
human visual perception.  Underneath in the implementation there are
many ways to do smart things that optimize operations so that what you
see on the screen looks "smooth" while not at the same time having to
hammer out bits to the graphics hardware that you cannot ever possibly
perceive.

							Greg A. Woods

+1 416 218-0098      VE3TCP      <>     <>
Planix, Inc. <>;   Secrets of the Weird <>