Subject: Re: Why not track our xsrc with X11R6.6 from
To: Thor Lancelot Simon <>
From: Charles Shannon Hendrix <>
List: current-users
Date: 07/18/2001 23:59:55
On Wed, Jul 18, 2001 at 06:23:47PM -0400, Thor Lancelot Simon wrote:
> On Wed, Jul 18, 2001 at 05:09:52PM -0400, Charles Shannon Hendrix wrote:
> > 
> > We've also seen how stupid it is to put graphics drivers in the kernel.
> > All we need is an interface to some basic card operations, just to be
> > able to bootstrap the driver, and do so in a portable way.
> Yeah, to hell with security.  I *like* having to trust my X server as much
> as I trust my kernel!

I never said anything about trusting the X server; I don't want it
to run setuid root. Nothing about the XF 4.x model forces you to be
insecure. In fact, the 4.x model is a step toward more abstraction
and safer servers and drivers. Given the huge change 4.x represents, I
don't really expect every issue to be solved until the 5.x releases.
I suspect XFree would like to see the various OSes agree on a kernel
interface they can use, so they aren't responsible for that piece. It
really should not be their job, not entirely anyway.

At least the drivers are separated from the X server, and that does
improve security and reliability, not to mention huge benefits in code
maintenance. You also have a model that will support better decisions in
the future. They can't do it all in a single version change. Just what
they have done so far is a huge change from XF 3.x.

In terms of reliability, you _will_ trust the driver whether you like it
or not. There are certain facts regarding PC hardware that are not going
away any time soon. I am not sure that even a framebuffer standard will
take all of those problems away, at least not for all graphics cards.

With the driver separated from the server in XF 4.x, the driver is only
going to be doing what the server asks it to do. Within the limits of
the X protocol, how can a userland program hurt the system? Even
assuming, for the moment, that the driver's author puts no limits at
all on what the driver will touch, how would I kill a system using its
privileges?

The only way I can think of is with DRI. You could ask the blitter to
write to locations not on the graphics card, or something like that. Of
course,
if the driver is well written, it won't do that. Obviously it will be
better when a standard kernel interface is in place that will enforce
such limitations.

I am hopeful that XF 4.x is the first step in this direction. I'm
wondering how things will work out, and if the various OS teams will
agree on a standard, or if there will have to be yet another layer in
XFree to interface to each OS.

> > Let's say that happens... does that mean every time nVidia releases a
> > new card or just faster driver code I'm going to have to build a new
> > kernel, possibly even a non-stable release?
> Uh, I don't think nVidia is a very good example.  A good chunk of their
> Linux drivers *does* live in the kernel.

Example of what?  I just used their name generically.  Insert any name
you like.

I don't think their method is the best idea either. Neither do they
really, and they've said so. But their focus is on Windows and that's
what their driver team works on, so they continue to use their universal
driver model, and that's how they support Linux. There are some
improvements being talked about, but none of it really matters until
more work is done on some of the basics of graphics device support.

Given what I paid for XF 4.x (nothing), I think they are doing a great
job. Getting the performance my PC system gives me from a workstation
vendor with a "secure" X server would cost me nearly $5K at best.

"There are nowadays professors of philosophy, but not philosophers."