Subject: Re: Why not track our xsrc with X11R6.6 from
To: NetBSD-current Discussion List <current-users@NetBSD.ORG>
From: Charles Shannon Hendrix <>
List: current-users
Date: 07/19/2001 02:11:51
On Wed, Jul 18, 2001 at 07:35:37PM -0400, Greg A. Woods wrote:
> [ On Wednesday, July 18, 2001 at 17:09:52 (-0400), Charles Shannon Hendrix wrote: ]
> > Subject: Re: Why not track our xsrc with X11R6.6 from
> >
> > This is not true. You are talking about a kernel driver.
> Of course I'm talking about a kernel driver -- that's exactly what I
> said I was talking about!  

Sorry, I took that as you saying a driver never meant anything but a
piece of kernel code. My mistake.

> All I care is that there be a proper kernel device driver programming
> interface that provides *all* of the necessary hooks, but *only* to
> the necessary hardware.

Great, I'm with you on that.

> If I want to run super-fast monster graphics applications then I'll figure
> out how to run them on the bare hardware -- I do not ever want my Xserver to
> be able to do anything but graphics, even if it is compromised. It MUST NOT
> EVER have to run as root!

Neither do I. I don't remember anyone in the XF mailing lists or
anywhere else ever saying they wanted this. It's just a current reality
of the X server world that people are working on fixing.

> I don't know 100% that I am. I'm just stating my position so that
> someone who knows better about 4.x can tell me whether or not it's
> going in the same direction I want to go.

I think it is, but the necessary kernel pieces are just not there yet
across the platforms it supports. I assume that for the Solaris version,
they use the framebuffer devices, but don't really know that. It might
be that Solaris x86 doesn't have safe drivers either.

> That can't be the reason. If that's the reason then there are
> some significantly ignorant graphics card creators. 

That's what a lot of companies say. Take it however you want. I know
some of them are just stupid, but if I ran a company I would also not
want to build an X server for my card. I might rather just write a
driver that it can use. It's far less work and thus less money, and I
might very well be able to share code more easily with other non-UNIX
and/or non-X graphics systems.

> From cursory examination of the X11 Xserver source it seems that the
> amount of unique code necessary to support any of Sun, HP, IBM, Dec,
> etc., workstations is a very tiny amount of the total set of all code
> they share. I can't imagine it's all that hard to write such support
[ snip ]

It's a small part of the code, sure, but a lot of companies will find
it much easier to write a driver against a specific and relatively
stable API than to integrate it into X. They also have the option
of providing binary-only drivers. I don't like it when they
do that, but this is yet another thing which isn't going to change
overnight, if it ever does.

> I suspect the problem is more to do with that "PC-centric" view of
> things that everyone keeps talking about.....

In that they focus on Windows, yes. But even if their focus was X, the
XF 4.x model is still better than the previous one.

> You misunderstand. I don't think I'm talking about the same kind of
> "drivers" you're talking about. 

No, I know what you really mean. But, you went on a tangent with the
"put the X server in the kernel" message and I just wanted to state
emphatically that putting that in the kernel is bad.  I don't think
we disagree here.

I also think XF 4.x is a step toward this eventuality.

> You can't now, today, run an XFree86 Xserver on NetBSD without
> building an explicitly insecure kernel. That's what concerns me. If
> tracking XFree-4.x more closely means forcing even more platforms to
> become insecure as X11 workstations then that would be very very bad,
> at least in my opinion.

This is no different than when they tracked XF 3.x, which has the same
problem, and is, I think, even worse.

> If on the other hand XFree-4.x is going to provide/use/require a
> cleaner and more secure kernel driver interface then that would be
> good and someone needs to show clearly how this works so that those
> of us leery of existing XFree86 servers will have something to look
> forward to instead of continuing to avoid.

I think the separated driver model makes this easier to accomplish, in
addition to other benefits.

> We already have a good start in the examples of Xservers and graphics
> card drivers written by Sun, HP, DEC, IBM, et al (not to mention the

Servers which we cannot have of course, and on hardware that the vendor
controlled completely. That's a near-idyllic world that doesn't exist
with PCs, though things are better these days than they have been.

> other already available non-XFree86 Xservers specific to NetBSD); and
> it seems Simon's working on making them even better and making them
> work on the new wscons framework in NetBSD. 

...which does nothing for FreeBSD, Linux, or any other OS that XFree
might need to support. It would be nice if there was a graphics hardware
abstraction across platforms, ideally one that was compatible too. Just
having it everywhere is a good first step.

> > We've also seen how stupid it is to put graphics drivers in the
> > kernel.
> We have? Where? Certainly not in the Unix workstation world we haven't
> (at least not in any of the ones who survived!).

Windows NT has put the graphics drivers in the kernel since 4.0. It sucks, and it
does definitely impact stability. They claimed you cannot get good
graphics speed if the graphics code is in userland.

> Hmmm.... that seems to be exactly what Sun's bwtwo, cg*, tcx, etc.
> are.

I'm not debating that: however, it was still done with a monolithic
kernel. Hopefully it's clear now that XF in 4.x and onward wants to
avoid doing that.

> No, that's anyone and everyone who uses open source. We do not any
> longer live in a world of object-only code here! We do not need, or I
> believe even want, to follow the one-binary-does-all philosophy. It's
> possible to arrange things such that even an untrained child can build
> a new Xserver and/or new kernel if that's the way we want to go.

Show me, don't tell me. I know how, but I don't want to do either most
of the time. I have better things to do with my time. I also like the
idea that a graphics driver could be updated and the update installed
without an X server rebuild.

> > Besides that, a lot of the driver code is binary only. Even if all
> > manufacturers agreed to source releases,
> Well, personally I'd only ever use cards supported by freely available
> source code.... 

Then you'll never run the fastest cards out there. Even companies that
are willing often cannot release sources because they use technology
owned by third parties who won't let them. It's not a perfect world, and
lawyers work even longer hours than programmers sometimes.

I have idealistic attitudes, but if I want to run a graphics card that
will run OpenGL at more than a glacial pace, I have to use one for
which no source is available. That's just the reality of the current
situation. Hopefully it will change, but in the meantime I'm glad I can
at least use the hardware.

Graphics card manufacturers are among the most paranoid of all hardware
makers.

> I guess if you're only just barely adhering to the
> "free" in your name then you might want to support interfaces that allow
> vendors to plug object-only code into your Xserver, but that doesn't
> necessarily mean you can't have a clean and safe kernel driver API too.

Would they be more "free" if they left a large portion of their userbase
stranded with no support? I think XFree with a binary driver is a better
alternative than the far overpriced Xig proprietary servers. They want
$100 for a server for my graphics card (nVidia GF3) that doesn't run as
well as XFree and nVidia's binary driver.

XFree cannot control the policies of the graphics card makers, so I
think it is unfair to take stabs like that.  

> > Let's say that happens... does that mean every time nVidia releases
> > a new card or just faster driver code I'm going to have to build a
> > new kernel, possibly even a non-stable release?
> Are you assuming everyone's a gamer, or are we talking about people
> who want to have a good secure and stable workstation platform?

Are you assuming no one is? I happen to be both. I'm more interested in
the latter, but there is no technical reason I can't have both. I can
update many of the drivers on my system without even rebooting. I kinda
like that. It's stupid to have to shut down, for example, when my tape
driver hangs when I can just update it with a bug fix, reload it, and
get on with life.

Graphics cards are even more fussy, and their drivers are updated more
frequently.

> The fun part comes when you try to define kernel device driver
> interfaces that are applicable to an entire class of devices. I don't
> know whether this can be done for a wide variety of modern PC graphics
> cards or not; or even if it's sane to attempt to do so.

I think it can be done. It's true that some video cards fudge the specs
a little, but you shouldn't design a system around the bad eggs. For
people with that hardware, they will simply have to run insecurely.
But for the rest, most PCI and AGP cards should be close enough to
get through a kernel driver init the same way, or very closely. In
fact, you can probably treat all of them as PCI to get started, then
use a separate AGP driver to do things like set the speed and other
AGP-specific operations.

It may even be that when cards first come out, only root X servers
will work until someone puts the proper driver in the various kernels.
Something needs to be done to make this easy for card manufacturers, or
they simply won't bother.

> > Well, you can't expect that of the average user. I don't want to
> > have to rebuild my fscking X server every time there is a bug fix or
> > new feature added to the driver either.
> Why not? unix != windows

Because maybe they have better things to do with their time. I know
I do. Just because I can do something doesn't mean I want to. I can
rebuild the intake on my car, but I still take it to a mechanic because
I'd rather do something else with my time.

> > I'm fairly certain you can't display anything at all with an nVidia
> > GPU without actually programming their GPU. You can't just draw on
> > the framebuffer to get the basics.
> Oh well. I'm sure there's a simple way to provide controlled access to
> the GPU though. You sure as heck don't have to access the PCI bus and
> wander around on it just to find the GPU.

The PCI stuff just hasn't been standard enough to abstract it away. For
the most part in Linux, you can get the information you need without
scanning the hardware. I don't see why XFree cannot look in /proc/pci
for the information it needs. Are you sure it doesn't? The Linux kernel
puts an awful lot of information about the hardware in /proc so you
don't have to bang on hardware.

> That seems to be more or less what folks like NCD etc. have done to
> build custom X11 terminals, and I'm quite happy with my NCDs, but then
> they aren't general-purpose multi-user workstations and they don't
> have the same kinds of security requirements.....

Yeah, but this is an X terminal.  I don't really care what they do.  Every one
of them is a proprietary black-box for the most part.

> I've not seen that yet.... I see and hear instead about ever more code
> in XFree that needs to peek and poke about in arbitrary places in my
> machine and even to wander about figuring out what my PCI bus wiring
> looks like so that it can find out where I've plugged in my graphics
> card.

Where did you hear that?  XF 4.x doesn't peek in your hardware any more
than XF 3.x does.  If XF 4.x does that on NetBSD, it's most likely
because it can't get the info any other way.  Under Linux, it
probably just reads /proc/pci and sees something like this:

Bus  1, device   0, function  0:
  VGA compatible controller: nVidia Corporation NV15 (Geforce2 GTS) (rev 164).
    IRQ 11.
    Master Capable.  Latency=248.  Min Gnt=5.Max Lat=1.
    Non-prefetchable 32 bit memory at 0xc6000000 [0xc6ffffff].
    Prefetchable 32 bit memory at 0xc8000000 [0xcfffffff].

Of course, at that point it does need root to access those memory areas,
but that should be taken care of in the future in a safer way. This
doesn't have register information of course, but I don't know whether
an nVidia card works that way.

A kernel driver for graphics cards could also provide information 
like this of course. 

> > They want to keep the hardware specific code out of the server and
> > the kernel, which is the way it should be.
> The hardware specific code has got to go somewhere. In this case
> there's only the kernel and the Xserver process. Which is it going to
> be in? I want to split it up in some logical way between the kernel
> and the Xserver process so that I don't have to give the Xserver free
> run of all my hardware and DMA transfers, etc.

No, there is the driver layer too, which is where most of the hardware
code should go. Also, the driver can be written to prevent things like
out of bounds DMA. If they don't, then even the kernel drivers you want
won't keep your X server stable.  There is a certain amount of trust
in a driver like that, since X hosing up can leave you unable to
get to your machine, even if it is still running.

This leads me to wonder why it is that so many graphics cards can't
even do a simple reset, and I mean even those on workstations. I've had
almost all types of UNIX machines' X servers lock me out, forcing me to
power off or login some other way to shut them down. X and the video
hardware need to be able to recover without killing the client programs.

> Well, yes, sort of. Presumably you'd move to using an accelerated
> graphics card because either your main CPU is not powerful enough
> to move 24 or 32-bit pixels fast enough across the PCI bus, or maybe
> because the main and only CPU need to do some other tasks (like talk
> to the network). Maybe you want to pass 3D operations through your
> Xserver to that 3D accelerator, etc.

I don't care how fast my CPU is, I'm better off if I have a GPU doing
screen operations.

The PCI bus is not fast enough to manipulate graphics for many applications,
let alone do so without causing other bus devices to take a performance
hit.

> Certainly you can optimise other parts of the Xserver so that they
> waste fewer cycles, but the point of all this was to understand why
> the Xserver needs to touch the bare hardware and it still seems
> obvious that it really doesn't 

Maybe not the bare hardware, but it has to get fairly close or
performance quickly goes to pot.

> As it is I can just barely perceive the individual operations on my piddly
> little 25MHz SS1+ with a plain dumb framebuffer! 

Whatever you say... :) I spent many hours watching one of those things
take a few seconds on some of my more complex dvi files. Accelerated
cards were a must on those systems for me.

> Modern hardware is at least an order of magnitude faster in every respect
> and that leaves an awful lot of room for whatever overhead a decent kernel
> driver interface might still impose.

More like 100 orders of magnitude. It's staggering how powerful my AMD
system is in all areas compared to even my SS5. I/O is easily 5 times
faster, video is nearly 1000 times faster, and the CPU is probably as
much as dozens if not over 100 times faster.

Oh, you said "at least"... you are covered then... :)

> I'm not asking you to watch X draw things -- I'm telling you that
> once you cannot perceive it draw something then you don't have to go
> any faster. Anything any faster is a waste of effort in the wrong
> direction!


If I can draw a display and you cannot see it happen, that does NOT mean
I should stop there. If I can draw it 100 times faster still, then that
is 99 times worth of cycles that I can use for something else.

Yes, there is a point of diminishing returns, but most X servers are
not there yet.

> > OK, this is true, but misses the point. If I have a server that can
> > do 1 million xstones, I also have a server that will use less CPU
> > for those times when I'm doing exactly this, like you want it to.
> That's only partly true too. I almost never run any X11 application
> on the same CPU as the Xserver -- even just the latency of context
> switches annoys me. Once upon a time on even slower hardware I even
> ran the window manager on a third system just to try to smooth things
> out.

On the workstations running X that I thought we were talking about,
it's just about 100% true that offloading work to a GPU will speed up
operations on your CPU.

> > You'll never make X smooth if you don't also cut down on CPU usage,
> > and get the server latency down as much as possible.
> No, I don't have to "cut it down as much as possible" -- I only need
> to cut it down to the point where I can no longer visually perceive
> any lag in the drawing operations. 

But that could easily mean that said operation will eat 100% of your
CPU. If I have an X server that can operate 100 times faster than your
visual perception, I'm down to only 1% of my CPU, leaving the other 99%
to do non-graphics work.

You seem to think that making things faster means we'll be updating the
display faster than the 60-70Hz your eyes operate at, and that's not
the case at all.  The display will only be doing the work you ask of it,
as fast as it possibly can.

Your perception is irrelevant in determining how fast to make the code.
You push until your speed gains are no longer worth the effort; the
point of diminishing returns.

Your perception only matters in game programming or other displays
designed to create some form of visual simulation. Then you'll
definitely appreciate that the X server can render faster than your
eyes. The rest of the time you'll merely benefit from a faster system.

--  _________________________________________________
______________________/ armchairrocketscientistgraffitiexistentialist
 "It's a damn poor mind that can only think of one way to spell a
 word." -- Andrew Jackson