Subject: Re: Why not track our xsrc with X11R6.6 from X.org?
To: NetBSD-current Discussion List <current-users@NetBSD.ORG>
From: Charles Shannon Hendrix <shannon@widomaker.com>
List: current-users
Date: 07/18/2001 17:09:52
On Wed, Jul 18, 2001 at 12:58:10PM -0400, Greg A. Woods wrote:

> I think we're talking about completely different things when we each say
> "driver".  

Yep, that's it.

> I only ever mean something that is part of the kernel and
> which manages all the machine dependent aspects of hardware so that a
> machine (and hopefully architecture) _in_dpendent API can be presented
> for use by any application.  That's what a "driver" is in a unix-like
> operating system.  

This is not true; you are describing a kernel driver. More generally,
a driver is any kind of middleman. For example, I once wrote drivers
for MacroView, a tool for creating real-time displays. The driver I
wrote was userland C code that abstracted the details of our hardware
and software so that MacroView saw its standard interface. This let
us radically change our simulation code and hardware setup without
touching anything in the much larger and more time-intensive MacroView
development.

A driver is not just kernel level interfaces to devices.
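
To give a rough idea of what I mean -- the names here are made up, this
is not the actual MacroView API, just the general shape of a userland
"driver":

    /* generic operations the display tool expects from any data source */
    struct mv_driver {
        int  (*open)(const char *device);        /* attach to our hardware/sim */
        int  (*read_point)(int id, double *val); /* fetch one live value */
        int  (*write_point)(int id, double val); /* push a command back out */
        void (*close)(void);
    };

    /* our hardware-specific code fills in one of these tables */
    extern struct mv_driver simrig_driver;

Swap in a different table and the tool on top never notices that the
hardware underneath changed.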

The graphics drivers you have talked about, such as those from Sun, are
very primitive. Such drivers don't handle even 5% of what is needed for
a modern graphics card.

> OK, so please tell me why those "card driver modules" can't be linked into a
> generic kernel framebuffer driver template [eg. wscons] (or even made
> loadable for those people too lazy to build their own kernels ;-) such that
> the right things can be done on the right sides of the kernel/userland
> boundary with the least amount of effort?

Those modules contain hardware-dependent code and present an API to
X11. There is a desire among various groups to make this work across a
lot of graphics systems: the idea is universal hardware drivers that
present a common API to X11, or to any other graphics subsystem you want.
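
Very roughly, each module exports a table of entry points that the
server calls -- something like this (a simplified sketch, not the
literal XFree86 structures):

    /* sketch of what a loadable card-driver module hands to the server */
    typedef struct {
        const char *name;                  /* "nv", "r128", ... */
        int   (*probe)(void);              /* is our card present? */
        int   (*init_mode)(int w, int h, int depth);
        void *(*map_framebuffer)(void);
        void  (*accel_copy)(int sx, int sy, int dx, int dy, int w, int h);
    } card_driver;

    /* X11 -- or any other graphics subsystem -- just walks a list of these */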

That's a hell of a lot of work, especially given the rapid advance of
graphics systems, but it's happening. It would help a lot if the
graphics card companies would pitch in, and if things were not so
Microsoft-centric.

> > The specifics of graphics initialization are so incredibly specific to a
> > particular card or card revision that there is no possible way you want this
> > stuff in your kernel.

> Oh, but I DO!  I really Really REALLY do!  

Then buy yourself an NT system.... :)

> Or rather I absolutely do not ever want the Xserver to have to know anything
> about the wiring of my system, or if possible even the type of the CPU(s)!  

Then why are you opposed to XFree86 4.x? It is all about removing such
code from the X server.

One of the biggest reasons why so many graphics card creators never
write drivers for UNIX is precisely because you have to build a custom
X server for each card or family of cards. 

Your suggestion of putting the drivers in the kernel is even worse, both
technically and from a manufacturer's point of view.

> Despite knowing how to do fancy things with fancy graphics hardware it's
> still just a user-land application, with network access, and a bloody big
> and complicated one at that.  There's absolutely no way I'm willing to give
> it either direct or indirect control over anything but the graphics card.  

The tight integration of video hardware with the rest of the system
means you'll have to give it that control in a lot of cases. Given the
nature of the beast, I don't see how you can ever totally avoid it.

However, I think a proper driver layer between X (or whatever) and the
hardware could very well help encourage better abstractions and better
hardware designs.

You have to start somewhere, and XFree86 4.x looks like a very good
effort.

> We've already seen dozens of examples of how the lines between the kernel
> and userland can be drawn much "better" without too much impedement to
> getting the job done "right".

We've also seen how stupid it is to put graphics drivers in the kernel.
All we need is an interface to some basic card operations, just to be
able to bootstrap the driver, and do so in a portable way.
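
Something on this order is about all the kernel would need to export --
a hypothetical interface, just to show the scale of it:

    /* hypothetical ioctl interface: enough to find and map the card,
     * nothing more -- the real driver logic lives above this */
    #include <sys/ioccom.h>     /* _IO()/_IOR() macros */

    struct gfx_card_info {
        unsigned long fb_phys;      /* framebuffer aperture */
        unsigned long fb_size;
        unsigned long regs_phys;    /* MMIO register aperture */
        unsigned long regs_size;
        unsigned int  vendor, device;   /* PCI IDs */
    };

    #define GFXIOC_GETINFO  _IOR('G', 0, struct gfx_card_info)
    #define GFXIOC_ENABLE   _IO('G', 1)   /* then mmap() the apertures */

Everything card-specific beyond that -- mode setup, acceleration, the
GPU command stream -- stays in userland where it can be updated without
touching the kernel.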

There is more than one project on the various OSes working on ideas like
this, so perhaps some unification will occur in the future.

> > All you'd be doing is moving about 1/3rd of each of the XFree86 driver
> > modules and all the operating system glue to the kernel, taking up
> > valuable memory if the kernel is non-pageable.
> 
> No, I'd be moving one, and only one, and exactly one, card module into
> the kernel (per system/per card)....

But that's you personally; he's talking about the project as a whole.
There is no way you'd be able to get all that done in the kernel.
Besides that, a lot of the driver code is binary only. Even if all
manufacturers agreed to source releases, is the core team going to let
nVidia and ATI start doing CVS commits?

Let's say that happens... does that mean every time nVidia releases a
new card or just faster driver code I'm going to have to build a new
kernel, possibly even a non-stable release?

Bah! (waves paw)

> However even for the most complex card I can imagine that's still code
> that I would find very valuable to have in my kernel.  I.e. putting it
> in the kernel is well worth whatever resources it takes!  I _want_ that
> code in my kernel!

No you don't. If you knew how a modern GPU like nVidia's worked, there
is no way you would say this. Every single GPU is totally different, far
more different than CPUs are from one another.

The object-manipulation design of some of these chips is immensely
complex, and even the manufacturer learns new things about them as time
goes on. Think for a moment about OpenGL, and a hardware GPU designed to
support it.

You definitely do not want that stuff in the kernel. It belongs in a
driver layer that sits under X, OpenGL, and any kind of DRI API.  

> I'm no more afraid, nor too lazy, to build my custom Xserver than I am to
> build my custom kernel.

Well, you can't expect that of the average user. I don't want to have to
rebuild my fscking X server every time there is a bug fix or new feature
added to the driver either.

> Yes a "generic" Xserver (just as a generic kernel) has place in the
> universe, but unlike a generic kernel such an Xserver should be designed
> to do just the bare minimum necessary to make the widest variety of
> video cards display a passable and usable rendition of X windows until
> you can get a proper one compiled/linked/downloaded/whatever and up and
> running.

You really cannot do that. It only works for places like Sun because
they have such limited graphics hardware.

Not all graphics cards will even support writing on the framebuffer.
I'm fairly certain you can't display anything at all with an nVidia
GPU without actually programming their GPU. You can't just draw on the
framebuffer to get the basics.

The only standard for that sort of generic X server was VGA, and lots
of cards no longer support VGA graphics.

> In fact I'd rather bring the entire Xserver into the kernel (and of
> course do all the work implied to make it secure enough to do so) than I
> would to allow a user-land process have that kind of control over my
> machine.  

Eeewww...

> > And considering how rapidly some of the modules are being developed to
> > take in new cards and revisions, you'd be constantly updating your kernel.
> 
> But I only have one or a few cards in my system, and I wouldn't
> necessarily be running even as bleeding-edge X11 code as I do kernel
> code.....  (especially not on "production" workstations).

That still doesn't change the fact that the kernel would need to be
updated constantly. Is core going to give nVidia and ATI commit privs?

Besides, most of what you want is also a goal of the XFree86 project.

They want to keep the hardware specific code out of the server and the
kernel, which is the way it should be.

> > Your system will do about 11,000 xstones on the local framebuffer. It
> > probably is faster than you need.
> 
> Exactly -- it's already faster than I need it to be!

I use software daily that won't run well on that kind of system. I also
don't like my CPU cycles wasted on slow X servers.  That's the whole point
of having a GPU in the first place.

> OK, so those are some very fast numbers.  What do they mean to human
> visual perception?  I suspect they're all stupidly over-kill for the
> real purpose of showing images to our human eyes.  

There is more to this than human perception. Every cycle saved in the
graphics pipeline is a cycle that can be used for non-graphics work.

> Can't we please try to just get things working efficiently at human-eye
> speeds and be done with it?  

I sure hope not. My human eye can definitely tell when an X server is
anemic, and the rest of my applications slow down when it takes long
enough for me to see it.

> However it's a terrible ugly inelegant waste to be changing things on
> the screen faster than the human eye can perceive them.  

No it isn't.  I don't want to watch X draw!

> If the application interface (i.e. X11 protocol) requires that applications
> be able to send operations to the Xserver at stupidly fast rates then please
> find some way to have the Xserver calculate when it's appropriate to update
> the screen *before* it tries to do so.  All that work can be done in the
> user-land code with nary a call to the kernel graphics driver.

OK, this is true, but it misses the point. If I have a server that can
do 1 million xstones, I also have a server that uses less CPU during
those times when it is doing exactly what you describe.

And for those times when I have a complex document or CAD image on the
screen, I don't want to sit there and watch it draw the image.

> It's irrelevant how fast consumer PC hardware is if it can't be used
> effectively for the job it's intended to do.  Let's get on with making
> X11 graphics look good and smooth to the human eye, and not try just to
> make graphics benchmarks fast.

You'll never make X smooth if you don't also cut down on CPU usage, and
get the server latency down as much as possible.  

Multithreaded X event queues have helped a lot, and if you really want
smooth graphics, you are going to end up eating resources to get that.
Most of the time you decrease efficiency to make things look better.
It's just like multithreading to make a UI smoother... it actually uses
more CPU to accomplish that.

-- 
shannon@widomaker.com  _________________________________________________
______________________/ armchairrocketscientistgraffitiexistentialist
 "All of us get lost in the darkness, dreamers turn to look at the
 stars" -- Rush