Subject: Re: Why not track our xsrc with X11R6.6 from X.org?
To: NetBSD-current Discussion List <current-users@NetBSD.ORG>
From: Greg A. Woods <woods@weird.com>
List: current-users
Date: 07/18/2001 12:58:10
[ On Wednesday, July 18, 2001 at 17:30:26 (+1000), Andrew van der Stock wrote: ]
> Subject: Re: Why not track our xsrc with X11R6.6 from X.org?
>
> you're missing the point of the modern XFree86 architecture. The server guts
> are platform neutral. The modular card drivers are mostly platform neutral.

I think we're talking about completely different things when we each say
"driver".  I only ever mean something that is part of the kernel and
which manages all the machine dependent aspects of hardware so that a
machine (and hopefully architecture) _in_dependent API can be presented
for use by any application.  That's what a "driver" is in a Unix-like
operating system.  Unix-like operating systems do not (if they know
what's good for them) allow user applications to have direct control
over arbitrary bits of system hardware, and they most definitely do not
want user applications to have to know about machine dependent features
such as bus wiring and slot assignments.
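
To put a concrete shape on that (every name below is invented for
illustration -- this is the shape of the thing, not any real NetBSD
interface), all userland should ever see of a graphics card is a
handful of machine-independent entry points:

	/*
	 * Hypothetical sketch of a machine-independent framebuffer
	 * interface.  Userland talks only to entry points like these;
	 * the bus wiring, slot assignments, and register pokes all
	 * stay hidden behind them, inside the kernel.
	 */
	struct fb_ops {
		int	(*fb_ioctl)(void *cookie, u_long cmd, void *data);
		paddr_t	(*fb_mmap)(void *cookie, off_t off, int prot);
		int	(*fb_set_mode)(void *cookie, int w, int h, int d);
	};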

> The entire point of the modular architecture is to allow a vendor to write
> one driver for all platforms, and to have that work on XFree86 4.0 -> 4.9
> without change. So far, that's been the case, as the ABI has been relatively
> static (so far, with a few minor exceptions). Theoretically, it's possible
> for a well-written driver module to support all operating systems on that
> processor, for example, NetBSD, FreeBSD or Linux without recompilation.

OK, so please tell me why those "card driver modules" can't be linked
into a generic kernel framebuffer driver template [e.g. wscons] (or even
made loadable for those people too lazy to build their own kernels ;-)
such that the right things can be done on the right sides of the
kernel/userland boundary with the least amount of effort?
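
I.e. why can't the glue look something like this?  (A sketch only --
the structure and function names are invented here for illustration;
this is not code from any tree.)

	/*
	 * Hypothetical glue: the vendor's platform-neutral card module
	 * supplies the card-specific entry points, and a generic kernel
	 * framebuffer template drives them.  The machine-dependent
	 * register pokes all happen on the kernel side of the line.
	 */
	struct card_module {
		int	(*cm_init)(void *regs);
		int	(*cm_set_mode)(void *regs, int w, int h, int depth);
	};

	int
	genfb_attach(struct card_module *cm, void *regs)
	{

		if ((*cm->cm_init)(regs) != 0)
			return (-1);
		/* pick a sane default mode; userland just mmap()s the result */
		return ((*cm->cm_set_mode)(regs, 1024, 768, 8));
	}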

> The specifics of graphics initialization are so incredibly specific to a
> particular card or card revision that there is no possible way you want this
> stuff in your kernel.

Oh, but I DO!  I really Really REALLY do!  Or rather I absolutely do not
ever want the Xserver to have to know anything about the wiring of my
system, or if possible even the type of the CPU(s)!  However fancy the
things it knows how to do with fancy graphics hardware, the Xserver is
still just a user-land application, with network access, and a bloody
big and complicated one at that.  There's absolutely no way I'm willing
to give
it either direct or indirect control over anything but the graphics
card.  To do so would be ugly and inelegant on many levels and is why I
and others call XFree's design "PC-centric" (i.e. not because of modern
PCs, but because of the legacy of bad design that IBM-PC descendants
have to live with).  We've already seen dozens of examples of how the
lines between the kernel and userland can be drawn much "better"
without too much impediment to getting the job done "right".

> All you'd be doing is moving about 1/3rd of each of
> the XFree86 driver modules and all the operating system glue to the kernel,
> taking up valuable memory if the kernel is non-pageable.

No, I'd be moving one, and only one, and exactly one, card module into
the kernel (per system/per card)....

However, even for the most complex card I can imagine, that's still code
that I would find very valuable to have in my kernel.  I.e. putting it
in the kernel is well worth whatever resources it takes!  I _want_ that
code in my kernel!

It's equally ugly and stupidly excessive to have an Xserver application
that can support all the intimate details of every type of card in the
universe, even if it is very carefully organised such that unneeded code
never gets paged into the working set.  I'm no more afraid, nor too
lazy, to build my custom Xserver than I am to build my custom kernel.
Yes a "generic" Xserver (just as a generic kernel) has place in the
universe, but unlike a generic kernel such an Xserver should be designed
to do just the bare minimum necessary to make the widest variety of
video cards display a passable and usable rendition of X windows until
you can get a proper one compiled/linked/downloaded/whatever and up and
running.

In fact I'd rather bring the entire Xserver into the kernel (and of
course do all the work implied to make it secure enough to do so) than
allow a user-land process to have that kind of control over my
machine.  I know very well that there's a difference between a
workstation and a firewall, but that's the point.  In most cases I think
the workstation is the more hostile environment!

(BTW, isn't some kernel memory pageable already?  I think it is.)

> And considering how
> rapidly some of the modules are being developed to take in new cards and
> revisions, you'd be constantly updating your kernel.

But I only have one or a few cards in my system, and I wouldn't
necessarily be running X11 code even as bleeding-edge as my kernel
code.....  (especially not on "production" workstations).

> Your system will do about 11,000 xstones on the local framebuffer. It
> probably is faster than you need.

Exactly -- it's already faster than I need it to be!

> When we started getting the Matrox
> Millennium working, the first version of the driver that properly setup
> graphics mode (Radek simply copied in the 1024x768 RAMDAC details from
> Windows :-), we got 44,000 xstones with no acceleration. We were already the
> second fastest card at that time, even so. By the time we were finished, the
> Matrox Millennium I was capable of 735,000 xstones under the old
> "workstation"-derived XC (3.3.x) architecture and 950,000 xstones under the
> new 4.0 architecture on my dual PPro 200.

OK, so those are some very fast numbers.  What do they mean to human
visual perception?  I suspect they're all stupidly over-kill for the
real purpose of showing images to our human eyes.  Can't we please try
to just get things working efficiently at human-eye speeds and be done
with it?  When I'm using X11 I don't give a flying hoot about xstones or
any other kind of benchmark that measures things I can't see.  Sure,
it's good to make the operations that get data down to the processors on
the card efficient so that they can do their part of the rendering job.
However, it's a terrible, ugly, inelegant waste to be changing things on
the screen faster than the human eye can perceive them.  If the
application interface (i.e. the X11 protocol) requires that applications
be able to send operations to the Xserver at stupidly fast rates, then
please find some way to have the Xserver calculate when it's appropriate
to update the screen *before* it tries to do so.  All that work can be
done in the user-land code with nary a call to the kernel graphics
driver.
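
Something along these lines would satisfy me (illustrative only --
flush_to_driver() and everything else here are invented names, not
real Xserver code):

	#include <sys/time.h>

	#define FRAME_USEC	(1000000 / 60)	/* ~60 updates/sec is eye-speed */

	extern void	flush_to_driver(void);	/* the one kernel call (hypothetical) */

	static struct timeval	last_flush;
	static int		damage_pending;

	/* Protocol requests merely mark the screen dirty... */
	void
	note_damage(void)
	{

		damage_pending = 1;
	}

	/*
	 * ...and the actual push down to the kernel graphics driver
	 * happens at most at human-eye rates, no matter how stupidly
	 * fast the clients fire requests at the Xserver.
	 */
	void
	maybe_flush(void)
	{
		struct timeval now, delta;

		if (!damage_pending)
			return;
		(void)gettimeofday(&now, NULL);
		timersub(&now, &last_flush, &delta);
		if (delta.tv_sec > 0 || delta.tv_usec >= FRAME_USEC) {
			flush_to_driver();
			last_flush = now;
			damage_pending = 0;
		}
	}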

> This is why I always get a bit
> PO'd with the "workstation" snobs. Consumer PC hardware is more than just
> good - it kicks ass. No one uses banked frame buffers these days, EXCEPT for
> installation routines. Let's move on.

It's irrelevant how fast consumer PC hardware is if it can't be used
effectively for the job it's intended to do.  Let's get on with making
X11 graphics look good and smooth to the human eye, and not just try
to make graphics benchmarks fast.

-- 
							Greg A. Woods

+1 416 218-0098      VE3TCP      <gwoods@acm.org>     <woods@robohack.ca>
Planix, Inc. <woods@planix.com>;   Secrets of the Weird <woods@weird.com>