Subject: Re: Blitting
To: Chris Hopps <firstname.lastname@example.org>
From: Eduardo E. Horvath email@example.com <firstname.lastname@example.org>
Date: 03/24/1994 15:09:48
On Thu, 24 Mar 1994, Chris Hopps wrote:
> > If all rendering is done with the blitter, there is also a need to
> > allocate off-screen bitmaps (pixmaps) for the server to be able to
> > scribble into, so a user-level chipmem allocator seems necessary.
> This is best provided under views as part of the interface when
> blit functions are incorporated. When someone writes the blitter routines
> then we can extend the area that is mapped, and we can also play games
> later with VM. No user-accessible generic chipmem allocator should be needed;
> I will work against it in every way possible. If there is no other way
> then there is no other way, but right now I don't see a problem with
> doing it through mmap() calls. This will allow us to have fun with
> VM when and if we want to support that, allowing the allocation of more
> chip memory than is available.
This won't work well with mmap() calls, unless there is a function to mmap
all unused chip mem so the server can create its own chipmem_malloc().
What is needed is the ability to allocate and deallocate pixmaps. It is
probably not even necessary to mmap() the pixmaps, just store data,
retrieve data, and invoke graphic operations on them. Without this, all
off-screen rendering will need to be done with the processor. Two sets
of routines will be needed, one with ioctls for the blitter, and one with
direct CPU operations, making the server much bigger and probably much
slower than the ones we have now.
> > What are the ramifications of these functions on other graphic cards?
> If any other cards are added to dev/view then they need to either
> 1) support the interface's functions, or 2) dev/view needs to provide
> defaults. The reason /dev/view doesn't have blitter functions as part of
> the interface is because I didn't want to write the defaults for boards
> that had none. I also didn't want to write the custom chip routines.
[ Putting on asbestos suit ]
I don't think we want to start cluttering the kernel up with CPU-driven
rendering routines for stupid frame buffers. X servers should query the
particular view device for supported operations and patch the jump table
appropriately. Unused routines are not part of the working set and need
never be paged in. (On the other hand, if pixmaps are not supported, two
sets of routines are used and swapped in, and performance goes down the
drain.)
[ Removing asbestos suit ]
Actually, if we want one server to run on any graphic card we may just
need 3 of each rendering routine:
1) Use ioctls and the blitter
2) CPU rendering for a bit-plane display
3) CPU rendering for a chunky-pixel display
However they are implemented, whether in the kernel or server, you will
need 3 different rendering routines. I would just prefer to leave as
many of them out of the kernel as possible.
Before someone brings up the idea of moving all of this into the server,
let me point out that the server does not have access to interrupt
handlers, which will make performance lousy. And giving the server access
to the custom chips and letting it trash your copper list also sounds
like a pretty bad idea.
Eduardo Horvath email@example.com
"Trust me, I am cognizant of what I am doing." - Hammeroid