Subject: Re: wsfont encoding
To: None <>
From: Marcus Comstedt <>
List: tech-kern
Date: 02/03/2001 00:38:09
>>>>> "Noriyuki" == Noriyuki Soda <> writes:

  >> This is much too restrictive for the general case.  If you have for
  >> example an 8859-15 codeset and an 8859-1 font, you should be able to see
  >> the glyphs for the overlapping characters.  This was my initial
  >> requirement.  So how will this be accomplished?

  Noriyuki> Just add the conversion table.

This is what I was worried about: that each conversion table would
have to be added manually.  That's what I asked about, and you
replied "no".  But if a conversion table has to be added for each
combination of input codeset and font encoding, then you _will_ have
to add 7 tables if you have 7 font encodings enabled and want a new
input encoding to work with all of them.  Having to manually add 7
tables just to gain one more input codeset sounds unmanageable to me.

  Noriyuki> As I already said, all thing which can be supported by Unicode based
  Noriyuki> interface can be supported by codeset independent interface.
  Noriyuki> Because the latter is more general.

In my experience, "general" often ends up meaning half-implemented.
That's why I'm a little sceptical.  A Turing machine is general as
hell, but it's not much use in everyday life, because it lacks the
bits that get it anywhere close to real problems.  That's why less
general devices are used to perform actual work.

  >> Having support for different encodings as loadable modules is probably
  >> a good idea.  But I'm still curious about how this user configuration
  >> will work.  If I configure that I want to use ISO-2022 with (among
  >> others) 8859-15 codeset and that I want to have 8859-4 fonts, where
  >> will the translation tables/functions come from?

  Noriyuki> It can be implemented by dedicated module which breaks code sequence to
  Noriyuki> (font_encoding, font_index). But that is probably overkill for 
  Noriyuki> such simple and common requirement.

I'm not talking about breaking an ISO-2022 code sequence into
(optimal_font_encoding, font_index); that's reasonably simple, and we
already have the guts of that code in the wscons_emulvt100.  What I'm
talking about is recoding when a font with optimal_font_encoding is
not available, which would probably be (B) in your graph below.

  Noriyuki> Logically, the following (A) or (B) is considered as the place 
  Noriyuki> where such conversion will be done.

  Noriyuki> 		|
  Noriyuki> 	  (code sequence)
  Noriyuki> 		|
  Noriyuki> 		| .... (A) convert a codeset to another codeset.
  Noriyuki> 		v
  Noriyuki> 	[1] codeset handling layer
  Noriyuki> 		|
  Noriyuki> 	  (font_encoding, font_index)
  Noriyuki> 		|
  Noriyuki> 		| .... (B) convert a font_encoding to another font_encoding.
  Noriyuki> 		v
  Noriyuki> 	[2] rendering interface

  Noriyuki> Implementation in (B) is easy, and it is likely what we currently have
  Noriyuki> (i.e. mapchar), and probably this is the suitable way for simple 
  Noriyuki> mapping like your example.

We had mapchar before my changes, which really didn't do anything and
thus didn't solve (B).  We have mapchar after my changes, which
solves (B) if the first font_encoding is Unicode.  You still have to
show how generalizing this to allow any font_encoding keeps the
implementation "easy" while not putting an unreasonable burden on the
user to configure the system.

  // Marcus