Font encodings, hyphenation and Unicode engines

The LaTeX team have over the past couple of months been taking a good look at the Unicode TeX engines, XeTeX and LuaTeX, and working to make the LaTeX2e kernel more ‘Unicode aware’. We’ve now started looking at an important question: moving documents from pdfTeX to XeTeX or LuaTeX. There are some important differences in how the engines work, and I’ve discussed some of them in a TeX StackExchange post, but here I’m going to look at one (broad) area in particular: font encodings and hyphenation. To understand the issues, we’ll first need a bit of background: first for ‘traditional’ TeX and then for the Unicode engines.

Knuth’s TeX (TeX90), e-TeX and pdfTeX are all 8-bit programs. That means that each font loaded with these engines has 256 slots available for different glyphs. TeX works with numerical character codes, not with what we humans think of as characters, and so what’s happening when we give the input

\documentclass{article}
\begin{document}
Hello world
\end{document}

to produce the output is that TeX is using the glyph in position 72 of the current font (‘H’), then position 101 (‘e’), and so on. For that to work and to allow different languages to be supported, we use the concept of font encodings. Depending on the encoding, the relationship between character number and glyph appearance varies. So for example with

\documentclass{article}
\usepackage[T1]{fontenc}
\begin{document}
\char200
\end{document}

we get ‘È’ but with

\documentclass{article}
\usepackage[T2A]{fontenc}
\begin{document}
\char200
\end{document}

we get ‘И’ (T2A is a Cyrillic encoding).

This has a knock-on effect on dealing with hyphenation: a word which uses ‘È’ will probably have very different allowed hyphenation positions from one using ‘И’. ‘Traditional’ TeX engines store hyphenation data (‘patterns’) in the format file, and to set that up we therefore need to know which encoding will be used for a particular language. For example, English text uses the T1 encoding while Russian uses T2A. So when the LaTeX format gets built for pdfTeX there is some code which selects the correct encoding and does various bits of set up for each language before reading the patterns.
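
To make that a little more concrete, here is a much-simplified, purely illustrative sketch of the kind of set-up involved at format-building time. The register number and the toy pattern are invented for illustration; the real code lives in hyphen.cfg and the language configuration files.

% Much-simplified sketch only: not the real hyphen.cfg code
\lccode200=232          % in T1, slot 200 ('È') lowercases to slot 232 ('è')
\language=2             % illustrative: select the pattern register for this language
\patterns{^^e9t2^^e9}   % toy pattern written with the 8-bit slots of the encoding ("E9 is 'é' in T1)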

Unicode engines are different here for a few reasons. Unicode doesn’t need different font encodings to represent all of the glyph slots we need. Instead, there is a much clearer one-to-one relationship between a slot and what it represents. For the Latin-1 range this is (almost) the same as the T1 encoding. However, once we step outside of this all bets are off, and of course beyond the 8-bit range there’s no equivalent at all in classical TeX. That might sound fine (just pick the right encoding), but there’s the hyphenation issue to watch. Some years ago now the hyphenation patterns used by TeX were translated to Unicode form, and these are read natively by XeTeX (more on LuaTeX below). That means that at present XeTeX will only hyphenate text correctly if it’s either using a Unicode font set up or if it’s in a language that is covered by the Latin-1/T1 range: for example English, French or Spanish but not German (as ß is different in T1 from the Latin-1 position).
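
For comparison with the 8-bit examples above, here is a minimal sketch with a Unicode engine. The font named here is an assumption: it needs to be one that actually contains both glyphs, which the default Latin Modern does not (it has no Cyrillic coverage).

% Compile with xelatex or lualatex
\documentclass{article}
\usepackage{fontspec}
\setmainfont{DejaVu Serif} % assumed to be installed; covers Latin and Cyrillic
\begin{document}
\char"00C8\ and \char"0418 % È and И addressed directly by their Unicode slots
\end{document}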

LuaTeX is something of a special case as it doesn’t save patterns into the format and as the use of ‘callbacks’ allows behaviour to be modified ‘on the fly’. However, at least without some precautions, the same ideas apply here: things are not quite ‘right’ if you try to use a traditional encoding. (Using LuaLaTeX today you get the same result as with XeTeX.)
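
To illustrate that flexibility, here is a sketch only (not the code any real package uses): LuaTeX exposes hyphenation at the Lua level via the lang library, so patterns can be loaded at run time rather than dumped into the format.

% Sketch only: load a pattern at run time via LuaTeX's 'lang' library
\directlua{
  % these TeX comments are stripped before the Lua code is run
  local l = lang.new(1)
  lang.patterns(l, "ab1cd")
  % 'ab1cd' is a toy pattern; a real set-up would read the hyph-utf8 files here
}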

There are potential ways to fix the above, but at the moment these are not fully worked out. It’s also not clear how practical they might be: for XeTeX, it seems the only ‘correct’ solution is to save all of the hyphenation patterns twice, once for Unicode work and once for using ‘traditional’ encodings.

What does this mean for users? Bottom line: don’t use fontenc with XeTeX or LuaTeX unless your text is covered completely by Latin-1/T1. At the moment, if you try something as simple as

\documentclass{article}
\usepackage[T1]{fontenc}
% A quick test to use inputenc only with pdfTeX
\ifdefined\Umathchar\else
  \usepackage[utf8]{inputenc}
\fi
\begin{document}
straße
\end{document}

then you’ll get a surprise: the output is wrong with XeTeX and LuaTeX. So working today you should (probably) be removing fontenc (and almost certainly loading fontspec) if you are using XeTeX or LuaTeX. The team are working on making this more transparent, but it’s not so easy!
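
In the meantime, a preamble along the following lines (a sketch, using the same \Umathchar test as above and assuming fontspec is available for the Unicode engines) keeps a simple document working on all three engines:

\documentclass{article}
\ifdefined\Umathchar
  \usepackage{fontspec}         % XeTeX/LuaTeX: Unicode fonts, no fontenc/inputenc
\else
  \usepackage[T1]{fontenc}      % pdfTeX: 8-bit font encoding
  \usepackage[utf8]{inputenc}   % plus UTF-8 input handling
\fi
\begin{document}
straße
\end{document}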

LaTeX2e and Unicode engines: the detail

As I mentioned in my last post, the LaTeX team are working on various small but important improvements to the LaTeX2e kernel. One area we are looking at is adjusting how the ‘vanilla’ format works with Unicode engines. I’ve been asked for a bit more detail on this area, so I’ll try to fill in what’s going on with the ‘newer’ engines.

To date, the ‘vanilla’ LaTeX format (latex.ltx and associated files) has been pretty much engine-neutral with no attempt to differentiate anything other than to deal with differences between TeX2 (7-bit, released 1982) and TeX3 (8-bit, released 1990). However, the LaTeX formats that almost all users actually load are not just made by running

<engine> -ini latex.ltx

(or similar). The ‘format builders’ (principally the TeX Live team and Christian Schenk for MiKTeX) use a series of .ini files for building formats. For example, pdflatex.ini currently says

% Thomas Esser, 1998. public domain.
\input pdftexconfig.tex
\scrollmode
\input latex.ltx
\endinput

(The .ini files are in the main common to both TeX Live and MiKTeX.) For building a pdfLaTeX that makes sense: pdftexconfig.tex just sets up things related to direct PDF output as opposed to working in DVI mode (a rough outline follows below). Things get more complicated, though, when we look at the Unicode engines: some of the stuff is really ‘general’ and should be present in all LaTeX-based formats with these engines.
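
For reference, pdftexconfig.tex is tiny; abridged, and with the values here shown purely for illustration, it amounts to something like

% Rough outline of pdftexconfig.tex (abridged; values for illustration)
\pdfoutput=1                 % produce PDF directly rather than DVI
\pdfpagewidth=210 true mm    % default physical page size (A4 here)
\pdfpageheight=297 true mm
\pdfhorigin=1 true in        % the traditional 1 inch origin offsets
\pdfvorigin=1 true in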

Both XeTeX and LuaTeX work with the entire Unicode range, so they need information on things like case mapping (\lccode/\uccode) and Unicode math handling (\Umathcode). The LaTeX format includes a \dump at the end, so without some hacking about no code can be added after it’s loaded. More importantly, as XeTeX builds hyphenation into the format in the same way as ‘classical’ TeX, the \lccode data needs to be right before the format loads the patterns. However, that can’t be done just by reading data before latex.ltx: it sets up the 8-bit range for the T1 encoding scheme. That’s an issue as the hyphenation patterns are nowadays stored in Unicode form: the stuff that happens ‘behind the scenes’ therefore (quite reasonably) assumes that the Unicode engines can read these files with no ‘trickery’. To accommodate this, at the moment you’ll find that xelatex.ini includes

\input unicode-letters
% disable the \dump in latex.ltx
\expandafter\let\csname saved-dump-cs\endcsname\dump
\let\dump=\relax
\scrollmode
\input latex.ltx

and later

% Because latex.ltx sets up character code tables for T1 encoding by default,
% we need to reset values from unicode-letters that may have been overridden
\begingroup
\catcode`\@=11 \count@=128 % reset chars "80-"FF to category "other", no case mapping
\loop \ifnum\count@<256
  \global\uccode\count@=0 \global\lccode\count@=0
  \global\catcode\count@=12 \global\sfcode\count@=1000
  \advance\count@ by 1 \repeat
\def\C #1 #2 #3 {\global\uccode"#1="#2 \global\lccode"#1="#3 } % case mappings (non-letter)
\def\L #1 #2 #3 {\global\catcode"#1=11 % category: letter
  \C #1 #2 #3 % with case mappings
  \ifnum"#1="#3 \else \global\sfcode"#1=999 \fi % uppercase letters have sfcode=999
  \global\XeTeXmathcode"#1="7"01"#1 % BMP letters default to class 7 (var), fam 1
  }
\def\l #1 {\L #1 #1 #1 } % letter without case mappings
[data lines]
\endgroup
\expandafter\let\expandafter\dump\csname saved-dump-cs\endcsname
\dump

There are some slight differences for lualatex.ini, but the general idea is the same. Having to ‘hack around’ the kernel like this is not ideal, and the team are much keener on the idea that it should be a documented feature that the Unicode engines are set up for a Unicode encoding (‘UC’) rather than for T1. (I’ll probably return to Unicode encodings in another context in a later post.)

As well as this important area, there are some things that are ‘tacked on’ to the formats by the .ini files but which apply only to one of either XeTeX or LuaTeX. For XeTeX, there is a need to manage the \XeTeXinterchartoks system, for which xelatex.ini currently does

%
% Allocator for \XeTeXintercharclass values, from Enrico Gregorio 
%
\catcode`\@=11
\newcount\xe@alloc@intercharclass % allocates intercharclass
\xe@alloc@intercharclass=\thr@@ % from 4 (1,2 and 3 are used by CJK, AFAIK)
\def\xe@alloc@#1#2#3#4#5{\global\advance#1\@ne
 \xe@ch@ck#1#4#2% make sure there's still room
 \allocationnumber#1%
 \global#3#5\allocationnumber
 \wlog{\string#5=\string#2\the\allocationnumber}}
\def\xe@ch@ck#1#2#3{%
 \ifnum#1<#2\else
  \errmessage{No room for a new #3}%
 \fi}
\def\newXeTeXintercharclass{%
 \xe@alloc@\xe@alloc@intercharclass\XeTeXintercharclass\chardef\@cclv} %at most 254
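
For context, here is a hedged sketch of how a class allocated in this way might then be used; the character and the glue values are purely illustrative.

% Illustrative use only: add stretchable space after an ideographic full stop
\XeTeXinterchartokenstate=1    % turn the inter-character token mechanism on
\newXeTeXintercharclass\myfullstopclass
\XeTeXcharclass`\。=\myfullstopclass
\XeTeXinterchartoks\myfullstopclass 0={\hskip 0.5em plus 0.1em}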

For LuaTeX, there are a couple of things in lualatex.ini that should be in the format. First, there is a difference in how this engine handles negative values of \endlinechar compared with other TeX engines, which requires a patch to LaTeX2e’s \@xtypein. More importantly, LuaTeX only activates its extensions to TeX if some Lua code is used:

\begingroup
\catcode`\{=1
\catcode`\}=2
\directlua{
  % etex and pdftex primitives are enabled without prefixing
  % as well as extended Unicode math primitives (see below)
  tex.enableprimitives('', 
    tex.extraprimitives('etex', 'pdftex', 'umath'))
  % other primitives are prefixed with luatex (see below)
  tex.enableprimitives('luatex', 
    tex.extraprimitives('core', 'omega', 'aleph', 'luatex'))
  }
\endgroup

This has to come right at the start of the build process, but is another thing that can sensibly go into latex.ltx. The team also wonder if all of the primitives should have their ‘natural’ names without the luatex prefix.

All of this can be added to latex.ltx without altering what users have available and without breaking LaTeX2e for pdfTeX users. The team have already made these changes in the development version of the kernel. There are other things yet to be finalised, but it’s highly likely the next release of the LaTeX2e kernel will (finally) recognise the Unicode engines and bring this stuff ‘in house’.