Case changing: solving the challenges in TeX

I wrote recently about handling UTF-8 input in Lua, and in particular the fact that text manipulation needs a bit of care. One area I’ve been looking at recently is case-changing operations. We’ve been looking at this for expl3, so I thought it would be worth exploring in a bit of detail. I’m going to focus mainly on the results rather than the implementation: the latter is important when it affects the output but not really otherwise (except for the team!).

Background

The first thing to think about is what case changing is needed for. We’ll see in a bit that TeX uses ‘case changing’ for something very different from what we might think of as changing case in ‘text’. First, though, let’s look at what those ‘normal’ requirements are. The Unicode Consortium have looked in detail at this: take a look at the standard for all of the detail. The common situations are:

  • ‘Removing’ the case from text to allow ‘caseless’ comparisons (‘case-folding’). This is primarily used ‘internally’ by code, and tends traditionally to be handled by simply lower casing everything before some comparison. The Unicode approach has some slight differences between case-folding and lower-casing, but it’s relatively straight-forward.
  • Upper-casing ‘text’. Here, all characters that have a case mapping are changed to the upper-case versions. That’s a relatively simple concept, but there is a bit more to it (as we’ll see).
  • Title- or sentence-casing ‘text’. The concept here is usually implemented by upper-casing the first character of a phrase, or of each word, and lower-casing the rest. Again, the Unicode specs have a bit more to say on this: some characters should not simply be upper-cased at the start of a word in this context but need a special ‘title-case’ form. (For example, in Dutch the digraph ‘ij’ at the start of a word should have both letters upper-cased, as in ‘IJsselmeer’.)

Just to make life a bit more fun, there are also some language-dependent rules for case changing, and some places where the outcome of a case change depends on the context (sigma at the end of words is the most obvious example). So there are a few challenges if we want to cover all of this in TeX. We’ve also got to think about the ‘TeX angle’: what does ‘text’ mean, how do we handle math mode, etc.

TeX primitives

TeX provides two primitives for changing case, \lowercase and \uppercase. These are powerful operations, and in particular are very often used for something that’s got very little to do with case at all: making characters with non-standard category codes. As that isn’t a ‘real’ case change at all, I won’t look at it further here, other than noting that it means we need those primitives for something even if we do case changing another way entirely!

Sticking with changing case of ‘text’, \uppercase and \lowercase rely on the fact that each character has a one-to-one mapping for upper- and lower-casing (defined by \uccode and \lccode). Assuming these are not ‘do nothing’ mappings, they allow a simple replacement of characters:

\uppercase{hello} => HELLO
\lowercase{WORLD} => world

With XeTeX and LuaTeX, these mappings are set up for all sensible UTF-8 codepoints (‘characters’). However, these are one-to-one mappings with no context-awareness: that makes it impossible to cover some parts of the Unicode definitions I’ve mentioned (at least using the primitives directly). They also change everything in the input, which makes handling things like

\uppercase{Some text $y = mx + c$}

a bit tricky (there are ways, of course!).
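
One of those ways, as a minimal sketch: \uppercase only changes character tokens and leaves control sequences untouched, so the math can be hidden in a macro (\mymath is a name made up for this example) and expanded only after the case change has happened.

\def\mymath{$y = mx + c$}
\uppercase{Some text \mymath} => SOME TEXT $y = mx + c$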

Another TeX concern is ‘expandability’: \uppercase and \lowercase are not expandable. That means that while we can do

\uppercase{\def\foo{some text}}

and have \foo defined as SOME TEXT, the apparently ‘obvious’ alternative

\edef\foo{\uppercase{some text}}

doesn’t have the expected result (\foo here is defined as \uppercase{some text}). Moreover, it means we can’t use the primitives inside places where TeX requires expansion. As a result, things like

\csname\lowercase{Some-input}\endcsname

result in an error. Of course, there are always ways around the problem, but I think it looks a lot ‘nicer’ for the user if a way can be found to do these operations expandably. As we’ll see in a bit, that is doable if we accept a few restrictions.
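
For instance, one classic workaround for the \csname example exploits the same fact: \lowercase leaves control sequences alone, so the whole construction can be wrapped, lower-casing the character tokens before \csname gets to work (this avoids the error, although the operation is of course still not expandable)

\lowercase{\csname Some-input\endcsname}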

Case folding

If we want to implement case changing without using \lowercase and \uppercase then we have to have some form of iterative mapping over the ‘text’ input. Doing that while keeping the code expandable is doable if we accept a few restrictions, which I’ll come to in a bit. One to mention now is that the code here assumes e-TeX is available and that we have the \pdfstrcmp primitive or equivalent functionality: pdfTeX, XeTeX and LuaTeX all cover these requirements.

For ‘case-folding’ we can make some simplifications which make this the most straight-forward situation to set up. First, case-folding is a one-to-one change with no context-dependence: nice and easy. Secondly, as this is needed only for ‘internal’ stuff and not for ‘text’ to be typeset we can assume that everything can be handled as a (TeX) string by applying \detokenize. That avoids issues with things like escaping math mode, brace groups and the like. Setting up an expandable mapping is then relatively straight-forward, and the issue becomes simply how we actually change the case of each character.

With a list of over 1000 possible characters to case-fold, comparing each and every one to find a hit would get slow. Luckily, Bruno Le Floch spotted that we can divide up that long list into ‘bite-sized’ chunks by using the last two digits of the character code of the input, giving 100 short lists, each of which is short enough that a linear search is realistic. (For those interested in the internals, the final comparison is done using \str_case:nnF, which is an expandable string-based selection using \pdfstrcmp.)
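
As an illustration only (this is not the real expl3 internals, and \demo_fold_case:n is a made-up name), the final step looks something like an expandable selection over one short list, falling back to the unchanged input when there is no mapping:

\ExplSyntaxOn
\cs_new:Npn \demo_fold_case:n #1
  {
    \str_case:nnF {#1}
      {
        { A } { a }
        { B } { b }
        { C } { c }
      }
      {#1} % No mapping found: return the input unchanged
  }
\ExplSyntaxOff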

Putting everything together led to the documented interface

\str_fold_case:n { <input> }

which does exactly what it says: folds the case of the input, which is treated as a string. The only real point to note here is that with pdfTeX it doesn’t make sense to talk about UTF-8 as the engine doesn’t support it. Thus the changes here are restricted to ASCII (A-Z): for a string that’s clear, but life is a bit more murky for ‘text’ input. I’ll come back to that below.
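
For example, a caseless comparison can be built by folding both sides before an expandable string test. This is only a sketch: the \str_if_eq:eeTF variant spelling here follows current expl3 conventions and is an assumption on my part.

\ExplSyntaxOn
\str_if_eq:eeTF
  { \str_fold_case:n { LaTeX } }
  { \str_fold_case:n { LATEX } }
  { Same }      % The folded strings match
  { Different } % They do not
\ExplSyntaxOff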

Case changing

Real case changing provides a few more challenges. Looking first at the Unicode definitions, there are both context- and language-dependent rules to worry about. It turns out that there are relatively few of these, so a bit of work with some hard-coding seems to cover most of them. That does require a bit of ‘bending the rules’ to fit in with how TeX parses stuff, so there may yet be more work to do here!

As we are now looking at text which might contain a variety of TeX tokens, doing the mapping raises some issues. It turns out that we can do an expandable mapping provided we accept that any brace groups end up with { and } as the grouping tokens even if that wasn’t true to start with (a bit of an edge-case, but we have to specify these things!). (Note that this does require both e-TeX and \pdfstrcmp, so it’s not true for ‘classical’ TeX.) However, that raises an interesting issue: should stuff inside braces be case changed or not? At the moment, we’ve gone for ‘no’, as that’s very much like the BibTeX approach

title = {Some text including a {Proper-Name}}

which also makes the code a bit easier to write. However, it’s not quite clear if this is the best plan: I’ll point to one open question below.
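
To illustrate the current choice (a sketch using the in-progress function names introduced below, so the details may change): the braced material passes through untouched.

\tl_upper_case:n { Some text with a {Proper-Name} }
  => SOME TEXT WITH A {Proper-Name}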

Another question is what category codes should apply in the output. For the folding case, it was easy: everything is treated as a string so the output is too. That’s not the situation for general text, but at the same time it seems sensible to assume that you are case changing things that will be typeset (‘letters’). Again, this is more of a conceptual question than a technical one.

Answering these questions, or at least taking a documented position on them, makes it possible to define functions such as

\tl_lower_case:n { <text> }
\tl_upper_case:nn { <language> } { <text> }

that implement the case changing I’ve outlined. As this is very much a ‘work in progress’ those names are not fixed: there’s a feeling that perhaps \text_... might be more ‘sensible’ (the input should be ‘well-behaved’). What’s needed is some testing: we think the idea is a good one, but at the moment it’s not clear we’ve got all of the ideas right!

Notice the versions that know about languages: the idea is that these will get things like Turkish dotted/dotless-i correct. Of course, that assumes you know the language the input is in, but hopefully that’s normally true!
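
For example, Turkish distinguishes dotted and dotless i, so upper-casing should map i to İ rather than to I. As a sketch (the language identifier ‘tr’ is illustrative: the exact naming scheme is not fixed):

\tl_upper_case:nn { tr } { istanbul } => İSTANBUL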

One thing to note here is again the pdfTeX case. As we are dealing with ‘engine native’ input, it’s only set up to do changes for the ASCII range. That’s fine, but it leaves open the question of LICR text. For example,

 \tl_upper_case:n { \'{e} }

currently doesn’t do anything as there are braces around the e. I’m not sure what’s best: skipping brace groups is generally easier for the user, but they would probably be surprised by this outcome! (With XeTeX or LuaTeX, the input would hopefully be é so the problem doesn’t arise.)

Conclusions

Case changing is a tricky thing to get right. We’ve made some progress in providing a ‘clear’ interface in expl3 that can cover not only UTF-8 input but also language-dependence. What’s needed now is some testing and feedback: we hope these things are useful!

LuaTeX: Manipulating UTF-8 text using Lua

Both the XeTeX and LuaTeX engines are natively UTF-8, which makes input of non-ASCII text a lot easier than with pdfTeX (certainly for the programmer: inputenc hides a lot of complexity for the end user!). With LuaTeX, there is the potential to script in Lua as well as program in TeX macros, and that of course means that you might well want to do manipulation of that UTF-8 input in Lua. What might then catch you out is that it’s not quite as simple as all that!

Lua itself can pass around arbitrary bytes, so input in UTF-8 won’t get mangled. However, the basic string functions provided by Lua are not UTF-8 aware. The LuaTeX manual cautions

The string library functions len, lower, sub, etc. are not UNICODE-aware.

As a result, applying these functions to anything outside the ASCII range is not a good idea. At best you might get unexpected output, so

tex.print (string.lower ("Ł"))

simply prints Ł (with the right font set up): no lower-casing takes place. Worse, you may get an error, as for example

tex.print (string.match ("Ł","[Ł]"))

results in

! String contains an invalid utf-8 sequence.

which is not what you want!

Instead of using the string library, the current correct approach here is to use slnunicode. Again, the LuaTeX manual has some advice:

For strings in the UTF-8 encoding, i.e., strings containing characters above code point 127, the corresponding functions from the slnunicode library can be used, e.g., unicode.utf8.len, unicode.utf8.lower, etc.

and indeed

tex.print(unicode.utf8.lower("Ł"))

does indeed print ł. There are still a few things to watch, though. The LuaTeX manual warns that unicode.utf8.find returns a byte range and that unicode.utf8.match and unicode.utf8.gmatch fall back on non-Unicode behaviour when an empty capture (()) is used. Both of those can be allowed for, of course: they should not be big issues.
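
To see the byte/character distinction in practice, a quick sketch (to be run inside LuaTeX, e.g. via \directlua, where the unicode table is pre-loaded):

local s = "Łódź"
print(string.len(s))         -- 7: counts the bytes of the UTF-8 form
print(unicode.utf8.len(s))   -- 4: counts the characters
print(unicode.utf8.upper(s)) -- ŁÓDŹ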

There’s still a bit of complexity, for two reasons. First, there’s not really much documentation on the slnunicode library, so beyond trying examples it’s not so easy to know what ‘should’ happen. For example, case-changing in Unicode is more complex than a simple one-to-one mapping, and can have language-dependencies. I’ll probably return to that in another post for a TeX (or at least XeTeX/LuaTeX) take on this, but in the Lua context the problem is that it’s not quite clear what’s available! In a way, the second point links to this: the LuaTeX manual tells us

The slnunicode library will be replaced by an internal UNICODE library in a future LuaTeX version.

which of course should lead to better documentation but at the price of having to keep an eye on the situation.

Overall, provided you are aware that you have to think, using Lua with Unicode works well: it’s just not quite as obvious as you might expect!

WinEdt version 6

WinEdt is a popular (La)TeX editor on Windows. A new major release, version 6, has just appeared on the WinEdt homepage. The interface has received a lot of attention, with new icons and menu arrangements. One thing that a lot of people have been hoping for is UTF-8 support, but the release notes say

Avoid sending major feature requests: they will have to wait for version 6.1 (unicode, folding and all)!

So for the moment people who need UTF-8 will have to look elsewhere. Another point to note for users of version 5.5 is that this is not a free upgrade (WinEdt is shareware).

Babel woes

I’ve had a couple of babel-related bug reports for siunitx. The first was with the Spanish language option, where the \percent unit misbehaves as \% is redefined by babel. My fix for this then caused a problem with the Italian language option, as that option makes " an active character. So the latest release (v1.2d) fixes this too: apologies for the problems.

All of this makes me more eager than ever that a general move to UTF-8 input happens. This should cut down on the need for babel. At the same time, an updated LaTeX kernel (LaTeX3) which includes more functionality would also help package authors know what to expect.

Improving LaTeX for the user

I’ve been discussing some points about the future of (La)TeX with various people, and some key issues come to mind. Most LaTeX users do not want to meddle with the internal parts of LaTeX or TeX. In an ideal world, I suspect most users would like to need little beyond the correct document class to get things “just right” in their layout. Perhaps a few simple settings, but really little more than that.

With the correct packages and class loaded, you can do many things in LaTeX. However, you really shouldn’t need to load specific support for hyperlinks, T1 encoding, basic font changing, creating new float types and so on in 2009 (let alone 2010, 2011, etc.). The efforts of the LaTeX3 team have to date focussed on programming and to a lesser extent document design. How things will work at the user level is much less clear.

I’d suggest that a real focus on getting something for users would be the best way forward. This might mean less improvement internally, but I’d think that a LaTeX kernel which could do everything in The LaTeX Companion would be pretty successful, even with few changes “under the hood”. This would mainly be a re-coding exercise from existing packages, which in a way is similar to what I’ve tried to do with siunitx. Much of the basics in siunitx are taken from other packages (at least in terms of user interface), but it brings several ideas together in one place. The same idea could easily be applied to the kernel. Of course, this might leave some of the clever ideas for LaTeX3 out of the code at this stage, but I’d hope it would get momentum behind a more regularly updated system.

One particular area to think about is fonts. With both XeTeX and LuaTeX able to handle system fonts directly, the basic LaTeX system seems very antiquated. At present, LaTeX3 only requires e-TeX, not LuaTeX (in contrast to ConTeXt Mark IV). Should the LaTeX team say something like:

For current testing purposes, only the e-TeX extensions are needed, but this is likely to change. XeTeX or LuaTeX will be required to run the release version of LaTeX3 with full functionality.

I’d say yes, as I think that it’s time to move on from complex font installation and usage restrictions. I’d also be very tempted to say that LaTeX3 will assume UTF-8 input unless otherwise specified (as both XeTeX and LuaTeX are native UTF-8 systems).

This type of approach will make LaTeX easier to use, and I’d hope to see it arrive! After all, users are the TeX community.