## Goodies from DANTE

I recently discovered that you can read most issues of Die TeXnische Komödie online. My German is good enough to get the gist of the articles, and there is some interesting stuff there. I’ve always meant to join DANTE, if only to support their core CTAN server, and so this seemed like an ideal opportunity. So in the post this week I got the various goodies that they send out: I particularly like the sticker!

What’s particularly interesting for me is the section in Die TeXnische Komödie detailing the various TeX-related meet-ups that take place in Germany: there are a lot! Here in the UK, I’m pleased that UK-TUG manages a couple of training courses and a single speaker meeting/AGM every year. It’s quite a contrast!

## Programming LaTeX3: More on expansion

In the last post, I looked at the idea of expandability, and how we can use x-type expansion to exhaustively expand an argument. I also said that there was more to this, and hinted at two other argument specifications, f- and o-type expansion. We need these because TeX is a macro-expansion language, and while LaTeX3 coding does hide some of this detail it certainly does not get rid of all of it. Both of these forms of expansion are somewhat specialised, but both are also necessary!

## Full (or forced) expansion

The f-type (‘full’ or ‘force’) expansion argument is in some ways similar to the x-type concept, as it’s about trying to expand as much as possible. So

\tl_set:Nn \l_tmpa_tl { foo }
\tl_set:Nn \l_tmpb_tl { \l_tmpa_tl }
\tl_set:Nx \l_tmpc_tl { \l_tmpb_tl }
\tl_show:N \l_tmpc_tl

and

\tl_set:Nn \l_tmpa_tl { foo }
\tl_set:Nn \l_tmpb_tl { \l_tmpa_tl }
\tl_set:Nf \l_tmpc_tl { \l_tmpb_tl }
\tl_show:N \l_tmpc_tl

give the same result: everything is expanded, and \l_tmpc_tl contains ‘foo’. There are two crucial differences, however. First, x-type variants are not expandable, even if their parent function is. On the other hand, f-type expansion is itself expandable. So something like

\cs_new:Npn \my_function:n #1 { \tl_length:n {#1} }
\int_eval:n { \exp_args:Nf \my_function:n { \l_tmpa_tl } + 1 }

will work as we’d want: \l_tmpa_tl will be expanded, then processed by \my_function:n and the result will be evaluated as an integer. Try that with \exp_args:Nx and it will fail.

The second difference is what happens when we hit a non-expandable token. With x-type expansion, TeX will look at the next thing in the input, and so tries to expand everything in the input (hence ‘exhaustive’). On the other hand, f-type expansion stops when the first non-expandable token is found. So

\tl_set:Nn \l_tmpa_tl { foo }
\tl_set:Nn \l_tmpb_tl { bar }
\tl_set:Nf \l_tmpc_tl { \l_tmpa_tl \l_tmpb_tl }
\tl_show:N \l_tmpc_tl

will show

foo\l_tmpb_tl

That happens because the ‘f’ of ‘foo’ is not expandable. So f-type expansion stops before it gets to \l_tmpb_tl, while x-type expansion would keep going.

This second difference is important as it means that some functions will not give the expected result if used inside an f-type expansion. We show this in the code documentation for LaTeX3: functions which fully expand inside both f- and x-type expansions are shown with a hollow star, while those that only work inside an x-type expansion are shown with a filled star.

As you may have picked up, f-type expansion is somewhat specialised. It’s useful when creating expandable commands, but for non-expandable ones x-type expansion is usually more appropriate.

## Expanding just the once

As TeX is a macro expansion language, there are some tasks that are best carried out, or even only doable, using an exact number of expansions. To allow a single expansion, the argument specification o (‘once’) is available. To use this, you need to know what will happen after exactly one expansion. Functions which may be useful in this way have information about their expansion behaviour included in the documentation, while token list variables also expand to their content after exactly one expansion. Examples using o-type expansion tend to be low-level: perhaps the best example is dropping the first token from some input, so

\tl_set:No \l_tmpa_tl { \use_none:n tokens }
\tl_show:N \l_tmpa_tl

will show ‘okens’.
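Another common practical use of expanding once is to hand the content of a token list variable to a function which expects a braced argument, using \exp_args:No. A minimal sketch (\my_show_content:n is a made-up illustrative function, not part of LaTeX3):

```latex
\tl_set:Nn \l_tmpa_tl { hello }
% Purely illustrative function: typesets its argument with a label
\cs_new:Npn \my_show_content:n #1 { Content:~#1 }
% o-type expansion: \l_tmpa_tl is expanded exactly once, so the
% function receives { hello } rather than { \l_tmpa_tl }
\exp_args:No \my_show_content:n { \l_tmpa_tl }
```

(For variables there is also the higher-level V-type specifier, but o-type works on arbitrary tokens, not just variables.)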

As with f-type expansion, expanding just once is something of a specialist tool, but one that is needed. It also completes the types of argument specification we can use, so we’re now in a position to do some more serious LaTeX3 programming.

## Text blocks on both sides of the header

A while ago, I wrote a short series on creating a CV in LaTeX. I’ve made a few adjustments recently, both to my CV and my letter of application, and one issue came up in both of them: putting text on both sides of the header.

For my CV, I wanted to split the address block I put at the top into two parts: address on one side, phone numbers on the other. The easiest way to do that turns out to be a tabular

\noindent
\begin{tabular}{@{}p{0.5\textwidth}@{}p{0.5\textwidth}@{}}
\raggedright
\textbf{Name} \\
...
&
\raggedleft
\null               \par
Tel.\ xxx xxxxxx    \\
Mobile yyy yyy yyyy \\
someone@some.domain
\end{tabular}


There are a few things to notice here. First, I’ve made the table columns take up all of the width of the page by using @{} to remove any inter-column space, then divided the available space up exactly. I’ve used \raggedleft to push the phone numbers to the right-hand margin, and have forced a blank line at the top of the phone number block so the name comes above everything.
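Putting those pieces together, a minimal complete document using this header idea might look as follows (the name, address and numbers are placeholders):

```latex
\documentclass{article}
\begin{document}
\noindent
\begin{tabular}{@{}p{0.5\textwidth}@{}p{0.5\textwidth}@{}}
  \raggedright
  \textbf{A.~N.~Other} \\
  1 Example Street     \\
  Sometown
  &
  \raggedleft
  \null                \par % forced blank line: name sits above everything
  Tel.\ 01234 567890   \\
  someone@some.domain
\end{tabular}
\end{document}
```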

For my letter, I wanted the two blocks again but needed both to be ragged right and with the right-hand one pushed to the right margin. That needs a couple of tables

\begin{tabular}{@{}p{0.5\textwidth}@{}p{0.5\textwidth}@{}}
\raggedright
\toname
\\
\hfil
&
\raggedleft
\begin{tabular}[t]{l@{}}
\ignorespaces
\\[1em]%
\@date
\end{tabular}
\end{tabular}


(This is an adjustment of the standard letter class, hence the various storage macros.) What you’ll notice here is that I’ve used a nested tabular purely to get the alignment right: the [t] argument is vital to get both blocks to line up at the top of the page.

Both of these are quite easy once you know how, but it took a while to get them spot-on!

## arara: Making LaTeX files your way

Building a LaTeX source of any complexity means doing more than a single LaTeX run, for example requiring BibTeX or MakeIndex runs along with multiple LaTeX passes. There are several ways to automate this: you can build your own script or use auto-build tools such as latexmk or Rubber. These tools work by checking for changes in the various auxiliary files that LaTeX creates, so they can work out how many runs are needed. However, a lot of users prefer to retain control of building, and so do the various steps by hand.

There is now a tool that leaves the user in control but which helps to automate building: arara by Paulo Cereda. Arara is a Java-based system, which will automatically run the tools you ask it to based on comments in your source. It’s also up to you to set up the tools you want: you can already get quite a selection thanks to Marco Daniel.

How does this work, then? In your LaTeX source, you have something like

% arara: pdflatex
% arara: bibtex
% arara: pdflatex
% arara: pdflatex

which as you might guess does the classic pdfLaTeX, BibTeX, pdfLaTeX, pdfLaTeX cycle (assuming you have created rules called pdflatex and bibtex). Life gets a bit more interesting when you start adding options to the different tools. For example, if you want to allow shell escape for just one file, you can do

% arara: pdflatex: { shell : yes }
% arara: bibtex
% arara: pdflatex: { shell : yes }
% arara: pdflatex: { shell : yes }

without needing to leave it on for everything. As you can edit the rules easily, it’s simple to add specialist options for the way you work, even if no-one else would ever be interested in them. It also makes it easy to run both pdfLaTeX and traditional dvips routes without having to alter the settings in your editor: just add the appropriate arara rules to your files, and the correct route is chosen automatically.

Arara is very much in development at the moment, and that means there are a few rough edges. For example, you have to set up the right bits and pieces yourself: no installer just yet! However, it looks like a great way to have control over exactly what gets run without needing to script everything yourself.

## biblatex: A team to continue the work

I posted a while ago about biblatex, looking for news of the author, Philipp Lehman. He’d been very active in replying to bug reports up to about 5 months ago, since when no-one has heard from him. As I said before, that’s a big concern well beyond LaTeX work, but it’s also left a question over continued development of biblatex.

Philipp Lehman had discussed a number of plans with the lead developer of Biber, Philip Kime. Unsurprisingly, Philip Kime has been keen to make sure that development of biblatex continues, but he wanted some help with the styles and any ‘hard-core’ TeX stuff. So a small team of ‘biblatex maintainers’ has been set up.

We’ve got two separate tasks. First, we want to deal with issues with the current release of biblatex (v1.7). These are tracked on the SourceForge site, and there are a few outstanding which are being looked at. Second, we want to continue the work that Philipp Lehman had planned. That work is taking place on GitHub, and there are some big issues to tackle.

Perhaps the biggest single item on the horizon for biblatex 2.0 is dropping BibTeX support, and going Biber-only. That’s something that Philipp Lehman had been planning for some time: the reality is that supporting BibTeX is increasingly awkward, and it’s making adding new features increasingly complex (and bug-prone). Of course, dropping BibTeX support will be a significant change, but it’s been on the horizon for some time, and will open the way to further extending the data model that biblatex (and Biber) use.

Of course, if Philipp Lehman does return (and we hope he does), then the ‘team’ will be very happy to hand back to him. For the moment, sticking with the roadmap seems the best way forward.

## Programming LaTeX3: Expandability

In the last part, I looked at integer expressions, and how they can be used to calculate integer values. What I did not do was say exactly what can go inside an integer expression. That’s because it links in to a wider concept, and one that is very familiar to TeX programmers: expandability.

## What is expandability?

To understand expandability, we need to think about what TeX does when we use functions. TeX is a macro expansion language, and as I’ve already said that means that LaTeX3 is too. When we use a function in a place where TeX can execute all of the built-in commands (‘primitives’), we don’t really need to worry about that too much. However, there are places where life is more complicated, as TeX will only execute some of the primitives. These places are ‘expansion contexts’. In these places, only some functions will work as expected, and so it’s important to know what will and will not work.

## LaTeX3 and expandability

For traditional TeX programmers, understanding expandability means knowing the rules that TeX applies to decide what can and cannot be expanded. For LaTeX3, life is different as the documentation includes details of what will and will not work. If you read the documentation, you will see that some functions are marked with a star: those are expandable. For the moment, we won’t worry about the non-starred functions, other than to note that we can’t use them when we need expandability.

So sticking with our starting point, integer expressions, if we look at a function like \int_eval:n, this leads to the conclusion that the argument can only contain

• Numbers, either given directly or stored inside variables
• The symbols +, -, *, /, ( and )
• Functions which are marked with a star in the LaTeX3 documentation, plus any arguments these themselves need.

That hopefully makes some sense: \int_eval:n is supposed to produce a number, and so what we put in should make sense for turning into a number. The same idea applies to all of the other integer expression functions we saw last time.
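To make that concrete, here is a small example which stays entirely within those rules: numbers, symbols, a variable, and a starred (expandable) function with its arguments.

```latex
\int_set:Nn \l_tmpa_int { 3 }
% A variable holding a number, the symbols * + ( ), and the
% expandable function \int_max:nn are all valid in the expression
\int_eval:n { 2 * ( \l_tmpa_int + \int_max:nn { 4 } { 7 } ) }
% Evaluates to 20: \int_max:nn {4} {7} gives 7, then 2 * (3 + 7)
```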

## Expansion in our own functions

Integer expression functions make a good example for expandability, but if that was the only area that expansion applied it would not be that significant. However, there are lots of other places where we want to carry out expansion, and this takes us back to the idea of the argument specification for functions. There are three expansion related argument specifiers: f, o and x. Here, I’m going to deal just with x-type expansion, and will talk about f- and o-type expansion next time!

So what is x-type expansion? It’s exhaustive: expanding everything until only non-expandable content remains. We’ve already seen the idea that, for example, \tl_put_right:NV is related to \tl_put_right:Nn, so that we can access the value of a variable. So it should not be too much of a leap to see a relationship between \tl_set:Nn and \tl_set:Nx. So when we do

\tl_set:Nn \l_tmpa_tl { foo }
\tl_set:Nn \l_tmpb_tl { \l_tmpa_tl }
\tl_set:Nx \l_tmpc_tl { \l_tmpb_tl }
\tl_show:N \l_tmpc_tl

TeX exhaustively expands: it expands \l_tmpb_tl, and finds \l_tmpa_tl, then expands \l_tmpa_tl to foo, then stops as letters are not expandable. Inside an x-type expansion, TeX just keeps going! So we don’t need to know much about the content we are expanding.

With a function such as \int_eval:n, the argument specification is just n, so you might wonder why there is no x-type variant. The reason is that x-type functions are (almost) always defined as variants of an n-type parent. So functions that have to expand material (there is no choice) just take an n-type argument. (There are also a few things that \int_eval:n will expand that an x-type argument will not, but in general that’s not an issue, as it only shows up if you make a mistake.)

## Functions that can be expanded

I’ve said that functions that can be expanded fully are marked with a star in the documentation, but how do you make your own functions that can be expanded? It’s not too complicated: a function is expandable if it uses only expandable functions. So if all of the functions you use are marked with a star, then yours will be too.

There is a bit more to this, though. We have two (common) ways of creating new functions

• \cs_new:Npn
• \cs_new_protected:Npn

The difference is expandability. Protected functions are not expandable, whereas ones created with \cs_new:Npn should be. So the rule is to use \cs_new_protected:Npn unless you are sure that your function is expandable.
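As a sketch, with made-up function names:

```latex
% Only expandable (starred) functions inside, so \cs_new:Npn is safe:
% this function can itself be used in expansion contexts
\cs_new:Npn \my_double:n #1 { \int_eval:n { 2 * (#1) } }
% \tl_set:Nn carries out an assignment, which is not expandable,
% so this function must be created as protected
\cs_new_protected:Npn \my_store:n #1 { \tl_set:Nn \l_tmpa_tl {#1} }
```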

## LaTeX2e hackers note: What about \protected@edef?

Experienced TeX hackers will recognise that x-type expansion is built around TeX’s \edef primitive. If you’ve worked a lot with LaTeX2e, you’ll know that with any user input you should use \protected@edef, rather than \edef, so that LaTeX2e robust commands are handled safely.

If you are working with LaTeX2e input, you’ll still need to use \protected@edef to be safe when expanding arbitrary user input. All LaTeX3 functions are either fully-expandable or engine-protected, so don’t need the LaTeX2e mechanism, but of course that is not true for LaTeX2e commands.

## Die TeXnische Komödie online

TeX use is very strong in German-speaking countries, and that means that the German-speaking TeX group, DANTE, is also very active. Their members’ magazine, Die TeXnische Komödie, has until recently only been available to members in print. However, it’s now available online. Non-members can see editions which are over a year old, similar to the case for TUGboat.

## siunitx: v2.5 and beyond

Anyone who watches the BitBucket site for siunitx development will have noticed that I’ve been adding a few new features. As I’ve done for every release in the 2.x series, new options mean a new minor revision, and so these will all be in v2.5. I’ve also revised some of the behaviour concerning math mode, so there are now very few options which automatically assume math mode.

Looking beyond v2.5, I have some bigger changes I’d like to make to siunitx. When I initially developed the package, it was very much a mixture of things that seemed like a good idea. The work for version 2 meant a lot of changes, and a lot more order. However, I’ve learnt more about units, LaTeX and programming since then, and that means that there are more changes to think about.

The internal structure is quite good, but I need to work on some parts of the code again. For users, of course, that won’t show up, but it is important to me. It’s also not so straightforward: the .dtx is about 17 000 lines long! However, there are also some issues at the user level. In particular, I think I’ve offered too many options in some areas, for example font selection. Revising those will alter behaviour, but it will also improve performance and the clarity of some edge cases. However, that is not such easy work and will take a while. I’ve got lots of other TeX commitments (plus of course a life beyond LaTeX), so these changes will wait a while yet. So once v2.5 is finalised I’d expect to have little change in siunitx for some time: probably until at least the autumn, and quite possibly the end of the year.

## biblatex status

Over the past few years, the biblatex package has been developed by Philipp Lehman to be the leading method for creating bibliographies in LaTeX. The combination of biblatex with Biber is even more powerful: arguably the most complete solution to database-driven bibliographies available.

Philipp Lehman has done a massive amount of work to get us to this position, and has until recently been very reactive to user feedback. However, over the past few months no-one has heard from him. Philip Kime has posted to (de.)comp.text.tex about this: it seems that the TeX community as a whole knows very little about Philipp Lehman beyond his e-mail address! (A number of avenues have regrettably given no more information.) Any information, or indeed contact from Philipp, would be great.

Philip Kime has some patches for biblatex which are not in the CTAN release and which improve interaction with Biber. He’s therefore created a ‘caretaker’ clone of biblatex on GitHub, so that these are not ‘lost’.

What happens next of course depends on whether Philipp Lehman reappears: clearly the best outcome. There are no urgent bug fixes or changes needed at the moment, so until after TeX Live 2012 is finalised there is no appetite to take any further action. That can’t go on for ever, of course, so if there is still no contact by the time of the TeX Live freeze then steps will have to be taken. Anyone interested in volunteering to help if that is needed should get in touch!