TUG2018: Day three

The final day of TUG2018 followed the conference banquet, which of course meant that there were a few tired (or missing!) delegates.

Before coffee

The first talk of the day came from S. K. Venkatesan, focussing on his WeTeX tool and its link to the countability of computing problems.

We then moved to Paulo Cereda (on behalf of Susanne Raab), looking at the TikZducks package. He started by pointing out that whilst drawing ducks is fun, there is serious coding behind it. He showed us a range of examples of how keyval settings allow wide-ranging (and wacky) customisation of duck drawings. A particular highlight was the rendering of Don Knuth as a TikZduck.

After the break

Once we’d all refuelled, Jaeyoung Choi took up the podium to describe work on using MetaFont directly inside FreeType. He laid out the advantages of MetaFont, and the problems for use by font designers. He then moved to look at the particular challenges faced in developing CJK fonts: the very large number of characters, and resulting significant time/cost investment required. With modern computing power, this can be solved using MetaFont to parametrise this large number of glyphs. Jaeyoung demonstrated a GUI which allows control of the appearance of characters in an (almost) interactive way. He then moved on to look at how to integrate MetaFont directly into the TrueType rasteriser.

The final talk came from Will Robertson on fontspec and unicode-math. He started by showing us some issues in the fonts in books for children, before looking over unicode-math. He showed how it handles complex maths, allowing re-use of copied material and changing the style of output. He then looked at the development approach he’s taken in ‘cleaning up’ unicode-math and fontspec. He covered various aspects of the expl3/l3build/Git(Hub) workflow he’s now perfected. He then moved on to fontspec, talking about the background, current interfaces and possible future developments. It was a great final talk: wide-ranging, thought-provoking and fun.

With the formal business done, we headed to the roof of IMPA for the traditional conference photography. After a lunch break, it was off for most of us to the excursion to Sugarloaf Mountain, and the end of the meeting proper.

TUG2018: Day Two

The second day of TUG2018 picked up with a few announcements for those of us here at IMPA, before we moved on to the business end.

Early morning session

Frank Mittelbach started the day’s proceedings, talking about his doc package for literate programming. He explained the background, what works and more importantly what didn’t. The success of doc as a standard makes change challenging, but at the same time there is a need for updates. He then laid out goals for a new version: back-compatibility, new mark-up and out-of-the-box hyperref support. He showed us the features for creating new mark-up. There are some wrinkles, for example that hyperref support still has to be manually activated. Frank wrapped up by pointing to the testing version, and gave us a likely release date (for TL’19).

I then gave my first talk of the day, looking at expl3 concepts related to colour and graphics. I outlined the LaTeX2e background, what is happening with the LaTeX2e drivers and then moved on to my expl3 experiments. First I talked about colo(u)r, and the idea of colour expressions as introduced by xcolor. These are trivial to work out in expl3 due to the expandable FPU we have. I then looked at creating graphics, particularly how I’ve been inspired by pgf/TikZ. I showed how I’ve used the fact that pgf has a clear structure, and mapped that to expl3 concepts. I showed some examples of the existing drawing set up, and where I’ll be going next.
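Colour expressions of the kind xcolor introduced (e.g. `blue!30!white`) reduce to simple per-channel arithmetic, which the expandable expl3 FPU can evaluate. The following is a minimal illustrative sketch only: the macro name is hypothetical, not the experimental interface from the talk.

```latex
\documentclass{article}
\usepackage{expl3}
\ExplSyntaxOn
% Mix two channel values as in the expression <c1>!<pct>!<c2>:
%   result = pct/100 * c1 + (1 - pct/100) * c2
% Fully expandable thanks to \fp_eval:n.
\cs_new:Npn \demo_mix_channel:nnn #1#2#3
  { \fp_eval:n { (#1) / 100 * (#2) + ( 1 - (#1) / 100 ) * (#3) } }
\ExplSyntaxOff
\begin{document}
\ExplSyntaxOn
% 30 % of channel value 1.0 mixed with 70 % of 0.6:
\demo_mix_channel:nnn { 30 } { 1.0 } { 0.6 } % typesets 0.72
\ExplSyntaxOff
\end{document}
```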

After coffee

We returned after coffee for a short talk from Boris Veytsman on tackling an apparently simple issue: putting leaders level with the first line of a long title! He showed that this is a non-trivial requirement, and how as a contractor he has to explain this to his customers. He then showed how he solved the issue, leading to a lively discussion about other possible approaches.

I then came back for my second talk of the day, this time about siunitx. I started by explaining the history of the package, starting with the initial comp.text.tex post that led to its creation. I outlined the core features, present from version 1, and why I’ve now re-written it twice. I finished by promising a first alpha version of version 3: that’s available here.
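The core features present since version 1 are consistent number formatting and unit typesetting; a small example of the long-standing (version 2) interface:

```latex
\documentclass{article}
\usepackage{siunitx}
\begin{document}
% Numbers and units with consistent, configurable formatting:
\num{1.23e4}                         % scientific-notation number
\si{\kilo\gram\metre\per\second}     % a bare unit
\SI{9.81}{\metre\per\second\squared} % number plus unit together
\end{document}
```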

Frank then returned for a morning of symmetry, talking about compatibility requirements. He talked about the historical situation, starting from Knuth’s introduction of TeX and taking us through the development of LaTeX, PDF support and Unicode engines. He then moved on to look at the LaTeX2e approach to compatibility, starting with the 1994 approach, fixltx2e. He explained how that was intended to work, and why it didn’t. The new approach, latexrelease, tackles the same problems but starts with the idea that it applies to both the kernel and to packages. Frank covered the idea of rollback in packages, and how this works at the user and developer levels. Frank finished off with some thoughts about the future, and the fact that most new users probably pick up these ideas without issue.
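The rollback mechanism Frank described can be requested at both the kernel and the package level; a short sketch of the documented syntax (the package name here is a placeholder, and rollback only works for packages that declare versions):

```latex
% Roll the LaTeX kernel back to the release current on a date:
\RequirePackage[2017-04-15]{latexrelease}
\documentclass{article}
% Ask for an earlier version of a rollback-aware package:
\usepackage{mypkg}[=v1]          % by declared version name, or
%\usepackage{mypkg}[=2017-01-01] % by date
\begin{document}
\end{document}
```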

After lunch

Our conference Chair, Paulo Ney de Souza, took the first slot after lunch to speak about how he’s approached a major challenge: managing the abstracts for the upcoming ICM2018 meeting. His talk ranged over topics such as citation formatting, small-device output, production workflows and dealing with author preambles. He covered the wide range of tools his team have assembled to automate PDF creation from a heterogeneous set of sources. His wide-ranging talk was a tour de force in automated publication.

After a brief break, we moved to Tom Hejda (who TeX-sx users know as yo’), on his tool yoin. He explained that his current workflow for producing journal issues is a mix of a range of tools, and this is likely not sustainable long-term. He then moved to showing how yoin can be used to compile both the master file for an issue and, as required, each article within it.

The last talk of the day was from Joachim Heinze, formerly of Springer. He talked about journal publishing, and how online accessibility of publications has changed the landscape for publishers. He gave an entertaining look into this world, posing the question ‘Where is the information we have lost in data?’.

With the formal business done, some of the group remained at IMPA for a workshop on R and Knitr, led by Boris Veytsman. Later, we all met up again for the conference dinner at Rubaiyat Rio.

TUG2018: Day one

Most of the foreign delegates for TUG2018 had met up by last night at the conference hotel, and chats continued over breakfast. Then it was down to the minibus to head to the venue, IMPA.

Opening session

After a brief introduction from the conference chair, Paulo Ney de Souza, the floor was handed to Roberto Ierusalimschy to start us with a bang: an overview of Lua development. He gave us an insight into how Lua grew from early beginnings, and how it got picked up by games developers: a really big part of Lua’s importance. He then gave us an insight into the two key aspects of Lua’s success: the ability to embed and extend the language. That’s led to Lua being embedded in a range of applications, particularly games but also devices as varied as cars and routers. We had a lively question session, ranging from Unicode support to what might have been done differently.

We then moved on to Eduardo Ochs, talking about using Lua as a pre-parser to convert ‘ASCII art’ into complex mathematical diagrams. He explained the pre-history: the origin of the ASCII art as comments to help understand complex TeX code! After a summary of the original pre-processor, he showed how using Lua(TeX), the processing can be done in-line in the file with no true pre-processing step. He showed how this can be set up in an extensible and powerful way.

Morning post-coffee

After the coffee break (plus cake), we reconvened for three talks. Mico Loretan led off, focussing on his package selnolig. He began by showing us examples of ‘unfortunate’ ligatures in English words, and how they can be suppressed by babel and by selnolig. He then focussed in on the detail: what a ligature is, why they are needed and how different fonts provide them. He moved on to detail why you need to suppress ligatures, in particular where they cross morpheme boundaries. Mico then gave us a very useful summary of how the linguistics work here and how they need to link to typography. After showing us the issues with other approaches, he moved on to detail of how selnolig uses LuaTeX callbacks to influence ligatures ‘late’ in processing. His rule-based interface means that ligatures can be suppressed for whole classes of words.

I spoke next, focussing on l3build. I gave a brief overview of LaTeX testing, from the earliest days of the team to the current day. I covered why we’ve picked Lua for our current testing set-up, what works and what (currently) doesn’t.

Paulo Cereda then talked about his build tool, arara. He started with an overview of other tools, before explaining how arara is different: it’s a ‘no-guesswork’ approach. He showed us the core, simple, syntax, before moving on to a timeline of releases to date. He summed up the new features in version 4.0, before moving to a series of live demonstrations. These started with simple ideas and moved on to new, complex ideas such as conditionals and taking user input. He then finished by looking to the future, both of arara and of araras (parrots).

After lunch

We started back after lunch with a couple of slides from Barbara Beeton, sadly absent from the meeting, presented by TUG President Boris Veytsman.

Will Robertson then took the podium. He started with some non-TeX thoughts on questions he gets as an Australian. His koala pictures were particularly fun. His talk then moved to his work with the Learning Management System (LMS) used by his employer. This system (Canvas) has a programmable API for controlling information made available to students. He laid out the issues with the documentation he had: a very large, unmaintainable word processing document. Will talked about various tools for creating HTML from LaTeX, the workflow he has chosen, and then showed more detail on the system he is using, LaTeXML. He then expanded on how using LaTeXML plus scripting, he can populate the LMS in a (semi)automated way, making his work more efficient.

The second speaker in the ‘Australian panel’ session was Ross Moore. Ross started with a demo of why tagging PDFs is needed: making the information accessible not just to people but widely to the computer, to allow re-use in alternative views. He expanded on the drivers for this, in particular legal requirements for accessible documents.

After afternoon break

Our next talk came in remotely from Sandro Coriasco. He started by outlining the team involved in this work, focussed on making material accessible to the blind. The aim of their work has been targeted at mathematical formulae, generating ‘actual text’ which can then be used by screen readers or similar. He then showed that this makes additional useful information available to e.g. screen readers.

We then had a non-TeX talk: Doris Behrendt on GDPR. She started by looking at the EU Official Journal on the GDPR, and we had an excursion into the font used for typesetting (Albertina). She then gave details of the regulations, along with a number of extremely amusing examples of how people have approached them.

Presentations over, the TUG AGM took place, concluding the formal business of the day.

TUG2018 Preview

The TUG2018 meeting starts tomorrow in Rio de Janeiro, Brazil, and the delegates have begun to gather (many of us are staying at the Everest Rio Hotel). I’ll be trying to write up notes each day to summarise the talks, discussions, etc., but you’ll also be able to watch live. There’s also a chat room on TeX StackExchange dedicated to the meeting.

Informal discussions are already ongoing (the LaTeX team members have been hard at it since breakfast), so it should be a productive time.

TUG2015 Beyond the formal

I’ve given a summary of the ‘formal’ business of each session at TUG2015 over the past few days:

Of course, there was a lot more to the meeting beyond the talk sessions. Stefan Kottwitz covered some of them in a TeX-sx blog post, including a picture of (most of) the TeX-sx regulars in attendance.

It was great to meet people I’ve come across over the years but haven’t met in person: I think the only delegate I’d met before was David Carlisle (who lives less than an hour’s drive from home). So each coffee and lunch break was a (quick) chance to at least say hello to people.

I’m told we’ve not had a proper LaTeX team meeting for 10 years: certainly not whilst I’ve been on the team. So a lot of the time for me (and the other LaTeX3 people) was taken up with a long list of ‘agenda’ items. We just about got through them by using two evenings, the last afternoon (before the banquet) and breakfast the day after the conference finished! Hopefully we’ll manage something a bit more regular in the future!

TUG2015 Day Three

The final day of the meeting offered another exciting mix of talks.

Morning three

Session one: Publishing

The day started with Kaveh Bazargan (new TUG President) and Jagath AR from River Valley Technology. They gave us an insight into the serious publishing end of the TeX world. Kaveh showed us some of the ‘interesting’ features one sees in XML workflows, and explained how TeX can enable an XML-first approach. Jagath then showed us the way that they can integrate rich content into PDFs within this method.

Next was Joachim Schrod, who focussed on dealing with lots of documents: the needs of an online bank. Joachim explained the challenges of creating hundreds of thousands of documents a month. In contrast to books or similar, these documents are all very similar and have very limited requirements. What is needed is resilience and speed. We saw how using LaTeX with knowledge of ‘classical’ TeX techniques (DVI mode and \special) can deliver key performance enhancements. He also told us about the challenges, particularly the human one: hiring (La)TeX experts for maintenance is not easy.

The third talk of the session came from S. K. Venkatesan and focussed on using TeX algorithms for scroll-like output. He showed how TeX compares with browsers in this area.

Session two

After coffee, I was back on again to talk about \parshape. I talked about different elements of design of text which are best implemented at the primitive level using \parshape. I showed that we can provide interfaces for different aspects of the shape without the end user needing to know about the back-end detail. My talk was quite short but we got a lot of discussion!
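The primitive itself takes a line count followed by indent/width pairs, with the last pair reused for any further lines; a minimal sketch of the kind of shape involved:

```latex
\documentclass{article}
\begin{document}
% \parshape=<n> then n pairs of <indent> <line width>;
% the final pair applies to all remaining lines.
\parshape=2
  0pt \linewidth
  2em \dimexpr\linewidth-4em\relax
This paragraph has a full-width first line, while every
subsequent line is indented by 2em and made 4em shorter,
all handled by \TeX's paragraph builder at the primitive level.
\end{document}
```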

Next was Julien Cretel, who talked about ideas for implementing Haskell-like functionality in TeX. Julien explained what he enjoys about functional languages, what has already been done in TeX and what he’d like to achieve. In particular, he focussed on tree data structures.

The final morning talk came from Hans Hagen. He started by showing us one of the challenges of grid setting: how to adjust design to accommodate overheight items. There are a lot of challenges, and he explained that there are probably more ways of tackling the problem than users! He then talked about bibliographies in ConTeXt and the reimplementation recently undertaken to use a flexible approach to cover many types of formatting. Hans finished with ‘ASCII math’ parsing, where all mathematics is represented with plain text. Here, Hans had the issue that the input format is rather flexible and not well defined.


After lunch, we had the group photo: there should be a lot of pictures available, given the number of budding photographers! We then reconvened for the final session.

Boris Veytsman gave his third talk of the meeting, looking at how we can arrange for parts of files to be omitted from the output. He described two situations: missing out irrelevant data and omitting sensitive data. Boris showed how to tackle these two challenges by skipping material in the first case, and by stripping the sources in the second.

The final talk came from Enrico Gregorio (TeX-sx user egreg) and his recollections as a spurious space catcher. Enrico showed a collection of ‘interesting’ code, either with missing %, extra % or a curious mixture of both. He then showed us how to fix them, moving from manually setting catcodes for spaces and the like to using expl3 to avoid spacing issues, but also to avoid re-inventing the wheel.
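The classic source of such spurious spaces is a line ending inside a definition, which TeX turns into a space token unless it is masked with `%`:

```latex
% Without the trailing % signs, each line end inside the
% definition would contribute an unwanted space to the output.
\newcommand{\wrap}[1]{%
  [#1]%
}
% \wrap{x} now yields exactly "[x]", with no stray spaces.
```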

Q&A Session

The final session was a free-form Question and Answer session. This led to interesting discussions on BibTeX databases, templates and source highlighting. It also meant we (formally) found out where the next meeting will be: Toronto, some time in early summer 2016.

TUG2015 Day Two

The second day of the meeting had a morning of talks and then the afternoon for the conference outing to the Messel Pit.

Morning two

Session one

The day started with a talk from Pavneet Arora telling us about something a bit different: detecting water leaks in property. Pavneet focussed on what most users want, the output rather than the interface, and how this might lead us to a ‘TeX of Things’. He explained how he’s using TeX as part of a multi-tool chain to provide insight into water flow, using ConTeXt as the mechanism for making reports. All of this was based on Raspberry Pi to target embedded systems.

Tom Hejda then told us about his work creating two document classes: one for a journal and one for producing a thesis, both linked to his university. He contrasted the needs of users for these two document types. He showed us how he’d tackled this, with very different interfaces for the two.

Next was Boris Veytsman on creating multiple bibliographies. He started at the end: looking at the ways you can access reference lists. You might want to look at references by date, by author, by reference callout or indeed by something else. Boris explained how he’s learned from his earlier multibibliography package to create a new package, nmbib. This allows the user to select one or more views of the bibliography in the output.

Session two

After the coffee break, Boris returned along with Leyla Akhmadeeva, looking at supporting a new medical institute in Russia. Leyla is a neurologist and laid out the needs for training doctors. Setting up a new institution in Bashkortostan means developing new communication templates. Boris showed us the requirements for multi-language documents following the Russian formal standard. He showed us the challenges of following those standards, particularly when one of the languages (Bashkir) doesn’t currently have any hyphenation patterns available. He also talked about the design challenges of creating a beamer style using the colour elements from potentially clashing logos.

We then heard from Paul Gessler on converting Git logs into pretty-printed material using TikZ. Paul told us how he got started on the project, answering a question on TeX-StackExchange and getting pulled in by feature requests. He showed us his plans for turning Git branches into PDFs, and also how people have used Git branching to map the Paris Metro!

Question and answer session

The morning session finished with a Q&A to the TUG board. Topics were varied but the focus was on how we attract new users and new developers, and what the meaning of a user group is today. There’s a lot to talk about there, and we broke for lunch with informal chats going on.


The afternoon today features a visit to the Messel Pit. It will be an opportunity to talk about lots of things across the whole group attending. I’ll aim to report back later on the key topics.

TUG2015 Day One


The TUG2015 meeting proper started today, but people started meeting up yesterday. I arrived quite late (and indeed later than I’d expected), but a ‘welcome committee’ of TeX-sx regulars were outside the hotel when I got here! It was nice to finally be able to put some faces to the names.

Morning one

For those of us staying at the conference hotel, there was a chance to meet up with other people over breakfast. We then congregated in the meeting room, which filled up nicely as we got up to the business end: the start of the meeting proper.

The organisers have split the talks into broad topics, which makes summarising things a bit easier!

Session one: PDF Output

After a (very) short opening by outgoing TUG President, Steve Peter (in excellent German), we moved on to three talks broadly focussed on aspects of PDF production, and in particular creating ‘rich’ PDFs. Ross Moore started us off by looking at how he’s been tackling making semantic information available in the PDF output from maths. He’s tackling that using dedicated comments (read by his package) and PDF ‘tool tip’ comments. We then heard from Olaf Drümmer from the PDF Association about accessible PDFs: PDF/UA. These developments again keep semantic information in the PDF itself, so it can be parsed by, for example, screen readers. Ross then returned in a joint talk with Peter Selinger to explain work on updating pdfx to generate PDF/A files from pdfTeX and LuaTeX. They told us about the technical challenges and the improvements users will see in their use of the package.

Session two: Unicode

Session two focussed on the challenges of using the Unicode-compliant engines, XeTeX and LuaTeX. I started off, talking about how we can get data from Unicode into the engines for text processing. I focussed on two areas: setting up things like \catcode and doing case changing. (I’ll probably post the slides and a summary.) Will Robertson then talked about dealing with maths in Unicode, and in particular the challenges of matching up the way Unicode describes maths characters with the way (La)TeX describes them. He looked at some of the decisions he made in unicode-math and how he’s revisiting them. That ran nicely into the final talk of the morning: the GUST TeX team’s first talk, from Piotr Strzelczyk, on Unicode maths fonts. He focussed on the detailed challenges the team have faced.
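Setting up a Unicode engine means, among other things, assigning sensible category codes to tens of thousands of characters from the Unicode data files; a single hand-set example of the underlying operation (normally done in bulk from the data, not by hand):

```latex
% In XeTeX/LuaTeX a character must have catcode 11 (letter) to
% behave as part of a word (hyphenation, use in \csname, ...).
\catcode`\^^^^00e9=11 % U+00E9 (e with acute) treated as a letter
```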

Session three: Futures

After lunch (for me, a LaTeX3 chat with Will Robertson and Bruno Le Floch), we headed back for what I think I’d call a ‘futures’ session. Bogusław Jackowski gave us a ‘big picture’: what to do now that they have a ‘complete’ set of the OpenType maths fonts they set out to develop. We heard about the idea of fonts beyond the rigid box model and other exciting horizons. Frank Mittelbach then gave an overview of LaTeX kernel stability over the last 21 years. He looked at recent changes to how the team are making sure that the kernel stays stable while still fixing bugs, and how that will work in the future. Hans Hagen then gave us something to think about: ‘what if?’. He talked about how TeX has developed based around the limitations of computers, data structures and ideas over time. The conflict between the desires of users, the technology and the developers was familiar to anyone who does any development.

Session four: News

The final session of the day focussed on ‘announcements’. Joachim Schrod gave us an overview of the structure of CTAN, telling us about the different interfaces for different users, and how the different parts interact. The talk gave us an insight into the hard work that goes on every day making TeX available to us all. I then popped back up for a short announcement about the status of the UK TeX FAQ since it moved to a new server. The formal business finished with a memorial for the losses in the last year to key (ex-)TeX people: Pierre MacKay, Richard Southall and Hermann Zapf. Three moving insights.

Preparations for TUG2015

TUG2015 takes place next week in Darmstadt, and as it’s the first time I’ve been able to go to a TUG meeting I’m really looking forward to it. The programme and participant list both look excellent and it will be good to meet several people in person who I’ve only known to date by e-mail. I’ve managed to end up with three talks to give, so I’m hard at work getting them ready (and hoping I can get something in writing for TUGboat too!). As well as the formal business, there will be lots of chances to chat, not least with several other people on the LaTeX team.