Forward to the Dark Ages of Document Processing!

I recently finished and submitted an article for a journal. Apart from submissions in “*.doc / *.docx, *.rtf or *.odt,” as it says on the journal’s Web site, the agreement I signed actually also permits submissions in “XML according to TEI as per [the journal’s] schema.” As the article is ultimately published in TEI—the HTML is generated from TEI, and you can even download the TEI—this would only seem logical, but see my earlier rant on publishing in digital humanities (I still have to say a few more words on this topic, but this’ll have to wait for some other time). I certainly wasn’t going to subject myself to writing an article in Word, so it was clear that I would submit it in TEI.


How to go about writing an article in TEI? I don’t have a problem writing XML “by hand,” but as TEI is not really intended for writing, the markup is rather verbose. The journal’s requirements don’t help either. For example, the journal mandates citations in footnotes; as this is a purely presentational issue, you’d expect it to be handled by the XSLT stylesheet that transforms the TEI to HTML for display—but in fact every reference has to be explicitly marked up as a footnote, i.e., surrounded by <note type="footnote">…</note>. This gets tedious quite quickly. So I looked for a more comfortable and convenient alternative. Of course I could have hacked something together, but it turns out that the TEI output produced by pandoc is pretty close to what the journal seems to want, so I figured I would perhaps only have to do a little postprocessing in XSLT.1
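
For illustration, a sentence with a citation then has to look something like this in the TEI source (the reference itself is made up):

    <p>This point has been made before.<note type="footnote">Jane Doe,
    <hi rend="italic">A Hypothetical Book</hi> (Sometown: Some Press, 2015),
    p. 42.</note> Nevertheless, ...</p>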

Citations and bibliographic references are obviously an important issue when writing a scholarly article. I already have all my bibliographic references in a BibTeX file; the question was how to get them into the article. Pandoc has the pandoc-citeproc filter, which does exactly this. In particular, for TEI output it inserts the formatted references (rather than, say, TEI <biblStruct> elements), which is exactly what the journal wants.
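
To give an idea of how the pieces fit together: an invocation along these lines produces a standalone TEI file with formatted references (the file names are placeholders, and the exact options may of course vary):

    pandoc --filter pandoc-citeproc --bibliography references.bib \
        -s -t tei article.md -o article.xml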

Thus I decided to write the article in pandoc’s variety of Markdown. I’m a bit more familiar with Org mode, and pandoc also supports its citation syntax in Org mode. However, I normally use org-ref for references in Org mode, which uses a different syntax. If I had to do things differently anyway, I thought, I might as well simply use Markdown, which has the advantage that it is the “native language” of pandoc and should thus have the best support.
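
Pandoc’s citation syntax refers to the BibTeX keys with an @ sign; the keys in this example are, of course, made up:

    It has been argued [@doe2015, pp. 33-35; see also @smith2012] that ...
    @doe2015 [chap. 2] goes even further and claims ...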

Writing in Markdown (using Markdown Mode for Emacs) works quite well. If I were to do this more often, I’d certainly want something like org-ref to look up and directly insert references, but this wasn’t a real problem here.

When the article neared completion, I started testing the pandoc TEI output with the journal’s TEI-to-HTML stylesheet. The first thing I discovered was that pandoc does not sanitize IDs. For example, if you assign the ID fig:foo to a figure (as you would do in LaTeX), that ID ends up unchanged in an xml:id attribute, where “:” is not permitted—so the document is not valid. This is arguably a bug in pandoc; to work around it without changing the IDs used in the source file, I wrote a small Lua filter to replace all : in IDs with -. Lua filters are—obviously—a very powerful feature of pandoc. The documentation is not terribly clear, though, and not having written any Lua code before, I had to consult the Lua documentation as well. But if you know what you want, it’s actually quite easy, and the code for this filter is as trivial as you would expect.
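
Schematically, such a filter looks something like this (here covering headers, divs, spans, and images, plus the internal links that point to them):

    -- Make identifiers valid as XML IDs by replacing ":" (as in "fig:foo")
    -- with "-".
    local function fix (el)
      el.attr.identifier = el.attr.identifier:gsub(":", "-")
      return el
    end

    -- Element types that carry identifiers in this document.
    Header = fix
    Div = fix
    Span = fix
    Image = fix

    -- Internal cross-references ("#fig:foo") have to be rewritten as well.
    function Link (el)
      if el.target:sub(1, 1) == "#" then
        el.target = el.target:gsub(":", "-")
      end
      return el
    end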

Next, I had to write a custom template for TEI. By default, pandoc doesn’t output an abstract for TEI documents (let alone two abstracts), and I also needed various additional metadata in the TEI header. Again, if you know what you want, modifying a pandoc template is not very hard. The next thing then was to write the little XSLT stylesheet for postprocessing the output to massage it into the form the journal’s stylesheet expects. Some of the transformations are rather trivial, such as mapping <seg type="code"> to <code>. But I found I also needed to number headings and figures and produce a list of figures; again, these are things you would normally not hardcode in the TEI file but rather leave for the rendering stage. All in all, I ended up with an XSLT stylesheet almost 200 lines long. Phew!
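
To give an impression of what this postprocessing looks like (heavily simplified, and assuming the pandoc output is in the TEI namespace), the stylesheet is essentially an identity transform plus specific templates for the constructs that need fixing, such as the <seg type="code"> mapping mentioned above:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:tei="http://www.tei-c.org/ns/1.0"
        xmlns="http://www.tei-c.org/ns/1.0"
        exclude-result-prefixes="tei">

      <!-- Copy everything not handled by a more specific template. -->
      <xsl:template match="@* | node()">
        <xsl:copy>
          <xsl:apply-templates select="@* | node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- Map pandoc's <seg type="code"> to the <code> element
           the journal's stylesheet expects. -->
      <xsl:template match="tei:seg[@type = 'code']">
        <code>
          <xsl:apply-templates/>
        </code>
      </xsl:template>

    </xsl:stylesheet>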

I then addressed the question of citations. pandoc-citeproc can read BibTeX databases, so luckily I only had to deal with the formatting. For specifying the formatting of references, pandoc-citeproc uses the Citation Style Language (CSL). Each and every journal in the humanities apparently has to invent its own style, and this particular journal is no exception. Needless to say, the journal does not provide a CSL style—authors are apparently expected to format their references manually, and obviously most of them don’t mind. But since the CSL project maintains “a crowdsourced repository with over 8000 free CSL citation styles,” I assumed I would certainly find the journal’s style, or at least a similar one. However, even though the author guidelines give you only one example for each of the major publication types (book, article, etc.), great care had obviously been taken to ensure that the style does not match any of these over 8000 styles. So I figured I also had to write a CSL style.

I hadn’t done this before, but CSL uses an XML syntax, and once you grok the basic concepts, it’s relatively easy to derive a new style from an existing one (even though in the end not much remained of the model). CSL matches the domain quite well, so writing a style is pretty straightforward. For a case like this one, where the task mainly consists in putting the elements of a reference—author, title, etc.—into the right order, inserting punctuation, and making some decisions depending on the publication type, a CSL style is probably easier to write than, say, a BibTeX or biblatex style file. However, CSL isn’t a full programming language, so you can’t, for example, suppress URLs that match a certain pattern or perform arbitrary string transformations.
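
To give an impression of what CSL looks like, here is a heavily stripped-down sketch of a style for citations in footnotes (the title and ID are placeholders, and a real style is considerably longer):

    <?xml version="1.0" encoding="utf-8"?>
    <style xmlns="http://purl.org/net/xbiblio/csl" class="note" version="1.0">
      <info>
        <title>Hypothetical Journal Style</title>
        <id>http://example.org/csl/hypothetical-journal</id>
        <updated>2018-01-01T00:00:00+00:00</updated>
      </info>

      <!-- Reusable building blocks. -->
      <macro name="author">
        <names variable="author">
          <name and="text" delimiter=", "/>
        </names>
      </macro>
      <macro name="title">
        <choose>
          <if type="book">
            <text variable="title" font-style="italic"/>
          </if>
          <else>
            <text variable="title" quotes="true"/>
          </else>
        </choose>
      </macro>

      <!-- class="note": the full reference goes into a footnote,
           as the journal requires. -->
      <citation>
        <layout suffix="." delimiter="; ">
          <text macro="author" suffix=", "/>
          <text macro="title"/>
          <text variable="locator" prefix=", "/>
        </layout>
      </citation>
    </style>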

In the end, I was able to automatically produce a TEI file from Markdown, which I could submit for review. However, writing the filters, transformations, and styles cost me about two days all in all. Given that this is 2018, that is way too much work just for writing a regular scholarly article. Now, let me be very clear: this is not pandoc’s fault: pandoc is a great tool, and without it, it would certainly have taken even longer.

Now, you may say, why don’t you just use Word like everybody else? Well, apart from the fact that Word is a pain to use and not at all suited to the task (and would thus slow me down considerably), I would still have to handle the references somehow. Without a matching citation style, I’d first have to write a style for EndNote, Citavi, Zotero, or some other reference manager, which I’d also have to learn to use first. Of course, you can also type in and format references manually. In the end, I suspect any other way would have taken just as long.

This brings me to my main point: The state of the art in scholarly publishing (even if you only consider the technical aspects) is abysmal. As of 2018, LaTeX (with BibTeX or biblatex) remains pretty much the only comprehensive authoring solution. Writing a paper for, say, an ACM or ACL conference is easy: there are official document and reference styles, so you literally just have to write your paper.2 The point here is not that LaTeX is “better,” but rather that there is a clearly defined path to submission, and authors do not have to concern themselves with the formatting of the document or the references: this is all taken care of automatically. This also means that there is no need for conversions or manual interventions, which invariably introduce errors. In addition, the system is open source and highly portable; you can use it on any platform and with any editor you want.

On a more general level of Kulturkritik, while I sat there hacking on all kinds of files, it occurred to me that document processing had steadily advanced since the 1960s, from troff macro packages, Scribe, and LaTeX through SGML and XML, and in particular with the emergence of the XML ecosystem (XPath, XSLT, XQuery, etc.) based on common concepts (Infoset, DOM). It would be easy to define terse markup languages like Markdown as SGML document types (see “From Wiki to XML, through SGML” for an example) or to write parsers that create a DOM representation on which XML tools could operate directly (see Steven Pemberton’s ideas on Invisible XML; in 2011 I also wrote an as yet unpublished paper on similar ideas).

Unfortunately, most people didn’t get that it’s not about the (admittedly crufty) XML syntax but rather about the data structure it describes (the SGML and XML communities also never did a very good job of communicating this). As XML syntax was increasingly used for all kinds of structured data during the XML hype, people started to notice how cumbersome it is for such applications—for which it had notably never been designed. Instead of working to improve it, people found it more fashionable to bash XML and reject it altogether; “The Rise and Rise of JSON” is a good example, but so is Markdown. We are now on our way back to the dark ages of document processing: a multiplication of markup languages and dialects that are at best defined by regular expressions, at worst just by the intended translation to HTML. HTML 5, the latest and greatest version of HTML, looks like an SGML application and could very well be one, but isn’t—it isn’t even formally defined. The central idea of SGML, namely that document structure can be rigorously validated independently of formatting, has now for the most part been abandoned. As we return to ad-hoc wiki-style markup, we are also abandoning the idea of semantic markup, since these languages are not extensible.

I’m sure there is a lesson to be learned…


  1. In fact, the output would validate against the journal’s RELAX NG schema, so I could have left it at that. I guess I’m too nice.

  2. And we haven’t even considered mathematical or chemical formulas, linguistic examples, or music notation…


Author: Michael Piotrowski

Computational linguist, computer scientist, professor of digital humanities at the University of Lausanne.
