Workshop on Computational Historical Linguistics at NoDaLiDa 2013

It’s been a couple of weeks now since I attended the NoDaLiDa 2013 workshop on Computational Historical Linguistics, where I gave an invited talk.  The workshop—and Oslo in general—was a very pleasant experience.  The organizers (chaired by Þórhallur Eyþórsson from the University of Iceland) had booked a great hotel room for me, no, actually a suite, larger than my apartment in Mainz 🙂  Unfortunately, I couldn’t stay for the main conference because I had to continue to Paris, but at least I could attend the conference dinner, which took place at the Oslo Opera House.  Very nice!

Now, the workshop.  It was very interesting, with a good mix of topics.  NLP for historical texts is still a highly experimental field, and the papers presented at the workshop reflected this.  Apart from the two invited talks given by Seth Kulick (on Treebank analysis using derivation trees) and myself (on Historical NLP and the Digital Humanities), six papers were presented, which I’ll try to summarize here.

Malin Ahlberg, Peter Andersson: Towards automatic tracking of lexical change: Linking historical lexical resources

The authors described ongoing work on linking lexical resources for Old and Modern Swedish (coming from digitized print dictionaries) to the Swedish FrameNet++ resources via the Saldo lexicon for contemporary Swedish.  The linking is based on the lemmas and on additional information from the dictionaries, such as the (modern-language) glosses.  At the moment, coverage and accuracy of the linking are limited—as this is obviously no easy task—but I think this is important research, as it tackles the issues of semantic and grammatical change, which are currently severely under-researched in NLP for historical texts.
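To make the linking idea a bit more concrete, here is a minimal sketch of my own (not the authors’): matching entries from a digitized historical dictionary against a modern lexicon using string similarity on the lemma plus overlap with the modern-language gloss.  The data and the scoring heuristic are invented for illustration; the real pipeline links to Saldo and Swedish FrameNet++ and is considerably more sophisticated.

```python
# Toy illustration of linking historical dictionary entries to a modern
# lexicon via lemma similarity and modern-language gloss overlap.
# All data and the scoring heuristic are made up for this sketch.

from difflib import SequenceMatcher

modern_lexicon = {
    # lemma -> set of gloss words (stand-in for a resource like Saldo)
    "hus": {"building", "dwelling", "house"},
    "husa": {"housemaid", "servant"},
}

old_entries = [
    # (historical headword, modern-language gloss from the print dictionary)
    ("hws", "house, dwelling"),
]

def link(headword, gloss, lexicon):
    """Rank modern lemmas by string similarity plus gloss-word overlap."""
    gloss_words = set(gloss.lower().replace(",", " ").split())
    scored = []
    for lemma, lemma_gloss in lexicon.items():
        sim = SequenceMatcher(None, headword, lemma).ratio()
        overlap = len(gloss_words & lemma_gloss) / max(len(gloss_words), 1)
        scored.append((0.5 * sim + 0.5 * overlap, lemma))
    return max(scored)

for headword, gloss in old_entries:
    score, lemma = link(headword, gloss, modern_lexicon)
    print(f"{headword!r} -> {lemma!r} (score {score:.2f})")
```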

Gerlof Bouma, Yvonne Adesam: Experiments on sentence segmentation in Old Swedish editions

Yvonne presented some very interesting work on identifying sentence boundaries in historical Swedish texts.  Sentence boundary detection is relatively straightforward in English newspaper text, but in historical texts, punctuation is about as standardized as spelling (i.e., not at all).  Since most NLP tools are designed to operate on sentence-segmented text, splitting the text into sentence-like units is likely to be one of the first things you need to do, unless you’re working with an edition that regularizes punctuation.  And because punctuation and capitalization aren’t reliable indicators of sentence boundaries in historical texts, the task is not just a matter of disambiguating periods; it is more akin to finding sentence boundaries in spoken language.

The authors presented several experiments on automatically segmenting Old Swedish texts into sentence-like units and showed that a model combining clues from punctuation, capitalization, and lexical content improves upon a simple capitalization baseline, especially in terms of precision.  The authors noted, however, that the segmentation quality of all models is still insufficient.  The main reason for this is the variation between documents with respect to how boundaries are marked, and, you guessed it, spelling.  The latter means that statistical models have trouble picking up lexical clues.

Another issue is that sentences as we know them from standard written language today are a relatively recent invention; older texts often just ramble on and on.  That’s why I’ve used the term “sentence-like units” above: if a text is not really structured into sentences, you’ll obviously have a hard time finding their boundaries.

I found this a very important paper because it brings up many fundamental issues that I haven’t seen discussed before.
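Just to make the approach concrete for myself, here is a minimal sketch of a boundary classifier in the spirit discussed above, assuming scikit-learn.  The features and the tiny made-up pseudo-Old-Swedish snippets are mine, not the authors’; a real model would obviously need far richer features and training data.

```python
# Toy sketch of a boundary classifier that combines punctuation,
# capitalization, and lexical clues at each token gap.
# Training data and feature set are invented for illustration.

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def gap_features(tokens, i):
    """Features for the gap between tokens[i] and tokens[i + 1]."""
    return {
        "left_is_punct": tokens[i] in {".", "/", ":"},
        "right_is_capitalized": tokens[i + 1][:1].isupper(),
        "left_token": tokens[i].lower(),
        "right_token": tokens[i + 1].lower(),
    }

# (tokens, gap labels) -- 1 marks a sentence-like boundary after tokens[i]
training = [
    (["han", "sagde", "/", "Oc", "hon", "swarade"], [0, 0, 1, 0, 0]),
    (["tha", "kom", "konungen", ".", "han", "talade"], [0, 0, 0, 1, 0]),
]

X, y = [], []
for tokens, labels in training:
    for i, label in enumerate(labels):
        X.append(gap_features(tokens, i))
        y.append(label)

vec = DictVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(X), y)

# Predict boundary probabilities for an unseen token sequence
test = ["oc", "han", "foor", "/", "tha", "sagde", "hon"]
probs = clf.predict_proba(vec.transform(
    [gap_features(test, i) for i in range(len(test) - 1)]))[:, 1]
print([round(float(p), 2) for p in probs])
```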

Stefanie Dipper, Simone Schultz-Balluff: The Anselm Corpus: Methods and perspectives of a parallel aligned corpus

Stefanie presented ongoing work on the St. Anselm corpus.  This is a parallel corpus consisting of about 50 versions of the medieval text Interrogatio Sancti Anselmi de Passione Domini, written in different Early New High German, Middle Low German, and Middle Dutch dialects.  In particular, she described how the corpus has been used in studying historical lexical semantics and historical syntax.

Kimmo Koskenniemi: Finite-state relations between two historically closely related languages

Kimmo presented early experimental work on modeling correspondences between historically related languages, namely Finnish and Estonian, using finite-state transducers.  A very interesting talk, and I’m curious about future results, especially the application of the concept to other languages.
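To give a flavour of the idea, a couple of well-known Finnish–Estonian correspondences can be written as naive rewrite rules, as in the toy sketch below (my own example, not Kimmo’s actual transducers); a real system would compose such rules into proper finite-state transducers and cover far more phenomena.

```python
# Toy illustration of sound correspondences between Finnish and Estonian,
# written as naive string-rewrite rules. Real work uses proper finite-state
# transducers; the rule set here is tiny and oversimplified.

import re

# A few textbook correspondences (Finnish -> Estonian), e.g.
# tie -> tee 'road', tuoli -> tool 'chair', kieli -> keel 'language'
RULES = [
    (r"ie", "ee"),      # Finnish ie ~ Estonian ee
    (r"uo", "oo"),      # Finnish uo ~ Estonian oo
    (r"yö", "öö"),      # Finnish yö ~ Estonian öö
    (r"i$", ""),        # frequent loss of word-final -i (apocope)
]

def finnish_to_estonian_guess(word):
    """Apply the rewrite rules in order and return a candidate cognate."""
    for pattern, replacement in RULES:
        word = re.sub(pattern, replacement, word)
    return word

for fi in ["tie", "tuoli", "kieli", "yö"]:
    print(fi, "->", finnish_to_estonian_guess(fi))
```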

Eva Pettersson, Beáta Megyesi, Jörg Tiedemann: An SMT approach to automatic annotation of historical text

Jörg Tiedemann presented this paper, which describes an approach to normalizing the spelling of historical texts by applying methods from statistical machine translation (SMT) at the character level.  The approach relies on the availability of parallel corpora in historical and modern spelling.  As usual in MT, the corpora are first aligned, and then translation models are trained on the aligned corpora.  While similar approaches have been used for SMT between closely related languages, AFAIK they hadn’t been applied to spelling normalization before.  The authors evaluated their approach on Icelandic and Swedish historical corpora, demonstrating its feasibility.  The approach is obviously very alluring, as it promises to reduce the amount of manual work and relies on off-the-shelf SMT tools.  However, at the moment, the approach requires parallel corpora differing only in spelling, i.e., tokenization and syntactic structure must be identical.
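The core trick, as I understand it, is to make a word-based SMT toolkit operate on characters by treating each character as a “word”.  Here is a minimal sketch of that preprocessing step (the file names and toy word pairs are my own, not from the paper); the resulting files would then be fed to an off-the-shelf SMT toolkit such as Moses.

```python
# Sketch of preparing a word-aligned historical/modern corpus for
# character-level SMT: each word pair becomes one "sentence pair" whose
# "words" are single characters. Toy data for illustration only.

word_pairs = [
    ("thz", "det"),        # invented Old/Modern Swedish-style pairs
    ("oc", "och"),
    ("sigher", "säger"),
]

def to_char_tokens(word):
    """Turn 'sigher' into 's i g h e r' so characters act as SMT words."""
    return " ".join(word)

with open("train.hist", "w", encoding="utf-8") as hist, \
     open("train.mod", "w", encoding="utf-8") as mod:
    for old, new in word_pairs:
        hist.write(to_char_tokens(old) + "\n")
        mod.write(to_char_tokens(new) + "\n")

def from_char_tokens(line):
    """Invert the preprocessing on the SMT output side."""
    return line.replace(" ", "")

print(from_char_tokens("s ä g e r"))  # -> säger
```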

Speaking of MT, I think it would be interesting to discuss whether historical language stages should be treated as closely related languages (in the MT sense) instead of treating them as “misspelled” modern language (the current approach). A normalization approach that is also able to handle (limited) reordering and differences in tokenization would clearly go beyond most of the currently available work on spelling normalization and represent a real breakthrough.  But we aren’t there yet …

Jordi Porta, José-Luis Sancho, Javier Gómez: Edit transducers for spelling variation in Old Spanish

Jordi Porta from the Real Academia Española presented a system for the analysis of Old Spanish word forms using weighted finite-state transducers.  In contrast to most other work on spelling normalization, the authors’ system makes use of existing linguistic knowledge, in particular a modern lexicon, phonological information, and rules describing the evolution of Spanish since the Middle Ages.  The authors showed interesting and very promising evaluation results on texts from different periods, with significant improvements across all datasets, but for me, the most important aspect here is the use of linguistic knowledge, which is currently not very popular in computational linguistics.  However, I still believe in the principle “don’t guess if you know.”  For me, this is definitely some of the most interesting recent work on spelling normalization.
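As a rough illustration of the general rule-plus-lexicon idea (my own toy sketch, not the actual system, which uses weighted finite-state transducers built from a full lexicon and proper phonological rules), one can generate candidates with a few hand-written Old Spanish rewrite rules and keep only those attested in a modern lexicon:

```python
# Toy sketch of knowledge-based normalization: apply a few hand-written
# Old Spanish rewrite rules (with costs) and keep candidates that are
# attested in a modern lexicon. Rules and lexicon are tiny and purely
# illustrative.

from itertools import product

RULES = [
    # (old pattern, modern replacement, cost)
    ("x", "j", 1.0),      # dixo   -> dijo
    ("qua", "cua", 0.5),  # quando -> cuando
    ("nn", "ñ", 0.5),     # anno   -> año
    ("ff", "f", 0.2),
]

MODERN_LEXICON = {"dijo", "cuando", "año", "hijo"}

def candidates(word):
    """Apply every subset of rules and yield (candidate, total cost)."""
    for mask in product([False, True], repeat=len(RULES)):
        cand, cost = word, 0.0
        for use, (old, new, c) in zip(mask, RULES):
            if use and old in cand:
                cand = cand.replace(old, new)
                cost += c
        yield cand, cost

def normalize(word):
    """Return the cheapest candidate that exists in the modern lexicon."""
    hits = [(cost, cand) for cand, cost in candidates(word)
            if cand in MODERN_LEXICON]
    return min(hits)[1] if hits else word

for w in ["dixo", "quando", "anno"]:
    print(w, "->", normalize(w))
```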

So much for my summary of the workshop.  If you want to know more, check out the workshop proceedings.

