Natural language generation (NLG) is the task of generating natural language from a meaning representation. Rule-based approaches require domain-specific, manually constructed linguistic resources, while most corpus-based approaches rely on aligned training data and/or phrase templates. Such resources are needed to restrict the search space of the structured prediction task defined by unaligned datasets.
In this talk we will discuss the use of imitation learning for structured prediction, which learns an incremental model that handles the large search space while avoiding explicitly enumerating it. We will show how we adapted the Locally Optimal Learning to Search framework (Chang et al., 2015), which allows us to train against non-decomposable loss functions such as BLEU or ROUGE without assuming gold-standard alignments. Furthermore, we will present an analysis of the results that examines common issues with NLG evaluation.
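To give a concrete sense of what "non-decomposable" means here, the sketch below (a minimal illustration, not the speakers' implementation) computes a simplified sentence-level BLEU, using unigram and bigram precision plus the brevity penalty. Because the score depends on n-gram counts and the length of the whole output, the reward for emitting one correct token varies with the rest of the sequence, so the loss cannot be written as a sum of per-token terms:

```python
import math
from collections import Counter

def ngram_precision(hyp, ref, n):
    """Clipped n-gram precision of a hypothesis against a single reference."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    return overlap / max(sum(hyp_ngrams.values()), 1)

def bleu2(hyp, ref):
    """Simplified sentence-level BLEU: geometric mean of unigram and
    bigram precision, times the brevity penalty."""
    p1 = ngram_precision(hyp, ref, 1)
    p2 = ngram_precision(hyp, ref, 2)
    if p1 == 0 or p2 == 0:
        return 0.0
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(0.5 * (math.log(p1) + math.log(p2)))

ref = "the cat sat on the mat".split()
# Extending a partial output by one *correct* token changes BLEU by an
# amount that depends on the whole sequence so far, not just the token:
score_2 = bleu2("the cat".split(), ref)        # exp(-2) ~ 0.135
score_3 = bleu2("the cat sat".split(), ref)    # exp(-1) ~ 0.368
score_4 = bleu2("the cat sat on".split(), ref) # exp(-0.5) ~ 0.607
```

Note that the gain from the third correct token (about 0.23) differs from the gain from the fourth (about 0.24), even though both tokens are equally "correct"; this whole-sequence dependence is what forces search-based training schemes such as LOLS, rather than per-token losses.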
Gerasimos Lampouras is a postdoc at the University of Sheffield, working with Andreas Vlachos on developing domain-independent Natural Language Generation frameworks using Imitation Learning.
Previously he was a postdoc in the Machine Reading group at University College London, working with Andreas Vlachos and Sebastian Riedel. Before that, he received his PhD from the Athens University of Economics and Business under the supervision of Ion Androutsopoulos.
Speaker home page: http://glampouras.github.io/