August 30, 2012

Paraphrasing and textual entailment recognition and generation

  • October 4, 2024
  • Main Lecture Room

Paraphrasing methods recognize, generate, or extract phrases, sentences, or longer natural language expressions that convey almost the same information. Textual entailment methods, on the other hand, recognize, generate, or extract pairs of natural language expressions, such that a human who reads (and trusts) the first element of a pair would most likely infer that the other element is also true. Paraphrasing can be seen as bidirectional textual entailment and methods from the two areas are often very similar. Both kinds of methods are useful, at least in principle, in a wide range of natural language processing applications, including question answering, summarization, text generation, and machine translation.
In this thesis, we focus on paraphrase and textual entailment recognition, as well as paraphrase generation. We propose three paraphrase and textual entailment recognition methods, experimentally evaluated on existing benchmarks. The key idea is that by capturing similarities at various abstractions of the inputs, we can recognize paraphrases and textual entailment reasonably well. Additionally, we exploit WordNet and use features that operate on the syntactic level of the language expressions. The best of our three recognition methods achieves state-of-the-art results on the widely used MSR paraphrasing corpus, and even the simplest of our methods is a very competitive baseline. On textual entailment datasets, our methods achieve lower results, but they still perform reasonably well despite being simpler than several other proposed methods; they can therefore serve as competitive baselines for future work.
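The idea of capturing similarities at several abstractions of the two inputs can be illustrated with a minimal sketch. This is not the thesis's actual method: all function names are hypothetical, and simple word-level and character-n-gram overlap stand in for the richer WordNet-based and syntactic features described above. Each similarity becomes one feature, and a threshold on their combination stands in for a trained classifier.

```python
import re

def token_overlap(a, b):
    """Jaccard overlap of word tokens (word-level abstraction)."""
    ta = set(re.findall(r"\w+", a.lower()))
    tb = set(re.findall(r"\w+", b.lower()))
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def char_ngrams(s, n=3):
    """Set of character n-grams (sub-word abstraction)."""
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def ngram_overlap(a, b, n=3):
    """Jaccard overlap of character n-grams."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def is_paraphrase(a, b, threshold=0.5):
    # Combine similarities computed at two abstraction levels;
    # a real system would feed such features to a trained classifier.
    score = 0.5 * token_overlap(a, b) + 0.5 * ngram_overlap(a, b)
    return score >= threshold
```

In practice, each overlap score would be one entry in a feature vector, alongside lemma-level, WordNet-based, and syntactic similarities, and a supervised classifier would replace the fixed threshold.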
On the generation side, we propose a novel approach to paraphrasing sentences that produces candidate paraphrases by applying paraphrasing rules and then ranks the candidates. We also propose a new methodology to evaluate the ranking components of generate-and-rank paraphrase generators, and we created a new paraphrasing dataset for evaluations of this kind. Using this dataset, we show that the performance of our method’s ranker improves when features from our paraphrase recognizer are also included. Furthermore, we show that our method compares well against a state-of-the-art paraphrase generator, and we present evidence that it might benefit from additional training data. Finally, we present three in vitro studies of how paraphrase generation can be used in question answering, natural language generation, and Web advertisements.
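The generate-and-rank scheme can be sketched as two steps: apply substitution rules to produce candidates, then order them with a scoring function. This is a simplified illustration, not the thesis's generator: the rules and the length-based scorer below are hypothetical stand-ins for learned paraphrasing rules and a trained ranker.

```python
def apply_rules(sentence, rules):
    """Generate candidate paraphrases by applying each (lhs, rhs) substitution rule."""
    candidates = []
    for lhs, rhs in rules:
        if lhs in sentence:
            candidates.append(sentence.replace(lhs, rhs))
    return candidates

def rank(candidates, score):
    """Order candidates by a scoring function, best first."""
    return sorted(candidates, key=score, reverse=True)

# Hypothetical example rules and scorer (a real ranker would use
# trained features, e.g. those of the paraphrase recognizer).
rules = [("buy", "purchase"), ("big", "large")]
cands = apply_rules("I want to buy a big house", rules)
ranked = rank(cands, score=lambda s: -len(s))  # toy scorer: prefer shorter
```

A trained ranking component would replace the toy scorer, scoring each candidate's fluency and its semantic closeness to the source sentence.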
