Recurrent models and lower bounds for projective syntactic decoding

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Abstract

The current state of the art in neural graph-based parsing uses only approximate decoding at the training phase. In this paper we aim to understand this result better. We show how recurrent models can carry out projective maximum spanning tree decoding. This result holds for both current state-of-the-art models, for shift-reduce and graph-based parsers, projective or not. We also provide the first proofs of lower bounds for projective maximum spanning tree, DAG, and digraph decoding.
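To make the decoding problem the abstract refers to concrete, below is a minimal illustrative sketch of projective maximum spanning tree decoding using the classic Eisner-style O(n³) dynamic program over complete and incomplete spans. This is not the paper's own construction or proof apparatus; the function names (`eisner`, `_backtrack`) and the NumPy score-matrix interface are assumptions made for the example.

```python
import numpy as np

def eisner(scores):
    """Decode the highest-scoring projective dependency tree.

    scores[h, m] is the score of an arc from head h to modifier m,
    with index 0 reserved for the artificial root. Returns `heads`,
    where heads[m] is the head of token m (heads[0] is unused).
    """
    n = scores.shape[0] - 1                       # number of real tokens
    C = np.full((n + 1, n + 1, 2), -np.inf)       # complete-span chart
    I = np.full((n + 1, n + 1, 2), -np.inf)       # incomplete-span chart
    Cb = np.zeros((n + 1, n + 1, 2), dtype=int)   # split backpointers
    Ib = np.zeros((n + 1, n + 1, 2), dtype=int)
    for s in range(n + 1):                        # width-0 base case
        C[s, s, 0] = C[s, s, 1] = 0.0

    for k in range(1, n + 1):                     # span width
        for s in range(n + 1 - k):
            t = s + k
            # Incomplete spans: choose a split r, then add the covering arc.
            inner = C[s, s:t, 1] + C[s + 1:t + 1, t, 0]
            r = int(np.argmax(inner))
            I[s, t, 0] = inner[r] + scores[t, s]  # arc t -> s
            I[s, t, 1] = inner[r] + scores[s, t]  # arc s -> t
            Ib[s, t, 0] = Ib[s, t, 1] = s + r
            # Complete span headed at t (absorbing material to its left).
            left = C[s, s:t, 0] + I[s:t, t, 0]
            r = int(np.argmax(left))
            C[s, t, 0], Cb[s, t, 0] = left[r], s + r
            # Complete span headed at s (absorbing material to its right).
            right = I[s, s + 1:t + 1, 1] + C[s + 1:t + 1, t, 1]
            r = int(np.argmax(right))
            C[s, t, 1], Cb[s, t, 1] = right[r], s + r + 1

    heads = [0] * (n + 1)
    _backtrack(Ib, Cb, 0, n, 1, True, heads)      # best tree is C[0, n, 1]
    return heads

def _backtrack(Ib, Cb, s, t, d, complete, heads):
    """Recover arcs from backpointers; d=1 means the head is on the left."""
    if s == t:
        return
    if complete:
        r = Cb[s, t, d]
        if d == 1:
            _backtrack(Ib, Cb, s, r, 1, False, heads)
            _backtrack(Ib, Cb, r, t, 1, True, heads)
        else:
            _backtrack(Ib, Cb, s, r, 0, True, heads)
            _backtrack(Ib, Cb, r, t, 0, False, heads)
    else:
        heads[t if d == 1 else s] = s if d == 1 else t
        r = Ib[s, t, d]
        _backtrack(Ib, Cb, s, r, 1, True, heads)
        _backtrack(Ib, Cb, r + 1, t, 0, True, heads)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = rng.standard_normal((4, 4))               # root + 3 tokens
    print(eisner(S)[1:])                          # head index for tokens 1..3
```

In a neural graph-based parser the score matrix would come from the network's arc scorer; the decoder itself is purely combinatorial, which is what makes its computability by recurrent models, and lower bounds on it, a meaningful question.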
Original language: English
Title of host publication: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Number of pages: 10
Volume: Volume 1 (Long and Short Papers)
Publisher: Association for Computational Linguistics
Publication date: 2019
Pages: 251-260
Publication status: Published - 2019

Keywords

  • Neural Graph-based Parsing
  • Approximate Decoding
  • Recurrent Models
  • Projective Maximum Spanning Tree
  • Shift-Reduce Parsers
