Adversarial Decomposition of Text Representation

Alexey Romanov, Anna Rumshisky, Anna Rogers, David Donahue

    Research output: Article in proceedings · Research · peer-review


    In this paper, we present a method for adversarial decomposition of text representation. The method decomposes the representation of an input sentence into several independent vectors, each responsible for a specific aspect of the sentence. We evaluate the proposed method on two case studies: conversion between different social registers and diachronic language change. We show that the method is capable of fine-grained, controlled change of these aspects of the input sentence. It also learns a continuous (rather than categorical) representation of the style of the sentence, which is more linguistically realistic. The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition. Furthermore, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they significantly outperform the embeddings of a regular autoencoder.
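The abstract describes an objective with two opposing terms: an adversarial term that penalizes the encoder when a discriminator can recover the style (form) from the meaning vector, and a motivational term that rewards the encoder when the form vector does carry the style information. As a minimal illustrative sketch (not the authors' implementation — the function names, weights `lam`/`mu`, and toy probability inputs are all assumptions), the encoder's combined loss could be composed like this:

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class under a probability vector."""
    return -math.log(probs[label])

def total_encoder_loss(recon_loss, disc_probs, motiv_probs, form_label,
                       lam=1.0, mu=1.0):
    """Toy combined objective for the encoder (hypothetical names/weights).

    recon_loss  -- autoencoder reconstruction loss.
    disc_probs  -- discriminator's style prediction from the MEANING vector;
                   the encoder is rewarded when this prediction fails, so its
                   cross-entropy is SUBTRACTED (adversarial term).
    motiv_probs -- motivational classifier's style prediction from the FORM
                   vector; the encoder is rewarded when this prediction
                   succeeds, so its cross-entropy is ADDED and minimized
                   (the motivational term, acting opposite to the discriminator).
    """
    adv = cross_entropy(disc_probs, form_label)
    motiv = cross_entropy(motiv_probs, form_label)
    return recon_loss - lam * adv + mu * motiv
```

With a discriminator at chance (0.5/0.5) and a confident motivational classifier (0.9 on the true style), the adversarial term reduces the encoder's loss while the motivational term stays small — the regime a good decomposition should reach.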
    Original language: English
    Title of host publication: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
    Number of pages: 11
    Publication date: 1 Jun 2019
    Publication status: Published - 1 Jun 2019


    • Adversarial Decomposition
    • Text Representation
    • Social Register Conversion
    • Diachronic Language Change
    • Continuous Style Representation


