Adversarial Decomposition of Text Representation

Alexey Romanov, Anna Rumshisky, Anna Rogers, David Donahue

    Publication: Conference article in proceedings or book/report chapter › Conference contribution in proceedings › Research › Peer-reviewed

    Abstract

    In this paper, we present a method for adversarial decomposition of text representation. The method decomposes the representation of an input sentence into several independent vectors, each responsible for a specific aspect of the sentence. We evaluate the proposed method on two case studies: conversion between different social registers and diachronic language change. We show that the method is capable of fine-grained, controlled change of these aspects of the input sentence. It also learns a continuous (rather than categorical) representation of the style of the sentence, which is more linguistically realistic. The model uses adversarial-motivational training and includes a special motivational loss, which acts in opposition to the discriminator and encourages a better decomposition. Furthermore, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they significantly outperform the embeddings of a regular autoencoder.
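
    The core idea in the abstract can be illustrated with a minimal sketch: a sentence representation is split into a meaning vector and a form vector, a discriminator penalizes form information leaking into the meaning vector, and a motivational loss rewards the form vector for actually encoding the form. This is not the authors' implementation; the dimensions, weight matrices, and the binary form label below are hypothetical choices for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dimensions for the sketch.
    HID, MEANING_DIM, FORM_DIM = 16, 12, 4

    def encode(sentence_vec, W_m, W_f):
        """Split a hidden sentence representation into meaning and form vectors."""
        return sentence_vec @ W_m, sentence_vec @ W_f

    def cross_entropy(logits, label):
        """Softmax cross-entropy for a single example."""
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return -np.log(p[label])

    # Randomly initialised projection weights (stand-ins for a trained encoder).
    W_m = rng.normal(size=(HID, MEANING_DIM))
    W_f = rng.normal(size=(HID, FORM_DIM))
    h = rng.normal(size=(HID,))          # hidden state of an input sentence

    m, f = encode(h, W_m, W_f)           # meaning vector, form vector

    # Two linear classifiers over a binary form label (e.g. register A vs. B).
    W_disc = rng.normal(size=(MEANING_DIM, 2))   # discriminator head
    W_moti = rng.normal(size=(FORM_DIM, 2))      # motivational head
    form_label = 1

    # Adversarial loss: the discriminator tries to recover the form from the
    # meaning vector; the encoder is trained to make this fail.
    L_disc = cross_entropy(m @ W_disc, form_label)

    # Motivational loss: acts in the opposite direction, encouraging the form
    # vector to carry the form information instead.
    L_moti = cross_entropy(f @ W_moti, form_label)
    ```

    In training, the encoder would be updated to maximise the discriminator's loss while minimising the motivational loss (alongside a reconstruction loss from a decoder, omitted here), pushing form information out of the meaning vector and into the form vector.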
    Original language: English
    Title: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
    Number of pages: 11
    Publication date: 1 Jun 2019
    Pages: 815-825
    Status: Published - 1 Jun 2019

    Keywords

    • Adversarial Decomposition
    • Text Representation
    • Social Register Conversion
    • Diachronic Language Change
    • Continuous Style Representation

