
Accelerated High-Quality Mutual-Information Based Word Clustering

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Standard

Accelerated High-Quality Mutual-Information Based Word Clustering. / Ciosici, Manuel R; Assent, Ira; Derczynski, Leon.

Proceedings of The 12th Language Resources and Evaluation Conference. Marseille, France : European Language Resources Association, 2020. p. 2484-2489.


Harvard

Ciosici, MR, Assent, I & Derczynski, L 2020, Accelerated High-Quality Mutual-Information Based Word Clustering. in Proceedings of The 12th Language Resources and Evaluation Conference. European Language Resources Association, Marseille, France, pp. 2484-2489. <https://www.aclweb.org/anthology/2020.lrec-1.303.pdf>

APA

Ciosici, M. R., Assent, I., & Derczynski, L. (2020). Accelerated High-Quality Mutual-Information Based Word Clustering. In Proceedings of The 12th Language Resources and Evaluation Conference (pp. 2484-2489). European Language Resources Association. https://www.aclweb.org/anthology/2020.lrec-1.303.pdf

Vancouver

Ciosici MR, Assent I, Derczynski L. Accelerated High-Quality Mutual-Information Based Word Clustering. In Proceedings of The 12th Language Resources and Evaluation Conference. Marseille, France: European Language Resources Association. 2020. p. 2484-2489

Author

Ciosici, Manuel R ; Assent, Ira ; Derczynski, Leon. / Accelerated High-Quality Mutual-Information Based Word Clustering. Proceedings of The 12th Language Resources and Evaluation Conference. Marseille, France : European Language Resources Association, 2020. pp. 2484-2489

Bibtex

@inproceedings{18899d4bbb764fa9be4f25976e24ebce,
title = "Accelerated High-Quality Mutual-Information Based Word Clustering",
abstract = "Word clustering groups words that exhibit similar properties. One popular method for this is Brown clustering, which uses short-range distributional information to construct clusters. Specifically, this is a hard hierarchical clustering with a fixed-width beam that employs bi-grams and greedily minimizes global mutual information loss. The result is word clusters that tend to outperform or complement other word representations, especially when constrained by small datasets. However, Brown clustering has high computational complexity and does not lend itself to parallel computation. This, together with the lack of efficient implementations, limits their applicability in NLP. We present efficient implementations of Brown clustering and the alternative Exchange clustering as well as a number of methods to accelerate the computation of both hierarchical and flat clusters. We show empirically that clusters obtained with the accelerated method match the performance of clusters computed using the original methods.",
author = "Ciosici, {Manuel R} and Ira Assent and Leon Derczynski",
year = "2020",
month = may,
day = "1",
language = "English",
pages = "2484--2489",
booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
publisher = "European Language Resources Association",

}

RIS

TY - GEN

T1 - Accelerated High-Quality Mutual-Information Based Word Clustering

AU - Ciosici, Manuel R

AU - Assent, Ira

AU - Derczynski, Leon

PY - 2020/5/1

Y1 - 2020/5/1

N2 - Word clustering groups words that exhibit similar properties. One popular method for this is Brown clustering, which uses short-range distributional information to construct clusters. Specifically, this is a hard hierarchical clustering with a fixed-width beam that employs bi-grams and greedily minimizes global mutual information loss. The result is word clusters that tend to outperform or complement other word representations, especially when constrained by small datasets. However, Brown clustering has high computational complexity and does not lend itself to parallel computation. This, together with the lack of efficient implementations, limits their applicability in NLP. We present efficient implementations of Brown clustering and the alternative Exchange clustering as well as a number of methods to accelerate the computation of both hierarchical and flat clusters. We show empirically that clusters obtained with the accelerated method match the performance of clusters computed using the original methods.

AB - Word clustering groups words that exhibit similar properties. One popular method for this is Brown clustering, which uses short-range distributional information to construct clusters. Specifically, this is a hard hierarchical clustering with a fixed-width beam that employs bi-grams and greedily minimizes global mutual information loss. The result is word clusters that tend to outperform or complement other word representations, especially when constrained by small datasets. However, Brown clustering has high computational complexity and does not lend itself to parallel computation. This, together with the lack of efficient implementations, limits their applicability in NLP. We present efficient implementations of Brown clustering and the alternative Exchange clustering as well as a number of methods to accelerate the computation of both hierarchical and flat clusters. We show empirically that clusters obtained with the accelerated method match the performance of clusters computed using the original methods.

M3 - Article in proceedings

SP - 2484

EP - 2489

BT - Proceedings of The 12th Language Resources and Evaluation Conference

PB - European Language Resources Association

CY - Marseille, France

ER -

ID: 85114970
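
As context for the abstract above: Brown clustering greedily merges word clusters so as to lose as little as possible of the average mutual information between adjacent cluster labels in a class bigram model. The following is a minimal, purely illustrative Python sketch of that quantity for an arbitrary hard word-to-cluster assignment; it is not the paper's accelerated implementation, and the function and variable names are hypothetical.

from collections import Counter
import math

def average_mutual_information(tokens, cluster_of):
    # Average mutual information I(C) of a class bigram model:
    # sum over cluster pairs of p(c1, c2) * log2(p(c1, c2) / (p(c1) * p(c2))).
    # A Brown-style clustering merges clusters so that as little of this
    # quantity as possible is lost at each merge step.
    classes = [cluster_of[w] for w in tokens]
    bigrams = Counter(zip(classes, classes[1:]))
    total = sum(bigrams.values())
    left = Counter()
    right = Counter()
    for (c1, c2), n in bigrams.items():
        left[c1] += n
        right[c2] += n
    ami = 0.0
    for (c1, c2), n in bigrams.items():
        p12 = n / total
        p1 = left[c1] / total
        p2 = right[c2] / total
        ami += p12 * math.log2(p12 / (p1 * p2))
    return ami

# Toy usage with a hypothetical two-cluster assignment.
tokens = "the cat sat on the mat the dog sat on the rug".split()
cluster_of = {w: 0 if w in {"the", "on"} else 1 for w in set(tokens)}
print(round(average_mutual_information(tokens, cluster_of), 4))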