Abstract
This paper presents a case study of discovering and classifying verbs in large web-corpora. Many tasks in natural language processing require corpora containing billions of words, and at such data volumes co-occurrence extraction becomes one of the performance bottlenecks in the Vector Space Models of computational linguistics. We propose a co-occurrence extraction kernel based on ternary trees as an alternative (or a complementary stage) to the conventional MapReduce-based approach; this kernel achieves an order-of-magnitude improvement in memory footprint and processing speed. Our classifier successfully and efficiently identified verbs in a 1.2-billion-word untagged corpus of Russian fiction and distinguished between their two aspectual classes. The model proved effective even for low-frequency vocabulary, including nonce verbs and neologisms.
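The kernel itself is not reproduced in this record; as a rough illustration only, the sketch below (Python, all names hypothetical and not the authors' code) shows the general idea the abstract names: keeping co-occurrence counts of word pairs in a ternary search tree keyed on pair strings, as a single in-memory pass rather than a MapReduce shuffle.

```python
# Minimal sketch (hypothetical, not the paper's implementation) of counting
# word co-occurrences in a ternary search tree instead of a MapReduce job.
# Keys are "word<TAB>context_word" strings; counts live on terminal nodes.

class TSTNode:
    __slots__ = ("ch", "left", "eq", "right", "count")

    def __init__(self, ch):
        self.ch = ch
        self.left = self.eq = self.right = None
        self.count = 0                 # > 0 only on nodes that terminate a key


class TernaryCooccurrenceCounter:
    def __init__(self):
        self.root = None

    def increment(self, key):
        """Add 1 to the count stored under `key`, creating nodes as needed."""
        self.root = self._increment(self.root, key, 0)

    def _increment(self, node, key, i):
        ch = key[i]
        if node is None:
            node = TSTNode(ch)
        if ch < node.ch:
            node.left = self._increment(node.left, key, i)
        elif ch > node.ch:
            node.right = self._increment(node.right, key, i)
        elif i + 1 < len(key):
            node.eq = self._increment(node.eq, key, i + 1)
        else:
            node.count += 1            # full key matched: bump the pair count
        return node

    def count(self, key):
        """Return the stored count for `key`, or 0 if it was never seen."""
        node, i = self.root, 0
        while node is not None:
            ch = key[i]
            if ch < node.ch:
                node = node.left
            elif ch > node.ch:
                node = node.right
            elif i + 1 < len(key):
                node, i = node.eq, i + 1
            else:
                return node.count
        return 0


def extract_cooccurrences(tokens, window=2):
    """Count symmetric co-occurrences within +/- `window` tokens."""
    counter = TernaryCooccurrenceCounter()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                counter.increment(word + "\t" + tokens[j])
    return counter


if __name__ == "__main__":
    tokens = "a perfective verb and an imperfective verb".split()
    counts = extract_cooccurrences(tokens, window=2)
    print(counts.count("verb\tand"))   # -> 1 (only the first "verb" sees "and")
```

One plausible reason such a structure saves memory, consistent with the abstract's claim, is that pair keys starting with the same high-frequency word share prefix nodes, so the word is stored once rather than in every pair string.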
Original language | English |
---|---|
Title | Proceedings of 2015 IEEE International Conference on Data Science and Data Intensive Systems (DSDIS) |
Number of pages | 8 |
Publication date | 2015 |
Pages | 61-68 |
DOI | |
Status | Published - 2015 |
Keywords
- Verb Classification
- Large Web-Corpora
- Co-occurrence Extraction
- Vector Space Models
- Natural Language Processing