To tackle the challenge, we propose a joint learning approach that combines English SLU training data with non-English auxiliary tasks from raw text, syntax, and translation for transfer. We study two setups that differ in the type and language coverage of the pre-trained embeddings. Our results show that jointly learning the main tasks with masked language modeling is effective for slots, while machine translation works best for intent classification.
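The joint learning described above amounts to optimizing a main-task objective together with weighted auxiliary objectives. Below is a minimal sketch of such a combined loss; the function name, task names, and weights are illustrative assumptions, not taken from the paper.

```python
# Sketch of a joint multi-task objective: the main SLU loss plus a
# weighted sum of auxiliary losses (e.g. masked language modeling,
# machine translation). All names and values here are illustrative.

def joint_loss(main_loss: float, aux_losses: dict, weights: dict) -> float:
    """Return main_loss + sum of weighted auxiliary-task losses."""
    total = main_loss
    for task, loss in aux_losses.items():
        total += weights.get(task, 1.0) * loss
    return total

# Hypothetical batch: SLU loss with MLM and MT auxiliary losses.
loss = joint_loss(
    main_loss=0.8,
    aux_losses={"mlm": 0.5, "mt": 1.2},
    weights={"mlm": 0.3, "mt": 0.1},
)
print(round(loss, 2))  # 0.8 + 0.3*0.5 + 0.1*1.2 = 1.07
```

In practice each loss would come from a shared encoder with task-specific heads, and the weights would be tuned per auxiliary task.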
|Title||Proceedings of NAACL|
|Publisher||Association for Computational Linguistics|
|Status||Published - 2021|