Abstract
Adapters have been positioned as a parameter-efficient fine-tuning (PEFT) approach, whereby a minimal number of parameters are added to the model and fine-tuned. However, adapters have not been sufficiently analyzed to understand whether PEFT translates to benefits in training/deployment efficiency and maintainability/extensibility. Through extensive experiments on many adapters, tasks, and languages in supervised and cross-lingual zero-shot settings, we clearly show that for Natural Language Understanding (NLU) tasks, the parameter efficiency of adapters does not translate to efficiency gains compared to full fine-tuning of models. More precisely, adapters are relatively expensive to train and have slightly higher deployment latency. Furthermore, the maintainability/extensibility benefits of adapters can be achieved with simpler approaches like multi-task training via full fine-tuning, which also offers relatively faster training. We therefore recommend that, for moderately sized models on NLU tasks, practitioners rely on full fine-tuning or multi-task training rather than adapters. Our code is available at https://github.com/AI4Bharat/adapter-efficiency.
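For context, the adapters studied in this line of work are typically small bottleneck modules inserted into each transformer layer; only these modules are trained while the backbone stays frozen. The PyTorch sketch below is a minimal illustration of this idea, not the paper's exact configuration: the hidden size, reduction factor, and per-layer placement are assumptions chosen to mirror common BERT-base setups.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Minimal bottleneck adapter sketch (assumed configuration):
    down-projection, nonlinearity, up-projection, residual connection.
    Only these small matrices are trained; backbone weights stay frozen."""

    def __init__(self, hidden_size: int = 768, reduction_factor: int = 16):
        super().__init__()
        bottleneck = hidden_size // reduction_factor
        self.down = nn.Linear(hidden_size, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's representation
        # and adds a small learned correction on top of it.
        return x + self.up(self.act(self.down(x)))

# Rough trainable-parameter count under these assumed numbers:
# ~74k params per adapter (two 768x48 projections plus biases),
# times 2 adapters per layer, times 12 layers ≈ 1.8M trainable
# parameters, versus ~110M for fully fine-tuning a BERT-base model.
```

Note that this parameter saving is exactly the "efficiency" the abstract questions: fewer trainable parameters do not by themselves reduce training time (the frozen backbone still requires a full forward and backward pass) and the extra modules add depth, hence inference latency.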
| Original language | English |
|---|---|
| Title | CODS-COMAD '24: Proceedings of the 7th Joint International Conference on Data Science & Management of Data (11th ACM IKDD CODS and 29th COMAD) |
| Number of pages | 19 |
| Publisher | Association for Computing Machinery |
| Publication date | 2024 |
| Pages | 136-154 |
| DOI | |
| Status | Published - 2024 |
| Published externally | Yes |
| Event | International Conference on Data Science & Management of Data - Bangalore, India. Duration: 4 Jan 2024 → 7 Jan 2024. Conference number: 7. https://dl.acm.org/doi/proceedings/10.1145/3632410 |
Conference
| Conference | International Conference on Data Science & Management of Data |
|---|---|
| Number | 7 |
| Country/Territory | India |
| City | Bangalore |
| Period | 04/01/2024 → 07/01/2024 |
| Internet address | https://dl.acm.org/doi/proceedings/10.1145/3632410 |