Abstract
Automatic Speech Recognition (ASR) has increased in popularity in recent years. The evolution of processor and storage technologies has enabled more advanced ASR mechanisms, fueling the development of virtual assistants such as Amazon Alexa, Apple Siri, Microsoft Cortana, and Google Home. The interest in such assistants, in turn, has amplified the novel developments in ASR research.
However, despite this popularity, there has not been a detailed training efficiency analysis of modern ASR systems. This mainly stems from: the proprietary nature of many modern applications that depend on ASR; the relatively expensive co-processor hardware that big vendors use to accelerate ASR and enable such applications; and the absence of well-established benchmarks. The goal of this paper is to address the latter two of these challenges.
The paper first describes an ASR model, based on a deep neural network inspired by recent work, and our experiences building it. Then we evaluate this model on three CPU-GPU co-processor platforms that represent different budget categories. Our results demonstrate that utilizing hardware acceleration yields good results even without high-end equipment. While the most expensive platform (10X the price of the least expensive one) converges to the initial accuracy target 10-30% and 60-70% faster than the other two, the differences among the platforms almost disappear at slightly higher accuracy targets. In addition, our results further highlight both the difficulty of evaluating ASR systems, due to the complex, long, and resource-intensive nature of model training in this domain, and the importance of establishing benchmarks for ASR.
Original language | English
---|---
Title of host publication | Twelfth International Workshop on Accelerating Analytics and Data Management Systems Using Modern Processor and Storage Architectures
Number of pages | 10
Place of Publication | Tokyo, Japan
Publication date | 31 Aug 2020
Pages | 1-10
Article number | 1
Publication status | Published - 31 Aug 2020
Keywords
- Speech Recognition
- CPU-GPU co-processors
- Benchmarking