ITU

Training for Speech Recognition on Coprocessors

Research output: Article in proceedings › Research › peer-reviewed

Standard

Training for Speech Recognition on Coprocessors. / Baunsgaard, Sebastian; Wrede, Sebastian Benjamin; Tözün, Pinar.

Twelfth International Workshop on Accelerating Analytics and Data Management Systems Using Modern Processor and Storage Architectures. Tokyo, Japan, 2020. p. 1-10 1.


Harvard

Baunsgaard, S, Wrede, SB & Tözün, P 2020, Training for Speech Recognition on Coprocessors. in Twelfth International Workshop on Accelerating Analytics and Data Management Systems Using Modern Processor and Storage Architectures., 1, Tokyo, Japan, pp. 1-10.

APA

Baunsgaard, S., Wrede, S. B., & Tözün, P. (2020). Training for Speech Recognition on Coprocessors. In Twelfth International Workshop on Accelerating Analytics and Data Management Systems Using Modern Processor and Storage Architectures (pp. 1-10). [1].

Vancouver

Baunsgaard S, Wrede SB, Tözün P. Training for Speech Recognition on Coprocessors. In Twelfth International Workshop on Accelerating Analytics and Data Management Systems Using Modern Processor and Storage Architectures. Tokyo, Japan. 2020. p. 1-10. 1

Author

Baunsgaard, Sebastian ; Wrede, Sebastian Benjamin ; Tözün, Pinar. / Training for Speech Recognition on Coprocessors. Twelfth International Workshop on Accelerating Analytics and Data Management Systems Using Modern Processor and Storage Architectures. Tokyo, Japan, 2020. pp. 1-10

Bibtex

@inproceedings{6cff936f60ef4d3fb62c91d0b517ac71,
title = "Training for Speech Recognition on Coprocessors",
abstract = "Automatic Speech Recognition (ASR) has increased in popularity in recent years. The evolution of processor and storage technologies has enabled more advanced ASR mechanisms, fueling the development of virtual assistants such as Amazon Alexa, Apple Siri, Microsoft Cortana, and Google Home. The interest in such assistants, in turn, has amplified the novel developments in ASR research. However, despite this popularity, there has not been a detailed training efficiency analysis of modern ASR systems. This mainly stems from: the proprietary nature of many modern applications that depend on ASR; the relatively expensive co-processor hardware that is used to accelerate ASR by big vendors to enable such applications; and the absence of well-established benchmarks. The goal of this paper is to address the latter two of these challenges. The paper first describes an ASR model, based on a deep neural network inspired by recent work, and our experiences building it. Then we evaluate this model on three CPU-GPU co-processor platforms that represent different budget categories. Our results demonstrate that utilizing hardware acceleration yields good results even without high-end equipment. While the most expensive platform (10X price of the least expensive one) converges to the initial accuracy target 10-30% and 60-70% faster than the other two, the differences among the platforms almost disappear at slightly higher accuracy targets. In addition, our results further highlight both the difficulty of evaluating ASR systems due to the complex, long, and resource-intensive nature of the model training in this domain, and the importance of establishing benchmarks for ASR.",
keywords = "Speech Recognition, CPU-GPU co-processors, Benchmarking",
author = "Sebastian Baunsgaard and Wrede, {Sebastian Benjamin} and Pinar T{\"o}z{\"u}n",
year = "2020",
month = aug,
day = "31",
language = "English",
pages = "1--10",
booktitle = "Twelfth International Workshop on Accelerating Analytics and Data Management Systems Using Modern Processor and Storage Architectures",

}

RIS

TY - GEN

T1 - Training for Speech Recognition on Coprocessors

AU - Baunsgaard, Sebastian

AU - Wrede, Sebastian Benjamin

AU - Tözün, Pinar

PY - 2020/8/31

Y1 - 2020/8/31

N2 - Automatic Speech Recognition (ASR) has increased in popularity in recent years. The evolution of processor and storage technologies has enabled more advanced ASR mechanisms, fueling the development of virtual assistants such as Amazon Alexa, Apple Siri, Microsoft Cortana, and Google Home. The interest in such assistants, in turn, has amplified the novel developments in ASR research. However, despite this popularity, there has not been a detailed training efficiency analysis of modern ASR systems. This mainly stems from: the proprietary nature of many modern applications that depend on ASR; the relatively expensive co-processor hardware that is used to accelerate ASR by big vendors to enable such applications; and the absence of well-established benchmarks. The goal of this paper is to address the latter two of these challenges. The paper first describes an ASR model, based on a deep neural network inspired by recent work, and our experiences building it. Then we evaluate this model on three CPU-GPU co-processor platforms that represent different budget categories. Our results demonstrate that utilizing hardware acceleration yields good results even without high-end equipment. While the most expensive platform (10X price of the least expensive one) converges to the initial accuracy target 10-30% and 60-70% faster than the other two, the differences among the platforms almost disappear at slightly higher accuracy targets. In addition, our results further highlight both the difficulty of evaluating ASR systems due to the complex, long, and resource-intensive nature of the model training in this domain, and the importance of establishing benchmarks for ASR.

AB - Automatic Speech Recognition (ASR) has increased in popularity in recent years. The evolution of processor and storage technologies has enabled more advanced ASR mechanisms, fueling the development of virtual assistants such as Amazon Alexa, Apple Siri, Microsoft Cortana, and Google Home. The interest in such assistants, in turn, has amplified the novel developments in ASR research. However, despite this popularity, there has not been a detailed training efficiency analysis of modern ASR systems. This mainly stems from: the proprietary nature of many modern applications that depend on ASR; the relatively expensive co-processor hardware that is used to accelerate ASR by big vendors to enable such applications; and the absence of well-established benchmarks. The goal of this paper is to address the latter two of these challenges. The paper first describes an ASR model, based on a deep neural network inspired by recent work, and our experiences building it. Then we evaluate this model on three CPU-GPU co-processor platforms that represent different budget categories. Our results demonstrate that utilizing hardware acceleration yields good results even without high-end equipment. While the most expensive platform (10X price of the least expensive one) converges to the initial accuracy target 10-30% and 60-70% faster than the other two, the differences among the platforms almost disappear at slightly higher accuracy targets. In addition, our results further highlight both the difficulty of evaluating ASR systems due to the complex, long, and resource-intensive nature of the model training in this domain, and the importance of establishing benchmarks for ASR.

KW - Speech Recognition

KW - CPU-GPU co-processors

KW - Benchmarking

M3 - Article in proceedings

SP - 1

EP - 10

BT - Twelfth International Workshop on Accelerating Analytics and Data Management Systems Using Modern Processor and Storage Architectures

CY - Tokyo, Japan

ER -

ID: 85746754