Implementation of deep learning models on an FPGA development board for recognition accuracy enhancement

Salah Ayad Jassim, Ibrahim Khider

Abstract


Deep learning (DL) model performance is closely tied to the quality of training, which is influenced by several parameters. Among these, the computing unit significantly affects training efficiency. Traditional setups use central processing units (CPUs) or graphics processing units (GPUs) for DL training. This paper proposes an alternative: field programmable gate arrays (FPGAs), whose customizable and parallelizable architecture can be tailored to DL training. FPGA programming allows circuit designs to be matched to the requirements of DL training, enabling efficient parallel processing. FPGAs have attracted attention for DL training because of their potential for high computational throughput and energy efficiency, owing to advantages such as low latency, high bandwidth, and reconfigurability. Exploiting their parallel processing capabilities enables faster training times and makes larger, more complex DL models feasible. This paper provides an overview of state-of-the-art techniques for FPGA-based DL model training, discussing challenges such as hardware architecture design, memory management, and algorithm optimization. It also surveys FPGA-based DL frameworks and libraries that facilitate the development and deployment of DL models on FPGAs.
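The parallelism the abstract refers to is typically obtained by unrolling the inner multiply-accumulate loops of a layer so that a high-level-synthesis (HLS) toolchain can map them onto parallel DSP slices. The sketch below is not taken from the paper; it is a minimal, illustrative C++ kernel for one fully connected layer, with hypothetical names and sizes (fc_layer, IN, OUT) chosen for illustration. The #pragma HLS lines are Vitis HLS directives; a standard C++ compiler simply ignores unknown pragmas, so the same source can also be run on a CPU for verification.

// Illustrative HLS-style kernel: the multiply-accumulate of one
// fully connected layer, written so an FPGA toolchain can unroll it.
// (All names and sizes here are hypothetical, chosen for illustration.)
#include <cstdio>
#include <cstdint>

constexpr int IN  = 8;  // layer input width (illustrative)
constexpr int OUT = 4;  // layer output width (illustrative)

// 16-bit fixed-point activations/weights with a 32-bit accumulator:
// a common FPGA choice, since narrow multiplies map onto DSP slices.
void fc_layer(const int16_t x[IN], const int16_t w[OUT][IN], int32_t y[OUT]) {
    for (int o = 0; o < OUT; ++o) {
#pragma HLS PIPELINE II=1   // Vitis HLS: start a new output every cycle
        int32_t acc = 0;
        for (int i = 0; i < IN; ++i) {
#pragma HLS UNROLL          // Vitis HLS: perform all IN multiplies in parallel
            acc += static_cast<int32_t>(x[i]) * static_cast<int32_t>(w[o][i]);
        }
        y[o] = acc;
    }
}

// CPU-side check: a plain C++ compiler ignores the HLS pragmas,
// so the kernel can be verified on a workstation before synthesis.
int main() {
    int16_t x[IN];
    int16_t w[OUT][IN];
    int32_t y[OUT];
    for (int i = 0; i < IN; ++i) x[i] = static_cast<int16_t>(i + 1);
    for (int o = 0; o < OUT; ++o)
        for (int i = 0; i < IN; ++i) w[o][i] = static_cast<int16_t>(o - i);
    fc_layer(x, w, y);
    for (int o = 0; o < OUT; ++o)
        printf("y[%d] = %ld\n", o, static_cast<long>(y[o]));
    return 0;
}

In a real design, the weight array would usually also need to be partitioned (for example with Vitis HLS's #pragma HLS ARRAY_PARTITION) so that all unrolled multiplies can read their operands in the same cycle; this is the kind of memory-management trade-off the paper discusses.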

Keywords


Computational cost; Deep learning; Field programmable gate array; Performance; Training



DOI: https://doi.org/10.11591/eei.v13i6.7683



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


Bulletin of Electrical Engineering and Informatics (BEEI)
ISSN: 2089-3191, e-ISSN: 2302-9285
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).