FPGA-based device for handwritten digit recognition in images
Zoev I.V., Beresnev A.P., Markov N.G., Malchukov A.N.

 

National Research Tomsk Polytechnic University, Tomsk, Russia

Full text of the article is in Russian.


Abstract:
We describe the design and manufacture of a mobile, energy-efficient device that recognizes handwritten digits in images using convolutional neural networks. The device is implemented on a field-programmable gate array (FPGA) that is part of the Cyclone V SX system-on-a-chip. Functional diagrams of the computational blocks implementing the convolution and pooling procedures are developed, and functional diagrams of the convolutional neural network of the proposed architecture are described. Results of testing the developed FPGA-based device in terms of handwritten digit recognition accuracy, recognition rate, and power consumption are presented, along with a performance comparison between the hardware and software implementations of the convolutional neural network.
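
As a software reference point for the two procedures that the hardware blocks implement, the sketch below gives a plain NumPy version of a single-channel 2D convolution with a ReLU activation followed by 2x2 max pooling. The function names, the 5x5 kernel size, and the choice of ReLU are illustrative assumptions for a LeNet-style network operating on 28x28 MNIST-sized images; they do not reproduce the exact configuration of the FPGA device described in the paper.

# A minimal NumPy sketch of the convolution and pooling procedures.
# All names, the kernel size, and the ReLU activation are assumptions made
# for illustration only; they are not taken from the paper.
import numpy as np

def conv2d(feature_map, kernel, bias=0.0):
    # "Valid" 2D convolution (cross-correlation, as CNN frameworks define it)
    # of one input channel with one kernel, followed by a ReLU activation.
    kh, kw = kernel.shape
    oh = feature_map.shape[0] - kh + 1
    ow = feature_map.shape[1] - kw + 1
    out = np.empty((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(feature_map[i:i + kh, j:j + kw] * kernel) + bias
    return np.maximum(out, 0.0)  # ReLU

def max_pool2x2(feature_map):
    # Non-overlapping 2x2 max pooling (subsampling) of a single feature map.
    h = feature_map.shape[0] - feature_map.shape[0] % 2
    w = feature_map.shape[1] - feature_map.shape[1] % 2
    fm = feature_map[:h, :w]
    return fm.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

if __name__ == "__main__":
    image = np.random.rand(28, 28).astype(np.float32)   # MNIST-sized input
    kernel = np.random.randn(5, 5).astype(np.float32)   # hypothetical 5x5 kernel
    print(max_pool2x2(conv2d(image, kernel)).shape)      # -> (12, 12)

In the device itself these operations are realized by dedicated hardware blocks rather than software loops; the loop-based code above is intended only to make the computation explicit.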

Keywords:
handwritten digit recognition in images, convolutional neural networks, FPGA-based device.

Citation:
Zoev IV, Beresnev AP, Markov NG, Malchukov AN. FPGA-based device for handwritten digit recognition in images. Computer Optics 2017; 41(6): 938-949. DOI: 10.18287/2412-6179-2017-41-6-938-949.

References:

  1. Chatfield K, Simonyan K, Vedaldi A, Zisserman A. Return of the devil in the details: Delving deep into convolutional nets. Proc of the BMVC 2014. DOI: 10.5244/c.28.6.
  2. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L. ImageNet large scale visual recognition challenge. Int J Comput Vis 2015; 115(3): 211-252. DOI: 10.1007/s11263-015-0816-y.
  3. Goyal S, Benjamin P. Object recognition using deep neural networks: A survey. Source: <https://arxiv.org/pdf/1412.3684.pdf>.
  4. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE 1998; 86(11): 2278-2324. DOI: 10.1109/5.726791.
  5. Reshma AJ, James JJ, Kavya M, Saravanan M. An overview of character recognition focused on off-line handwriting. ARPN Journal of Engineering and Applied Sciences 2016; 11(15): 9372-9378.
  6. Tuba E, Tuba M, Simian D. Handwritten digit recognition by support vector machine optimized by bat algorithm. WSCG 2016: 369-376.
  7. Spitsyn VG, Bolotova YuA, Phan NH, Bui TTT. Using a Haar wavelet transform, principal component analysis and neural networks for OCR in the presence of impulse noise [In Russian]. Computer Optics 2016; 40(2): 249-257. DOI: 10.18287/2412-6179-2016-40-2-249-257.
  8. Elleuch M, Maalej R, Kherallah M. A new design based-SVM of the CNN classifier architecture with dropout for offline Arabic handwritten recognition. Procedia Computer Science 2016; 80: 1712-1723. DOI: 10.1016/j.procs.2016.05.512.
  9. Alom MZ, Sidike P, Taha TM, Asari VK. Handwritten bangla digit recognition using deep learning. Source: <https://arxiv.org/abs/1705.02680>.
  10. Maitra DS, Bhattacharya U, Parui SK. CNN based common approach to handwritten character recognition of multiple scripts. ICDAR 2015: 1021-1025. DOI: 10.1109/ICDAR.2015.7333916.
  11. Glauner PO. Comparison of training methods for deep neural networks. Source: <https://arxiv.org/pdf/1504.06825.pdf>.
  12. Guerra L, McGarry LM, Robles V, Bielza C, Larrañaga P, Yuste R. Comparison between supervised and unsupervised classifications of neuronal cell types: a case study. Dev Neurobiol 2011; 71(1): 71-82. DOI: 10.1002/dneu.20809.
  13. Bottou L. Stochastic gradient descent tricks. In Book: Montavon G, Orr GB, Müller KR, eds. Neural networks: Tricks of the trade. Berlin, Heidelberg: Springer; 2012: 421-436. DOI: 10.1007/978-3-642-35289-8_25.
  14. LeCun YA, Bottou L, Orr GB, Müller KR. Efficient BackProp. In Book: Montavon G, Orr GB, Müller KR, eds. Neural networks: Tricks of the trade. Berlin, Heidelberg: Springer; 2012: 9-48. DOI: 10.1007/3-540-49430-8_2.
  15. Soldatova OP, Garshin AA. Convolutional neural network applied to handwritten digits recognition [In Russian]. Computer Optics 2010; 34(2): 252-259.
  16. El-Sawy A, Hazem ELB, Loey M. CNN for handwritten Arabic digits recognition based on LeNet-5. AISI 2016: 566-575. DOI: 10.1007/978-3-319-48308-5_54.
  17. SoCKit – The development kit for new SoC device. Source: <http://www.terasic.com.tw/cgi-bin/page/archive.pl?CategoryNo=167&No=816>.
  18. Farabet C, Poulet C, LeCun Y. An FPGA-based stream processor for embedded real-time vision with convolutional networks. ICCV Workshops 2009: 878-885. DOI: 10.1109/ICCVW.2009.5457611.
  19. Zhang C, Li P, Sun G, Guan Y, Xiao B, Cong J. Optimizing FPGA-based accelerator design for deep convolutional neural networks. ACM/SIGDA 2015: 161-170. DOI: 10.1145/2684746.2689060.
  20. Motamedi M, Gysel P, Akella V, Ghiasi S. Design space exploration of FPGA-based deep convolutional neural networks. ASP-DAC 2016: 575-580. DOI: 10.1109/ASPDAC.2016.7428073.
  21. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Proceedings of the 25th International Conference on Neural Information Processing Systems 2012; 1: 1097-1105.
  22. Scherer D, Müller A, Behnke S. Evaluation of pooling operations in convolutional architectures for object recognition. In Book: Diamantaras K, Duch W, Iliadis LS, eds. Artificial Neural Networks – ICANN 2010. Berlin, Heidelberg: Springer; 2010: 92-101. DOI: 10.1007/978-3-642-15825-4_10.
  23. The MNIST database of handwritten digits. Source: <http://yann.lecun.com/exdb/mnist>.
  24. Bahrampour S, Ramakrishnan N, Schott L, Shah M. Comparative study of deep learning software frameworks. Source: <https://arxiv.org/pdf/1511.06435.pdf>.
  25. Jia Y, Shelhamer E, Donahue J, Karayev S, Long J, Girshick R, Guadarrama S, Darrell T. Caffe: Convolutional architecture for fast feature embedding. Proc of the 22nd ACM international conference on Multimedia 2014: 675-678. DOI: 10.1145/2647868.2654889.
  26. Beresnev AP, Zoev IV. Methodic of neural network weights transfer from software to hardware implementation [In Russian]. Trudy XIV Mezhdunarodnoy nauchno-prakticheskoy konferentsii studentov aspirantov i molodyh uchenyh 2016; 1: 22-23.
  27. Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. AISTATS 2010: 249-256.
  28. 754-2008: IEEE standard for floating-point arithmetic. Revision of ANSI/IEEE Std 754-1985. New York: IEEE Publisher, 2008. DOI: 10.1109/IEEESTD.2008.4610935.
  29. Zoev IV, Beresnev AP, Mytsko EA, Malchukov AN. Implementation of 14 bits floating point numbers of calculating units for neural network hardware development. IOP Conference Series: Materials Science and Engineering 2017; 177(1): 012044. DOI: 10.1088/1757-899X/177/1/012044.
  30. Tavallaei S. Microsoft project Olympus hyperscale GPU accelerator (HGX-1). Source: <https://azure.microsoft.com/mediahandler/files/resourcefiles/00c18868-eba9-43d5-b8c6-e59f9fa219ee/HGX-1%20Blog_5_26_2017.pdf>.
  31. Sánchez OM. Adapting deep neural networks to a low-power environment. Source: <https://upcommons.upc.edu/bitstream/handle/2117/106673/126470.pdf>.
  32. Quartus II handbook volume 3: Verification. Source: <https://www.altera.com/content/dam/altera-www/global/en_US/pdfs/literature/hb/qts/qts_qii5v3.pdf>.
  33. Half 1.12. IEEE 754-based half-precision floating point library. Source: <http://half.sourceforge.net/index.html>.
  34. NVIDIA: Caffe. Source: <https://github.com/NVIDIA/caffe>.
  35. Rastegari M, Ordonez V, Redmon J, Farhadi A. XNOR-Net: ImageNet classification using binary convolutional neural networks. ECCV 2016: 525-542. DOI: 10.1007/978-3-319-46493-0_32.
