
Effective VHDL Simulation with Parallel Pipelining for Convolutional Neural Networks

Monika Kumari

Abstract


Artificial Neural Networks (ANNs) are computational models inspired by the human brain and used to solve a variety of computing problems. Neural networks are now widely applied in fields such as image processing, pattern recognition, and robotics, and deep learning algorithms have driven many rapid advances in these applications. The Convolutional Neural Network (CNN), a deep learning architecture extended from ANNs, is widely used for image classification and identification. The continually growing amount of computation required by CNNs creates a demand for hardware acceleration. Moreover, CNN workloads have a streaming nature that is well suited to reconfigurable hardware structures such as FPGAs. To date, no neural-network computing technique combined with a parallel pipelining method has been shown to support deep neural networks without loss of accuracy. In this paper, a computing-based convolutional neural network system achieves high accuracy and efficiency by using parallel pipelining. At the neuron level, optimizations of the convolutional and fully connected layers are explained and compared. At the network level, approximate-computing optimizations are applied selectively so that the accuracy of the network is not reduced. The proposed convolutional neural network is compared with a previous CNN implemented on an FPGA that used a computing technique to optimize the time delay and power consumption of the system, and it achieves higher accuracy than previous conventional neural network implementations.
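The core operation that parallel pipelining accelerates in a convolutional layer is the multiply-accumulate (MAC) between pixels and kernel weights. The following is a minimal illustrative sketch, not the paper's actual design: a two-stage pipelined MAC in VHDL, where registering the product separately from the accumulator lets a new pixel/weight pair enter every clock cycle. The entity name, bit widths, and port names are assumptions for illustration only.

```vhdl
-- Illustrative sketch only: a two-stage pipelined multiply-accumulate
-- unit, the core operation of a convolutional layer. Bit widths and
-- names are assumed, not taken from the paper.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity pipelined_mac is
  generic (W : integer := 8);               -- assumed 8-bit fixed-point inputs
  port (
    clk, rst : in  std_logic;
    pixel    : in  signed(W-1 downto 0);    -- input feature value
    weight   : in  signed(W-1 downto 0);    -- kernel weight
    acc_out  : out signed(2*W+3 downto 0)   -- widened accumulator output
  );
end entity;

architecture rtl of pipelined_mac is
  signal product : signed(2*W-1 downto 0);
  signal acc     : signed(2*W+3 downto 0);
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        product <= (others => '0');
        acc     <= (others => '0');
      else
        -- Stage 1: register the product so the multiplier is free
        -- to take a new operand pair on the next cycle.
        product <= pixel * weight;
        -- Stage 2: accumulate the previous cycle's product.
        acc <= acc + resize(product, acc'length);
      end if;
    end if;
  end process;
  acc_out <= acc;
end architecture;
```

Instantiating several such units side by side, for example one per kernel tap or one per output neuron, is one plausible way to realize the parallel pipelining the abstract describes: each unit streams its own operand pair per cycle, so throughput scales with the number of parallel pipelines rather than with clock frequency alone.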


Full Text:

PDF

References


Kim, D., Moghaddam, M. S., Moradian, H., Sim, H., Lee, J., & Choi, K. (2017, December). FPGA implementation of convolutional neural network based on stochastic computing. In 2017 International Conference on Field Programmable Technology (ICFPT) (pp. 287-290). IEEE.

Alawad, M., & Lin, M. (2017, March). Stochastic-based multi-stage streaming realization of deep convolutional neural network. In 2017 18th International Symposium on Quality Electronic Design (ISQED) (pp. 13-18). IEEE.

Samudre, P., Shende, P., & Jaiswal, V. (2019, March). Optimizing performance of convolutional neural network using computing technique. In 2019 IEEE 5th International Conference for Convergence in Technology (I2CT) (pp. 1-4). IEEE.

Zamanlooy, B., & Mirhassani, M. (2013). Efficient VLSI implementation of neural networks with hyperbolic tangent activation function. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 22(1), 39-48.

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.

Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., & Keutzer, K. (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360.

Chen, Y. H., Krishna, T., Emer, J. S., & Sze, V. (2016). Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE Journal of Solid-State Circuits, 52(1), 127-138.

