Combining Deep Learning Accelerators and Graphics Processing Units for Efficient Computing
Abstract
References
Nurvitadhi, E., Venkatesh, G., Sim, J., Marr, D., Huang, R., Ong Gee Hock, J., ... & Boudoukh, G. (2017, February). Can FPGAs beat GPUs in accelerating next-generation deep neural networks?. In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (pp. 5-14).
Bolz, J., Farmer, I., Grinspun, E., & Schröder, P. (2003). Sparse matrix solvers on the GPU: conjugate gradients and multigrid. ACM transactions on graphics (TOG), 22(3), 917-924.
Harris, M. J., Baxter, W. V., Scheuermann, T., & Lastra, A. (2003, July). Simulation of cloud dynamics on graphics hardware. In Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware (pp. 92-101).
Krüger, J., & Westermann, R. (2005). Linear algebra operators for GPU implementation of numerical algorithms. In ACM SIGGRAPH 2005 Courses (pp. 234-es).
"NVIDIA Tesla Microarchitecture" (http://people.cs.umass.edu/~emery/classes/cmpsci691st/readings/Arch/gpu.pdf) (PDF).
"NVIDIA Introduces Supercomputer for Self-Driving Cars" (http://gas2.org/2016/01/06/nvidia-introduces-supercomputer-for-self-driving-cars/).
"ImageNet Classification with Deep Convolutional Neural Networks" (https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf) (PDF).
"How the GPU Came to Be Used for General Computation" (http://igoro.com/archive/how-gpu-came-to-be-used-for-general-computation/).
"Google Developing AI Processors" (http://www.eetimes.com/document.asp?doc_id=1329715). Google is using its own AI accelerators.
"FPGA Based Deep Learning Accelerators Take on ASICs" (http://www.nextplatform.com/2016/08/23/fpga-based-deep-learning-accelerators-take-asics/). The Next Platform. 2016-08-23. Retrieved 2016-09-07.