Brad McDanel
Assistant Professor of Computer Science

Education

Ph.D., Computer Science, Harvard University, 2019

M.Sc., Computer Science, Wake Forest University, 2012

B.Sc., Computer Science, Wake Forest University, 2010

Research Interests

I am broadly interested in deep learning, hardware architecture, and computer networks. I have developed efficient deep neural network (DNN) algorithms and designed hardware architectures to accelerate them. I am also interested in optimizing DNN inference (prediction) for edge devices, both standalone and in distributed network settings.
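One thread of this work, the early-exit approach from the BranchyNet paper under Selected Publications below, is easy to sketch. The following Python/NumPy sketch is illustrative only: the stage functions, exit heads, and entropy thresholds are hypothetical placeholders rather than anything from the paper. What it shows is the control flow: after each network stage, a small classifier makes a prediction, and inference stops as soon as that prediction is confident, so easy inputs leave the network early and skip the remaining computation.

    import numpy as np

    def softmax(logits):
        # Numerically stable softmax over a 1-D logit vector.
        z = logits - np.max(logits)
        e = np.exp(z)
        return e / e.sum()

    def entropy(p):
        # Shannon entropy of a probability vector; low entropy = confident.
        return -np.sum(p * np.log(p + 1e-12))

    def early_exit_inference(x, stages, exit_heads, thresholds):
        # Run stages in order; stop at the first exit whose prediction is
        # confident (entropy below that exit's threshold).
        h = x
        for i, (stage, head, t) in enumerate(zip(stages, exit_heads, thresholds)):
            h = stage(h)
            p = softmax(head(h))
            if entropy(p) < t:
                return int(np.argmax(p)), i   # prediction and the exit taken
        return int(np.argmax(p)), len(stages) - 1  # fell through to the last exit

    # Toy demo with random linear stages and heads (hypothetical, for shape only).
    rng = np.random.default_rng(0)
    dim, classes = 16, 10
    stages = [(lambda h, W=rng.normal(size=(dim, dim)) / dim**0.5: np.tanh(W @ h))
              for _ in range(3)]
    heads = [(lambda h, W=rng.normal(size=(classes, dim)) / dim**0.5: W @ h)
             for _ in range(3)]
    label, exit_taken = early_exit_inference(
        rng.normal(size=dim), stages, heads, thresholds=[0.5, 1.0, np.inf])
    print(f"predicted class {label} at exit {exit_taken}")

BranchyNet itself uses the entropy of each exit's softmax output as the confidence measure, which is why the sketch compares entropy against a per-exit threshold; tuning those thresholds trades accuracy against inference latency.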

Selected Publications

See a complete list of publications on my Google Scholar profile.

H. T. Kung, B. McDanel, S. Zhang. Term Revealing: Furthering Quantization at Run Time on Quantized DNNs. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 2020 (to appear).

B. McDanel, S. Zhang, H. T. Kung, X. Dong. Full-stack Optimization for Accelerating CNNs with FPGA Validation. 33rd ACM International Conference on Supercomputing (ICS), 2019.

H. T. Kung, B. McDanel, S. Zhang. Packing Sparse Convolutional Neural Networks for Efficient Systolic Array Implementations: Column Combining Under Joint Optimization. 24th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2019.

S. Teerapittayanon, B. McDanel, H. T. Kung. Distributed Deep Neural Networks over the Cloud, the Edge and End Devices. International Conference on Distributed Computing Systems (ICDCS), 2017.

B. McDanel, S. Teerapittayanon, H. T. Kung. Embedded Binarized Neural Networks. International Conference on Embedded Wireless Systems and Networks (EWSN), 2017.

S. Teerapittayanon, B. McDanel, H. T. Kung. BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks. International Conference on Pattern Recognition (ICPR), 2016.