About me
I am a Senior Deep Learning Research Scientist in the ADLR group at NVIDIA. Previously, I was a Senior Researcher in the Systems group at Microsoft Research (MSR) Redmond.

For roughly the last five years, I have focused on designing and building software systems that train and perform inference over deep learning models as efficiently as possible.

I graduated from Stanford with a PhD in Computer Science in September 2021, advised by Prof. Matei Zaharia. My PhD was supported by a National Science Foundation Graduate Research Fellowship. Before that, I was at MIT, where I received an SB in Computer Science and Mathematics and an MEng in EECS.

My Google Scholar profile has a relatively up-to-date list of publications.
Publications
Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM
Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, Matei Zaharia.
SC 2021 (Best Student Paper).

Solving Large-Scale Granular Resource Allocation Problems Efficiently with POP
Deepak Narayanan, Fiodar Kazhamiaka, Firas Abuzaid, Peter Kraft, Akshay Agrawal, Srikanth Kandula, Stephen Boyd, Matei Zaharia.
SOSP 2021.

Memory-Efficient Pipeline-Parallel DNN Training
Deepak Narayanan, Amar Phanishayee, Kaiyu Shi, Xie Chen, Matei Zaharia.
ICML 2021.

Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads
Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, Matei Zaharia.
OSDI 2020.

Analysis and Exploitation of Dynamic Pricing in the Public Cloud for ML Training
Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, Matei Zaharia.
DISPA 2020.

Offload Annotations: Bringing Heterogeneous Computing to Existing Libraries and Workloads
Gina Yuan, Shoumik Palkar, Deepak Narayanan, Matei Zaharia.
USENIX ATC 2020.

Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference
Peter Kraft, Daniel Kang, Deepak Narayanan, Shoumik Palkar, Peter Bailis, Matei Zaharia.
MLSys 2020.

MLPerf Training Benchmark
Peter Mattson, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, David Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debojyoti Dutta, Udit Gupta, Kim Hazelwood, Andrew Hock, Xinyuan Huang, Bill Jia, Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Deepak Narayanan, Tayo Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St. John, Carole-Jean Wu, Lingjie Xu, Cliff Young, Matei Zaharia.
MLSys 2020.

PipeDream: Generalized Pipeline Parallelism for DNN Training
Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, Matei Zaharia.
SOSP 2019.

Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark
Cody Coleman*, Daniel Kang*, Deepak Narayanan*, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Ré, Matei Zaharia.
ACM SIGOPS Operating Systems Review, July 2019.

Accelerating Deep Learning Workloads through Efficient Multi-Model Execution
Deepak Narayanan, Keshav Santhanam, Amar Phanishayee, Matei Zaharia.
NeurIPS Systems for ML Workshop 2018.

Analysis of the Time-To-Accuracy Metric and Entries in the DAWNBench Deep Learning Benchmark
Cody Coleman*, Daniel Kang*, Deepak Narayanan*, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Ré, Matei Zaharia.
NeurIPS Systems for ML Workshop 2018.

Evaluating End-to-End Optimization for Data Analytics Applications in Weld
Shoumik Palkar, James Thomas, Deepak Narayanan, Pratiksha Thaker, Parimarjan Negi, Rahul Palamuttam, Anil Shanbhag, Holger Pirk, Malte Schwarzkopf, Saman Amarasinghe, Samuel Madden, Matei Zaharia.
VLDB 2018.

DAWNBench: An End-to-End Deep Learning Benchmark and Competition
Cody Coleman, Deepak Narayanan, Daniel Kang, Tian Zhao, Jian Zhang, Luigi Nardi, Peter Bailis, Kunle Olukotun, Christopher Ré, Matei Zaharia.
NeurIPS Systems for ML Workshop 2017.

MacroBase: Prioritizing Attention in Fast Data
Peter Bailis, Edward Gan, Samuel Madden, Deepak Narayanan, Kexin Rong, Sahaana Suri.
SIGMOD 2017.

Weld: A Common Runtime for High Performance Data Analytics
Shoumik Palkar, James J. Thomas, Anil Shanbhag, Deepak Narayanan, Holger Pirk, Malte Schwarzkopf, Saman Amarasinghe, Matei Zaharia.
CIDR 2017.
Technical Reports
Resource-Efficient Execution of Deep Learning Computations
Deepak Narayanan.
PhD Thesis, Stanford University, 2021.

On the Opportunities and Risks of Foundation Models
Center for Research on Foundation Models (CRFM); I led Section 4.5 (Systems).
arXiv:2108.07258.

Allocation of Fungible Resources via a Fast, Scalable Price Discovery Method
Akshay Agrawal, Stephen Boyd, Deepak Narayanan, Fiodar Kazhamiaka, Matei Zaharia.
arXiv:2104.00282.
Teaching
At Stanford, I have TAed Design and Analysis of Algorithms (CS 161), Principles of Data-Intensive Systems (CS 245), and Parallel Computing (CS 149).

At MIT, I TAed Introduction to Algorithms (6.006) and Design and Analysis of Algorithms (6.046). Before that, I was a Lab Assistant for Elements of Software Construction (6.005) and Introduction to EECS I (6.01).
Contact me