Distinguished Lecture: A Decade of Machine Learning Accelerators: Lessons Learned and Carbon Footprint
Speaker: David Patterson
Abstract: The success of deep neural networks (DNNs) in Machine Learning (ML) has inspired domain-specific architectures (DSAs) for them. ML has two phases: training, which constructs accurate models, and inference, which serves those models. Google's first-generation DSA offered a 50x improvement over conventional architectures for inference in 2015. Google next built the first production DSA supercomputer for the much harder problem of training. Subsequent generations greatly improved the performance of both phases. We start with ten lessons learned, such as: DNNs grow rapidly; workloads evolve quickly with DNN advances; the bottleneck is memory, not floating-point units; and semiconductor technology advances unequally.
The rapid growth of DNNs has rightfully raised concerns about their carbon footprint. The second part of the talk identifies the "4Ms" (Model, Machine, Mechanization, Map) that, if optimized, can reduce ML training energy by up to 100x and carbon emissions by up to 1000x. By improving the 4Ms, ML held steady at less than 15% of Google's total energy use despite consuming ~75% of its floating-point operations. Given the importance of climate change, ML papers should report emissions explicitly to foster competition on more than just model quality. External estimates have been off by 100x to 100,000x, so publishing emissions also ensures accurate accounting, which helps pinpoint the biggest challenges for climate change. With continuing focus on the 4Ms, we can realize the amazing potential of ML to positively impact many fields in a sustainable way.
Bio: David Patterson is a UC Berkeley professor emeritus, a Google distinguished engineer, RIOS Laboratory Director, and the RISC-V International Vice-Chair. He received BA, MS, and PhD degrees from UCLA. His Berkeley projects on Reduced Instruction Set Computers (RISC), Redundant Arrays of Inexpensive Disks (RAID), and Networks of Workstations (NOW) helped lead to multibillion-dollar industries. The best known of his seven books is Computer Architecture: A Quantitative Approach. He and his co-author John Hennessy shared the 2017 ACM A.M. Turing Award, the 2021 BBVA Foundation Frontiers of Knowledge Award, and the 2022 NAE Charles Stark Draper Prize for Engineering.