GPU- and FPGA-based clouds have been deployed to accelerate computationally intensive workloads. ASIC-based clouds are a natural evolution as cloud services expand across the planet. ASIC Clouds are purpose-built datacenters comprising large arrays of ASIC accelerators that optimize the total cost of ownership (TCO) of large, high-volume scale-out computations. On the surface, ASIC Clouds may seem improbable due to high nonrecurring engineering (NRE) costs and ASIC inflexibility, but large-scale ASIC Clouds have already been deployed for the Bitcoin cryptocurrency system. This talk distills lessons from these Bitcoin ASIC Clouds and applies them to other large-scale workloads, including YouTube-style video transcoding and Deep Learning, showing superior TCO versus CPU- and GPU-based clouds. It derives Pareto-optimal ASIC Cloud servers from the properties of the underlying accelerators, by jointly optimizing ASIC architecture, DRAM, motherboard, power delivery, cooling, and operating voltage. The talk then examines the impact of ASIC NRE and when it makes sense to build an ASIC Cloud. Finally, it looks at open source hardware as a new direction for driving down ASIC NRE.
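To make the TCO framing concrete, the sketch below shows one simple way such a joint optimization could be posed: each candidate server design has a hardware cost, power draw, and throughput, and the lifetime TCO per unit throughput is minimized. This is an illustrative model only; the configurations, prices, and energy rates are invented assumptions, not figures from the talk.

```python
# Illustrative TCO-per-throughput sketch (all numbers are hypothetical,
# not from the talk's actual model).

def tco_per_ops(capex_usd, power_w, ops_per_s,
                years=3.0, usd_per_kwh=0.07, pue=1.2):
    """TCO per op/s: hardware cost plus lifetime energy cost,
    with energy scaled by an assumed datacenter PUE."""
    hours = years * 365 * 24
    energy_usd = (power_w / 1000.0) * pue * hours * usd_per_kwh
    return (capex_usd + energy_usd) / ops_per_s

# Hypothetical design points: (name, capex $, watts, ops/s).
# Operating voltage trades throughput against energy efficiency.
configs = [
    ("low-voltage", 4000, 400, 8.0e12),   # slower but energy-efficient
    ("nominal",     3500, 900, 1.2e13),   # balanced design point
    ("overdriven",  3500, 1600, 1.5e13),  # fast but power-hungry
]

best = min(configs, key=lambda c: tco_per_ops(c[1], c[2], c[3]))
for name, capex, w, ops in configs:
    print(f"{name:12s} TCO per op/s = {tco_per_ops(capex, w, ops):.3e}")
print("best:", best[0])
```

Note that neither the cheapest nor the fastest design necessarily wins: the TCO-optimal point balances capital cost against lifetime energy, which is why the talk treats voltage, cooling, and power delivery as co-design variables.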
Professor Michael Bedford Taylor advises his PhD students in both the EE and CSE departments at UW. Prior to his recent arrival in Seattle, he was a tenured associate professor at the University of California, San Diego, and Director of the Center for Dark Silicon. His research interests include tiled multicore architecture, open source hardware, dark silicon, HLS accelerators for mobile, Bitcoin mining hardware, and ASIC Clouds. Taylor received a PhD in electrical engineering and computer science from the Massachusetts Institute of Technology.