Interstellar: Using Halide’s Scheduling Language to Analyze DNN Accelerators

Session: Frameworks for deep learning: Layering the ML cake.

Authors: Xuan Yang (Stanford University); Mingyu Gao (Tsinghua University); Qiaoyi Liu (Stanford University); Jeff Setter (Stanford University); Jing Pu (Stanford University); Ankita Nayak (Stanford University); Steven Bell (Stanford University); Kaidi Cao (Stanford University); Heonjae Ha (Stanford University); Priyanka Raina (Stanford University); Christos Kozyrakis (Stanford University, Google); Mark Horowitz (Stanford University)

We show that DNN accelerator micro-architectures and their program mappings represent specific choices of loop order and hardware parallelism for computing the seven nested loops of DNNs, which enables us to create a formal taxonomy of all existing dense DNN accelerators. Surprisingly, the loop transformations needed to create these hardware variants can be precisely and concisely represented by Halide's scheduling language. By modifying the Halide compiler to generate hardware, we create a system that can fairly compare these prior accelerators. As long as proper loop blocking schemes are used, and the hardware can support mapping replicated loops, many different hardware dataflows yield similar energy efficiency with good performance, because loop blocking ensures that most data references stay on-chip with good locality and that the processing units achieve high resource utilization. How resources are allocated, especially in the memory system, has a large impact on energy and performance. By optimizing hardware resource allocation while keeping throughput constant, we achieve up to 4.2X energy improvement for Convolutional Neural Networks (CNNs), and 1.6X and 1.8X improvements for Long Short-Term Memories (LSTMs) and multi-layer perceptrons (MLPs), respectively.
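For concreteness, the sketch below writes the seven-loop convolution as a Halide algorithm and applies one illustrative schedule that blocks and unrolls it. The layer dimensions, blocking factors, and the particular loop order are assumptions chosen for illustration; they are not taken from the paper or from the Interstellar system itself.

```cpp
// Minimal Halide sketch: the seven nested loops of a convolution layer
// (output x, y, output channel k, batch n, filter fx, fy, input channel c)
// and an illustrative schedule. All sizes and factors below are assumptions.
#include "Halide.h"
using namespace Halide;

int main() {
    const int FX = 3, FY = 3, C = 64;   // assumed filter window and input channels

    ImageParam input(Float(32), 4);     // (x, y, c, n)
    ImageParam weights(Float(32), 4);   // (fx, fy, c, k)

    Var x("x"), y("y"), k("k"), n("n");
    RDom r(0, FX, 0, FY, 0, C);         // reduction over filter window and input channels

    // Algorithm: the seven-loop convolution.
    Func conv("conv");
    conv(x, y, k, n) = 0.0f;
    conv(x, y, k, n) += weights(r.x, r.y, r.z, k) * input(x + r.x, y + r.y, r.z, n);

    // Schedule sketch: split() blocks loops for on-chip locality, reorder()
    // fixes the dataflow's loop order, and unroll() stands in for spatially
    // replicated processing elements (here, 16 PEs over output channels).
    Var xo("xo"), xi("xi"), ko("ko"), ki("ki");
    conv.update()
        .split(x, xo, xi, 14)
        .split(k, ko, ki, 16)
        .reorder(ki, xi, r.x, r.y, r.z, xo, ko, y, n)
        .unroll(ki);

    // Print the resulting loop nest (no data needs to be bound for this).
    conv.print_loop_nest();
    return 0;
}
```

Choosing a different reorder() or a different set of split()/unroll() factors corresponds to a different loop order and hardware parallelism, i.e., a different point in the dataflow taxonomy the abstract describes.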