AutoTM: Automatic Tensor Movement in Heterogeneous Memory Systems using Integer Linear Programming

Session: Tensor computation and data orchestration--Playing musical chairs!

Authors: Mark Hildebrand (University of California, Davis); Jawad Khan (Intel Corporation); Sanjeev Trika (Intel Corporation); Jason Lowe-Power (University of California, Davis); Venkatesh Akella (University of California, Davis)

Memory capacity is a key bottleneck for training large-scale neural networks. Intel® Optane™ DC PMMs (persistent memory modules), available as NVDIMMs, are a disruptive technology that promises significantly higher read bandwidth than traditional SSDs at a lower cost per bit than traditional DRAM. In this work we show how to take advantage of this new memory technology to minimize the amount of DRAM required without significantly compromising performance. Specifically, we exploit the static nature of the underlying computational graphs in deep neural network applications to develop AutoTM, a profile-guided optimization based on Integer Linear Programming (ILP) that optimally assigns and moves live tensors between DRAM and NVDIMMs. Our approach can replace 50% to 80% of a system's DRAM with PMM while losing only 27.7% of performance (geometric mean). This is a significant improvement over first-touch NUMA, which loses 71.9% of performance. The proposed ILP-based synchronous scheduling technique also provides 2x the performance of using DRAM as a hardware-controlled cache for very large networks.
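The core decision AutoTM's ILP makes, at each point in the computational graph, is whether each live tensor resides in fast DRAM or slower PMM, subject to the DRAM capacity limit. The sketch below is a toy, not the paper's formulation: it brute-forces the same 0/1 placement problem for a handful of hypothetical tensors with made-up sizes and access costs, to show the objective and constraint the ILP would encode.

```python
from itertools import product

# Toy model (hypothetical numbers): each live tensor gets a binary placement
# decision, 1 = DRAM (fast access), 0 = PMM (slower access). A real ILP solver
# would handle this at scale; we enumerate assignments for a tiny example.
tensors = {  # name: (size in GB, access cost if in PMM, access cost if in DRAM)
    "conv1_act": (4, 8.0, 1.0),
    "conv2_act": (6, 12.0, 2.0),
    "weights":   (2, 3.0, 0.5),
    "gradients": (5, 10.0, 1.5),
}
DRAM_CAPACITY = 8  # GB of DRAM available (hypothetical)

def best_placement(tensors, capacity):
    names = list(tensors)
    best_cost, best_assign = float("inf"), None
    # Enumerate every binary assignment vector.
    for bits in product([0, 1], repeat=len(names)):
        dram_used = sum(tensors[n][0] for n, b in zip(names, bits) if b)
        if dram_used > capacity:
            continue  # violates the DRAM capacity constraint
        cost = sum(tensors[n][2] if b else tensors[n][1]
                   for n, b in zip(names, bits))
        if cost < best_cost:
            best_cost, best_assign = cost, dict(zip(names, bits))
    return best_cost, best_assign

cost, placement = best_placement(tensors, DRAM_CAPACITY)
# With these numbers the optimum keeps conv2_act and weights in DRAM (8 GB
# exactly) and spills the rest to PMM, for a total cost of 20.5.
```

The real formulation additionally models tensor lifetimes and the cost of moving tensors between memory pools over the execution of the graph, which is what makes an ILP solver (rather than enumeration) necessary.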