
HiNUMA: NUMA-aware Data Placement and Migration in Hybrid Memory Systems

EasyChair Preprint 1618

9 pages · Date: October 9, 2019

Abstract

Non-uniform memory access (NUMA) architectures feature asymmetrical memory access latencies on different nodes. Hybrid memory systems composed of emerging non-volatile memory (NVM) and DRAM further diversify data access latencies due to the significant performance gap between NVM and DRAM. Traditional NUMA memory management policies fail to be effective in hybrid memory systems and may even hurt application performance. In this paper, we present HiNUMA, a new NUMA abstraction for memory allocation and migration in hybrid memory systems. HiNUMA advocates NUMA topology-aware hybrid memory allocation policies for initial data placement. HiNUMA also introduces a new NUMA balancing mechanism called HANB for memory migration at runtime. HANB considers not only data hotness but also memory bandwidth utilization to reduce the cost of data access in hybrid memory systems. We evaluate the performance of HiNUMA with several typical workloads. Experimental results show that HiNUMA can effectively utilize hybrid memories, and delivers much higher application performance than the default NUMA memory management policies and other state-of-the-art approaches.
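To make the abstract's description of HANB more concrete, the following is a minimal sketch of a migration decision that weighs page hotness against memory bandwidth utilization. All names, structures, and thresholds here are hypothetical illustrations inferred from the abstract; the paper's actual policy and implementation may differ substantially.

```c
/*
 * Illustrative sketch only: a toy promotion decision combining page
 * "hotness" with memory bandwidth utilization, in the spirit of the
 * HANB mechanism described in the abstract. All identifiers and
 * thresholds are hypothetical, not taken from the paper.
 */
#include <stdbool.h>
#include <stdio.h>

struct page_stats {
    unsigned long accesses;   /* access count in the last sampling epoch */
    double dram_bw_util;      /* DRAM bandwidth utilization, 0.0 - 1.0 */
    double nvm_bw_util;       /* NVM bandwidth utilization, 0.0 - 1.0 */
};

/* Hypothetical thresholds for the sketch. */
#define HOT_THRESHOLD  64     /* accesses per epoch to count as "hot" */
#define BW_PRESSURE    0.90   /* treat DRAM as saturated above this */

/*
 * Decide whether a hot NVM-resident page should be promoted to DRAM.
 * The abstract says HANB considers both data hotness and bandwidth
 * utilization; here that is modeled as: promote only if the page is
 * hot AND the DRAM channels still have headroom, so that migration
 * traffic does not worsen contention.
 */
static bool should_promote_to_dram(const struct page_stats *p)
{
    bool hot = p->accesses >= HOT_THRESHOLD;
    bool dram_has_headroom = p->dram_bw_util < BW_PRESSURE;
    return hot && dram_has_headroom;
}

int main(void)
{
    struct page_stats page = { .accesses = 128,
                               .dram_bw_util = 0.45,
                               .nvm_bw_util = 0.80 };

    printf("promote page: %s\n",
           should_promote_to_dram(&page) ? "yes" : "no");
    return 0;
}
```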

Keyphrases: data migration, NUMA, data placement, hybrid memory

BibTeX entry
BibTeX does not have the right entry type for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:1618,
  author       = {Zhuohui Duan and Haikun Liu and Xiaofei Liao and Hai Jin and Wenbin Jiang and Yu Zhang},
  title        = {HiNUMA: NUMA-aware Data Placement and Migration in Hybrid Memory Systems},
  howpublished = {EasyChair Preprint 1618},
  year         = {EasyChair, 2019}}