Understanding GPU errors on large-scale HPC systems and the implications for system design and operating

Year:
2015
Type of Publication:
Article
Authors:
  • Tiwari, Devesh
  • Gupta, Saurabh
  • Rogers, James
  • Maxwell, Don
  • Rech, Paolo
  • Vazhkudai, Sudharshan
  • Oliveira, Daniel
  • Londo, Dave
  • DeBardeleben, Nathan
  • Navaux, Philippe
  • Carro, Luigi
  • Bland, Arthur
Published in:
IEEE 21st International Symposium on High Performance Computer Architecture (HPCA)
Pages:
331-342
Abstract:
Increases in graphics hardware performance and improvements in programmability have enabled GPUs to evolve from a graphics-specific accelerator to a general-purpose computing device. Titan, the world's second fastest supercomputer for open science in 2014, consists of more than 18,000 GPUs that scientists from various domains such as astrophysics, fusion, climate, and combustion use routinely to run large-scale simulations. Unfortunately, while the performance efficiency of GPUs is well understood, their resilience characteristics in a large-scale computing system have not been fully evaluated. We present a detailed study to provide a thorough understanding of GPU errors on a large-scale GPU-enabled system. Our data was collected from the Titan supercomputer at the Oak Ridge Leadership Computing Facility and a GPU cluster at the Los Alamos National Laboratory. We also present results from our extensive neutron-beam tests, conducted at the Los Alamos Neutron Science Center (LANSCE) and at ISIS (Rutherford Appleton Laboratory, UK), to measure the resilience of different generations of GPUs. We present several findings from our field data and neutron-beam experiments, and discuss the implications of our results for future GPU architects, current and future HPC computing facilities, and researchers focusing on GPU resilience.
