Tuesday, Nov. 14
Travis Humble, Oak Ridge National Laboratory
“Advancing Scientific Discovery with Quantum Computing”
Oak Ridge National Laboratory is preparing for the demands of next-generation computational science through research and development of quantum computing systems. These "atomic processors" use the fundamental laws of physics to offer radically new methods for accelerating computation in chemistry, materials science, high-energy physics, and nuclear physics. We are now developing new concepts that take advantage of the principles of entanglement, randomness, and superposition for advanced high-performance computing system designs. Our quantum co-design research team is leading the integration and benchmarking of current state-of-the-art quantum processing units, while our quantum algorithms and quantum applications teams are preparing users for these next-generation systems. The ORNL Quantum Computing Institute fosters the continuing interaction between these teams and computational scientists from industry, academia, and government by providing unique resources for advanced exploitation, prototyping, and performance analysis.
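For readers new to the underlying ideas, the following minimal Python sketch (illustrative only, not ORNL's software stack) shows how superposition and entanglement appear in a two-qubit statevector: a Hadamard gate creates the superposition and a CNOT gate entangles the pair into a Bell state.

    import numpy as np

    # Single-qubit basis states |0> and |1>
    zero = np.array([1.0, 0.0])
    one  = np.array([0.0, 1.0])

    # Hadamard gate puts a qubit into an equal superposition of |0> and |1>
    H = np.array([[1.0,  1.0],
                  [1.0, -1.0]]) / np.sqrt(2.0)

    # CNOT gate entangles two qubits (control = first qubit, target = second)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    # Start in |00>, apply H to the first qubit, then CNOT
    state = np.kron(zero, zero)
    state = np.kron(H, np.eye(2)) @ state
    state = CNOT @ state            # Bell state (|00> + |11>) / sqrt(2)

    # Sampling the joint state gives perfectly correlated (entangled) outcomes
    probs = np.abs(state) ** 2
    samples = np.random.choice(["00", "01", "10", "11"], size=10, p=probs)
    print(state, samples)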
Choong-Seock Chang, Princeton Plasma Physics Laboratory
“High-fidelity coupling between core and edge multiscale codes in the ECP-Application fusion WDM project”
The first-stage goal in the ECP Application Project WDM (high-fidelity Whole-Device-Modeling of magnetic fusion plasma) is to couple core and edge kinetic codes together. The core code GENE is optimized for the core region of a magnetic fusion reactor, where the magnetic field lines form closed confinement surfaces and the large-scale physics phenomena are mostly governed by equilibrium thermodynamics, with the plasma turbulence treated as a perturbation. The edge code XGC is optimized for the edge region, where the magnetic field lines are open, the single-particle motions are not confined, the physics phenomena are far from equilibrium, and the space-time scales of the plasma turbulence are inseparable from those of the background plasma dynamics.
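As a purely illustrative sketch of what such coupling involves (a toy diffusion problem in Python, not the actual GENE/XGC workflow), two one-dimensional solvers can be advanced in lockstep while exchanging boundary values in the overlap region at each coupling step:

    import numpy as np

    # Toy 1-D temperature profiles standing in for core and edge solver state.
    core = np.linspace(10.0, 2.0, 50)   # closed-field-line region
    edge = np.linspace(2.0, 0.1, 20)    # open-field-line region

    def advance_core(profile, edge_boundary_value, dt=0.1):
        # Stand-in for a core step: diffuse, then pin the outer boundary
        # to the value handed over from the edge solver.
        new = profile.copy()
        new[1:-1] += dt * (profile[:-2] - 2 * profile[1:-1] + profile[2:])
        new[-1] = edge_boundary_value
        return new

    def advance_edge(profile, core_boundary_value, dt=0.1):
        # Stand-in for an edge step with the core-side value imposed.
        new = profile.copy()
        new[1:-1] += dt * (profile[:-2] - 2 * profile[1:-1] + profile[2:])
        new[0] = core_boundary_value
        return new

    # Coupling loop: each solver advances, then overlap values are exchanged.
    for step in range(100):
        core_to_edge = core[-1]
        edge_to_core = edge[0]
        core = advance_core(core, edge_to_core)
        edge = advance_edge(edge, core_to_edge)

    print(core[-3:], edge[:3])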
Ben Bergen, Los Alamos National Laboratory
“Introducing the Flexible Computational Science Infrastructure (FleCSI): Overview and Applications”
FleCSI is a compile-time configurable framework designed to support multi-physics application development. As such, FleCSI attempts to provide a very general set of infrastructure design patterns that can be specialized and extended to suit the needs of a broad variety of solver and data requirements. Current support includes multi-dimensional mesh topology, mesh geometry, mesh adjacency information, n-dimensional hashed-tree data structures, graph partitioning interfaces, and dependency closures.
James Laros, Sandia National Laboratories
“The NNSA Vanguard Program”
(Abstract TBD) An overview of a new advanced computing system at Sandia.
Stephen Lee, Los Alamos National Laboratory/Exascale Computing Project
“The DOE Exascale Computing Project: Recent Accomplishments”
ECP Deputy Director Stephen Lee (LANL) will lead a brief update on the Exascale Computing Project (ECP), joined by several of the ECP’s leading Principal Investigators, who will discuss recent milestone accomplishments from the Application Development, Software Technology, and Hardware and Integration focus areas.
Jonathan Carter, Lawrence Berkeley National Laboratory
“Advanced Quantum-Enabled Simulation (AQuES) Testbed”
I will briefly discuss the plans for Berkeley Lab’s Advanced Quantum-Enabled Simulation (AQuES) Testbed, highlighting recent and future developments in our superconducting-qubit platform, control electronics, and coupling to classical computing. I will also discuss some of the recent and future applications for the testbed.
Balint Joo, Jefferson Lab
“Lattice Quantum-Chromodynamics at Jefferson Lab and the quest for Performance Portability”
I will briefly outline Lattice QCD applications and discuss our current approach to performance portability, and I will mention recent studies on producing performance-portable software for future systems as part of our work in the USQCD Lattice QCD application project of ECP.
Alexei Klimentov, ORNL/BNL/LBNL/CERN
In this contribution we will discuss various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve when the LHC moves to the High Luminosity LHC roughly seven years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success, and including new leadership-class facilities, cloud computing, and volunteer computing in the future is a big challenge, which we are successfully mastering with considerable contributions from many supercomputing centres around the world, in particular the US LCFs and academic and commercial cloud providers. We will also discuss R&D computing projects started recently at BNL, ORNL and LBNL, and we will emphasize the main accomplishments of the DOE ASCR-funded BigPanDA project, which uses the BigPanDA workload management system as a general service on OLCF’s Titan.
Wednesday, Nov. 15
Les Cottrell, SLAC National Accelerator Laboratory
“Exascale computing preparation: pre-production verification of data transfer capability for exascale projects such as LCLS-II and a look into the future”
Modern and emerging scientific experiments, such as the Large Hadron Collider at CERN and next decade’s world-leading LCLS-II Free Electron Laser at SLAC, need to move petabytes of data from data acquisition instruments to computation facilities located at other sites for analysis and storage. Point-to-point data rate requirements are expected to reach terabits per second by the middle of the next decade for LCLS-II alone. SLAC has thus been working with Zettar to develop, exercise, and evaluate high-speed massive data transfers over long distances between SLAC and supercomputer resources at other sites, such as NERSC’s Cori Cray XC40 supercomputer at LBNL. We will demonstrate and explain the initial, mostly software-based approach that Zettar is using and SLAC is evaluating. This approach consists of resource aggregation and parallel processing at each stage, leveraging existing, in-place hardware, IT systems, and network infrastructure to enable nearly real-time processing using Cori.
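To make the idea of stage-wise parallelism concrete, here is a minimal Python sketch (a generic illustration, not Zettar’s or SLAC’s actual software; file paths and hosts are hypothetical) that splits a file into byte ranges and pushes them over several concurrent streams, so that no single connection or core becomes the bottleneck on a long, high-latency path:

    import concurrent.futures
    import os

    def send_chunk(path, offset, length, dest):
        # Placeholder for one parallel transfer stream; a real tool would push
        # this byte range over its own connection to `dest`.
        with open(path, "rb") as f:
            f.seek(offset)
            data = f.read(length)
        return len(data)

    def parallel_transfer(path, dest, streams=8):
        size = os.path.getsize(path)
        chunk = (size + streams - 1) // streams
        ranges = [(i * chunk, min(chunk, size - i * chunk))
                  for i in range(streams) if i * chunk < size]
        # Aggregate many concurrent streams across the available resources.
        with concurrent.futures.ThreadPoolExecutor(max_workers=streams) as pool:
            sent = pool.map(lambda r: send_chunk(path, r[0], r[1], dest), ranges)
        return sum(sent)

    # Example call (hypothetical path and destination):
    # parallel_transfer("/data/lcls/run42.h5", "cori.nersc.gov", streams=16)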
Ryan Friese, Pacific Northwest National Laboratory
“Data Vortex Network: from small packets to high performance”
In this talk, we will review the Data Vortex interconnection network architecture and how traditional and emerging workloads perform on such a network. We have conducted an extensive set of experiments and ported several kernels and applications to Data Vortex systems. We found that Data Vortex systems largely outperform traditional HPC systems for irregular and data-analytics workloads, while still providing acceptable performance for regular applications. We will review the characteristics that make an application perform well on Data Vortex systems, as well as the current strengths and weaknesses of the network. We will conclude by looking at future directions and programmability issues.
Doug Kothe, Oak Ridge National Laboratory/ Exascale Computing Project
“The DOE Exascale Computing Project: Status and Next Steps”
ECP Director Doug Kothe (ORNL) will present an overview of the Exascale Computing Project (ECP), joined by ECP Focus Area directors Andrew Siegel (ANL, Application Development), Michael Heroux (Sandia, Software Technology), and Terri Quinn (LLNL, Hardware and Integration). The team will also discuss the recently announced RFI (Request for Information) issued by the ECP for new approaches to exascale-capable DAC (Data Analytic Computing).
Ian Foster, Argonne National Laboratory
“Going smart and deep on materials at ALCF”
As we acquire large quantities of science data from experiment and simulation, it becomes possible to apply machine learning (ML) to those data to build predictive models and to guide future simulations and experiments. Leadership Computing Facilities need to make it easy to assemble such data collections and to develop, deploy, and run associated ML models.
We describe and demonstrate here how we are realizing such capabilities at the Argonne Leadership Computing Facility. In our demonstration, we use large quantities of time-dependent density functional theory (TDDFT) data on proton stopping power in various materials maintained in the Materials Data Facility (MDF) to build machine learning models, ranging from simple linear models to complex artificial neural networks, that are then employed to manage computations, improving their accuracy and reducing their cost. We highlight the use of new services being prototyped at Argonne to organize and assemble large data collections (MDF in this case), associate ML models with data collections, discover available data and models, work with these data and models in an interactive Jupyter environment, and launch new computations on ALCF resources.
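As a rough illustration of the modeling range mentioned above, the sketch below fits both a linear baseline and a small neural-network surrogate to synthetic stand-in data (the real workflow would pull TDDFT stopping-power records from the MDF and run on ALCF resources):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for TDDFT records: features might be projectile
    # velocity, material density, etc.; the target is the stopping power.
    rng = np.random.default_rng(0)
    X = rng.uniform(0.1, 10.0, size=(2000, 3))
    y = (2.0 * X[:, 0] / (1.0 + X[:, 0] ** 2)
         + 0.3 * X[:, 1]
         + 0.05 * rng.normal(size=2000))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Simple linear baseline ...
    linear = LinearRegression().fit(X_train, y_train)

    # ... versus a small neural-network surrogate.
    mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                       random_state=0).fit(X_train, y_train)

    print("linear R^2:", linear.score(X_test, y_test))
    print("MLP    R^2:", mlp.score(X_test, y_test))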
Barbara Chapman, Brookhaven National Laboratory
“OpenMP For The Long Haul”
OpenMP was first conceived as a portable programming interface that could enable easy adaptation of Fortran programs to exploit the parallelism in shared memory multiprocessors. It is now a widely supported portable programming interface that enables application codes written in Fortran, C, or C++ to execute on a wide variety of modern shared memory architectures. It accommodates several distinct parallelization approaches (e.g., fork-join, tasking, SIMD, SIMT); its latest version, for instance, offers a host-to-accelerator code offloading capability.
Georgia Tourassi, Oak Ridge National Laboratory
“Deep Learning Enabled National Cancer Surveillance”
Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding are currently manual, labor-intensive processes. In this talk I will discuss the latest deep learning technology, presenting both theoretical and practical perspectives that are relevant to natural language processing of clinical pathology reports. Using different deep learning architectures, I will present benchmark studies for various information extraction tasks and discuss their importance in supporting a comprehensive and scalable national cancer surveillance program.
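For context on what such a model looks like at its simplest, the following PyTorch sketch (a toy illustration with synthetic data, not the ORNL pipeline or its architectures) classifies a bag of report tokens into one of a few hypothetical site codes:

    import torch
    import torch.nn as nn

    # Toy stand-in for coding free-text pathology reports: map a bag of word
    # indices to one of a few labels. Vocabulary, labels, and data are
    # illustrative only.
    VOCAB, EMBED, CLASSES = 5000, 64, 4

    class ReportClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.embedding = nn.EmbeddingBag(VOCAB, EMBED, mode="mean")
            self.fc = nn.Linear(EMBED, CLASSES)

        def forward(self, tokens, offsets):
            return self.fc(self.embedding(tokens, offsets))

    model = ReportClassifier()
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One synthetic minibatch: two "reports" packed into a flat token tensor.
    tokens = torch.randint(0, VOCAB, (30,))
    offsets = torch.tensor([0, 18])     # report boundaries within `tokens`
    labels = torch.tensor([1, 3])

    for _ in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(tokens, offsets), labels)
        loss.backward()
        optimizer.step()
    print(loss.item())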
Rob Neely, Lawrence Livermore National Laboratory
“Preparing Applications for Heterogeneous Computing at the LLNL Sierra Center of Excellence (COE)”
Three years ago this week, at SC14, Lawrence Livermore announced its plans to deploy an IBM/NVIDIA-based system named Sierra at Livermore Computing in 2017-18 for use in the NNSA stockpile stewardship mission. The shift to heterogeneous computing represented a major disruption for the application base at LLNL, and with that recognition in mind, the Sierra Center of Excellence was stood up. The COE is a tight collaboration between LLNL, IBM, and NVIDIA aimed at making sure applications are ready to go once Sierra is accepted for mission use. In this talk, I’ll give a history of the COE, an update on our application progress and related activities, and some of the lessons learned in transitioning a large application base to modern heterogeneous computing.