Keynote and Invited Speakers

J Nelson Amaral 
Professor, Department of Computing Science
University of Alberta, Canada
Until Compilers Get Better at Code Generation: Leverage Human Expertise

For many application programs, the combination of a knowledgeable programmer and a sophisticated compiler produces efficient code that delivers high performance on most computing platforms. However, there are important classes of computations for which carefully crafted libraries have been created by specialized programmers. For these classes, even the combination of a good programmer and a well-developed compiler cannot match the performance delivered by the human experts who crafted the libraries. Yet some application programs contain their own implementations of such computations instead of invoking the specialized library versions. A performant solution is to raise the level of the program representation, recognize idioms that perform such computations, and replace each idiom with a call to the specialized library. Applying this approach successfully requires data-dependence analysis and code rewriting, and introducing a new dependency on the availability of a library into a system's workflow has implications that must be sorted out before deployment. This talk describes our approach to idiom recognition and to leveraging high-performance libraries.
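To illustrate the idea (this is a toy sketch, not the speaker's actual system), idiom recognition can be viewed as pattern matching over a program's representation: a triply nested loop that accumulates a product is flagged as a matrix-multiply idiom, a candidate for replacement by a call to a tuned library such as BLAS. The matcher below works on Python ASTs purely for illustration; a production tool would also need the data-dependence analysis the abstract mentions to prove the rewrite is legal.

```python
import ast

# A naive matrix multiply: the kind of hand-written kernel that a tuned
# library routine (e.g., a BLAS GEMM) would outperform.
NAIVE_MATMUL = """
for i in range(n):
    for j in range(n):
        for k in range(n):
            C[i][j] += A[i][k] * B[k][j]
"""

def find_matmul_idiom(source: str) -> bool:
    """Return True if `source` contains a triply nested loop whose
    innermost body accumulates a product -- a matmul-shaped idiom.
    (Illustrative only: no dependence analysis, so no legality proof.)"""
    tree = ast.parse(source)
    for outer in ast.walk(tree):
        if not isinstance(outer, ast.For):
            continue
        for middle in (n for n in outer.body if isinstance(n, ast.For)):
            for inner in (n for n in middle.body if isinstance(n, ast.For)):
                for stmt in inner.body:
                    # Match the accumulation pattern: x += y * z
                    if (isinstance(stmt, ast.AugAssign)
                            and isinstance(stmt.op, ast.Add)
                            and isinstance(stmt.value, ast.BinOp)
                            and isinstance(stmt.value.op, ast.Mult)):
                        return True
    return False
```

Once the idiom is recognized (and proven legal to replace), the rewriting step would substitute the loop nest with a library call, shifting the performance burden from the compiler's code generator to the expert-tuned library.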
J. Nelson Amaral, a Computing Science professor at the University of Alberta with a Ph.D. from The University of Texas at Austin in 2004, has published in optimizing compilers and high-performance computing. His scientific community service includes serving as general chair for the 23rd International Conference on Parallel Architectures and Compilation Techniques in 2014, the International Conference on Performance Engineering in 2020, and the International Conference on Parallel Processing in 2020. Accolades include ACM Distinguished Engineer, IBM Faculty Fellow, IBM Faculty Awards, IBM CAS "Team of the Year", awards for excellence in teaching, the GSA Award for Excellence in Graduate Student Supervision, and, most recently, the University of Alberta Award for Outstanding Mentorship in Undergraduate Research & Creative Activities.

Jeffrey S. Vetter 
Oak Ridge National Laboratory

Preparing for Extreme Heterogeneity in High Performance Computing

While computing technologies have remained relatively stable for nearly two decades, new architectural features, such as heterogeneous cores, deep memory hierarchies, non-volatile memory (NVM), and near-memory processing, have emerged as possible solutions to address the concerns of energy efficiency and cost. However, we expect this ‘golden age’ of architectural change to lead to extreme heterogeneity of architectures, and it will have a major impact on software systems and applications. Software will need to be redesigned to exploit these new capabilities while providing some level of performance portability across these diverse architectures. In this talk, I will survey these emerging technologies, discuss their architectural and software implications, and describe several new approaches (e.g., domain-specific languages, intelligent runtime systems) to address these challenges.
Jeffrey Vetter, Ph.D., is a Corporate Fellow at Oak Ridge National Laboratory (ORNL). At ORNL, he is currently the Section Head for Advanced Computer Systems Research and the founding director of the Experimental Computing Laboratory (ExCL). Previously, Vetter was the founding group leader of the Future Technologies Group in the Computer Science and Mathematics Division from 2003 until 2020. Vetter earned his Ph.D. in Computer Science from the Georgia Institute of Technology. Vetter is a Fellow of the IEEE, and a Distinguished Scientist Member of the ACM. In 2010, Vetter, as part of an interdisciplinary team from Georgia Tech, NYU, and ORNL, was awarded the ACM Gordon Bell Prize. In 2015, Vetter served as the SC15 Technical Program Chair. His recent books, entitled "Contemporary High Performance Computing: From Petascale toward Exascale (Vols. 1 - 3)," survey the international landscape of HPC.

Martin Kong 
Assistant Professor, School of Computer Science at the University of Oklahoma (OU)

Martin Kong is an Assistant Professor in the School of Computer Science at the University of Oklahoma (OU). Before joining OU, he spent two years as an Assistant Scientist in the Computational Science Initiative at Brookhaven National Laboratory. Prior to that, he held a post-doctoral research position in the Computer Science Department of Rice University, where he was a member of Vivek Sarkar's Habanero Research group. He obtained his Ph.D. at The Ohio State University, where he was advised by Prof. Louis-Noel Pouchet and Prof. (Saday) Sadayappan.

Riyadh Baghdadi 
Assistant Professor of Computer Science, New York University Abu Dhabi (NYUAD)

Riyadh Baghdadi is an assistant professor in computer science at NYU Abu Dhabi. He works at the intersection of compilers and applied machine learning. More precisely, he works on developing compilers that take high-level code and optimize it automatically to generate highly efficient code, using machine learning to automate the optimizations in these compilers. Before joining NYU, he did a postdoc at MIT. Riyadh obtained his Ph.D. and master's degrees from INRIA, France (Sorbonne University, Paris VI).

Doru Popovici 
Postdoctoral Scholar at Lawrence Berkeley National Lab

Doru Thom Popovici is currently a postdoc at Lawrence Berkeley National Lab. He works at the intersection of algorithms, compilers, and computer architecture. His main focus is on developing frameworks that allow users to map applications to hardware ranging from single-node CPUs, GPUs, and FPGAs to distributed systems. By understanding both the algorithm and the hardware, one can develop analytical models to decide which algorithm best maps to a given system. Thom obtained his Ph.D. from Carnegie Mellon University, where he worked in Franz Franchetti's Spiral group on automatic code generation and optimization for Fourier-based computations.

Rudi Eigenmann 
Professor, University of Delaware

Rudolf (Rudi) Eigenmann came to the University of Delaware in 2017 from Purdue University, where he was a Professor in the School of Electrical and Computer Engineering. From 2013 to 2017, he also served as a Program Director in the National Science Foundation’s Office of Advanced Cyberinfrastructure. His core research interests include optimizing compilers, programming methodologies, tools, and performance evaluation for high-performance computing, as well as the design of cyberinfrastructure. Dr. Eigenmann received his Ph.D. in Electrical Engineering/Computer Science from ETH Zurich, Switzerland.

Chunhua “Leo” Liao 
Senior Computer Scientist, Lawrence Livermore National Laboratory

Dr. Chunhua “Leo” Liao is a senior computer scientist in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory. His research focuses on compiler techniques for the correctness, performance, and productivity of high-performance computing. He is the lead author of the ROSE compiler’s AST translation API, OpenMP implementations targeting CPUs and GPUs, and a range of source-to-source tools.

Johannes Doerfert 
Assistant Computer Scientist, Argonne National Laboratory

Johannes Doerfert graduated from Saarland University with a Ph.D. in computer science focused on polyhedral compiler technologies for low-level code. He has been an active contributor to the LLVM compiler project since 2014 and to the OpenMP standard since 2018. Today, Johannes does research and advances LLVM in areas such as interprocedural optimization, OpenMP (offload) implementation, and parallel program optimization. His interests also include the use of AI in compilers and smart/reactive software development.

Robert Harrison 
Director, Institute for Advanced Computational Science

Professor Robert Harrison is a distinguished expert in high-performance computing. Through a joint appointment with Brookhaven National Laboratory, Professor Harrison has also been named Director of the Computational Science Center at Brookhaven National Laboratory. Dr. Harrison comes to Stony Brook from the University of Tennessee, Knoxville, and Oak Ridge National Laboratory, where he was Director of the Joint Institute for Computational Sciences, Professor of Chemistry, and Corporate Fellow. He has had a prolific career in high-performance computing with over one hundred publications on the subject, as well as extensive service on national advisory committees.

Samir Das 
Professor and Chair, Department of Computer Science, Stony Brook University

Samir Das received his Ph.D. in computer science from the Georgia Institute of Technology. Earlier, he was educated at Jadavpur University in Kolkata, India, and the Indian Institute of Science, Bangalore, India. He also worked briefly at the Indian Statistical Institute. Prior to Stony Brook, Das was a faculty member at the University of Texas at San Antonio and then at the University of Cincinnati. Das has been at Stony Brook since 2002.

David Padua 
Professor of Computer Science, University of Illinois at Urbana-Champaign

David Padua received his Ph.D. from the University of Illinois in 1980. In 1985, after a few years at the Universidad Simón Bolívar in Venezuela, he returned to the University of Illinois, where he is now Donald Biggar Willet Professor in Engineering. He has served as program committee member, program chair, or general chair for more than 70 conferences and workshops. He was the Editor-in-Chief of Springer-Verlag’s Encyclopedia of Parallel Computing and is currently a member of the editorial boards of the Communications of the ACM, the Journal of Parallel and Distributed Computing, and the International Journal of Parallel Programming. Dr. Padua has supervised the dissertations of 30 Ph.D. students. He has devoted much of his career to the study of languages, tools, and compilers for parallel computing and has authored or co-authored more than 170 papers in these areas. He received the 2015 IEEE Computer Society Harry H. Goode Award. In 2017, he was awarded an honorary doctorate from the University of Valladolid in Spain. He is a Fellow of the ACM and the IEEE.

Michelle Strout 
Professor in the Department of Computer Science, University of Arizona

Michelle Strout is a professor in the Department of Computer Science at the University of Arizona. Prof. Strout’s main research area is high performance computing and her research interests include compilers and run-time systems, scientific computing, and software engineering. Michelle received an NSF CAREER Award for her research in parallelization techniques for irregular applications, such as molecular dynamics simulations. She received a DOE Early Career award to fund her research in separating the specification of scientific computing applications from the specification of implementation details such as how to parallelize such computations. Some of Prof. Strout’s research contributions include the Universal Occupancy Vector (UOV) for determining storage mappings for any legal schedule in a stencil computation, the Sparse Polyhedral Framework (SPF) for specifying inspector-executor loop transformations, dataflow analysis for MPI programs, parameterized and full versus partial tiling with the outset and insets, and loop chaining for scheduling across stencil loops.

Vivek Sarkar 
Professor and Chair, School of Computer Science, Georgia Tech

Vivek Sarkar is Chair of the School of Computer Science at Georgia Tech, where he is also the Stephen Fleming Chair for Telecommunications in the College of Computing. He conducts research in multiple aspects of parallel computing software, including programming languages, compilers, runtime systems, and debugging and verification systems for high-performance computers. Prof. Sarkar currently leads the Habanero Extreme Scale Software Research Laboratory at Georgia Tech. He teaches a graduate class on compilers and programming languages in the fall semesters.

Mary Hall 
Director, School of Computing, University of Utah

Mary Hall is the Director of the School of Computing at the University of Utah. Her research brings together compiler optimization and performance tuning for current and future high-performance architectures, applied to real-world applications. Hall's prior work has developed compiler techniques for exploiting parallelism and locality on a diversity of architectures: automatic parallelization for SMPs, superword-level parallelism for multimedia extensions, processing-in-memory architectures, FPGAs, and more recently many-core CPUs and GPUs. Professor Hall is an IEEE Fellow, an ACM Distinguished Scientist, and a member of the Computing Research Association Board of Directors. She actively participates in mentoring and outreach programs to encourage the participation of groups underrepresented in computer science.

Will Lovett 
Technical Product owner, Compilers, Arm Manchester

Will Lovett joined Arm in 2014 to lead a new team developing LLVM compiler support for a prototype vector extension to AArch64 designed for HPC. This extension eventually became SVE, and the compiler work formed the basis of Arm Compiler for HPC, a fully functional C, C++, and Fortran toolchain with a highly tuned math library. In addition to the commercial product, the HPC team at Arm is tasked with enabling the open-source ecosystem: the team provides support for SVE in upstream LLVM, is actively engaged in VPlan and veclib, and is committed to ensuring F18 becomes a healthy and vibrant LLVM project.