Seminars

Fall 2017

 

Accelerating progress towards controlled fusion power via deep learning at the largest scale

Nuclear fusion power via magnetic confinement offers an opportunity for clean, sustainable, and safe energy production. A key challenge on the way is predicting and avoiding disruptions: powerful plasma instabilities that can abruptly end the fusion reaction and possibly damage the surrounding device.

Experimental fusion plasmas emit time series of multimodal, high-dimensional observable data that are captured by diagnostic sensors. Using such diagnostic data from past experiments with both disruptive and non-disruptive outcomes, we train a deep recurrent neural network to predict the onset of disruptions with enough warning time to mitigate or even avoid their effects.
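
The abstract itself contains no code, but the kind of recurrent architecture it describes can be sketched as follows (PyTorch; the signal count, network sizes, and per-timestep labeling are illustrative assumptions, not details of the speaker's implementation):

    import torch
    import torch.nn as nn

    class DisruptionPredictor(nn.Module):
        """Sketch of a recurrent disruption predictor: an LSTM over
        multimodal diagnostic time series, emitting a per-timestep
        "alarm" score so warnings can be raised online, before the
        shot ends."""
        def __init__(self, n_signals=14, hidden=128):
            super().__init__()
            self.lstm = nn.LSTM(n_signals, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):            # x: (batch, time, n_signals)
            h, _ = self.lstm(x)          # hidden state at every timestep
            return self.head(h)          # (batch, time, 1) alarm logits

    model = DisruptionPredictor()
    x = torch.randn(8, 2000, 14)         # 8 toy shots, 2000 timesteps each
    target = torch.zeros(8, 2000, 1)     # label 1 near a disruption, else 0
    loss = nn.BCEWithLogitsLoss()(model(x), target)
    loss.backward()

Training on labels that switch to 1 some tens of milliseconds before the recorded disruption time is one simple way to encode the "enough warning time" requirement.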

Our deep learning approach provides state-of-the-art performance, can use both scalar and high-dimensional sensor data, generalizes from training on one device to prediction on another, and suggests promising directions for moving from prediction to active control. Moreover, it can engage HPC architectures at the largest scale to make training and hyperparameter tuning feasible on very large and growing datasets.

Leveraging deep learning to accelerate our understanding of a complex natural system in this way has implications for discovery science and applied research in other highly complex and data-rich domains such as biomedicine, materials science, or social science.

Julian grew up in Munich, Germany. He got his bachelor’s degree in physics at Stanford and earned a master’s in computer science (with a focus on AI and machine learning) at the same university. Before entering graduate school, he co-founded a tech startup, where he was responsible for product, hiring, and strategy. He is currently pursuing a PhD in physics at Harvard, studying dynamics on complex social and biological networks. For his studies there, he was awarded the Department of Energy CSGF, National Science Foundation GRFP, and Department of Defense NDSEG fellowships. Julian has written about machine learning, biophysics, high performance computing, and plasma physics.

 

Spring 2017

 

PICSciE Seminar: Exascale and Extreme Data Science at NERSC

Sudip Dosanjh, NERSC Director
Tuesday, April 18, 4:00 – 5:00 PM
Vis Lab, 347 Lewis Science Library
Washington Road & Ivy Lane

The National Energy Research Scientific Computing Center’s primary mission is to accelerate scientific discovery at the U.S. Department of Energy’s Office of Science through high performance computing and data analysis. NERSC supports the largest and most diverse research community of any computing facility within the DOE complex, providing large-scale, state-of-the-art computing for DOE’s unclassified research programs in alternative energy sources, environmental science, materials research, astrophysics, and other science areas related to DOE’s science mission.

Cori, NERSC’s new supercomputer, is deployed in Berkeley Laboratory’s new Computational Research and Theory (CRT) Facility. It has over 9,300 manycore Intel Knights Landing processors, which introduce several technological advances, including higher intra-node parallelism; high-bandwidth, on-package memory; and longer hardware vector lengths. These enhanced features are expected to yield significant performance improvements for applications running on Cori. In order to take advantage of the new features, however, application developers will need to make code modifications, because many of today’s applications are not optimized for the manycore architecture and on-package memory.

Cori includes many enhancements to support a rapidly growing extreme data science workload at NERSC. It has a partition of 2,000 Intel® Haswell processors with larger-memory nodes to enable extreme data analysis. A fast internet connection lets users stream data from experimental and observational facilities directly into the system. A “Burst Buffer,” a 1.5-petabyte layer of NVRAM, helps accelerate I/O. Cori also includes a number of software enhancements to enable complex workflows. For the longer term, we are investigating whether a single system can meet both the simulation and the data analysis requirements of our users.

Dr. Sudip Dosanjh is the Director of NERSC at Lawrence Berkeley National Laboratory. Previously, he headed extreme-scale computing at Sandia National Laboratories. He was co-director of the Los Alamos/Sandia Alliance for Computing at the Extreme Scale from 2008 to 2012. He also served on the U.S. Department of Energy’s Exascale Initiative Steering Committee for several years, and he played a key role in establishing co-design as a methodology for reaching exascale computing. He has numerous publications on exascale computing, co-design, computer architectures, massively parallel computing, and computational science.

 Refreshments will be provided.

 

Machine learning based intelligent earthquake data processing for global adjoint tomography

Yangkang Chen, Research Associate, Oak Ridge National Laboratory
Monday, April 17, 12:00 – 1:00 PM
Vis Lab, 347 Lewis Science Library
Washington Road & Ivy Lane 

With the increased computational capability afforded by modern and future computing architectures, the seismology community is seeking a more comprehensive understanding of the full waveform information in recorded earthquake seismograms. Global adjoint tomography is a complex workflow that matches observed seismic data with synthesized seismograms by iteratively updating the earth model parameters based on the adjoint state method. This methodology allows us to compute a very accurate model of the earth's interior. The synthetic data are simulated by solving the wave equation over the entire globe using a spectral-element method. To ensure inversion accuracy and stability, both the synthesized and observed seismograms must be carefully pre-processed. Because the scale of the inversion problem is extremely large and a very large volume of data must be both read and written, an efficient and reliable pre-processing workflow must be developed. We are investigating intelligent algorithms based on a machine learning framework that will automatically tune parameters for the data processing chain. In the current framework, optimal misfit-calculation windows in the seismograms are detected automatically, and extremely noisy or strongly deviating waveforms are discarded, much as faces are detected automatically in many computer vision applications. This intelligent earthquake data processing framework will enable the seismology community to compute global adjoint tomography using seismic data from an arbitrarily large number of earthquake events in the fastest, most efficient way.
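
To illustrate the flavor of such automatic window selection (a schematic only; the real workflow's similarity measures and thresholds are surely more sophisticated), one might keep only the windows where observed and synthetic traces correlate strongly and discard the rest:

    import numpy as np

    def select_windows(obs, syn, win=200, step=100, min_cc=0.7):
        """Slide a window along paired observed/synthetic seismograms and
        keep only windows where the traces agree well enough for a stable
        misfit measurement; noisy or strongly deviating segments are dropped."""
        windows = []
        for start in range(0, len(obs) - win, step):
            o = obs[start:start + win]
            s = syn[start:start + win]
            o = (o - o.mean()) / (o.std() + 1e-12)    # standardize each segment
            s = (s - s.mean()) / (s.std() + 1e-12)
            cc = float(np.dot(o, s)) / win            # zero-lag normalized correlation
            if cc >= min_cc:
                windows.append((start, start + win, cc))
        return windows

    t = np.linspace(0, 60, 6000)
    syn = np.sin(2 * np.pi * 0.5 * t)                 # toy synthetic trace
    obs = syn + 0.3 * np.random.randn(t.size)         # toy "observed" trace
    print(select_windows(obs, syn)[:3])

In a machine learning version of this step, the hand-set threshold is replaced by a classifier trained on windows that analysts have previously accepted or rejected.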

Yangkang Chen received the B.S. degree in geophysics from the China University of Petroleum, Beijing, in 2012, and the Ph.D. degree in geophysics from the University of Texas at Austin in 2015. He is currently a Distinguished Post-Doctoral Research Associate at Oak Ridge National Laboratory. His long-term research interests include machine learning and large-scale seismic data processing and inversion. His PhD thesis focused on high-resolution seismic imaging for oil and gas exploration and reservoir monitoring. He is now devoted to intelligently harnessing massive earthquake data to obtain an unprecedentedly high-resolution picture of the global earth. Dr. Chen has a strong publication record and serves many international conferences and journals as an editor, chair, and reviewer.

Lunch will be provided.

 

High Performance Computing Paradigms in Python

Julian Kates-Harbeck, Harvard University
Thursday, April 6, 12:00 – 1:00 pm
138 Lewis Science Library

(Lunch will be provided at 11:45am outside of the lecture hall)

In this talk, we will give an overview of several key paradigms for high performance computing in python: GPU computing, MPI, and multiprocessing. Relevant application areas include scientific computing and machine learning at scale. Other topics we will cover include key python packages and tradeoffs for when to use a given approach. We will show real world code examples that make use of these approaches, and work through an interactive demonstration of some simple examples of speeding up serial code.
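
As a taste of the multiprocessing paradigm covered in the talk, here is a self-contained sketch (the toy workload is our own invention; any real CPU-bound function can stand in):

    import time
    from multiprocessing import Pool

    def slow_square(n):
        """Deliberately CPU-bound stand-in for a real computation."""
        total = 0
        for i in range(10**6):
            total += (n * i) % 7
        return total

    if __name__ == "__main__":
        args = list(range(32))

        t0 = time.time()
        serial = [slow_square(n) for n in args]
        t1 = time.time()

        with Pool() as pool:                  # one worker per core by default
            parallel = pool.map(slow_square, args)
        t2 = time.time()

        assert serial == parallel
        print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")

Because each worker is a separate process, this pattern sidesteps the global interpreter lock; the tradeoff, relative to MPI, is that it is confined to a single node.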

Julian Kates-Harbeck received his bachelor’s degree in physics at Stanford and later earned a master’s in computer science (with a focus on AI and machine learning) at the same university. In the period in between, he co-founded a tech startup, where he was responsible for hiring, product and strategy. He is currently pursuing a PhD in biophysics (specifically evolutionary dynamics on networks) at Harvard. For his studies there, he was awarded the National Science Foundation GRFP, Department of Defense NDSEG, and Department of Energy CSGF fellowships. Julian has written about biophysics, machine learning, astrophysics and plasma physics.

 

PICSciE Colloquium: The U.S. D.O.E. Exascale Computing Project – Goals and Challenges

Paul Messina
Exascale Computing Project Director
Argonne Distinguished Fellow
Argonne National Laboratory

Wednesday, March 29th, 11:00 am – 12:00 pm
120 Lewis Science Library, Washington Road & Ivy Lane

In 2016 the U.S. Department of Energy established the Exascale Computing Project (ECP), a joint project of the DOE Office of Science (DOE-SC) and the DOE National Nuclear Security Administration (NNSA), to deliver a capable exascale ecosystem and prepare mission-critical scientific and engineering applications to take advantage of that ecosystem.

This presentation will describe the goals of the ECP, its plans for achieving them, the challenges to be overcome, and its current status, as well as what elements of the exascale ecosystem are outside of the scope of the ECP.


Refreshments will be provided.

 

Fall 2016

 

Disruption Forecasting in Tokamak Fusion Plasmas using Deep Recurrent Neural Networks

Julian Kates-Harbeck, Harvard University
Friday, December 9, 2016 ∙ 12:00 - 1:00 pm
Visualization Lab ∙ 347 Lewis Science Library
Lunch will be served from 11:45-12:00 pm

 
The prediction and avoidance of disruptions in tokamak fusion plasmas represents a key challenge on the way to stable energy production from nuclear fusion. A fusion plasma is a complex dynamical system with some unknown internal state which emits a time series of possibly high dimensional observable data that is captured by sensory diagnostics. Using such diagnostic data from past plasma shots with both disruptive and non-disruptive outcomes, we train a deep recurrent neural network to predict the onset of disruptions in an online setting. To deal with very large amounts of data and the need for iterative hyperparameter tuning, we also introduce a distributed training algorithm that runs on MPI clusters of GPU nodes and provides strong linear runtime scaling. Our approach demonstrates competitive predictive performance on experimental data from the JET tokamak, and we highlight promising avenues for extending our method to cross-tokamak prediction as well as to high-dimensional diagnostic data such as temperature and density profiles.
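
A minimal sketch of the synchronous data-parallel pattern such a distributed trainer typically builds on (mpi4py with summed-gradient averaging; the model and gradient computation are stand-ins, and this is not the speaker's actual algorithm):

    # Run with e.g.: mpirun -n 4 python train_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    weights = np.zeros(1000)                  # identical initial model on every rank
    rng = np.random.default_rng(seed=rank)    # each rank sees a different data shard

    for step in range(100):
        # Stand-in for the local gradient computed on this rank's mini-batch.
        local_grad = rng.standard_normal(weights.shape)

        # Synchronous data parallelism: sum gradients across all ranks,
        # so every rank applies the same averaged update and the replicas
        # of the model stay in lockstep.
        global_grad = np.empty_like(local_grad)
        comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
        weights -= 0.01 * (global_grad / size)

The near-linear scaling claimed in the abstract comes from each rank processing its own shard of shots while communication is limited to one gradient reduction per step.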

Julian grew up in Munich, Germany. He got his bachelor’s degree in physics at Stanford and later earned a master’s in computer science (with a focus on AI and machine learning) at the same university. In the period in between, he co-founded a tech startup, where he was responsible for hiring, product and strategy. He is currently pursuing a PhD in biophysics (specifically evolutionary dynamics on networks) at Harvard. For his studies there, he was awarded the National Science Foundation GRFP, Department of Defense NDSEG, and Department of Energy CSGF fellowships. Julian has written about biophysics, machine learning, astrophysics and plasma physics.

 

Enabling Scale-Up, Scale-Out, and Scale-Deep for Big Data

Dr. Jeremy Kepner 
MIT Lincoln Laboratory Fellow 
Head, Lincoln Laboratory Supercomputing Center

Monday, October 10, 2016, 12:00 - 1:00 pm

120 Lewis Science Library, Washington Road and Ivy Lane

Big Data volume, velocity, and variety challenges have led to a proliferation of computing hardware and software solutions. Hyperscale data centers, accelerators, and programmable logic can deliver enormous performance via a wide range of analytic environments and data storage technologies. Effectively exploiting these capabilities for science and engineering requires mathematically rigorous interfaces that allow scientists and engineers to focus on their research and avoid rewriting software each time computing technology changes. Mathematically rigorous interfaces are at the core of the MIT Lincoln Laboratory Supercomputing Center (LLSC) and enable the LLSC to deliver leading-edge technologies to thousands of scientists and engineers. This talk discusses the rapidly evolving computing landscape and how mathematically rigorous interfaces are the key to exploiting advanced computing capabilities.
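
As a toy illustration of what a mathematically rigorous interface buys (a schematic in the spirit of associative arrays, not the actual D4M API):

    class Assoc:
        """Minimal associative array: (row, col) -> value, with an
        element-wise addition defined independently of storage layout."""
        def __init__(self, triples):
            self.data = {(r, c): v for r, c, v in triples}

        def __add__(self, other):
            keys = set(self.data) | set(other.data)
            merged = [(r, c, self.data.get((r, c), 0) + other.data.get((r, c), 0))
                      for (r, c) in keys]
            return Assoc(merged)

        def __repr__(self):
            return f"Assoc({sorted(self.data.items())})"

    a = Assoc([("alice", "bob", 1)])
    b = Assoc([("alice", "bob", 2), ("bob", "carol", 5)])
    print(a + b)

Because the algebra is fixed by the interface, the same expression can be executed against in-memory data, files, or a database backend without the analyst rewriting code.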
Dr. Kepner is a Lincoln Laboratory Fellow and leads the MIT Lincoln Laboratory Supercomputing Center (LLSC). He is the most published author in the 60+ year history of Lincoln Laboratory. His published works span signal processing, data mining, databases, high performance computing, graph algorithms, cyber security, visualization, cloud computing, random matrix theory, abstract algebra, bioinformatics, astronomy, physics, and astrophysics. He has authored two books on parallel computing and graph algorithms. He recently received Lincoln’s highest technical honor “for his leadership and vision in bringing supercomputing to Lincoln Laboratory through the establishment of LLGrid [now LLSC]; his pivotal role in open systems for embedded computing; his creativity in developing a novel database management language and schema; and his contributions to the field of graph analytics.” Dr. Kepner is the chair of the largest computing conference in New England (IEEE High Performance Extreme Computing) and chair of SIAM Data Mining. He received his Ph.D. in astrophysics from Princeton University in 1998.

 

Fall 2015

 

Next Generation Applications: Using a Productivity Focus, 9/23/15

Michael A. Heroux
Distinguished Member of Technical Staff, Sandia National Labs
Scientist in Residence, St. John’s University
 
The extreme-scale computing community is several years into a highly disruptive period of change.  New commodity performance curves must be incorporated into application designs, and orders-of-magnitude increases in performance potential will raise the demand to couple physics and scales into a single integrated execution environment.
 
In this talk we discuss a productivity focus as the fundamental source for guiding application development activities.  Although productivity is always implicitly part of our decisions, an explicit focus on it may lead to new activities and strategies we have not seriously considered before.  We will talk about emerging application architectures, the development and use of software ecosystems, and software best practices, and we will characterize some of the important attributes of future scalable applications.

Michael Heroux is a Distinguished Member of the Technical Staff at Sandia National Laboratories and Scientist in Residence at St. John’s University, MN, working on new algorithm development and robust parallel implementation of solver components for problems of interest to Sandia and the broader scientific and engineering community. He leads development of the Trilinos Project, an effort to provide state-of-the-art solution methods in a state-of-the-art software framework.  Dr. Heroux works on the development of scalable parallel scientific and engineering applications and maintains his interest in the interaction of scientific/engineering applications and high performance computer architectures. He leads the Mantevo project, which is focused on the development of open-source, portable mini-applications and mini-drivers for scientific and engineering applications.  Dr. Heroux is also the lead developer and architect of the HPCG benchmark, intended as an alternative ranking for the TOP500 computer systems.
 
Dr. Heroux is a member of the Society for Industrial and Applied Mathematics (SIAM) and past chair of the SIAM Activity Group on Supercomputing. He is a Distinguished Member of the Association for Computing Machinery (ACM). He is the Editor-in-Chief of the ACM Transactions on Mathematical Software, a subject area editor for the Journal of Parallel and Distributed Computing, and an associate editor for the SIAM Journal on Scientific Computing.

 

Spring 2014

 

Software Challenges for Extreme Scale Systems

Professor Vivek Sarkar

E.D. Butcher Chair in Engineering, Rice University
 
Tuesday, May 27, 2014 ∙ 12:30 - 1:30 pm
Visualization Lab ∙ 346 Lewis Science Library
Lunch will be served from 12:00-12:30 pm ∙ PICSciE Reception
 
It is widely recognized that computer systems in the next decade will be qualitatively different from current and past computer systems. Specifically, they will be built using homogeneous and heterogeneous many-core processors with hundreds of cores per chip; their performance will be driven by parallelism (million-way parallelism just for a departmental server) and constrained by energy and data movement. They will also be subject to frequent faults and failures.  Unlike previous generations of hardware evolution, these Extreme Scale systems will have a profound impact on future software.  The software challenges are further compounded by the need to support new workloads and application domains that have traditionally not had to deal with large scales of parallelism.
 
The challenges across the entire software stack for Extreme Scale systems are driven by programmability and performance requirements, and impose new requirements on programming models, languages, compilers, and runtime systems.  We focus on the critical role played by the runtime system in enabling programmability in upper layers of the software stack that interface with the programmer, and in enabling performance in lower levels of the software stack that interface with the hardware.
 
Vivek Sarkar is Professor and Chair of the Department of Computer Science at Rice University, where he conducts research in multiple aspects of parallel software including programming languages, program analysis, compiler optimizations and runtime systems for parallel and high performance computer systems.  He currently leads the Habanero Extreme Scale Software Research project at Rice University, and serves as Associate Director of the NSF Expeditions project on the Center for Domain-Specific Computing.
 

 

What does Titan tell us about preparing for exascale supercomputers?

Speaker: Jack Wells, Director of Science, Oak Ridge National Laboratory

Monday, February 10, 2014

12:00 – 1:30 pm, Vis Lab, Room 346 Lewis Library

Lunch will be provided at Noon

Modeling and simulation with petascale computing has supercharged the process of innovation, dramatically accelerating time-to-insight and time-to-discovery. The Titan supercomputer is the Department of Energy’s flagship Cray XK7 managed by the Oak Ridge Leadership Computing Facility (OLCF). With its hybrid, accelerated architecture of traditional CPUs and graphics processing units (GPUs), Titan allows advanced scientific applications to reach speeds exceeding 10 petaflops with a marginal increase in electrical power demand over the previous generation leadership-class supercomputer. I will summarize the lessons learned in deploying Titan and in preparing applications to move from conventional CPU architectures to a hybrid, accelerated architecture, with a focus on early science outcomes from Titan. We will discuss implications for the research community as we prepare for exascale computational science and engineering within the next decade. I will also provide an overview of user programs at the Oak Ridge Leadership Computing Facility, with specific information on how researchers may apply for allocations of computing resources.

Jack Wells is the Director of Science for the National Center for Computational Sciences (NCCS) at Oak Ridge National Laboratory (ORNL), with the rank of Distinguished R&D Scientist. He is responsible for devising the strategy to ensure cost-effective, state-of-the-art scientific computing at the NCCS, which hosts the Department of Energy’s Oak Ridge Leadership Computing Facility (OLCF), a national user facility, and Titan, currently the fastest supercomputer in the United States. Dr. Wells began his ORNL career in 1990, conducting resident research for his Ph.D. in physics from Vanderbilt University. Following a three-year postdoctoral fellowship at the Harvard-Smithsonian Center for Astrophysics, he returned to ORNL in 1997 as a staff scientist and Wigner Fellow. Jack is an accomplished practitioner of computational physics and has been sponsored in his research by the Department of Energy’s Office of Basic Energy Sciences.

 

Fall 2013

 

MAE/PICSciE seminar: “Stability Analysis of an Impacting T-Junction Pipe Flow”


Kevin Chen
Mechanical and Aerospace Engineering

Monday, December 9, 2013 
12:00 – 1:00 pm, EQuad J223
Pizza will be provided.

Abstract
 
The fluid flow through a T-shaped pipe bifurcation (with the inlet at the bottom of the "T") is a very familiar occurrence in both natural and man-made systems.  Everyday examples include industrial pipe networks, microfluidic channels, and blood flows in the heart and brain.  Despite the ubiquitous nature of the geometry, many questions about the flow physics remain, and prior analyses have been rudimentary and qualitative. This seminar addresses three important questions: 1) How does the flow evolve with Reynolds number?  2) What are the important flow structures?  3) Lastly, where does the flow exhibit dynamical sensitivity?  Much of this research focuses on the relation between recirculation regions in the outlet pipes and the regions of stability, receptivity, and sensitivity as defined by linear stability theory. The recirculation regions, which exist above a Reynolds number of 320, exhibit a characteristic vortex breakdown phenomenon. At a Reynolds number of 556, a rapid sequence of supercritical Hopf bifurcations begins. In this geometry, regions of growth are concentrated in the outlet pipes, but regions of receptivity to initial conditions and disturbances are confined to the front and back walls of the inlet and junction. Finally, the flow is most sensitive to localized dynamical perturbations in the recirculation regions. The recirculation can cause small perturbations to feed back on themselves, leading to large changes in dynamics.
 
 

An asymptotic parallel-in-time method for highly oscillatory PDEs

Speaker: Terry Haut, Los Alamos National Laboratory
 
October 22, 2013 at 10:30 am
Smagorinsky Room, NOAA/GFDL
Forrestal Campus, 201 Forrestal Road
 
Abstract
 
We present a new time-stepping algorithm for nonlinear PDEs that exhibit scale separation in time. Our scheme combines asymptotic techniques (which are inexpensive but can have insufficient accuracy) with parallel-in-time methods (which, alone, can be inefficient for equations that exhibit rapid temporal oscillations). In particular, we use an asymptotic numerical method for computing, in serial, a solution with low accuracy, and a more expensive fine solver for iteratively refining the solutions in parallel. We present examples on the rotating shallow water equations that demonstrate that significant parallel speedup and high accuracy are achievable.
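
The structure of the parallel-in-time iteration can be sketched on a toy oscillatory ODE (forward-Euler propagators stand in for the asymptotic coarse solver and the expensive fine solver; the fine solves, written here as a serial list comprehension, are the part that would run in parallel):

    import numpy as np

    f = lambda y: 1j * 5.0 * y                  # oscillatory test problem y' = i*w*y

    def coarse(y, t0, t1):
        """Cheap, low-accuracy propagator (one Euler step)."""
        return y + (t1 - t0) * f(y)

    def fine(y, t0, t1, m=100):
        """Expensive, accurate propagator (m Euler substeps)."""
        h = (t1 - t0) / m
        for _ in range(m):
            y = y + h * f(y)
        return y

    T, N = 1.0, 20
    ts = np.linspace(0, T, N + 1)

    U = [1.0 + 0j]
    for n in range(N):                          # initial coarse sweep (serial)
        U.append(coarse(U[-1], ts[n], ts[n + 1]))

    for k in range(5):                          # parareal-style corrections
        F = [fine(U[n], ts[n], ts[n + 1]) for n in range(N)]   # parallelizable
        Unew = [U[0]]
        for n in range(N):
            Unew.append(coarse(Unew[n], ts[n], ts[n + 1])
                        + F[n] - coarse(U[n], ts[n], ts[n + 1]))
        U = Unew

    print(abs(U[-1] - np.exp(1j * 5.0 * T)))    # error vs the exact solution

The method of the talk replaces the Euler coarse propagator with an asymptotic solver, which is what keeps the serial sweep cheap and accurate even when the oscillations are fast.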
 

Spring 2013

 

Algorithmic requirements for extreme scale simulation

Speaker: David E. Keyes, Professor, Applied Mathematics and Computational Science, Director, Strategic Initiative in Extreme Computing, King Abdullah University of Science and Technology (KAUST)

Date: February 7, 2013 from 1:30 pm - 2:30 pm, Visualization Lab, 346 Lewis Science Library

Light refreshments will be served

Abstract

Diverging exponentials in computer hardware subsystem performance require rethinking of models and reimplementation of algorithms in scientific and engineering simulation. Much mathematics and software appears to be missing if emerging hardware is to be used near its potential, since our existing code base has been assembled with a premium on squeezing out flops and improving the execution rate of those that remain. Instead, for reasons of energy efficiency and system acquisition cost, we must now focus on squeezing out synchronizations, memory footprint, and memory transfers. High concurrency and power-efficient design of the individual cores put opposite pressures on algorithms: respectively, they require greater data locality and greater freedom to redistribute data and computation. After decades of programming model stability, new models and new hardware must be developed simultaneously, a process called co-design. We extrapolate current trends and describe directions for exascale algorithms.

Speaker Biography

David Keyes is the inaugural Dean of the Division of Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) at KAUST, an adjunct professor in Applied Physics and Applied Mathematics at Columbia University, and an affiliate of several laboratories of the U.S. Department of Energy. Keyes graduated in Aerospace and Mechanical Sciences from Princeton in 1978, earned a doctorate in Applied Mathematics from Harvard in 1984, and completed postdoctoral work in Computer Science at Yale. He works at the algorithmic interface between parallel computing and the numerical analysis of partial differential equations. For his algorithmic influence in scientific simulation, Keyes was recognized as a Fellow of SIAM and of the AMS, with the Sidney Fernbach Award of the IEEE Computer Society, and with ACM’s Gordon Bell Prize. He is author or editor of more than a dozen federal agency reports and a member of several federal advisory committees on computational science and engineering and high performance computing. In 2011, Keyes received the SIAM Prize for Distinguished Service to the Profession.
 

Fall 2012

 

The Astronomical Multipurpose Software Environment and the Ecology of Star Clusters

Speaker:  Simon Portegies Zwart, Professor of Computational Astrophysics at the Sterrewacht Leiden of Leiden University 

Date: February 13, 2012 from 12:30-1:30pm, Room Location TBD

Light Refreshments will be served at 11:45am in the PICSciE Reception area

Abstract

Star cluster ecology is the field of research in which stellar evolution, gravitational dynamics, hydrodynamics, and the background potential of the parent galaxy interact to produce a complex non-linear evolution of self-gravitating stellar systems. I will review the processes related to the ecology of stellar clusters and discuss the numerical hurdles and the physical principles. In addition, I will introduce the AMUSE framework, with which we are performing simulations of the ecology of stellar clusters. AMUSE is a general-purpose framework for interconnecting existing scientific software with a homogeneous and unified interface. The framework is based on the standard Message Passing Interface (MPI); any production-ready code written in a language that supports its native bindings can be incorporated. In addition, our framework is intrinsically parallel, and it conveniently separates all the numerical solvers in memory.
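
To suggest the flavor of such a unified interface, here is a deliberately simplified coupling loop (the classes and method names are hypothetical stand-ins, not AMUSE's actual API):

    class GravityCode:
        """Stand-in for a wrapped N-body code; in the real framework each
        such code runs in its own MPI process with separate memory."""
        def __init__(self, masses):
            self.masses, self.t = list(masses), 0.0
        def evolve_model(self, t_end):
            self.t = t_end                     # would integrate the N-body system

    class StellarCode:
        """Stand-in for a wrapped stellar-evolution code: stars lose mass."""
        def __init__(self, masses):
            self.masses, self.t = list(masses), 0.0
        def evolve_model(self, t_end):
            self.masses = [m * (1 - 0.01 * (t_end - self.t)) for m in self.masses]
            self.t = t_end

    stars = StellarCode([1.0, 2.0, 5.0])
    gravity = GravityCode(stars.masses)
    t, dt = 0.0, 0.1
    while t < 1.0:                             # operator-split coupling loop
        t += dt
        stars.evolve_model(t)                  # evolve stellar interiors
        gravity.masses = stars.masses          # channel updated masses to dynamics
        gravity.evolve_model(t)                # advance the gravitational dynamics
    print(gravity.masses)

The point of the design is that every wrapped code, whatever its language or physics, is driven through the same small set of methods, so solvers can be swapped without touching the coupling script.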

Speaker Biography

Simon Portegies Zwart was born in Amsterdam and studied astronomy at the University of Amsterdam. After his PhD with Frank Verbunt at Utrecht University, he traveled the world as a postdoctoral fellow at the University of Amsterdam, Tokyo University (Japan), and MIT (USA), before returning to Amsterdam. He is currently full professor of computational astrophysics at the Sterrewacht Leiden of Leiden University. His professional interests are high-performance computing and gravitational stellar dynamics, in particular the ecology of dense stellar systems. His personal interests include translating Egyptian hieroglyphs and brewing beer.

 

Perspectives on China’s Role in Global High Performance Computing

 
Speaker: William Tang, PPPL, Princeton University

Date: January 23, 2012 from 12:30-1:30pm, 121 Lewis Science Library

Light Refreshments will be served at 11:45 in the PICSciE Reception area.

Abstract

High performance computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research in the 21st century. China’s rapid emergence in this area has been remarkable, and this presentation will highlight impressions from a number of visits I and national colleagues made there over the past year. At the top of the most recent TOP500 LINPACK list (November 2011) are Japan’s Fujitsu K machine at No. 1 and the Chinese supercomputers at Nos. 2 and 4, with the U.S. falling to No. 3. It is significant to note that the 2.57-petaflops performance of the Chinese Tianhe-1A system at the National Supercomputer Center in Tianjin has surpassed that of the U.S. Cray XT5 system “Jaguar” at the Oak Ridge National Laboratory (1.76 petaflops), previously the No. 1 machine in June 2010. The rapid rise of HPC hardware in China over the past decade is particularly notable since Chinese systems, which were essentially absent from the TOP500 list prior to 2001, now occupy the Nos. 2 and 4 positions.

Speaker Biography

William M. Tang is the Director of the Fusion Simulation Program at the Princeton Plasma Physics Laboratory (PPPL) and serves on the Executive Committee for PICSciE, which he helped establish during his six years as Associate Director. He is a Fellow of the American Physical Society, and in October 2005 he received the Chinese Institute of Engineers-USA (CIE-USA) Distinguished Achievement Award “for his outstanding leadership in fusion research and contributions to fundamentals of plasma science.” He was the Chief Scientist at PPPL from 1997 until 2009 and also played a national leadership role in the formulation and development of the DOE’s multi-disciplinary program in advanced scientific computing applications, SciDAC (Scientific Discovery through Advanced Computing). He chaired the major DOE-SC meeting on “Scientific Grand Challenges in Fusion Energy Sciences and the Role of Computing at the Extreme Scale” (Spring 2009).

 

Fall 2011

 

Campus-Scale High Performance Cyberinfrastructure for Data-Intensive Research

 
Speaker: Larry Smarr, University of California – San Diego
 
Date: December 12, 2011 from 12:30-1:30 pm, Visualization Lab, Room 346 Lewis Library

 

Blue Waters Plus – A Super System to Solve Super Challenges
 
Speaker: William Kramer, Blue Waters Project, NCSA
 
Date: December 5, 2011 from 12:30-1:30pm, Visualization Lab, Room 346 Lewis Library
 
Lunch will be served at 11:45am in the PICSciE Reception area
 
 
Abstract:
 
Blue Waters is the NSF Track 1 system being deployed in the fall of 2012 for a diverse range of unique science and engineering challenges that require huge amounts of sustained performance. The Blue Waters project has used a number of principles, now frequently referred to as “co-design,” to improve the impact of the technology. Recently, the entire project was refocused on the new Cray XE/XK/Gemini/Sonexion technologies.
 
More than 25 teams, from a dozen distinct research fields, have already been selected to run projects on Blue Waters. These teams will achieve breakthroughs by using Blue Waters to model a broad range of phenomena, including: nanotechnology’s minute molecular assemblies, the evolution of the universe since the Big Bang, the damage caused by earthquakes and tornadoes, the mechanism by which viruses enter cells, and improved climate change predictions.
 
This talk will begin by explaining the goals and expectations of the Blue Waters project and how the new Cray XE/XK/Gemini/Sonexion technologies will fulfill those expectations. To ensure the ongoing success of the Blue Waters science teams, the talk will cover how NCSA will verify that the system meets its requirement of more than a sustained petaflop/s of computing for a diverse set of science applications. The next part of the talk will discuss significant ideas for creating new methods and algorithms that let application codes take full advantage of systems like Blue Waters, with particular attention to scalability, use of accelerators, simultaneous use of x86 and accelerated nodes within single codes, and application resiliency. The final part of the talk will discuss some lessons learned from the co-design efforts.
 
 
Speaker Biography
 
William T.C. Kramer is deputy project director at the National Center for Supercomputing Applications, where he is responsible for leading the Blue Waters project, a National Science Foundation-funded effort to deploy the first general-purpose, open-science, sustained-petaflop supercomputer as a powerful resource for the nation’s researchers. Blue Waters is an eight-year project with overall funding of over $500M.
 

Spring 2011

Understanding the Human Brain: The Ultimate Computational Challenge (in Theory and Practice)

Speaker: Professor Jonathan Cohen, Princeton Neuroscience Institute

April 11, 2011 from 12:30 pm - 1:30 pm, Visualization Lab, 346 Lewis Library

Abstract:

The human brain is the most complex device in the known universe.  With an estimated 100 billion neurons, 100 trillion connections among them, and an inestimable number of potential circuits, the challenge to track these and understand their function is arguably the greatest challenge science has ever faced.  It is a trivial assertion, therefore, that this challenge demands the most sophisticated approaches to mathematical analysis and numerical (computational) simulation we can garner.  This is true for theory development as well as for data analysis. The former stems from the inherent complexity of the problem, and the latter from the size of the datasets required to make progress in addressing it.  I will review the state of the art along these dimensions, focusing in particular on the challenge posed by analyzing human brain imaging data, the most readily available measures we have of the functioning of the intact human brain.


Computational approaches to the study of collective behavior

Speaker: Prof. Iain Couzin, Department of Ecology & Evolutionary Biology, Princeton University

March 28, 2011 from 12:30 pm - 1:30 pm, Visualization Lab, 346 Lewis Library

Abstract:

A fundamental problem in a wide range of biological disciplines is understanding how functional complexity at a macroscopic scale (such as the functioning of a biological tissue) results from the actions and interactions among the individual components (such as the cells forming the tissue). Animal groups such as bird flocks, fish schools and insect swarms frequently exhibit complex and coordinated collective behaviors and present unrivaled opportunities to link the behavior of individuals with the functioning and efficiency of dynamic group-level properties.

Using an integrated experimental and theoretical approach involving both insects and vertebrates, I will address both how, and why, animals coordinate behavior, and the computational tools that we have developed to facilitate their study. In some animal groups decision-making by individuals is so integrated that it has been associated with the concept of a “collective mind.” Since each organism has relatively local sensing ability, coordinated animal groups have evolved collective strategies that allow individuals to access higher-order computational abilities at the group level. I investigate the coupling between spatial and information dynamics in swarms, flocks, schools, and herds, and reveal the critical role uninformed individuals (those who have no information about the feature upon which a collective decision is being made) play in inhibiting extremism and promoting democratic consensus in groups.
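
For readers unfamiliar with this modeling tradition, a minimal zonal (repulsion/alignment/attraction) model can be simulated in a few lines; the parameters are illustrative, and this is not Prof. Couzin's actual code:

    import numpy as np

    N, steps = 100, 500
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 10, (N, 2))
    vel = rng.standard_normal((N, 2))
    vel /= np.linalg.norm(vel, axis=1, keepdims=True)

    for _ in range(steps):
        diff = pos[None, :, :] - pos[:, None, :]          # displacement i -> j
        dist = np.linalg.norm(diff, axis=2) + np.eye(N) * 1e9
        desired = np.zeros_like(vel)
        for i in range(N):
            repel = dist[i] < 1.0                         # zone of repulsion
            align = (dist[i] >= 1.0) & (dist[i] < 3.0)    # zone of alignment
            attract = (dist[i] >= 3.0) & (dist[i] < 6.0)  # zone of attraction
            if repel.any():                               # collision avoidance wins
                desired[i] = -diff[i][repel].sum(axis=0)
            else:
                desired[i] = vel[align].sum(axis=0) + diff[i][attract].sum(axis=0)
        norms = np.linalg.norm(desired, axis=1, keepdims=True)
        ok = norms[:, 0] > 1e-9
        vel[ok] = desired[ok] / norms[ok]
        pos += 0.05 * vel

    print("polarization:", np.linalg.norm(vel.mean(axis=0)))   # near 1 when aligned

Sweeping the zone radii in such a model produces the qualitative transitions, from disordered swarming to milling to polarized motion, that make collective behavior an appealing computational subject.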

Toward Exascale Computing in Gyrokinetic Particle-in-Cell Simulations of Fusion Plasmas

Speaker: Stephane Ethier, Computational Scientist, PPPL

March 24, 2011 from 12:30 pm - 1:30 pm, Visualization Lab, 346 Lewis Library

Abstract: 
The last decade has witnessed the rapid emergence of larger and faster computing systems in the US supercomputing centers. Massively parallel machines have gone mainstream and are now the tool of choice for large scientific simulations. Scientific applications need to be modified, adapted, and optimized for each new system being introduced. With a few petascale systems now in production mode, the focus of the DOE Office of Advanced Scientific Computing Research has shifted to the next level, the exascale, which promises to be truly disruptive. With an estimated billion cores to deal with, scientific applications will need to manage extreme parallelism, limited bandwidth, frequent failures, and many more hardware and software challenges. In this talk, I will discuss the path to extreme-scale computing from the point of view of the large-scale gyrokinetic particle-in-cell codes developed at Princeton University's Plasma Physics Laboratory to study microturbulent transport in fusion plasmas.
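
For context, a particle-in-cell code alternates each timestep between particles and a field grid; a minimal 1D electrostatic sketch (illustrative only, and far simpler than a production gyrokinetic code) looks like this:

    import numpy as np

    ng, n_p, L, dt = 64, 10000, 2 * np.pi, 0.1
    dx = L / ng
    rng = np.random.default_rng(1)
    x = rng.uniform(0, L, n_p)                 # particle positions
    v = rng.standard_normal(n_p)               # particle velocities

    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)   # wavenumbers for the Poisson solve
    k[0] = 1.0                                 # dummy value; mean mode zeroed below

    for step in range(100):
        # 1. Deposit charge on the grid (nearest-grid-point weighting),
        #    against a uniform neutralizing background of density 1.
        idx = (x / dx).astype(int) % ng
        rho = np.bincount(idx, minlength=ng) * (ng / n_p) - 1.0
        # 2. Solve Poisson's equation  phi'' = -rho  with FFTs.
        phi_k = np.fft.fft(rho) / k**2
        phi_k[0] = 0.0
        E = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -dphi/dx
        # 3. Gather the field at the particles and push them (charge/mass = 1).
        v += dt * E[idx]
        x = (x + dt * v) % L                        # periodic boundaries

    print("field energy:", 0.5 * np.sum(E**2) * dx)

At the extreme scale, it is exactly these deposit/solve/gather/push phases, with billions of particles spread over many nodes, that stress memory bandwidth and interconnects.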

3D Visualization and Physically-based Illumination

Speaker: David Banks, University of Tennessee and Oak Ridge National Laboratory

David Banks holds positions as tenured faculty in the EECS department at the University of Tennessee and as senior scientist in scientific computing at Oak Ridge National Laboratory. He is a member of the UT/ORNL Joint Institute for Computational Sciences, home to the top-ranked academic supercomputer in the world.

February 28, 2011 from 12:30 pm - 1:30 pm, Visualization Lab, 346 Lewis Library

Abstract:
“3D data visualization” applies computer graphics to datasets of various kinds. Graphics algorithms can be viewed as “solvers” for radiation transport. Our lab investigates the interplay among transport, rendering, visualization, and human perception. We have found that perception of 3D scenes can be improved by visualizing them using rendering algorithms that more accurately solve the transport equation. Surprisingly, such “physically-based” algorithms have not been widely adopted by scientific users.

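The transport equation referred to above is, in the rendering context, the rendering equation; stated here for reference:

    L_o(x, \omega_o) \;=\; L_e(x, \omega_o) \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i

Physically based renderers estimate the integral by Monte Carlo sampling (path tracing), whereas classical local-illumination rendering drops the recursive integral term, which is the shortcut the abstract suggests degrades 3D perception.
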
New frontiers in quantum chemistry using supercomputers

Speaker: Jeffrey Hammond, University of Chicago / Argonne National Laboratory

Jeff Hammond is currently a Director's Postdoctoral Fellow at the Argonne Leadership Computing Facility.  He received his PhD in chemistry from the University of Chicago as a DOE Computational Science Graduate Fellow.  

February 21, 2011 from 12:30 pm - 1:30 pm, Visualization Lab, 346 Lewis Library

Abstract:

Recent advancements in high-performance computing present both challenges and new opportunities for quantum chemists. Accurate methods like coupled-cluster theory can now be applied to systems with dozens of atoms, opening up new application areas related to biology and materials science. I will present recent results obtained using the massively parallel quantum chemistry package NWChem, highlighting the importance of accurate many-body simulations of electric-field response properties and electronic excited states for a diverse set of chemical systems. The rigorous development of force fields, including both inter- and intramolecular terms, will also be discussed. Finally, I will discuss how recent developments in computer architecture, such as million-way parallelism and heterogeneous nodes, affect algorithms and software development in correlated electronic structure calculations.

Extracting Biological Insight from Complex Genome-Scale Data: Connecting Growth Control and Stress Response in Yeast

Speaker: David Botstein, Director, Lewis-Sigler Institute for Integrative Genomics

February 14, 2011 from 12:30 pm - 1:30 pm, Visualization Lab, 346 Lewis Library


Abstract:
The maintenance of cellular homeostasis in the face of rapidly changing environmental conditions has been the focus of our research for the past five years. Specifically, we have studied the relationship between the growth rate, which we can control directly by setting the dilution rate in chemostats, and the initiation of the cell division cycle, the response to environmental stress, and metabolism. We have exploited high-throughput methods, some of our own devising, to follow gene expression, metabolite levels, and the relative fitness of mutants on a comprehensive scale, in order to obtain a view of the integration of these functions at the system level.
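
For context (a standard chemostat mass balance, not part of the original abstract), the control of growth rate by dilution rate follows from the biomass equation:

    \frac{dX}{dt} \;=\; (\mu - D)\,X \quad\Longrightarrow\quad \mu = D \ \text{at steady state},

where X is the biomass concentration, \mu the specific growth rate, and D the dilution rate (flow rate divided by culture volume); setting the pump thus fixes the steady-state growth rate.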

The biggest challenge in this kind of research is not the acquisition of the data, nor even its statistical analysis. Instead, it is presenting the results of the analysis in a form that can be appreciated and communicated by scientists. Examples will be provided from our research that illustrate this challenge and some of the ways we have attempted to meet it.