Date: Jan 15, 2025, 10:00 am – 4:00 pm
Location: View location on My PrincetonU
Audience: Princeton students, graduate students, researchers, faculty, and staff
Related link: More details in My PrincetonU

Event Description

This workshop uses OpenMP to introduce the fundamental concepts behind parallel programming. Hands-on exercises will explore the common core of OpenMP as well as more advanced OpenMP features and fundamental parallel design patterns. Beyond hands-on experience with OpenMP, participants will walk away knowing the basic history of parallel computing, the core concepts behind parallel programming, and some of the design patterns from which most parallel algorithms are constructed.

Knowledge prerequisites: The tutorial is taught in C, but it uses a very simple subset of C (basic control structures, static arrays, simple pointers) that anyone familiar with programming (including Python programmers) can learn. Familiarity with the Linux command line using one of the common shells (such as bash) would also be helpful. For a basic primer on C, see this file: https://github.com/tgmattso/ParProgForPhys/blob/main/OMP_Exercises/lear…

Hardware/software prerequisites: The exercises can be performed on a modern multi-core laptop. Participants who choose to do so will need a recent version of a C compiler that is OpenMP-aware (e.g., gcc from GNU). Note that Apple ships its own compiler under the name gcc and has disabled OpenMP in it, so if you use an Apple laptop, install an actual GNU gcc compiler; both Homebrew and MacPorts provide easy ways to install GNU compilers. You will also need access to a Linux/Unix command line locally on your laptop. (A minimal OpenMP test program for checking your setup is sketched at the end of this listing.)

Meet the Facilitator

Tim Mattson is a parallel programmer obsessed with every variety of science. In 2023 he retired after a 45-year career in HPC (30 years of which were with Intel). He has had the privilege of working with people much smarter than himself on great projects, including: (1) the first TFLOP computer (ASCI Red); (2) parallel programming languages such as Linda, MPI, OpenMP, OpenCL, OCR, and PyOMP; (3) two different research processors (Intel's TFLOP chip and the 48-core SCC); (4) data management systems (polystore systems and array-based storage engines); and (5) the GraphBLAS API for expressing graph algorithms as sparse linear algebra. Tim has over 150 publications, including six books on different aspects of parallel computing.

More Workshops by Tim Mattson
- A.I. and the Future of Programming on 1/16 at 10:30 AM
- Floating Point Numbers Aren’t Real on 1/16 at 2:00 PM

See the entire PICSciE/RC Wintersession 2025 training program.

To request accommodations for this event, please contact the workshop or event facilitator at least 3 working days prior to the event.
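
Checking your compiler (optional): the workshop exercises themselves will be provided by the facilitator, but if you would like to confirm ahead of time that your compiler is OpenMP-aware, a minimal test program along the following lines should print one greeting per thread. The file name hello.c and the versioned compiler name gcc-14 below are only examples, not part of the workshop materials; use whichever GNU gcc your package manager installed.

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* Every thread in the team created by the parallel region runs this block. */
        #pragma omp parallel
        {
            int id = omp_get_thread_num();
            int nthreads = omp_get_num_threads();
            printf("Hello from thread %d of %d\n", id, nthreads);
        }
        return 0;
    }

Compile with GNU gcc's -fopenmp flag and run:

    gcc -fopenmp hello.c -o hello
    ./hello

On macOS with a Homebrew-installed GNU compiler, the command may be a versioned name such as gcc-14 rather than gcc. If the program reports more than one thread on a multi-core machine, OpenMP support is working.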