Date: Feb 24, 2022, 4:00 pm – 5:30 pm
Location: Online Event
Related link: More details in My PrincetonU

Event Description

The first portion of this workshop will show participants how to optimize single-GPU training. The concepts of multi-GPU training will then be introduced before a demonstration of Distributed Data Parallel (DDP) in PyTorch. Other distributed deep learning frameworks will also be covered. While the workshop focuses on PyTorch, demonstrations for TensorFlow will be available.

Knowledge prerequisites: Participants should be familiar with training neural networks with PyTorch or TensorFlow using a GPU.

Hardware/software prerequisites: For this workshop, participants must have an account on the Adroit cluster, and they should confirm that they can SSH into Adroit *at least 48 hours beforehand*. Details can be found in this guide. THERE WILL BE LITTLE TO NO TROUBLESHOOTING DURING THE WORKSHOP!

Workshop format: Lecture, demonstration, and hands-on exercises

Learning objectives: Attendees will learn how to accelerate the training of neural networks using distributed deep learning frameworks.
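To give a flavor of what the DDP demonstration covers, the idea can be sketched as follows: each process wraps its model in `DistributedDataParallel`, and gradients are automatically all-reduced across processes during `backward()`. This is a minimal single-process sketch using the CPU "gloo" backend so it runs without a GPU; in a real multi-GPU job each GPU gets its own process (typically launched with `torchrun`), and the model, data, and hyperparameters here are toy placeholders, not material from the workshop itself.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def train_demo():
    # Single-process stand-in: rank 0 of a world of size 1, CPU-only backend.
    # In a real job, torchrun sets MASTER_ADDR/MASTER_PORT, RANK, and WORLD_SIZE.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = nn.Linear(10, 1)          # toy model standing in for a real network
    ddp_model = DDP(model)            # DDP all-reduces gradients across ranks
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    loss = None
    for _ in range(3):                # a few toy training steps on random data
        optimizer.zero_grad()
        out = ddp_model(torch.randn(8, 10))
        loss = loss_fn(out, torch.randn(8, 1))
        loss.backward()               # gradient synchronization happens here
        optimizer.step()

    dist.destroy_process_group()
    return loss.item()

if __name__ == "__main__":
    print(train_demo())
```

With more than one process, the same script would read its rank and world size from the environment and use a `DistributedSampler` so each process sees a distinct shard of the data.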