TensorFlow and PyTorch User Group

The deep learning research community at Princeton spans more than 10 academic departments and more than 150 researchers. The TensorFlow and PyTorch User Group was created as a campus-wide platform for researchers to connect with one another to discuss their work and their use of these tools. In addition to monthly presentations by graduate students and postdoctoral researchers, the group hosts external speakers from companies such as Google, NVIDIA, and Intel. All members of the Princeton University research community are welcome. Subscribe to the mailing list.

The group is sponsored by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Center for Statistics and Machine Learning (CSML).


Next Meeting

Thursday, October 17, 4:30-5:30 pm, 138 Lewis Science Library [one lightning talk, two 20-minute talks]

Please RSVP since pizza will be served.

Leveraging Intel Software Libraries for Accelerated AI Research (lightning talk)
Jonathan Halverson, Research Computing
Intel recently visited campus to present their "AI Journey" workshop. In this lightning talk, I will summarize the key points of their presentation as they relate to machine learning research on our HPC clusters.

Continual adaptation for efficient machine communication (20-minute talk)
Robert D. Hawkins, Postdoctoral Research Fellow, Department of Psychology
To communicate with new partners in new contexts, humans rapidly form new linguistic conventions. Recent language models trained with deep neural networks can comprehend and produce the conventions present in their training data, but they cannot flexibly and interactively adapt those conventions on the fly as humans do. We introduce a "repeated reference task" as a benchmark for models of adaptation in communication and propose a regularized continual learning framework that allows an artificial agent initialized with a generic language model to communicate more accurately and efficiently with a partner over time. I'll describe how we use PyTorch to implement real-time adaptation with human partners (i.e., relaying text from a chat box to the GPU, taking several optimization steps on the new data point, then sending a response back with a latency of ~5-10 seconds).
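
As a rough illustration of that inner loop, here is a minimal PyTorch sketch of taking a few regularized gradient steps on a single new utterance. The model interface, the adapt_to_utterance helper, and the simple L2 penalty toward the pretrained weights are hypothetical stand-ins for exposition, not the talk's actual implementation.

import torch
import torch.nn.functional as F

def adapt_to_utterance(model, input_ids, target_ids, steps=6, lr=1e-4, reg=1.0):
    # Snapshot the generic (pretrained) weights to regularize toward.
    anchor = {name: p.detach().clone() for name, p in model.named_parameters()}
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(input_ids)  # assumed shape: (batch, seq_len, vocab)
        nll = F.cross_entropy(logits.view(-1, logits.size(-1)),
                              target_ids.view(-1))
        # L2 penalty keeps the adapted weights near the generic model,
        # trading adaptation speed against catastrophic forgetting.
        penalty = sum((p - anchor[name]).pow(2).sum()
                      for name, p in model.named_parameters())
        (nll + reg * penalty).backward()
        opt.step()

Anchoring the adapted weights to the generic model in this way is one standard means of specializing to a partner without forgetting general linguistic knowledge.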

Accelerating automated modeling and design with stochastic optimization and neural networks (20-minute talk)
Alex Beatson, Graduate Student, Department of Computer Science (faculty advisor: Ryan Adams), Princeton University
For tasks such as learning to learn, identifying the parameters of natural systems, or optimizing the design of mechanical parts, tuning the (hyper)parameters of a system can require running a high-fidelity numerical method at each optimization step. I will discuss two methods that aim to accelerate automated modeling and design by reducing this computational cost. The first is "Randomized Telescope" gradient estimators, which provide cheap unbiased stochastic gradients for problems where the objective is the limit of a sequence of increasingly costly approximations. These can accelerate tasks such as optimizing hyperparameters of neural networks and fitting parameters of ODEs. The second is "Neural Model Order Reduction", which uses deep learning, integrating PyTorch and FEniCS (an open-source PDE solver), to reduce the dimension of nonsmooth PDEs. Our preliminary work uses this approach to efficiently simulate mechanical metamaterials: materials engineered with fine-scale structure that is expensive to simulate but gives rise to macroscopic properties not found in nature.
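
To give a flavor of the first idea, below is a minimal sketch in plain NumPy of a single-sample randomized-telescope estimator, with an illustrative geometric sampling distribution; the estimators discussed in the talk and their variance/compute trade-offs are more sophisticated.

import numpy as np

def randomized_telescope(L, p=0.5, max_n=50):
    # Write the limit as a telescoping sum:
    #   lim L(n) = L(0) + sum_{n >= 1} [L(n) - L(n-1)].
    # Sample a truncation level N ~ Geometric(p) and importance-weight
    # the single sampled difference by 1 / P(N = n), which makes the
    # estimate unbiased (the max_n cap adds a small truncation bias).
    n = min(np.random.geometric(p), max_n)
    prob = (1.0 - p) ** (n - 1) * p  # P(N = n) for the geometric
    return L(0) + (L(n) - L(n - 1)) / prob

Averaging many such estimates recovers the limiting objective while only occasionally paying for a deep, expensive evaluation; applying the same trick to gradients yields the cheap unbiased stochastic gradients mentioned in the abstract.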


Upcoming meetings

Friday, November 15, 2:00-3:30 pm, 138 Lewis Science Library

A Dive into TensorFlow 2.0
Please join us for this 90-minute workshop, taught at an intermediate level. We will briefly introduce TensorFlow 2.0, then dive into writing a few flavors of neural networks. Attendees will need a laptop and an internet connection. There is nothing to install in advance; we will use https://colab.research.google.com for examples. We will start with MNIST implemented using a linear model, a neural network, and a deep neural network, followed by a CNN, as sketched below. We will finish with a brief intro to a couple of more advanced examples (Deep Dream, Style Transfer, etc.).
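
As a preview of the style of example the workshop will cover, here is a minimal TensorFlow 2.0 sketch of MNIST with a small Keras model; the layer sizes and epoch count are illustrative, not the workshop's exact code.

import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network: drop the hidden layer for the linear
# model, stack more Dense layers for a deeper network, or swap in
# Conv2D layers for the CNN.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)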

Speaker: Josh Gordon works on the TensorFlow team at Google and teaches Applied Deep Learning at Columbia. You can find him online at https://twitter.com/random_forests

Please RSVP for this workshop since space is limited.


Previous meetings

September 2019
JAX: Accelerated machine learning research via composable function transformations in Python by Peter Hawkins


July 2019
Selene: A PyTorch-based Deep Learning Library for Sequence Data by Kathleen Chen
Big data of big tissues: deep neural networks to accelerate analysis of collective cell behaviors in large populations by Julienne LaChance
GPU Computing with R and Keras by Danny Simpson
Announcements and TensorFlow 2 (beta) by Jonathan Halverson


June 2019
Opportunities and challenges in self-driving cars at NVIDIA by Timur Rvachov (slides not available)
Training deep convolutional neural networks by Michael Churchill
Deep Learning Frameworks at Princeton by Jonathan Halverson


Contact

For more information, please contact Jonathan Halverson (halverson@princeton.edu).

[Photo: Kathleen Chen at Princeton]