Weekly Seminar – 1/19/2018 and 1/26/2018 – Power of Gradient Descent

Invited talk by Dr. Chinmay Hegde of ECpE on:

“The power of gradient descent”

Many of the recent advances in machine learning can be attributed to two factors: (i) more available data, and (ii) new and efficient optimization algorithms. Curiously, the simplest primitive from numerical analysis, gradient descent, is at the forefront of these newer ML techniques, even though the functions being optimized are often extremely non-smooth and/or non-convex.
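For readers unfamiliar with the primitive the talk is about, here is a minimal sketch of plain gradient descent (not code from the talk; the function being minimized is a simple convex example chosen for illustration):

```python
import numpy as np

def gradient_descent(grad, w0, lr=0.1, steps=200):
    """Plain gradient descent: repeatedly step against the gradient."""
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# Illustrative objective: f(w) = ||w - t||^2, whose gradient is 2 * (w - t).
t = np.array([3.0, -1.0])
w_star = gradient_descent(lambda w: 2.0 * (w - t), w0=np.zeros(2))
# w_star converges to t, the unique minimizer.
```

The interest in the talk is precisely that this same update rule, with no convexity or smoothness guarantees, remains the workhorse for training modern models.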

In this series of chalk talks, I will discuss some recent theoretical advances that may shed light on why this is happening and how to approach the design of new training techniques.

– 12pm to 1pm, Friday, 19th and 26th January

– 2222, Coover Hall


Lecture notes are available here.

Spring 18 Seminar #1

After a hiatus of about five months, we’re finally back in action this semester, with a series of exciting talks lined up! Ardhendu Tripathy, a PhD student with Dr. Aditya Ramamoorthy, has volunteered to share his experience from his recent internship at MERL. Please find the details below:


In the first few minutes, I will describe my internship experience at MERL in Summer 2017, followed by a short talk about the work itself. The basic subject of the internship was privacy-preserving release of datasets. A report about it can be found at https://arxiv.org/abs/1712.07008

In the talk, I will describe the problem framework and show a tradeoff between privacy and utility in the case of synthetic data. This tradeoff can be closely attained by using adversarial neural networks. Following that, I will visualize the performance on a contrived privacy problem built on the MNIST dataset.
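To give a feel for the tradeoff itself (this is a toy illustration with additive noise and a threshold adversary, not the adversarial-network approach from the linked report), consider releasing a noisy version of a signal that is correlated with a private bit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Toy data (hypothetical, for illustration): a private bit s, and a
# utility-bearing feature x correlated with s.
s = rng.integers(0, 2, size=n)          # private attribute
x = s + rng.normal(0.0, 1.0, size=n)    # useful signal

def release(x, sigma):
    """Privacy mechanism: release x plus Gaussian noise of scale sigma."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def utility(y, x):
    """Utility proxy: squared correlation between release and original."""
    return np.corrcoef(y, x)[0, 1] ** 2

def adversary_accuracy(y, s):
    """A simple threshold adversary guessing the private bit s from y."""
    guess = (y > 0.5).astype(int)
    return (guess == s).mean()

for sigma in [0.0, 1.0, 4.0]:
    y = release(x, sigma)
    print(sigma, round(utility(y, x), 3), round(adversary_accuracy(y, s), 3))
```

As the noise scale grows, utility falls and the adversary's accuracy decays toward coin-flipping; the adversarial-network formulation in the report learns a release mechanism that trades these off more efficiently than fixed additive noise.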


Please find the presentation slides accompanying the talk here.

12th January, Friday (tomorrow), 12pm-1pm.

2222, Coover Hall.

We’re also going to arrange for some refreshments! Join us!