Reblogging an insightful blog post on CLT and WLLN by one of our members!
Attend the #NVDLI #deeplearning workshop hosted by NVIDIA and the Department of Mechanical Engineering, Iowa State University, on November 3rd, 2018, from 8AM to 5PM. Register now!
The fourth session in the Robustifying ML series was conducted by Dr. Sarkar at 12pm in Black 2004. The lecture notes can be found here: Defenses.
The third lecture in the Robustifying ML series was conducted by Dr. Sarkar in Black 2004 on the 7th of September, 2018. The slides can be found here: Slides: Attacks on RL.
The second session of this lecture series was conducted in Black Engineering 2004 at 12pm on August 31. Lecture notes can be found below:
The first lecture in this series was conducted in Coover 3043 on the 24th of August by Dr. Chinmay Hegde.
You can find the notes on the topics covered here:
Invited talk by Dr. Chinmay Hegde of ECpE on:
“The power of gradient descent”
Many of the recent advances in machine learning can be attributed to two reasons: (i) more available data, and (ii) new and efficient optimization algorithms. Curiously, the simplest primitive from numerical analysis — gradient descent — is at the forefront of these newer ML techniques, even though the functions being optimized are often extremely non-smooth and/or non-convex.
In this series of chalk talks, I will discuss some recent theoretical advances that may shed light on why this is happening and how to properly approach the design of new training techniques.
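As a toy illustration of the abstract's theme, here is a minimal gradient-descent sketch on a simple non-convex function (the function, step size, and starting point are illustrative assumptions, not taken from the talk):

```python
# Gradient descent on the non-convex function f(x) = (x^2 - 1)^2,
# which has two global minima, at x = +1 and x = -1.
def f(x):
    return (x**2 - 1)**2

def grad_f(x):
    # derivative of f: 4x(x^2 - 1)
    return 4 * x * (x**2 - 1)

def gradient_descent(x0, lr=0.05, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

x_min = gradient_descent(x0=2.0)
print(round(x_min, 4))  # converges to the minimum at x = 1
```

Despite the non-convexity, plain gradient descent finds a global minimum here; understanding when and why that happens for much harder ML objectives is the subject of the talks.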
– 12pm to 1pm, Friday, 19th and 26th January
– 2222, Coover Hall
Lecture notes are available here.
After a hiatus of about five months, we’re finally back in action this semester, with a series of exciting talks lined up! Ardhendu Tripathy, a PhD student working with Dr. Aditya Ramamoorthy, has volunteered to share his experience from his recent internship at MERL. Please find the details below:
In the first few minutes I will describe my internship experience with MERL in Summer 2017, followed by a short talk about the work that was done. The basic subject of the internship was privacy-preserving release of datasets. A report about it can be found at https://arxiv.org/abs/1712.
In the talk, I will describe the problem framework and show a tradeoff between privacy and utility in the case of synthetic data. This tradeoff can be closely attained using adversarial neural networks. I will then visualize the performance on a contrived privacy problem on the MNIST dataset.
Thanks and regards,
Please find the presentation slides accompanying the talk here.
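The privacy–utility tradeoff mentioned in the abstract can be illustrated with a toy mechanism. The sketch below is my own construction for intuition only (a simple additive-noise release, not the adversarial-network mechanism from the talk): more noise weakens an adversary guessing a private bit, but also distorts the released data.

```python
import random

random.seed(0)

def make_data(n=5000):
    # Each record has a private bit s; the observed value x leaks s
    # through its mean (+1 if s = 1, -1 if s = 0).
    data = []
    for _ in range(n):
        s = random.randint(0, 1)
        x = (2 * s - 1) + random.gauss(0, 1)
        data.append((s, x))
    return data

def evaluate(data, noise):
    # Release z = x + Gaussian noise; the adversary guesses s = 1 iff z > 0.
    correct, err = 0, 0.0
    for s, x in data:
        z = x + random.gauss(0, noise)
        guess = 1 if z > 0 else 0
        correct += (guess == s)
        err += (z - x)**2
    privacy_leak = correct / len(data)   # adversary accuracy (lower = more private)
    utility_loss = err / len(data)       # mean squared distortion (lower = more useful)
    return privacy_leak, utility_loss

data = make_data()
for noise in (0.0, 1.0, 4.0):
    leak, loss = evaluate(data, noise)
    print(f"noise={noise}: adversary accuracy={leak:.2f}, distortion={loss:.2f}")
```

Sweeping the noise level traces out the tradeoff curve; the talk's point is that an adversarially trained release mechanism can approach the best achievable curve rather than this crude one.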
12th January, Friday (tomorrow), 12pm-1pm.
2222, Coover Hall.
We’re also going to arrange for some refreshments! Join us!
A summary post on major themes and takeaways from NIPS 2017, by Gauri Jagatap: NIPS 2017: Themes and Takeaways (click on post title to open).