Idea Seminar

MLOPT Idea Seminar is an informal, student-led seminar series in which junior researchers from around the world are invited to share their recent ideas and findings. To bring students closer together and engage them in discussion, the series offers a relaxed platform where students can freely present half-baked papers or ideas and get feedback. The biweekly seminar covers a variety of recent research topics, with an emphasis on the intersection of Machine Learning and Optimization.

Upcoming Events

Nov. 26, 2021: Thanksgiving!

No event.

Dec. 10, 2021: Angeliki Giannou (UW-Madison)

  • Title: Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information
  • Abstract: In this work, we examine the Nash equilibrium convergence properties of no-regret learning in general N-player games. For concreteness, we focus on the archetypal follow the regularized leader (FTRL) family of algorithms, and we consider the full spectrum of uncertainty that the players may encounter – from noisy, oracle-based feedback, to bandit, payoff-based information. In this general context, we establish a comprehensive equivalence between the stability of a Nash equilibrium and its support: a Nash equilibrium is stable and attracting with arbitrarily high probability if and only if it is strict (i.e., each equilibrium strategy has a unique best response).
  • Short Bio: Angeliki is a first-year Ph.D. student at UW-Madison, advised by Prof. Dimitris Papailiopoulos. Prior to that, she received Bachelor's and M.Sc. degrees in ECE from the National Technical University of Athens, where she worked with Prof. Panayotis Mertikopoulos. Her research interests involve machine learning theory and optimization.
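
For readers new to follow-the-regularized-leader (FTRL), here is a minimal sketch (our own illustration, not part of the speaker's abstract) of its entropic-regularizer instance, i.e. exponential weights, for a single player with a finite action set. The fixed payoff vector stands in for whatever feedback model is available (exact, noisy oracle, or a bandit estimate), and all names and parameter values are hypothetical.

```python
import numpy as np

# FTRL with entropic regularization: the "choice map" from cumulative
# (estimated) payoffs to a mixed strategy is simply a softmax.
def ftrl_strategy(cumulative_payoffs, eta=0.1):
    """Map cumulative payoff estimates to a point on the simplex."""
    scores = eta * np.asarray(cumulative_payoffs, dtype=float)
    scores -= scores.max()              # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

# Toy usage: two actions, repeated play against a fixed payoff signal.
y = np.zeros(2)                         # cumulative payoff estimates
for t in range(100):
    x = ftrl_strategy(y)                # current mixed strategy
    v_hat = np.array([1.0, 0.5])        # stand-in payoff feedback
    y += v_hat                          # FTRL aggregates past payoffs
print(x)  # mass concentrates on the strictly better action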

Dec. 24, 2021: Christmas Eve 🎄

No event.

Jan. 7, 2022: Hongyi Wang (CMU)

Jan. 21, 2022: Jy-yong Sohn (UW-Madison)

Feb. 4, 2022: TBD

Feb. 18, 2022: TBD

Mar. 4, 2022: TBD

Mar. 18, 2022: Yae Jee Cho (CMU)

Past Talks

Nov. 12, 2021: Chulhee Yun (MIT)

  • Links: [video] [arxiv]
  • Title: Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond
  • Abstract: In distributed learning, local SGD (also known as federated averaging) and its simple baseline minibatch SGD are widely studied optimization methods. Most existing analyses of these methods assume independent and unbiased gradient estimates obtained via with-replacement sampling. In contrast, we study shuffling-based variants: minibatch and local Random Reshuffling, which draw stochastic gradients without replacement and are thus closer to practice. For smooth functions satisfying the Polyak-Łojasiewicz condition, we obtain convergence bounds (in the large epoch regime) which show that these shuffling-based variants converge faster than their with-replacement counterparts. Moreover, we prove matching lower bounds showing that our convergence analysis is tight. Finally, we propose an algorithmic modification called synchronized shuffling that leads to convergence rates faster than our lower bounds in near-homogeneous settings. Joint work with Shashank Rajput (UW-Madison) and Suvrit Sra (MIT).
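
As a quick illustration of the shuffling-based sampling discussed in the abstract (our own sketch, not part of the talk), the following toy least-squares example runs minibatch SGD with Random Reshuffling: the data are permuted once per epoch and each sample is used exactly once. All names and values (A, b, lr, n_epochs, batch_size) are hypothetical, chosen only for illustration; the final comment shows how with-replacement minibatch SGD would differ.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 4))            # toy design matrix
b = rng.normal(size=64)                 # toy targets
x = np.zeros(4)                         # parameters
lr, n_epochs, batch_size = 0.01, 50, 8

for epoch in range(n_epochs):
    perm = rng.permutation(len(b))      # reshuffle once per epoch
    for start in range(0, len(b), batch_size):
        idx = perm[start:start + batch_size]   # without replacement
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
        x -= lr * grad

# With-replacement minibatch SGD would instead draw
# idx = rng.integers(len(b), size=batch_size) at every step.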

Organizers: Jy-yong Sohn and Yuchen Zeng

Get the latest MLOPT Idea Seminar news: join the MLOPT Idea Seminar mailing list on Google Groups (you must have a Google account to join).