Fall 2022 SIP Seminars

Prof. Cynthia Rush

Title: On the Robustness to Misspecification of α-Posteriors and Their Variational Approximations

Abstract: Variational inference (VI) is a machine learning technique that approximates difficult-to-compute probability densities through optimization. While VI has been used in numerous applications, it is particularly useful in Bayesian statistics, where one wishes to perform statistical inference about unknown parameters through calculations on a posterior density. In this talk, I will review the core concepts of VI and introduce some new ideas about VI and robustness to model misspecification. In particular, we will study α-posteriors, which distort standard posterior inference by downweighting the likelihood (raising it to a fractional power α), and their variational approximations. We will see that such distortions, if tuned appropriately, can outperform standard posterior inference when there is potential parametric model misspecification.
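
To make the α-posterior construction concrete, here is a minimal Python sketch (an illustration of the general idea, not code from the talk) for a conjugate Gaussian location model, where tempering the likelihood by α simply rescales the effective sample size; all parameter values below are illustrative choices:

    import numpy as np

    def alpha_posterior(x, alpha, prior_mean=0.0, prior_var=1.0, noise_var=1.0):
        # Model: x_i ~ N(theta, noise_var) with prior theta ~ N(prior_mean, prior_var).
        # The alpha-posterior is proportional to likelihood**alpha * prior, which in
        # this conjugate model is again Gaussian, with effective sample size alpha * n.
        n = len(x)
        precision = 1.0 / prior_var + alpha * n / noise_var
        mean = (prior_mean / prior_var + alpha * np.sum(x) / noise_var) / precision
        return mean, 1.0 / precision

    rng = np.random.default_rng(0)
    data = rng.normal(loc=2.0, scale=1.0, size=50)
    for a in (1.0, 0.5, 0.1):  # alpha = 1 recovers the standard posterior
        m, v = alpha_posterior(data, a)
        print(f"alpha={a}: posterior mean {m:.3f}, variance {v:.4f}")

Downweighting (α < 1) inflates the posterior variance, and the talk examines how this extra spread, tuned appropriately, can protect inference when the likelihood is misspecified.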

Biography: Cynthia Rush is the Howard Levene Assistant Professor of Statistics in the Department of Statistics at Columbia University. In May 2016, she received a Ph.D. in Statistics from Yale University under the supervision of Andrew Barron, and she completed her undergraduate coursework at the University of North Carolina at Chapel Hill, where she obtained a B.S. in Mathematics. She received an NSF CRII award in 2019, was an NTT Research Fellow at the Simons Institute for the Theory of Computing for the program on Probability, Computation, and Geometry in High Dimensions in Fall 2020, and was a Google Research Fellow at the Simons Institute for the Theory of Computing for the program on Computational Complexity of Statistical Inference in Fall 2021.

Prof. Dileep Kalathil

Title: Reinforcement Learning with Robustness and Safety Guarantees

Abstract: Reinforcement Learning (RL) is the branch of machine learning that addresses the problem of learning to control unknown dynamical systems. RL has recently achieved remarkable success in applications such as game playing and robotics. However, most of these successes are limited to very structured or simulated environments. When applied to real-world systems, RL algorithms face two fundamental sources of fragility. First, the real-world system parameters can be very different from the nominal values used for training RL algorithms. Second, the control policy for any real-world system must maintain necessary safety criteria to avoid undesirable outcomes. Most deep RL algorithms overlook these fundamental challenges, which often results in learned policies that perform poorly in real-world settings. In this talk, I will present two approaches to overcome these challenges. First, I will present an RL algorithm that is robust against the parameter mismatches between the simulation system and the real-world system. Second, I will discuss a safe RL algorithm that learns policies for which the frequency of visiting undesirable states and taking expensive actions satisfies the safety constraints. I will also briefly discuss some practical challenges arising from sparse reward feedback and the need for rapid real-time adaptation in real-world systems, along with approaches to overcome them.
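
As background for the first approach, the following sketch shows the generic robust dynamic-programming idea (a standard textbook construction, not the speaker's algorithm): each value-iteration backup takes the worst case over a small set of candidate transition models rather than trusting a single nominal model. The toy problem and all numbers are illustrative:

    import numpy as np

    def robust_value_iteration(rewards, transition_models, gamma=0.9, iters=200):
        # rewards: (S, A) array; transition_models: list of (S, A, S) arrays
        # forming a finite uncertainty set around the nominal dynamics.
        S, A = rewards.shape
        V = np.zeros(S)
        for _ in range(iters):
            # Q[m, s, a] = r(s, a) + gamma * E_{s' ~ P_m}[V(s')] for each model m
            Q = np.stack([rewards + gamma * (P @ V) for P in transition_models])
            V = Q.min(axis=0).max(axis=1)  # worst case over models, best over actions
        return V

    # Tiny 2-state, 2-action example: a nominal model plus one perturbed model.
    r = np.array([[1.0, 0.0], [0.0, 1.0]])
    P_nominal = np.full((2, 2, 2), 0.5)
    P_shifted = np.array([[[0.9, 0.1], [0.2, 0.8]],
                          [[0.3, 0.7], [0.6, 0.4]]])
    print(robust_value_iteration(r, [P_nominal, P_shifted]))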

Biography: Dileep Kalathil is an Assistant Professor in the Department of Electrical and Computer Engineering at Texas A&M University (TAMU). His main research area is reinforcement learning theory and algorithms, and their applications in communication networks and power systems. Before joining TAMU, he was a postdoctoral researcher in the EECS department at UC Berkeley. He received his Ph.D. from the University of Southern California (USC) in 2014, where he won the Best Ph.D. Dissertation Prize in the Department of Electrical Engineering. He received his M.Tech. from IIT Madras, where he won the award for best academic performance in the Electrical Engineering Department. He received the NSF CRII Award in 2019 and the NSF CAREER award in 2021. He is a senior member of IEEE.

Prof. Farzad Yousefian

Title: Distributed Multi-Agent Optimization for Hierarchical Problems

Abstract: We present new mathematical models and tractable algorithms for addressing hierarchy in multi-agent optimization, along the following two avenues:

(i) Distributed computation of the best equilibrium: In noncooperative Nash games, equilibria are known to be inefficient. This is exemplified by the Prisoner’s Dilemma and was first provably shown in the 1980s. Since then, understanding the quality of a Nash equilibrium (NE) has received considerable attention, leading to the emergence of inefficiency measures such as the Price of Anarchy and the Price of Stability; the latter is characterized in terms of the best NE. Our goal is to develop among the first single-timescale distributed optimization methods over networks for computing the best NE. We devise a class of distributed and randomized iteratively regularized gradient algorithms equipped with provable performance guarantees, and validate them numerically on examples from game theory and transportation networks (a schematic sketch of the iteratively regularized idea appears after item (ii)).

(ii) Stochastic Mathematical Programs with Equilibrium Constraints (MPECs): We consider stochastic variants of MPECs, which find broad applicability in engineering and operations research. Despite nearly three decades of research, there are no efficient first/zeroth-order schemes equipped with non-asymptotic rate guarantees for resolving these problems. We develop a class of randomized zeroth-order smoothing-enabled methods and derive among the first complexity guarantees for solving stochastic MPECs, in both single-stage and two-stage settings. Preliminary numerics suggest that the new schemes scale with problem size and provide solutions of similar accuracy in a fraction of the time taken by existing methods.
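
For avenue (i), here is the iteratively regularized idea in its simplest centralized, deterministic form (a caricature for illustration; the talk's methods are distributed and randomized over networks, and the step-size choices below are one standard diminishing schedule): gradient steps on the game map are combined with a vanishing regularization that steers the iterates toward the equilibrium minimizing a secondary selection objective.

    import numpy as np

    def iteratively_regularized_gradient(F, grad_f, x0, steps=5000):
        # Zeros of the game map F are the Nash equilibria; the vanishing term
        # eta_k * grad_f(x) biases the limit toward the equilibrium minimizing
        # the secondary objective f. eta_k must decay more slowly than gamma_k.
        x = np.asarray(x0, dtype=float)
        for k in range(1, steps + 1):
            gamma_k = 0.5 / k**0.75  # step size
            eta_k = 1.0 / k**0.25    # regularization weight
            x = x - gamma_k * (F(x) + eta_k * grad_f(x))
        return x

    # Toy monotone map with a whole line of equilibria {x1 = x2}; the selection
    # objective f(x) = ||x||^2 / 2 picks out the equilibrium nearest the origin.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    print(iteratively_regularized_gradient(lambda x: A @ x, lambda x: x, [3.0, 1.0]))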
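For avenue (ii), this sketch shows the core zeroth-order smoothing estimator (a generic construction from the smoothing literature, with illustrative parameter choices and a toy objective): gradients of a smoothed surrogate are estimated from function evaluations alone, which is what makes such schemes applicable when derivatives of the objective are unavailable.

    import numpy as np

    def zo_gradient_estimate(f, x, mu, rng):
        # Two-point randomized estimator: an unbiased gradient of the smoothed
        # surrogate f_mu(x) = E_u[f(x + mu * u)], u ~ N(0, I), built from
        # function values only -- no derivatives of f are needed.
        u = rng.standard_normal(x.shape)
        return (f(x + mu * u) - f(x)) / mu * u

    # Nonsmooth toy objective, minimized at (1, -2); MPEC objectives exhibit a
    # similar nonsmoothness, which is what motivates smoothing-enabled schemes.
    f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
    rng = np.random.default_rng(0)
    x = np.zeros(2)
    for k in range(1, 5001):
        x -= (0.5 / np.sqrt(k)) * zo_gradient_estimate(f, x, mu=1e-2, rng=rng)
    print(x)  # approaches (1, -2)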

Biography: Farzad Yousefian is an Assistant Professor in the Department of Industrial and Systems Engineering at Rutgers University. Prior to joining Rutgers, he was an Assistant Professor from 2015 to 2021 and a tenured Associate Professor from 2021 to 2022 at Oklahoma State University (OSU). Prior to that, he was a Postdoctoral Researcher in the Harold and Inge Marcus Department of Industrial and Manufacturing Engineering at the Pennsylvania State University from 2014 to 2015. He received his Ph.D. in Industrial Engineering from the University of Illinois at Urbana-Champaign in 2013. He obtained his B.Sc. and M.Sc. degrees in Industrial Engineering from Sharif University of Technology in 2006 and 2008, respectively. His research interests lie in distributed optimization in multi-agent networks, stochastic and large-scale optimization, nonconvex optimization, hierarchical optimization, variational inequalities, computational game theory, and applications in machine learning and transportation systems. His research has been funded by the National Science Foundation (NSF) Faculty Early Career Development (CAREER) award, the Office of Naval Research (ONR), and the Department of Energy (DOE). He is a recipient (jointly with his co-authors) of the Best Theoretical Paper award at the 2013 Winter Simulation Conference (WSC).

Prof. Chinmay Hegde

Title: Sparsity for Free

Abstract: Inverse problems in signal processing are often ill-posed, with far more parameters than samples. The canonical remedy has been to pose the solution as the minimizer of a least-squares optimization problem coupled with an appropriate sparsity-promoting regularizer. Such regularizers abound in the literature, each leading to its own minimization algorithm, often with near-optimal guarantees.

I will outline an alternate approach where a single, ubiquitous algorithm — standard gradient descent over the squared error loss — can be applied to solve inverse problems without having to worry about regularizers. Central to these approaches are new re-parameterizations (or architectures) whose loss landscapes naturally promote sparse representations. However, due to non-convexity in the landscapes, the gradient dynamics have to be carefully studied, and early stopping is key. I will also show how to extend this to achieve richer structured regularization effects beyond standard sparsity via compositional architectures, highlighting connections to 2D CNNs.

This is joint work with Jiangyuan Li, Thanh Nguyen, and Raymond Wong.
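
As a concrete illustration of this theme, here is a minimal sketch using one standard reparameterization from this literature, w = u ⊙ u − v ⊙ v (a choice made here for illustration; the talk covers richer architectures, and the problem sizes and hyperparameters below are arbitrary): plain gradient descent on the unregularized squared loss, with small initialization and early stopping supplying the sparsity-promoting effect.

    import numpy as np

    # Sparse linear inverse problem: y = X @ w_star + noise, w_star mostly zeros.
    rng = np.random.default_rng(0)
    n, d, s = 50, 200, 5
    X = rng.standard_normal((n, d)) / np.sqrt(n)
    w_star = np.zeros(d)
    w_star[:s] = rng.standard_normal(s)
    y = X @ w_star + 0.01 * rng.standard_normal(n)

    # Reparameterize w = u*u - v*v and run plain gradient descent on the plain
    # squared loss: small initialization plus early stopping play the role of
    # the sparsity-promoting regularizer.
    alpha, lr = 1e-4, 0.02            # initialization scale and step size
    u = alpha * np.ones(d)
    v = alpha * np.ones(d)
    for step in range(10000):         # in practice, stop when validation error bottoms out
        w = u * u - v * v
        grad_w = X.T @ (X @ w - y)    # gradient of 0.5 * ||X @ w - y||**2 w.r.t. w
        u, v = u - lr * 2 * u * grad_w, v + lr * 2 * v * grad_w  # chain rule through w
    w = u * u - v * v
    print("relative recovery error:",
          np.linalg.norm(w - w_star) / np.linalg.norm(w_star))

No explicit regularizer appears anywhere; the geometry of the reparameterized loss landscape, the small initialization, and the stopping time together determine how sparse the recovered w is.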

Biography: Chinmay Hegde is an Associate Professor at NYU, jointly appointed with the CSE and ECE Departments. His research focuses on foundational aspects of machine learning (such as reliability, robustness, efficiency, and privacy), along with applications in computational imaging, materials design, and cybersecurity. He is a recipient of the NSF CAREER and CRII awards, the Black and Veatch Faculty Fellowship, multiple teaching awards, and best paper awards at ICML, SPARS, and MMLS.

Prof. David Zald

Title: Challenges and Decisions in Processing Functional MRI Data

Abstract: The use of functional MRI to measure blood oxygen level dependent (BOLD) signal has revolutionized our understanding of brain function. However, fMRI data pose numerous challenges because meaningful BOLD signal changes are small relative to other influences on the MRI signal. Multiple processing pipelines have been developed to improve the sensitivity, accuracy, and reliability of fMRI activations. This talk provides an introduction to some of the critical steps and continuing challenges in processing fMRI data.
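
As a toy illustration of why such processing matters (not an example from the talk, and with entirely synthetic numbers): the sketch below simulates a single voxel whose task-related BOLD change is dwarfed by slow scanner drift, then applies a simple polynomial detrending step, one of many operations a real pipeline performs.

    import numpy as np

    # Simulate a voxel time series: small task-locked BOLD effect + large drift.
    rng = np.random.default_rng(0)
    TR, n_vols = 2.0, 200                          # repetition time (s), volumes
    t = np.arange(n_vols) * TR
    task = np.tile([0.0] * 10 + [1.0] * 10, 10)    # 20-volume on/off block design
    ts = 100 + 1.0 * task + 0.05 * t + rng.normal(0, 0.5, n_vols)

    # Remove slow drift by regressing out low-order polynomial trends, a common
    # step (real pipelines also handle motion, slice timing, distortion, etc.).
    design = np.vander(t / t.max(), N=3)           # quadratic, linear, constant columns
    beta, *_ = np.linalg.lstsq(design, ts, rcond=None)
    detrended = ts - design @ beta
    print("correlation with task: %.2f before, %.2f after"
          % (np.corrcoef(ts, task)[0, 1], np.corrcoef(detrended, task)[0, 1]))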

Biography: David Zald, Ph.D. is the inaugural director of the Center for Advanced Human Brain Imaging Research (CAHBIR) and a Henry Rutgers professor of psychiatry in the Robert Wood Johnson Medical School. Initially trained in clinical neuropsychology and psychological assessment, he has been conducting neuroimaging studies using PET and MRI for over a quarter century. His work focuses on the neural and neuropharmacological substrates of emotion and cognition, and the manner in which individual differences in the functioning of these systems impact psychopathology and other maladaptive traits. Prior to coming to Rutgers in 2020, he served as the Cornelius Vanderbilt Professor of Psychology and Psychiatry, directed Vanderbilt’s undergraduate neuroscience major, and was an associate director of the Vanderbilt Brain Institute. He has published over 190 papers and book chapters and is a fellow of both the American Association for the Advancement of Science and the Association for Psychological Science.