Welcome to the Signal and Information Processing Seminar Series at Rutgers!

The SIP Seminar Series at Rutgers University–New Brunswick brings together a diverse group of researchers, both from within and outside Rutgers, on an (approximately) biweekly basis to discuss recent advances in signal and information processing. The term “Signal and Information Processing” as used within the SIP Seminars is rather broad and subsumes signal processing, image processing, statistical inference, machine learning, computer vision, control theory, harmonic analysis, information theory, etc.

Seminar Mailing List: You can subscribe to the SIP Seminars mailing list by sending an email to ECE_SIP-request@email.rutgers.edu with SUBSCRIBE in the subject of the email.

Fall 2020 Seminar Schedule: The SIP Seminars in Fall 2020 will take place virtually on the following dates (Wednesdays) at 2 pm on Zoom: September 23; October 7, 14, 21, and 28; November 4, 11, and 18; and December 2.

Fall 2020 SIP Seminars

Prof. Ermin Wei

Title: Robust and Flexible Distributed Optimization Algorithms for Networked Systems

Abstract: In this talk, we present a framework of distributed optimization algorithms for the problem of minimizing a sum of convex objective functions that are locally available to agents in a network. This setup is motivated by federated machine learning applications. Distributed optimization algorithms make it possible for the agents to cooperatively solve the problem through local computations and communication with neighbors. The algorithms we propose provably converge linearly to the optimal solution, tolerate communication and computation noise, and adapt to various hardware environments with different computation and communication capabilities.
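To make the problem setup concrete, below is a minimal sketch of decentralized gradient descent (DGD), a standard textbook baseline for this problem class. It is not the speaker's algorithm; the synthetic least-squares objectives, ring topology, step size, and iteration count are all illustrative assumptions.

```python
# Minimal DGD sketch: each agent i holds a local convex cost
# f_i(x) = 0.5 * ||A_i x - b_i||^2 and the network cooperatively
# minimizes sum_i f_i(x) via local gradient steps plus averaging
# with neighbors. Illustrative only, not the talk's algorithms.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Synthetic local least-squares data for each agent.
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(10) for _ in range(n_agents)]

# Doubly stochastic mixing matrix for a ring network:
# W[i, j] > 0 only if agents i and j are neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))   # one local iterate per agent
step = 0.01
for t in range(2000):
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i])
                      for i in range(n_agents)])
    x = W @ x - step * grads    # consensus averaging + gradient step

# Compare against the centralized least-squares solution.
A_all, b_all = np.vstack(A), np.concatenate(b)
x_star, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)
print("max deviation from centralized solution:",
      np.abs(x - x_star).max())
```

With a constant step size, DGD only converges to a neighborhood of the optimum; achieving the exact linear convergence and noise robustness described in the abstract is precisely what motivates the more refined algorithms of the talk.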

Biography: Ermin Wei is currently an Assistant Professor in the Electrical and Computer Engineering Department and the Industrial Engineering and Management Sciences Department at Northwestern University. She completed her PhD in Electrical Engineering and Computer Science at MIT in 2014, advised by Professor Asu Ozdaglar, where she also obtained her M.S. She received her undergraduate triple degree in Computer Engineering, Finance, and Mathematics, with a minor in German, from the University of Maryland, College Park. She has received many awards, including the Graduate Women of Excellence Award and the second-place prize for the Ernst A. Guillemin Thesis Award. Her team won second place in the 2019 GO Competition, an electricity grid optimization competition organized by the Department of Energy. Wei’s research interests include distributed optimization methods, convex optimization and analysis, smart grids, communication systems, energy networks, and market economic analysis.

Prof. Dror Baron

Title: Pooled Coronavirus Testing with Side Information

Abstract: We recast a pooled coronavirus testing problem as a noisy linear inverse problem. Generalized approximate message passing (GAMP) is an iterative framework that solves linear inverse problems with different types of noise and structured unknown inputs while achieving best-possible estimation quality. We map pooled testing to GAMP by incorporating the noise mechanism of genetic PCR tests, and we consider different models for the illness status of patients. A model where illness is independent and identically distributed (i.i.d.) yields results for non-adaptive pooled testing similar to other state-of-the-art approaches. When side information (SI) allows us to treat patients’ illness status as non-i.i.d., a 40–50% reduction in the number of tests is achieved. Our vision for coronavirus testing involves using SI in the form of patients’ symptoms, medical history, social structure, and contact tracing to greatly improve testing efficiency.
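To illustrate the inverse-problem framing, here is a toy sketch of the pooled-testing measurement model y = Ax + noise. The i.i.d. Bernoulli prior, pool assignments, and noise level are illustrative assumptions, and plain iterative soft thresholding (ISTA) stands in for the GAMP recovery algorithm and PCR noise model from the talk.

```python
# Toy pooled-testing model: each pool's readout is a noisy linear
# combination of the viral loads of the patients assigned to it.
# ISTA is used here only as a simple stand-in sparse solver; the
# talk's approach (GAMP with PCR noise and side information) differs.
import numpy as np

rng = np.random.default_rng(1)
n_patients, n_pools, prevalence = 100, 30, 0.05

# Illness status: i.i.d. Bernoulli(prevalence); x holds viral loads.
ill = rng.random(n_patients) < prevalence
x = np.where(ill, rng.uniform(1.0, 2.0, n_patients), 0.0)

# Pooling matrix: each patient's sample is split across 3 random pools.
A = np.zeros((n_pools, n_patients))
for j in range(n_patients):
    A[rng.choice(n_pools, size=3, replace=False), j] = 1.0

y = A @ x + 0.01 * rng.standard_normal(n_pools)  # noisy pooled readouts

# Stand-in sparse recovery via ISTA (iterative soft thresholding).
lam, L = 0.05, np.linalg.norm(A, 2) ** 2
x_hat = np.zeros(n_patients)
for _ in range(500):
    z = x_hat - (A.T @ (A @ x_hat - y)) / L
    x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("true positives   :", np.flatnonzero(ill))
print("flagged patients :", np.flatnonzero(x_hat > 0.5))
```

Note that 30 pooled measurements suffice to screen 100 patients here because the unknown is sparse; side information that sharpens the prior further is what drives the additional test reductions reported in the abstract.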

Biography: Dror Baron received the B.Sc. (summa cum laude) and M.Sc. degrees from the Technion – Israel Institute of Technology, Haifa, Israel, in 1997 and 1999, respectively, and the Ph.D. degree from the University of Illinois at Urbana-Champaign in 2003, all in electrical engineering. Since 2010, Baron has been with the Electrical and Computer Engineering Department at North Carolina State University, where he is currently an Associate Professor. Dr. Baron’s research interests combine information theory, sparse signal processing, and fast algorithms.

Prof. Mengdi Wang

Title: Data-Efficient Reinforcement Learning in Metric and Feature Space

Abstract: In recent years, reinforcement learning (RL) systems with general goals beyond a cumulative sum of rewards have gained traction, arising, for example, in constrained problems, exploration, and acting upon prior experience. In this talk, we consider policy optimization in Markov decision problems, where the objective is a general concave utility function of the state-action occupancy measure, which subsumes several of the aforementioned examples as special cases. Such generality invalidates the Bellman equation; since dynamic programming no longer applies, we focus on direct policy search. Analogously to the Policy Gradient Theorem (Sutton et al., 2000) available for RL with cumulative rewards, we derive a new Variational Policy Gradient Theorem for RL with general utilities, which establishes that the parametrized policy gradient may be obtained as the solution of a stochastic saddle point problem involving the Fenchel dual of the utility function. We develop a variational Monte Carlo gradient estimation algorithm to compute the policy gradient based on sample paths. We prove that the variational policy gradient scheme converges globally to the optimal policy for the general objective, even though the optimization problem is nonconvex. We also establish its convergence rate of order O(1/t) by exploiting the hidden convexity of the problem, and prove that it converges exponentially when the problem admits hidden strong convexity. Our analysis applies to the standard RL problem with cumulative rewards as a special case, in which case our result improves the available convergence rate.
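As a schematic of the saddle-point structure mentioned above (notation simplified; the talk gives the precise statement), the Fenchel-dual reformulation of a concave utility F of the occupancy measure reads:

```latex
% Schematic only: \lambda^{\pi_\theta} denotes the state-action
% occupancy measure of policy \pi_\theta, F the concave utility,
% and F^* its concave Fenchel conjugate.
\[
  \max_{\theta} \; F\bigl(\lambda^{\pi_\theta}\bigr)
  \;=\;
  \max_{\theta} \, \min_{z} \;
  \Bigl[ \bigl\langle z, \lambda^{\pi_\theta} \bigr\rangle - F^{*}(z) \Bigr],
  \qquad
  F^{*}(z) \;=\; \inf_{\lambda} \bigl[ \langle z, \lambda \rangle - F(\lambda) \bigr].
\]
```

The inner linear term is an ordinary cumulative reward with per-step reward z, so its gradient in θ follows from the classical policy gradient theorem; this is what makes the policy gradient of a general utility estimable from sample paths.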

Biography: Mengdi Wang is an associate professor in the Department of Electrical Engineering and the Center for Statistics and Machine Learning at Princeton University. She is also affiliated with the Department of Operations Research and Financial Engineering and the Department of Computer Science. Her research focuses on data-driven stochastic optimization and applications in machine learning and reinforcement learning. She received her PhD in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2013. At MIT, Mengdi was affiliated with the Laboratory for Information and Decision Systems and was advised by Dimitri P. Bertsekas. Mengdi received the Young Researcher Prize in Continuous Optimization from the Mathematical Optimization Society in 2016 (awarded once every three years), the Princeton SEAS Innovation Award in 2016, the NSF CAREER Award in 2017, the Google Faculty Award in 2017, and the MIT Tech Review 35-Under-35 Innovation Award (China region) in 2018. She serves as an associate editor for Operations Research and Mathematics of Operations Research, as an area chair for ICML, NeurIPS, and AISTATS, and is on the editorial board of the Journal of Machine Learning Research. Her research is supported by NSF, NIH, AFOSR, Google, Microsoft, C3.ai DTI, and FinUP.

Prof. Suvrit Sra

Title: Accelerated Gradient Methods on Riemannian Manifolds

Abstract: This talk lies at the interface of geometry and optimization. In particular, I’ll talk about geodesically convex optimization problems, a vast class of non-convex optimization problems that admit tractable global optimization. I’ll provide some background on this class and some canonical examples as motivation. The bulk of the talk thereafter will be devoted to a recent, long-sought result of potentially fundamental value: an accelerated gradient method for Riemannian manifolds. Toward developing this method, we will revisit Nesterov’s (Euclidean) estimate sequence technique and develop a conceptually simple alternative. We build on this alternative to obtain new results in the Riemannian setting. We localize the key difficulty into “metric distortion,” which we then control carefully to obtain the first (global) accelerated Riemannian-gradient method.
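To fix ideas, here is a minimal sketch of plain (unaccelerated) Riemannian gradient descent on the unit sphere, showing the tangent-space projection and exponential map that any Riemannian method, accelerated or not, is built from. The Rayleigh-quotient objective, step size, and iteration count are illustrative assumptions; this is the baseline the talk's method improves upon, not the method itself.

```python
# Riemannian gradient descent on the sphere, minimizing the Rayleigh
# quotient f(x) = x^T M x over ||x|| = 1, whose minimizer is the
# eigenvector of the smallest eigenvalue of M. Illustrative baseline only.
import numpy as np

rng = np.random.default_rng(2)
n = 20
B = rng.standard_normal((n, n))
M = B @ B.T                      # symmetric PSD test matrix

def riem_grad(x):
    """Project the Euclidean gradient onto the tangent space at x."""
    g = 2.0 * (M @ x)            # Euclidean gradient of x^T M x
    return g - (x @ g) * x       # remove the component normal to the sphere

def exp_map(x, v):
    """Exponential map on the sphere: move along the geodesic from x."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
step = 0.5 / np.linalg.norm(M, 2)   # conservative step size
for _ in range(500):
    x = exp_map(x, -step * riem_grad(x))

print("achieved value :", x @ M @ x)
print("smallest eigval:", np.linalg.eigvalsh(M).min())
```

Accelerating this iteration is exactly where the “metric distortion” of the abstract enters: the momentum combinations in Nesterov’s method involve comparing vectors at different points, which is trivial in Euclidean space but requires careful control on a curved manifold.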

Biography: Suvrit Sra is an Associate Professor of EECS at MIT, a core member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS), as well as a member of the MIT Machine Learning and Statistics groups. He obtained his PhD in Computer Science from the University of Texas at Austin. Before moving to MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems, Tübingen, Germany. His research bridges mathematical areas such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization with machine learning. He founded the OPT (Optimization for Machine Learning) series of workshops, held from 2008 to 2017 at the NeurIPS (erstwhile NIPS) conference, and has co-edited a book of the same name (MIT Press, 2011). He is also a co-founder and chief scientist of macro-eyes, a global healthcare+AI startup.

Prof. Usman Khan

Title:

Abstract:

Biography:

Prof. Salman Avestimehr

Title:

Abstract:

Biography:

Prof. Linjun Zhang

Title:

Abstract:

Biography:

Ghadir Ayache

Title:

Abstract:

Biography: