Welcome to the Signal and Information Processing Seminar Series at Rutgers!

The SIP Seminar Series at Rutgers University–New Brunswick brings together a diverse group of researchers, both from within and outside Rutgers, on an (approximately) biweekly basis to discuss recent advances in signal and information processing. The term “Signal and Information Processing,” as used within the SIP Seminars, is broad in scope and subsumes signal processing, image processing, statistical inference, machine learning, computer vision, control theory, harmonic analysis, information theory, etc.

Seminar Mailing List: You can subscribe to the SIP Seminars mailing list by sending an email to ECE_SIP-request@email.rutgers.edu with SUBSCRIBE in the subject of the email.

Fall 2020 Seminar Schedule: The SIP Seminars in Fall 2020 will take place virtually on the following dates (Wednesdays) at 2 pm on Zoom: September 23; October 7, 14, and 21; November 4 and 18; and December 2.

Fall 2020 SIP Seminars

Prof. Ermin Wei

Title: Robust and Flexible Distributed Optimization Algorithms for Networked Systems

Abstract: In this talk, we present a framework of distributed optimization algorithms for minimizing a sum of convex objective functions that are locally available to agents in a network. This setup is motivated by federated machine learning applications. Distributed optimization algorithms make it possible for the agents to cooperatively solve the problem through local computations and communications with neighbors. The algorithms we propose provably converge linearly to the optimal solution, tolerate communication and computation noise, and adapt to various hardware environments with different computation and communication capabilities.
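The consensus-plus-gradient structure underlying such methods can be illustrated with a minimal sketch (a generic decentralized gradient descent, not the speaker's specific algorithms). The local objectives f_i(x) = (x - a_i)^2 / 2 and the ring network below are hypothetical choices for illustration; the network-wide optimum is then the average of the a_i.

```python
import numpy as np

def decentralized_gd(a, W, alpha=0.01, iters=3000):
    """Decentralized gradient descent for f(x) = sum_i (x - a_i)^2 / 2.

    Each agent i holds local data a_i; W is a doubly stochastic mixing
    matrix matching the communication graph. Agents converge to a
    neighborhood of the global optimum mean(a), shrinking with alpha.
    """
    x = a.copy()  # each agent starts from its local data
    for _ in range(iters):
        grad = x - a              # local gradients, computed in parallel
        x = W @ x - alpha * grad  # average with neighbors, then descend
    return x

# Four agents on a ring; each agent averages with its two neighbors.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
a = np.array([0.0, 1.0, 2.0, 3.0])
x = decentralized_gd(a, W)  # all entries close to mean(a) = 1.5
```

Only neighbor-to-neighbor products W @ x are used, which is what makes the scheme implementable over a network.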

Biography: Ermin Wei is currently an Assistant Professor in the Electrical and Computer Engineering Department and the Industrial Engineering and Management Sciences Department at Northwestern University. She completed her PhD in Electrical Engineering and Computer Science at MIT in 2014, advised by Professor Asu Ozdaglar, where she also obtained her M.S. She received her undergraduate triple degree in Computer Engineering, Finance, and Mathematics, with a minor in German, from the University of Maryland, College Park. She has received many awards, including the Graduate Women of Excellence Award and the second-place prize for the Ernst A. Guillemin Thesis Award. Her team won 2nd place in the 2019 GO Competition, an electricity grid optimization competition organized by the Department of Energy. Wei’s research interests include distributed optimization methods, convex optimization and analysis, smart grid, communication systems, energy networks, and market economic analysis.

Prof. Dror Baron

Title: Pooled Coronavirus Testing with Side Information

Abstract: We recast a pooled coronavirus testing problem as a noisy linear inverse problem. Generalized approximate message passing (GAMP) is an iterative framework that solves linear inverse problems with different types of noise and structured unknown inputs while achieving best-possible estimation quality. We map pooled testing to GAMP by incorporating genetic PCR tests’ noise mechanism, and we consider different models for the illness status of patients. A model where the illness is independent and identically distributed (i.i.d.) yields results for non-adaptive pooled testing similar to other state-of-the-art approaches. When side information (SI) allows us to treat patients’ illness as non-i.i.d., a 40–50% reduction in the number of tests is achieved. Our vision for coronavirus testing involves using SI in the form of patients’ symptoms, medical history, social structure, and contact tracing to greatly improve testing efficiency.
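As a toy illustration of the pooled-testing measurement model (a noiseless simplification with the classic COMP decoder, not the GAMP approach or the noise model from the talk), each pooled test is positive exactly when its pool contains a sick patient, and anyone who appears in a negative pool can be declared healthy. All sizes and probabilities below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 40, 15          # 40 patients, 15 pooled tests
x = np.zeros(n, bool)  # true illness status (sparse: 2 sick patients)
x[[3, 27]] = True

# Random pooling design: A[j, i] = True if patient i is in pool j.
A = rng.random((m, n)) < 0.25
# A pool tests positive iff it contains at least one sick patient.
y = (A.astype(int) @ x.astype(int)) > 0

# COMP decoding: anyone in a negative pool is definitely healthy;
# everyone else remains flagged as (possibly) positive.
flagged = np.ones(n, bool)
for j in range(m):
    if not y[j]:
        flagged[A[j]] = False
```

By construction, a sick patient only ever appears in positive pools, so COMP never produces a false negative; false positives become unlikely as the number of pools grows relative to the number of sick patients.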

Biography: Dror Baron received the B.Sc. (summa cum laude) and M.Sc. degrees from the Technion – Israel Institute of Technology, Haifa, Israel, in 1997 and 1999, and the Ph.D. degree from the University of Illinois at Urbana-Champaign in 2003, all in electrical engineering. Since 2010, Baron has been with the Electrical and Computer Engineering Department at North Carolina State University, where he is currently an Associate Professor. Dr. Baron’s research interests combine information theory, sparse signal processing, and fast algorithms.

Prof. Mengdi Wang

Title: Data-Efficient Reinforcement Learning in Metric and Feature Space

Abstract: In recent years, reinforcement learning (RL) systems with general goals beyond a cumulative sum of rewards have gained traction, such as in constrained problems, exploration, and acting upon prior experiences. In this work, we consider policy optimization in Markov decision problems, where the objective is a general concave utility function of the state-action occupancy measure, which subsumes several of the aforementioned examples as special cases. Such generality invalidates the Bellman equation. Since this means that dynamic programming no longer works, we focus on direct policy search. Analogously to the Policy Gradient Theorem (Sutton et al., 2000) available for RL with cumulative rewards, we derive a new Variational Policy Gradient Theorem for RL with general utilities, which establishes that the parametrized policy gradient may be obtained as the solution of a stochastic saddle point problem involving the Fenchel dual of the utility function. We develop a variational Monte Carlo gradient estimation algorithm to compute the policy gradient based on sample paths. We prove that the variational policy gradient scheme converges globally to the optimal policy for the general objective, even though the optimization problem is nonconvex. We also establish its rate of convergence, of the order O(1/t), by exploiting the hidden convexity of the problem, and prove that it converges exponentially when the problem admits hidden strong convexity. Our analysis applies to the standard RL problem with cumulative rewards as a special case, in which case our result improves the available convergence rate.
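To make the central object concrete: for a small known MDP, the discounted state-action occupancy measure of a policy can be computed in closed form, after which any concave utility of it, such as its entropy, defines a valid objective in this framework. The two-state, two-action MDP below is hypothetical, and this construction only illustrates the objective, not the variational algorithm from the talk.

```python
import numpy as np

def occupancy_measure(P, pi, mu0, gamma=0.9):
    """Occupancy lambda(s,a) = (1 - gamma) * sum_t gamma^t Pr(s_t=s, a_t=a).

    P[s, a, s'] are transition probabilities, pi[s, a] the policy
    probabilities, and mu0 the initial state distribution.
    """
    S, _ = pi.shape
    # State transition matrix under pi: P_pi[s, s'] = sum_a pi[s,a] P[s,a,s']
    P_pi = np.einsum('sa,sat->st', pi, P)
    # Discounted state occupancy d solves d = (1-gamma) mu0 + gamma P_pi^T d
    d = np.linalg.solve(np.eye(S) - gamma * P_pi.T, (1 - gamma) * mu0)
    return d[:, None] * pi  # lambda(s, a) = d(s) * pi(a|s)

# Hypothetical 2-state, 2-action MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
pi = np.array([[0.7, 0.3], [0.4, 0.6]])
mu0 = np.array([1.0, 0.0])

lam = occupancy_measure(P, pi, mu0)
entropy_utility = -np.sum(lam * np.log(lam))  # one concave utility F(lambda)
```

The cumulative-reward objective is recovered as the linear utility F(lambda) = sum over (s, a) of lambda(s, a) * r(s, a), which is the special case the abstract refers to.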

Biography: Mengdi Wang is an associate professor at the Department of Electrical Engineering and Center for Statistics and Machine Learning at Princeton University. She is also affiliated with the Department of Operations Research and Financial Engineering and Department of Computer Science. Her research focuses on data-driven stochastic optimization and applications in machine and reinforcement learning. She received her PhD in Electrical Engineering and Computer Science from Massachusetts Institute of Technology in 2013. At MIT, Mengdi was affiliated with the Laboratory for Information and Decision Systems and was advised by Dimitri P. Bertsekas. Mengdi received the Young Researcher Prize in Continuous Optimization of the Mathematical Optimization Society in 2016 (awarded once every three years), the Princeton SEAS Innovation Award in 2016, the NSF Career Award in 2017, the Google Faculty Award in 2017, and the MIT Tech Review 35-Under-35 Innovation Award (China region) in 2018. She serves as an associate editor for Operations Research and Mathematics of Operations Research, as area chair for ICML, NeurIPS, AISTATS, and is on the editorial board of Journal of Machine Learning Research. Research supported by NSF, NIH, AFOSR, Google, Microsoft C3.ai DTI, FinUP.

Prof. Suvrit Sra

Title: Accelerated Gradient Methods on Riemannian Manifolds

Abstract: This talk lies at the interface of geometry and optimization. In particular, I’ll talk about geodesically convex optimization problems, a vast class of non-convex optimization problems that admit tractable global optimization. I’ll provide some background on this class and some canonical examples as motivation. The bulk of the talk thereafter will be devoted to a recent, long-sought result of potentially fundamental value: an accelerated gradient method for Riemannian manifolds. Toward developing this method, we will revisit Nesterov’s (Euclidean) estimate sequence technique and develop a conceptually simple alternative. We build on this alternative to obtain new results in the Riemannian setting. We localize the key difficulty into “metric distortion,” which we then control carefully to obtain the first (global) accelerated Riemannian-gradient method.
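For reference, geodesic convexity replaces line segments with geodesics: a function f on a Riemannian manifold M is geodesically convex if, for every geodesic γ: [0, 1] → M,

```latex
f(\gamma(t)) \le (1 - t)\, f(\gamma(0)) + t\, f(\gamma(1)),
\qquad \text{for all } t \in [0, 1].
```

In the Euclidean case γ(t) = (1 − t)x + ty this recovers ordinary convexity; on curved manifolds, problems that are non-convex in coordinates can nonetheless be geodesically convex, which is what makes global optimization tractable for this class.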

Biography: Suvrit Sra is an Associate Professor of EECS at MIT, and a core member of the Laboratory for Information and Decision Systems (LIDS), the Institute for Data, Systems, and Society (IDSS), as well as a member of MIT-ML and Statistics groups. He obtained his PhD in Computer Science from the University of Texas at Austin. Before moving to MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems, Tübingen, Germany. His research bridges mathematical areas such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization with machine learning. He founded the OPT (Optimization for Machine Learning) series of workshops, held from OPT2008–2017 at the NeurIPS (erstwhile NIPS) conference. He has co-edited a book with the same name (MIT Press, 2011). He is also a co-founder and chief scientist of macro-eyes, a global healthcare+AI startup.

Prof. Usman Khan

Title: Distributed ML: Optimal algorithms for distributed stochastic non-convex optimization

Abstract: In many emerging applications, it is of paramount interest to learn hidden parameters from data. For example, self-driving cars may use onboard cameras to identify pedestrians, highway lanes, or traffic signs in various light and weather conditions. Problems such as these can be framed as classification, regression, or risk minimization in general, at the heart of which lies machine learning and stochastic optimization. In many practical scenarios, distributed and decentralized learning methods are preferable as they benefit from a divide-and-conquer approach towards data at the expense of local (short-range) communication. In this talk, I will present our recent work that develops a novel algorithmic framework to address various aspects of decentralized stochastic first-order optimization methods for non-convex problems. A major focus will be to characterize regimes where decentralized solutions outperform their centralized counterparts and lead to optimal convergence guarantees. Moreover, I will characterize certain desirable attributes of decentralized methods in the context of linear speedup and network-independent convergence rates. Throughout the talk, I will demonstrate such key aspects of the proposed methods with the help of provable theoretical results and numerical experiments on real data.
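A widely used building block in this line of work is gradient tracking, in which each agent maintains an auxiliary variable that tracks the network-average gradient, yielding exact convergence with a constant step size. The quadratic objectives f_i(x) = (x - a_i)^2 / 2 and the ring network below are a hypothetical deterministic illustration, not the stochastic non-convex setting of the talk.

```python
import numpy as np

def gradient_tracking(a, W, alpha=0.05, iters=3000):
    """Gradient tracking for f(x) = sum_i (x - a_i)^2 / 2.

    Each agent keeps an iterate x_i and a tracker y_i of the
    network-average gradient. Unlike plain decentralized gradient
    descent, the iterates converge to the exact optimum mean(a)
    under a constant step size.
    """
    x = a.copy()
    g = x - a                  # local gradients at the initial iterates
    y = g.copy()               # tracker initialized with local gradients
    for _ in range(iters):
        x_new = W @ x - alpha * y
        g_new = x_new - a
        y = W @ y + g_new - g  # tracking update: add gradient innovation
        x, g = x_new, g_new
    return x

# Four agents on a ring; W is doubly stochastic.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
a = np.array([0.0, 1.0, 2.0, 3.0])
x = gradient_tracking(a, W)  # each iterate approaches mean(a) = 1.5
```

Because the average of the trackers y equals the average of the local gradients at every iteration, each agent effectively descends along an estimate of the global gradient, which is what removes the steady-state bias of plain decentralized gradient descent.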

Biography: Usman A. Khan has been an Associate Professor of Electrical and Computer Engineering (ECE) at Tufts University, USA, since September 2017. His research interests include signal processing, optimization and control, and machine learning. He has published extensively on these topics, with more than 100 articles in journals and conference proceedings, and holds multiple patents. Recognition of his work includes the prestigious National Science Foundation (NSF) CAREER award, several NSF REU awards, an IEEE journal cover, three best student paper awards at IEEE conferences, and several news articles, including two in IEEE Spectrum. Dr. Khan joined Tufts as an Assistant Professor in 2011 and held a Visiting Professor position at KTH, Sweden, in Spring 2015. Prior to joining Tufts, he was a postdoc in the GRASP lab at the University of Pennsylvania. He received his B.S. degree in 2002 from the University of Engineering and Technology, Pakistan, M.S. degree in 2004 from the University of Wisconsin-Madison, USA, and Ph.D. degree in 2009 from Carnegie Mellon University, USA, all in ECE. Dr. Khan is an IEEE Senior Member and has been an elected full member of the Sensor Array and Multichannel Technical Committee of the IEEE Signal Processing Society since 2019, after serving as an Associate member from 2010 to 2019. He was an elected full member of the IEEE Big Data Special Interest Group from 2017 to 2019, and has served on the IEEE Young Professionals Committee and the IEEE Technical Activities Board. He was an Editor of the IEEE Transactions on Smart Grid from 2014 to 2017 and is currently an Associate Editor of the IEEE Control Systems Letters, the IEEE Transactions on Signal and Information Processing over Networks, and the IEEE Open Journal of Signal Processing.
He is the Chief Editor of the Proceedings of the IEEE special issue on Optimization for Data-driven Learning and Control, and a Guest Associate Editor for the IEEE Control Systems Letters special issue on Learning and Control, both to appear in 2020. He is the Technical Area Chair for the Networks track at the 2020 IEEE Asilomar Conference on Signals, Systems, and Computers, has served on the TPCs of several IEEE conferences, and has organized/chaired several IEEE workshops and sessions.

Prof. Linjun Zhang

Title: The cost of privacy and adversarial robustness in statistical estimation

Abstract: Privacy-preserving data analysis is a rising challenge in contemporary statistics, as the privacy guarantees of statistical methods are often achieved at the expense of accuracy. In this talk, we investigate the tradeoff between statistical accuracy and privacy in generalized linear models, under both the classical low-dimensional and modern high-dimensional settings. A primary focus is to establish minimax optimality for statistical estimation under the (ε, δ)-differential privacy constraint. To this end, we find that classical lower bound arguments fail to yield sharp results, and new technical tools are called for. Another challenge in contemporary statistics is adversarial robustness. However, little research has been done to quantify how robust optimization changes the optimizers and the prediction losses compared to standard training. In this talk, inspired by the influence function in robust statistics, we introduce the Adversarial Influence Function (AIF) as a tool to investigate the solution produced by robust optimization. The proposed AIF has a closed form and can be calculated efficiently.
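The basic currency of (ε, δ)-differential privacy is calibrated noise, and the accuracy cost the abstract mentions is visible in the simplest example. The sketch below is the standard Gaussian mechanism applied to a mean of values clipped to [0, 1], not the estimators or lower bounds from the talk.

```python
import numpy as np

def gaussian_sigma(sensitivity, eps, delta):
    """Noise scale of the Gaussian mechanism for (eps, delta)-DP.

    Standard calibration (valid for eps <= 1):
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / eps.
    """
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def private_mean(data, eps=1.0, delta=1e-5, rng=None):
    """(eps, delta)-DP estimate of the mean of data clipped to [0, 1]."""
    rng = np.random.default_rng(0) if rng is None else rng
    data = np.clip(data, 0.0, 1.0)
    sensitivity = 1.0 / len(data)  # one record moves the mean by <= 1/n
    sigma = gaussian_sigma(sensitivity, eps, delta)
    return data.mean() + rng.normal(0.0, sigma)

s = gaussian_sigma(1.0, 1.0, 1e-5)        # noise scale per unit sensitivity
est = private_mean(np.linspace(0, 1, 1000))
```

The added error scales as 1/(n·ε), which is the low-dimensional shadow of the accuracy–privacy tradeoff the talk quantifies sharply for generalized linear models.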

Biography: Linjun Zhang is an Assistant Professor in the Department of Statistics at Rutgers University. He earned his Ph.D. from the Wharton School, University of Pennsylvania. His research focuses on high-dimensional statistics, adversarial robustness, and differential privacy.

Prof. Salman Avestimehr

Title: Coded Computing and Its Application to Scalable and Secure Federated/Collaborative Learning

Abstract: In this talk, I will introduce Coded Computing, a transformative framework that brings concepts and tools from information theory into distributed computing in order to mitigate several bottlenecks that arise in large-scale distributed machine learning, namely resiliency to stragglers, security/privacy, and scalability. I will then focus on the applications of federated and collaborative learning and demonstrate the impact of coded computing in these domains. In particular, I will present TurboAggregate that overcomes the quadratic aggregation barrier in secure federated learning and improves the state-of-the-art by more than 40X speedup in training. Time permitting, I will also present CopML for privacy-preserving collaborative machine learning that provides strong statistical privacy guarantees against colluding parties (adversaries) with unbounded computational power, while achieving more than 16X speedup in the training time against the benchmark protocols that are based on secure multi-party computing.
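The core trick behind secure aggregation can be sketched with pairwise additive masking, a simplification that ignores client dropouts and the quadratic pairwise overhead that TurboAggregate is designed to overcome: each pair of clients agrees on a random mask that one adds and the other subtracts, so individual uploads look random while the masks cancel in the server's sum.

```python
import numpy as np

def masked_updates(updates, rng):
    """Pairwise additive masking for secure aggregation.

    Client i uploads u_i + sum_{j>i} r_ij - sum_{j<i} r_ji, where r_ij
    is a random mask shared between clients i and j. Each masked upload
    reveals nothing about u_i on its own, but the masks cancel in the sum.
    """
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.normal(size=updates[i].shape)  # shared pairwise mask
            masked[i] += r
            masked[j] -= r
    return masked

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(5)]  # 5 clients' model updates
masked = masked_updates(updates, rng)
# The server sums only masked uploads, yet recovers the true aggregate.
aggregate = np.sum(masked, axis=0)
```

Because every client must exchange a mask with every other client, this naive protocol costs O(n^2) pairwise secrets; the quadratic aggregation barrier mentioned in the abstract is exactly this cost.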

Biography: Salman Avestimehr is a Professor and Director of the Information Theory and Machine Learning (vITAL) research lab at the Electrical and Computer Engineering Department of the University of Southern California. He received his Ph.D. in 2008 and M.S. degree in 2005 in Electrical Engineering and Computer Science, both from the University of California, Berkeley. Prior to that, he obtained his B.S. in Electrical Engineering from Sharif University of Technology in 2003. His research interests include information theory and coding theory, large-scale distributed computing and machine learning, secure and private computing, and blockchain systems. Dr. Avestimehr has received a number of awards for his research, including the James L. Massey Research & Teaching Award from the IEEE Information Theory Society, the Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, a Young Investigator Program (YIP) award from the U.S. Air Force Office of Scientific Research, a National Science Foundation CAREER award, the David J. Sakrison Memorial Prize, an Information Theory Society and Communication Society Joint Paper Award, and several best paper awards at conferences. He has been an Associate Editor for the IEEE Transactions on Information Theory and a General Co-Chair of the 2020 International Symposium on Information Theory (ISIT). He is a Fellow of the IEEE.