Welcome to the Signal and Information Processing Seminar Series at Rutgers!
The SIP Seminar Series at Rutgers University–New Brunswick, overseen by a Steering Committee consisting of Profs. Waheed U. Bajwa, Mert Gurbuzbalaban, and Shirin Jalali, brings together a diverse group of researchers from within and outside the university. Held approximately every two weeks, the series is a platform for discussing recent developments in signal and information processing. The term “Signal and Information Processing” in these seminars covers a wide range of topics, including but not limited to signal processing, image processing, statistical inference, machine learning, computer vision, control theory, harmonic analysis, and information theory. Additionally, the series serves as a valuable forum for graduate students to present preliminary research and receive constructive feedback.
Seminar Mailing List: You can subscribe to the SIP Seminars mailing list by sending an email to ECE_SIP-request@email.rutgers.edu with SUBSCRIBE in the subject of the email.
Fall 2023 Seminar Schedule: The SIP Seminars in Fall 2023 will take place in Room EE-240 at 12 pm on Tuesdays on the following dates: October 17 and 31, November 14 and 28, and December 12. There is one exception: a seminar on Wednesday, November 29 at 4 pm in EE-203.
Fall 2023 SIP Seminars
Speaker 1: Zonghong Liu
Title: Decentralized Learning via Random Walks on Graphs
Abstract: We study decentralized optimization via random walks for the empirical risk minimization problem. More specifically, we assume that the data are distributed over a network and that a random walk carries the global model: it travels over the network and trains the model using the local data stored at each node. We focus on speeding up training through the design of the transition probabilities of the random walk. We adapt the importance sampling idea from centralized optimization, identify an entrapment phenomenon that slows down convergence under certain configurations, and propose a novel algorithm, random walk with random jumps, to overcome the entrapment problem.
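To make the random-walk idea concrete, here is a minimal sketch (not the speaker's code) of a walker that carries a least-squares model over a ring graph, takes a local gradient step at each node, and occasionally teleports to a uniformly random node to escape entrapment. The jump probability, step size, and importance weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 10, 5

# Synthetic local data: node i holds (A_i, b_i) for the loss ||A_i w - b_i||^2.
A = [rng.normal(size=(20, dim)) for _ in range(n_nodes)]
b = [rng.normal(size=20) for _ in range(n_nodes)]

# Ring graph: each node's list of neighbors.
neighbors = [[(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)]

# Importance-sampling-style weights (here: local smoothness proxies).
weights = np.array([np.linalg.norm(A[i], 2) ** 2 for i in range(n_nodes)])

w = np.zeros(dim)       # global model carried by the walker
node = 0                # walker's current position
p_jump, lr = 0.1, 1e-3  # jump probability and step size (assumed values)

for t in range(5000):
    # Local SGD step using only the data stored at the current node.
    grad = 2 * A[node].T @ (A[node] @ w - b[node]) / len(b[node])
    w -= lr * grad
    if rng.random() < p_jump:
        # Random jump: teleport uniformly to escape "entrapment" regions.
        node = rng.integers(n_nodes)
    else:
        # Otherwise move to a neighbor, biased by the importance weights.
        nbrs = neighbors[node]
        probs = weights[nbrs] / weights[nbrs].sum()
        node = rng.choice(nbrs, p=probs)

print("final loss:", np.mean([np.sum((A[i] @ w - b[i]) ** 2) for i in range(n_nodes)]))
```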
Biography: Zonghong Liu is a fourth-year Ph.D. candidate in the Rutgers ECE Department, advised by Prof. Salim El Rouayheb. His research aims at applying probabilistic tools to the design of distributed optimization algorithms; his interests also include information theory and statistics. He received his Master’s degree in Statistics from Rutgers University in October 2019 and a B.S. in Chemical Physics from the University of Science and Technology of China in June 2016.
Speaker 2: Haizhou Shi
Title: A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm
Abstract: Domain incremental learning aims to adapt to a sequence of domains with access to only a small subset of data (i.e., memory) from previous domains. Various methods have been proposed for this problem, but it is still unclear how they are related and when practitioners should choose one method over another. In response, we propose a unified framework, dubbed Unified Domain Incremental Learning (UDIL), for domain incremental learning with memory. Our UDIL unifies various existing methods, and our theoretical analysis shows that UDIL always achieves a tighter generalization error bound compared to these methods. The key insight is that different existing methods correspond to our bound with different fixed coefficients; based on insights from this unification, our UDIL allows adaptive coefficients during training, thereby always achieving the tightest bound. Empirical results show that our UDIL outperforms the state-of-the-art domain incremental learning methods on both synthetic and real-world datasets.
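As a rough illustration of the unification idea (a sketch of my own, not the authors' implementation), the training loss below mixes the current-domain loss, a replay loss on the memory, and a distillation loss against the frozen previous model. Fixing the mixing coefficients recovers particular existing methods; UDIL's insight is to adapt them during training (in the paper by minimizing a generalization bound, which the toy below only gestures at by making the coefficients learnable). The softmax parameterization and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)        # current model
old_model = torch.nn.Linear(10, 2)    # frozen model from past domains
old_model.requires_grad_(False)

# Learnable logits -> simplex coefficients. Fixed coefficients would recover
# specific existing methods; adapting them is the UDIL idea (simplified here).
coef_logits = torch.zeros(3, requires_grad=True)
opt = torch.optim.SGD([*model.parameters(), coef_logits], lr=0.1)

x_cur, y_cur = torch.randn(32, 10), torch.randint(2, (32,))   # current domain
x_mem, y_mem = torch.randn(16, 10), torch.randint(2, (16,))   # memory buffer

for step in range(100):
    alpha = torch.softmax(coef_logits, dim=0)
    loss_cur = F.cross_entropy(model(x_cur), y_cur)   # current-domain loss
    loss_mem = F.cross_entropy(model(x_mem), y_mem)   # replay on memory
    loss_kd = F.kl_div(                               # distill the old model
        F.log_softmax(model(x_mem), dim=1),
        F.softmax(old_model(x_mem), dim=1),
        reduction="batchmean",
    )
    loss = alpha[0] * loss_cur + alpha[1] * loss_mem + alpha[2] * loss_kd
    opt.zero_grad()
    loss.backward()
    opt.step()
```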
Biography: Haizhou Shi is a second-year Ph.D. student advised by Prof. Hao Wang in the Rutgers CS Department. His research interest lies in improving the generalization ability of machine learning models, especially their capacity to continually learn and adapt to ever-evolving tasks. He obtained his Master’s and Bachelor’s degrees in Computer Science from Zhejiang University, China, in 2022 and 2019, respectively.
Speaker 3: Prof. Yanjun Han
Title: Covariance Alignment: From Maximum Likelihood Estimation to Gromov-Wasserstein
Abstract: Feature alignment methods are used in many scientific disciplines for data pooling, annotation, and comparison. Motivated by a metabolomic study in biostatistics, in this talk I will introduce a new feature alignment problem, termed covariance alignment, where the two samples in a Gaussian covariance model are linked by an unknown permutation. First, it will be shown that a quasi maximum likelihood estimator (QMLE) is minimax rate-optimal for estimating the permutation, and the resulting minimax rate takes a semiparametric form that interpolates between covariance-aware permutation estimation and covariance estimation. Second, borrowing ideas from computational optimal transport to mitigate the computational issues of the QMLE, a Gromov-Wasserstein (GW) estimator that lifts the set of permutations to all probabilistic couplings will be introduced. The first statistical rate of estimation for the GW estimator will be provided, showing that it is also minimax rate-optimal. Finally, I will discuss connections to the recent literature on statistical graph matching and orthogonal statistical learning.
Based on joint work with Philippe Rigollet and George Stepaniants.
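As a toy numerical illustration of the lifting idea (my own construction, not the speaker's code), the sketch below draws two Gaussian samples whose coordinates are linked by a hidden permutation and uses the Gromov-Wasserstein coupling between the two sample covariances to estimate that permutation. It assumes the POT optimal transport package is installed (`pip install pot`); all dimensions and sample sizes are arbitrary.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
d, n = 8, 2000

# A random covariance, and a hidden permutation linking the two samples.
M = rng.normal(size=(d, d))
Sigma = M @ M.T + d * np.eye(d)
perm = rng.permutation(d)

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
Y = rng.multivariate_normal(np.zeros(d), Sigma, size=n)[:, perm]

C1 = np.cov(X, rowvar=False)   # sample covariance of the first sample
C2 = np.cov(Y, rowvar=False)   # sample covariance of the permuted sample

# GW lifts the set of permutations to probabilistic couplings between the
# coordinates, matching the two covariance "geometries".
p = np.ones(d) / d
q = np.ones(d) / d
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss')

est_perm = T.argmax(axis=0)    # round the coupling back to a permutation
print("true permutation:     ", perm)
print("estimated permutation:", est_perm)
```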
Biography: Yanjun Han is an assistant professor of mathematics and data science at the Courant Institute of Mathematical Sciences and the Center for Data Science, New York University. He received his Ph.D. in Electrical Engineering from Stanford University in August 2021, under the supervision of Tsachy Weissman. After that, he spent one year as a postdoctoral scholar at the Simons Institute for the Theory of Computing, UC Berkeley, and another year as a Norbert Wiener postdoctoral associate in the Statistics and Data Science Center at MIT, mentored by Sasha Rakhlin and Philippe Rigollet. Honors for his past work include a best student paper finalist at ISIT 2016, a student paper award at ISITA 2016, and an Annals of Statistics Special Invited Session at JSM 2021. His research interests include high-dimensional and nonparametric statistics, bandits, and information theory.
Speaker 4: Prof. Ahmed Aziz Ezzat
Title: Predictive and Prescriptive Analytics for Offshore Wind Energy: Uncertainty, Quality, and Reliability
Abstract: The rising U.S. offshore wind sector holds great promise, both environmentally and economically, to unlock vast supplies of clean and renewable energy. To harness this valuable resource, Gigawatt (GW)-scale offshore wind (OSW) projects are already under way at several locations off the U.S. coastline and are set to host turbines that are taller than many of the world’s tallest buildings. Realizing this promise, however, is contingent on innovative solutions to several challenges related to the optimal management of such ultra-scale assets, which would operate under harsh environmental conditions, in fairly under-explored territories, and at unprecedented scales. In this talk, I will review our research group’s progress in formulating tailored machine learning (ML) and operations research (OR) solutions aimed at mitigating some of those operational uncertainties. I will primarily focus on ML/OR methods that address two key challenges: (i) Uncertainty: how can we develop ML-based solutions that make use of the multi-source, multi-resolution data in OSW energy regions to accurately forecast their power output at high spatial and temporal resolutions; and (ii) Quality/reliability: how can we translate those forecasts into optimal operations and maintenance (O&M) decisions through offshore-tailored optimization models that consider the multi-source uncertainties and complex decision dependencies in the OSW environment. Our models and analyses are tailored and tested using real-world data from the NY/NJ Bight, where several GW-scale wind farms are in development.
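As a toy illustration of the forecast-to-decision pipeline described above (my own construction with made-up numbers, not the speaker's models), the sketch below treats probabilistic forecasts of power output and significant wave height as scenario ensembles and picks the maintenance hour minimizing expected lost revenue plus an expected vessel-access penalty.

```python
import numpy as np

rng = np.random.default_rng(1)
hours, n_scenarios = 24, 500

# Probabilistic forecasts as scenario ensembles (stand-ins for ML forecasts).
power = rng.gamma(shape=2.0, scale=2.0, size=(n_scenarios, hours))  # MW
wave = rng.gamma(shape=2.0, scale=0.6, size=(n_scenarios, hours))   # meters

price = 50.0       # $/MWh, assumed flat
wave_limit = 1.5   # vessel access limit in meters (assumed)
penalty = 500.0    # cost in $ of an aborted access attempt (assumed)

# Expected cost of scheduling a 1-hour maintenance window at each hour:
# lost production revenue plus the expected penalty for inaccessible seas.
lost_revenue = price * power.mean(axis=0)
abort_prob = (wave > wave_limit).mean(axis=0)
expected_cost = lost_revenue + penalty * abort_prob

best = int(np.argmin(expected_cost))
print(f"schedule maintenance at hour {best}, expected cost ${expected_cost[best]:.0f}")
```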
Biography: Dr. Ahmed Aziz Ezzat is an Assistant Professor of Industrial & Systems Engineering at Rutgers University, where he leads the Renewables & Industrial Analytics (RIA) research group. Before joining Rutgers, Dr. Aziz Ezzat received his Ph.D. from Texas A&M University in 2019 and his B.Sc. from Alexandria University, Egypt, in 2013, both in Industrial Engineering. His research interests are in the areas of data and decision sciences, probabilistic forecasting and machine learning, and quality, reliability, and maintenance optimization, with applications to renewable energy and industrial systems. Dr. Aziz Ezzat is the recipient of the 2022 IISE Data Analytics Teaching Award, the 2020 IIF-SAS Research Award, the 2020 Rutgers OAT Teaching Award, and the 2014 IISE Sierleja Fellowship. He currently serves as the 2023-2024 president of the IISE Energy Systems Division. His research has been supported by several external and internal grants, including from the National Science Foundation (NSF), the Department of Energy (DOE), the National Offshore Wind Research and Development Consortium (NOWRDC), the NJ Economic Development Authority, the Rutgers Chancellor-Provost Office, and industry. He is a member of INFORMS, IEEE-PES, and IISE.
Speaker 5: Prof. Oliver Kosut
Title: Schrödinger’s Cactus: Optimal Differential Privacy Mechanisms in the Large-Composition Regime
Abstract: A distinguishing characteristic of information theory is the ability to boil down a complex engineering problem to just one key quantity (e.g., channel capacity). This talk applies this approach to differential privacy (DP). In DP, the goal is to keep user data private by designing random algorithms that ensure that changing one entry of a database has a small effect on the output distribution. Many applications of DP, including training machine learning models, involve applying a privacy mechanism (i.e., adding noise) over many iterations; we call this the large-composition regime. We find that, in the limit of a large number of compositions, the optimal mechanism should minimize a certain Kullback-Leibler divergence. This divergence is analogous to a channel capacity or rate-distortion function. While the mechanism optimizing this divergence has no closed-form solution, we approximate it numerically. We find that it has a surprisingly jagged shape, giving rise to the name “Cactus mechanism”. Furthermore, when the noise variance is large, an additional simplification occurs: the optimal mechanism is the one that minimizes the Fisher information. This mechanism can in turn be found by solving a certain differential equation, which turns out to be identical to Schrödinger’s equation from quantum mechanics.
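As a small numerical check of the key quantity (my own sketch, not material from the talk): for an additive-noise mechanism with noise density p and query sensitivity s, the divergence in question is KL(p(x) || p(x - s)), and for Gaussian noise it equals s^2 / (2 sigma^2). The grid, sensitivity, and noise scale below are illustrative; the Cactus mechanism itself is the numerically computed density minimizing this KL under a variance constraint, which is not reproduced here.

```python
import numpy as np

s, sigma = 1.0, 3.0   # sensitivity and noise scale (assumed values)
x = np.linspace(-30, 30, 200001)
dx = x[1] - x[0]

def kl(p, q):
    # KL divergence between two densities sampled on the grid.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
gauss_shift = np.exp(-(x - s)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# For small s, KL(p || p shifted by s) ~ s^2 * I(p) / 2, where I(p) is the
# Fisher information; this links the KL-optimal mechanism to the
# Fisher-information-minimizing mechanism in the large-noise regime.
print("numerical KL:", kl(gauss, gauss_shift))   # ~ s^2 / (2 sigma^2)
print("closed form :", s**2 / (2 * sigma**2))
```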
Biography: Oliver Kosut is an associate professor in the School of Electrical, Computer and Energy Engineering at Arizona State University, where he has been a faculty member since 2012. He received a Ph.D. from Cornell University in 2010. He was a postdoc at MIT from 2010 to 2012. His research interests include information theory—particularly with applications to privacy, security, and machine learning—and power systems. He received the NSF CAREER award in 2015. He is an associate editor for the IEEE Transactions on Information Forensics and Security.
Speaker 6: Okko Makkonen
Title: Secure Distributed Matrix Multiplication over Complex Numbers
Abstract: Secure distributed matrix multiplication (SDMM) aims to distribute the computation of a matrix product to worker nodes, such that the contents of the matrices are kept secret from these workers. Previous work has considered techniques from coding theory to perform matrix multiplication over finite fields. However, for applications in machine learning, data science, and engineering, it is important to speed up matrix multiplication over real and complex numbers. In this work, we discuss some challenges that come with converting existing ideas in SDMM to work over these domains and present some constructions that allow for numerically stable ways of doing SDMM over complex numbers with minimal information leakage.
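The sketch below illustrates a generic polynomial-code SDMM construction over the complex numbers, in the spirit of the talk but not necessarily the speaker's scheme: each of three workers sees only randomly masked matrices, and evaluating at roots of unity makes the decoding step a simple, numerically stable average. Note that real-valued Gaussian masks give only approximate secrecy over the reals, which is part of the challenge the talk addresses.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
A = rng.normal(size=(m, m))
B = rng.normal(size=(m, m))

n_workers = 3
# Evaluation points: the cube roots of unity (well conditioned, unlike
# real-valued Vandermonde points).
points = np.exp(2j * np.pi * np.arange(n_workers) / n_workers)

# Random masks; over the reals these provide only approximate secrecy.
R = rng.normal(size=(m, m))
S = rng.normal(size=(m, m))

# Worker k receives only the masked shares f(x_k) = A + R x_k and
# g(x_k) = B + S x_k, never A or B themselves.
shares_A = [A + R * x for x in points]
shares_B = [B + S * x for x in points]

# Each worker multiplies its two shares and returns the product.
results = [fa @ fb for fa, fb in zip(shares_A, shares_B)]

# f(x) g(x) = AB + (AS + RB) x + RS x^2. Averaging over the three cube
# roots of unity annihilates the x and x^2 terms, leaving exactly AB.
AB_est = np.mean(results, axis=0).real

print("max decoding error:", np.max(np.abs(AB_est - A @ B)))
```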
Biography: Okko Makkonen is a Ph.D. student in mathematics at Aalto University, Finland, where he is advised by Professor Camilla Hollanti. He obtained his B.Sc. and M.Sc. degrees in mathematics from Aalto University in 2021 and 2022, respectively. His research interests include applications of coding theory in secure distributed computation schemes as well as privacy mechanisms in federated learning.
Speaker 7: Yasa Syed
Title: Optimal Randomized Multilevel Monte Carlo Methods for Repeatedly Nested Expectations
Abstract: The estimation of repeatedly nested expectations is a challenging problem that arises in many real-world systems. However, existing methods generally suffer from high computational costs when the number of nestings becomes large. Fix any non-negative integer D for the total number of nestings. Standard Monte Carlo methods typically have sampling complexities that depend exponentially on D. In this work, we introduce an estimator that achieves the optimal Monte Carlo complexity independent of D under certain regularity conditions, and a near-optimal complexity (still independent of D) under much milder conditions. Our estimator is also unbiased, which makes it easy to parallelize. The key ingredients in our construction are an observation of the problem’s recursive structure and the recursive use of the randomized multilevel Monte Carlo method.
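For intuition, the sketch below implements a standard single-term randomized multilevel Monte Carlo estimator for a singly nested expectation f(E[Y]) (a textbook construction in the spirit of the talk, not the paper's estimator): sample a random level, form an antithetic difference at that level, and divide by the level probability to obtain an unbiased estimate. The choice of f, the level distribution, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.maximum(x, 0.0) ** 2     # outer nonlinearity (illustrative)
mu = 0.3                                  # Y ~ N(mu, 1); target f(E[Y]) = 0.09

max_level = 20
p = 2.0 ** (-1.5 * np.arange(max_level))  # level distribution (assumed decay)
p /= p.sum()

def single_term_estimate():
    ell = rng.choice(max_level, p=p)
    n = 2 ** (ell + 1)                    # inner samples at this level
    y = rng.normal(mu, 1.0, size=n)
    if ell == 0:
        delta = f(y.mean())
    else:
        # Antithetic difference: the full-sample estimate minus the average of
        # the two half-sample estimates. Expectations telescope across levels,
        # so E[delta / p_ell] equals f(E[Y]) in the limit, with finite cost.
        halves = 0.5 * (f(y[: n // 2].mean()) + f(y[n // 2:].mean()))
        delta = f(y.mean()) - halves
    return delta / p[ell]

estimates = [single_term_estimate() for _ in range(100000)]
print("estimate:", np.mean(estimates), "target:", f(mu))
```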
Biography: Yasa Syed is a fourth-year Ph.D. student in the Statistics Department at Rutgers. He also completed his bachelor’s degree at Rutgers, in mathematics with a minor in computer science, in 2020. His research focuses on Monte Carlo methods and optimization, and he is interested in a variety of related areas.