Fall 2019 SIP Seminars
Prof. Anand Sarwate
Title: The Interplay of Causality and Myopia in Adversarial Channel Models
Abstract: There is often a significant gap between average-case and worst-case models for engineering systems. A classical example is in communications, where random noise (average-case) and malicious noise (worst-case) lead to significantly different communication capacities. One way to bridge this gap is to consider intermediate models that interpolate between the average-case and worst-case extremes, and this talk will discuss some of these intermediate models. In worst-case models, the channel interference can be tailored to the transmitted codeword; here we still assume the noise is chosen maliciously, but we restrict how it can depend on the transmitted codeword. In particular, we consider a model in which a binary erasure channel (with maximum fraction of erasures $p$) is controlled by an adversary who can observe the transmitted codeword only through an independent and memoryless erasure channel (with erasure probability $q$). This “myopic” adversary can be strictly weaker than the worst-case adversary. We show how this intermediate model brings to light important ideas for how to beat the adversary and yields insights into the dynamics of decoding.
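To make the model concrete, here is a minimal simulation sketch (an illustration, not the speaker's construction) of the myopic adversary: it observes the codeword through a memoryless erasure channel with erasure probability q, then spends its budget of at most a fraction p of erasures on positions it actually saw. All parameter values below are illustrative assumptions.

# A minimal sketch (not the speaker's construction): simulating a myopic
# adversary's view of a transmitted binary codeword and one naive attack.
import numpy as np

rng = np.random.default_rng(0)

n, p, q = 1000, 0.2, 0.5            # blocklength, adversary budget, myopia level
codeword = rng.integers(0, 2, size=n)

# Adversary's myopic view: each symbol is independently erased (marked -1) w.p. q.
adversary_view = np.where(rng.random(n) < q, -1, codeword)

# A naive strategy under the budget: erase up to p*n of the positions the
# adversary actually saw, since only those reveal anything about the codeword.
seen = np.flatnonzero(adversary_view >= 0)
budget = int(p * n)
erased = rng.choice(seen, size=min(budget, seen.size), replace=False)

received = codeword.copy()
received[erased] = -1               # -1 marks an erasure at the decoder

print("fraction erased:", (received == -1).mean())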
Biography: Anand D. Sarwate joined as an Assistant Professor in the Department of Electrical and Computer Engineering at Rutgers, the State University of New Jersey in 2014. He received a B.S. degree in Electrical Science and Engineering and a B.S. degree in Mathematics from MIT in 2002, an M.S. in Electrical Engineering from UC Berkeley in 2005, and a Ph.D. in Electrical Engineering from UC Berkeley in 2008. From 2008 to 2011 he was a postdoctoral researcher at the Information Theory and Applications Center at UC San Diego, and from 2011 to 2013 he was a Research Assistant Professor at the Toyota Technological Institute at Chicago, a philanthropically endowed academic computer science institute located on the University of Chicago campus. He was the Online Editor of the IEEE Information Theory Society (2015-2018) and an Associate Editor for the IEEE Transactions on Signal and Information Processing over Networks (2015-2018). Prof. Sarwate received the NSF CAREER award in 2015 and the A. Walter Tyson Assistant Professor Award from the Rutgers School of Engineering in 2018. His interests are in information theory, machine learning, and signal processing, with applications to distributed systems, privacy and security, and biomedical research.
Prof. Jason M. Klusowski
Title: Global Complexity Measures for Deep ReLU Networks via Path Sampling
Abstract: The ability of modern neural networks to generalize well despite having many more parameters than training samples has been a widely studied topic in the deep learning community. A recently proposed framework for establishing generalization guarantees involves showing that a given network can be ‘compressed’ to a sparser network with fewer, discretized parameters. We study a path-based approach in which the compressed network is formed from empirical counts of paths drawn at random from a Markov distribution induced by the weights of the original network. This method leads to a generalization bound that depends on the complexity of the path structure in the network. In addition, by exploiting certain invariance properties of neural networks, the generalization bound does not depend explicitly on the intermediate layer dimensions, allowing for very large networks. Finally, we study empirically the relationship between compression and generalization, and find that networks that generalize well can indeed be compressed more effectively than those that do not.
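As a rough illustration of the path-sampling idea (a sketch under assumed details, not the paper's algorithm), the following draws random paths through a small fully connected ReLU network using layer-wise transition probabilities proportional to the weight magnitudes, and accumulates empirical path counts; the layer sizes and number of samples are arbitrary choices.

# A minimal sketch: sampling unit-to-unit paths through a toy network with a
# Markov chain whose transitions are proportional to |weight|, then counting paths.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 3]                      # input -> two hidden layers -> output
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def sample_path(start_unit):
    """Walk from an input unit to an output unit, choosing the next unit
    with probability proportional to the magnitude of the connecting weight."""
    path = [start_unit]
    unit = start_unit
    for W in weights:
        probs = np.abs(W[unit]) / np.abs(W[unit]).sum()
        unit = rng.choice(W.shape[1], p=probs)
        path.append(unit)
    return tuple(path)

num_samples = 5000
counts = Counter(sample_path(rng.integers(layer_sizes[0])) for _ in range(num_samples))

# Empirical path frequencies could then define a sparser "compressed" network;
# here we simply report how concentrated the sampled path distribution is.
print("distinct paths sampled:", len(counts))
print("most common path:", counts.most_common(1)[0])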
Biography: Jason M. Klusowski received a B.S. (hons.) in Statistics and Mathematics from the University of Manitoba in 2013 and a Ph.D. in Statistics and Data Science from Yale University in 2018. He is currently an Assistant Professor in the Department of Statistics at Rutgers University, New Brunswick. His research interests include statistical learning and network analysis.
Dr. Michael Rabbat
Title: Distributed Learning with Gossip Algorithms
Abstract: Distributed data-parallel algorithms aim to accelerate the training of deep neural networks by parallelizing the computation of large mini-batch gradient updates across multiple nodes. Approaches that synchronize nodes using exact distributed averaging (e.g., via AllReduce) are sensitive to stragglers and communication delays. The PushSum gossip algorithm is robust to these issues, but only performs approximate distributed averaging. In this talk I will discuss our recent work on Stochastic Gradient Push (SGP) for supervised learning and Gossip-Based Actor Learner Architectures (GALA) for reinforcement learning, both of which build on PushSum. By reducing the amount of synchronization between compute nodes, both methods are more computationally efficient and scalable than comparable methods built on AllReduce, and both also enjoy theoretical guarantees, e.g., on convergence (for SGP) and bounded disagreement (for SGP and GALA). The talk is based on joint work with Mido Assran, Nicolas Ballas, Nicolas Loizou, Josh Romoff, and Joelle Pineau.
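For readers unfamiliar with PushSum, the following is a minimal sketch of the underlying approximate-averaging protocol on a directed ring; the topology, mixing weights, and iteration count are illustrative assumptions rather than the setup used in the talk.

# A minimal PushSum sketch: each node keeps a numerator x_i and a weight w_i,
# mixes them with a column-stochastic matrix, and estimates the average as x_i / w_i.
import numpy as np

n = 8
values = np.arange(1.0, n + 1.0)          # each node i holds a local value
true_avg = values.mean()

# Column-stochastic mixing matrix: each node splits its mass evenly between
# itself and its successor on a directed ring.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[(i + 1) % n, i] = 0.5

x = values.copy()                         # numerator state
w = np.ones(n)                            # weight (denominator) state

for t in range(60):
    x = P @ x
    w = P @ w
    # Each node's running estimate of the global average is x_i / w_i.

print("max estimation error:", np.abs(x / w - true_avg).max())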
Biography: Michael Rabbat received the B.Sc. from the University of Illinois, Urbana-Champaign, in 2001, the M.Sc. from Rice University, Houston, TX, in 2003, and the Ph.D. from the University of Wisconsin, Madison, in 2006, all in electrical engineering. He is currently a Research Scientist in Facebook’s Artificial Intelligence Research group (FAIR) in Montreal, Canada. From 2007 to 2018 he was a professor in the Department of Electrical and Computer Engineering at McGill University. During the 2013–2014 academic year he held visiting positions at Télécom Bretagne, France, the Inria Bretagne Atlantique Research Centre, France, and KTH Royal Institute of Technology, Sweden. His research interests include optimization and distributed algorithms for machine learning.
Prof. Allison Beemer
Title: Authentication in the Presence of a Myopic Adversary
Abstract: Consider the communication setting in which two legitimate users wish to send information to one another in the presence of a malicious adversary. Rather than insisting that a message be recovered at each transmission, we can also allow the receiver to declare malicious interference; this is called authentication, and is a useful model for scenarios in which awareness of an adversary’s presence is just as important as information recovery. In this talk, we consider keyless authentication in the presence of a myopic adversary, who observes a noisy version of the transmitted sequence. We introduce a channel condition called U-overwritability and use it to give a necessary condition for positive authentication capacity. We then examine a particular binary model and completely characterize its authentication capacity with deterministic decoders. Finally, we demonstrate by example that allowing stochastic encoders does make a difference in this setting.
Biography: Allison Beemer is a postdoctoral researcher in the Electrical and Computer Engineering department at the New Jersey Institute of Technology. Her research interests include error-correcting codes, secure and authentic communication, graph-based codes and decoding algorithms, and applied discrete mathematics. Previously, she was a postdoc in the School of Electrical, Computer and Energy Engineering at Arizona State University. She earned her B.A. in mathematics from Whitman College, and her M.S. and Ph.D. in mathematics from the University of Nebraska – Lincoln under the supervision of Prof. Christine A. Kelley. Her postdoctoral work is supported by the U.S. Army Research Laboratory.
Prof. Vaneet Aggarwal
Title: Non-linear Reinforcement Learning: A Non-Markovian Approach
Abstract: Reinforcement Learning (RL) is being increasingly applied to optimize complex functions that may have a stochastic component. RL is extended to multi-agent systems, under the umbrella of Multi-Agent RL (MARL), to find policies for systems in which agents must coordinate or compete. A crucial factor in the success of RL is that the optimization problem is represented as the expected sum of rewards, which allows the use of backward induction for the solution. However, many real-world problems require a joint objective that is non-linear, and dynamic programming cannot be applied directly. For example, in a resource allocation problem, one of the objectives is to maximize long-term fairness among the users. This talk addresses the problem of joint objective optimization, where the quantity to be optimized is not simply the sum of each agent's rewards but a function of the agents' cumulative rewards. In such cases, the problem is no longer a Markov Decision Process. We propose efficient model-based and model-free approaches for such problems, with provable guarantees. Further, using fairness in cellular base-station scheduling as an example, the proposed algorithms are shown to significantly outperform state-of-the-art approaches.
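A small numerical sketch of this distinction (with made-up numbers) contrasts the usual sum-of-rewards objective with a non-linear joint objective, here taken to be proportional fairness, i.e., the sum of logarithms of each user's cumulative reward:

# A minimal sketch: a linear objective (sum of rewards) versus a non-linear
# function of cumulative rewards (proportional fairness). Values are illustrative.
import numpy as np

# Cumulative rewards (e.g., long-term throughput) of 3 users under two policies.
policy_a = np.array([9.0, 0.5, 0.5])   # favors one user
policy_b = np.array([3.0, 3.0, 3.0])   # shares evenly

def sum_reward(r):
    return r.sum()

def proportional_fairness(r):
    return np.log(r).sum()

for name, r in [("policy_a", policy_a), ("policy_b", policy_b)]:
    print(name, "sum =", sum_reward(r), "fairness =", round(proportional_fairness(r), 3))

# The sum-of-rewards objective prefers policy_a (10 vs 9), while the fairness
# objective prefers policy_b; because the latter is a function of cumulative
# rewards, the problem is no longer a standard Markov Decision Process.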
Biography: Vaneet Aggarwal received the B.Tech. degree in 2005 from the Indian Institute of Technology, Kanpur, India, and the M.A. and Ph.D. degrees in 2007 and 2010, respectively from Princeton University, Princeton, NJ, USA, all in Electrical Engineering. He is currently an Associate Professor in the School of IE and ECE (by courtesy) at Purdue University, West Lafayette, IN, where he has been since Jan 2015. He was a Senior Member of Technical Staff Research at AT&T Labs-Research, NJ (2010-2014), Adjunct Assistant Professor at Columbia University, NY (2013-2014), and VAJRA Adjunct Professor at IISc Bangalore (2018-2019). His current research interests are in communications and networking, cloud computing, and machine learning. Dr. Aggarwal received Princeton University’s Porter Ogden Jacobus Honorific Fellowship in 2009, the AT&T Vice President Excellence Award in 2012, the AT&T Key Contributor Award in 2013, the AT&T Senior Vice President Excellence Award in 2014, the 2017 Jack Neubauer Memorial Award recognizing the Best Systems Paper published in the IEEE Transactions on Vehicular Technology, and the 2018 Infocom Workshop HotPOST Best Paper Award. He is on the Editorial Board of the IEEE Transactions on Communications, the IEEE Transactions on Green Communications and Networking, and the IEEE/ACM Transactions on Networking.
Dr. Chris Ng
Title: Opportunities for Intelligent Algorithms in 5G Radio Access Networks
Abstract: To meet ever-increasing mobile data demands, the upcoming 5G cellular system has embraced greatly enhanced flexibility and controllability. Together with changing traffic patterns, diverse wireless application requirements, and constant network expansion and new site builds, this has resulted in unprecedented complexity for wireless operators. In this talk, we present opportunities for intelligent control and optimization in various areas of radio access networks (RANs): from site selection, radio frequency (RF) planning, and site preparation to deployment, monitoring, and network optimization and maintenance. In particular, we focus on network management tools enabled by Coherent Active Antenna Arrays, in which the phase and magnitude characteristics of the antenna elements are precisely synchronized at RF. We show that such RF coherency allows efficient beamforming based on environment geometry and long-term channel statistics, which can be leveraged to drive the next generation of self-optimizing intelligent RANs.
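As a toy illustration of beamforming from long-term channel statistics (a sketch under simplified assumptions, not the speaker's system), one can estimate the spatial covariance from channel samples and steer along its dominant eigenvector; the array size and channel model below are arbitrary.

# A minimal sketch: statistics-based beamforming via the dominant eigenvector
# of an estimated long-term spatial covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
num_antennas, num_samples = 16, 200

# Toy channel: a dominant steering direction plus scattered components.
angle = np.deg2rad(20.0)
steering = np.exp(1j * np.pi * np.arange(num_antennas) * np.sin(angle))
H = (np.outer(rng.standard_normal(num_samples), steering)
     + 0.3 * (rng.standard_normal((num_samples, num_antennas))
              + 1j * rng.standard_normal((num_samples, num_antennas))))

# Long-term spatial covariance and its dominant eigenvector.
R = H.conj().T @ H / num_samples
eigvals, eigvecs = np.linalg.eigh(R)
w = eigvecs[:, -1]                       # beamforming weights (unit norm)

# Average beamforming gain relative to the mean per-antenna channel power.
gain = np.mean(np.abs(H @ w) ** 2) / np.mean(np.abs(H) ** 2)
print("beamforming gain (linear):", round(float(gain), 2))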
Biography: Dr. Ng has over 15 years of experience in wireless communications and optimization software systems, including cross-layer wireless network design, multi-antenna/multi-user systems, numerical optimization, and software architecture. Dr. Ng is the author of more than 20 technical papers and multiple patents. His professional experience includes research and development positions with Bell Labs, MIT, Intel, Oracle, and other technology companies. He received his bachelor’s degree in Applied Science from the University of Toronto, and master’s and Ph.D. degrees in Electrical Engineering from Stanford University. Dr. Ng is currently a Director of Systems Engineering Products at Blue Danube Systems in New Jersey, working on the next-generation Massive MIMO and beamforming optimization platform, and a Co-Chair of the Massive MIMO Working Group of the IEEE Beyond 5G Roadmap.
Dr. Zhihui Zhu
Title: Provable Nonsmooth Nonconvex Approaches for Low-Dimensional Models
Abstract: As technological advances in fields such as the Internet, medicine, finance, and remote sensing have produced larger and more complex data sets, we face the challenge of efficiently and effectively extracting meaningful information from large-scale, high-dimensional signals and data. Many modern approaches to this challenge naturally involve nonconvex optimization formulations. Although in theory finding a local minimizer of a general nonconvex problem can be computationally hard, recent progress has shown that many practical (smooth) nonconvex problems obey benign geometric properties and can be efficiently solved to global optimality. In this talk, I will extend this powerful geometric analysis to robust low-dimensional models in which the data or measurements are corrupted by outliers taking arbitrary values. We consider nonsmooth nonconvex formulations of these problems, in which we employ an L1 loss to robustify the solution against outliers. We characterize a sufficiently large basin of attraction around the global minima, enabling us to develop subgradient-based optimization algorithms that rapidly converge to a global minimum from a data-driven initialization. I will also discuss our recent work on general nonsmooth optimization over the Stiefel manifold, which arises widely in engineering. I will illustrate the efficiency of this approach in the context of robust subspace recovery, robust low-rank matrix recovery, robust principal component analysis (RPCA), and orthogonal dictionary learning.
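To convey the flavor of this approach (a sketch under illustrative assumptions, not the speaker's exact algorithm or guarantees), the following uses a subgradient method with a geometrically decaying step size to fit a low-rank factorization under an L1 loss when the observations contain sparse, arbitrarily large outliers:

# A minimal sketch: robust low-rank matrix recovery via an L1 loss and a
# subgradient method with geometrically decaying step size.
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 50, 3

# Ground-truth low-rank matrix plus sparse, arbitrarily large outliers.
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.1
Y = L_true + mask * 10.0 * rng.standard_normal((m, n))

# Factorized variables U, V with a simple spectral (data-driven) initialization.
U0, s, V0t = np.linalg.svd(Y, full_matrices=False)
U = U0[:, :r] * np.sqrt(s[:r])
V = V0t[:r, :].T * np.sqrt(s[:r])

step = 0.01
for t in range(300):
    R = np.sign(U @ V.T - Y)              # a subgradient of the L1 loss
    U, V = U - step * R @ V, V - step * R.T @ U
    step *= 0.99                          # geometric decay of the step size

err = np.linalg.norm(U @ V.T - L_true) / np.linalg.norm(L_true)
print("relative recovery error:", round(float(err), 4))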
Biography: Zhihui Zhu is a Postdoctoral Fellow in the Mathematical Institute for Data Science at the Johns Hopkins University. He received his B.Eng. degree in communications engineering in 2012 from Zhejiang University of Technology (Jianxing Honors College), and his Ph.D. degree in electrical engineering in 2017 from the Colorado School of Mines, where his research was recognized with a Graduate Research Award. His research interests include data science, machine learning, signal processing, and optimization. His current research largely focuses on the theory and applications of nonconvex optimization and low-dimensional models in large-scale machine learning and signal processing problems.