Fall 2021 SIP Seminars (Host: Prof. Soljanin)
09/21/2021 - FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning - Presented by Dr. Swanand Kadhe (U. of California, Berkeley)
Dr. Swanand Kadhe
Title: FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning
Abstract: In modern large-scale machine learning, federated learning has emerged as an important paradigm, where the training data remains distributed over a large number of clients (e.g., mobile phones, smart devices). In federated learning, each client trains a neural network model locally using their data, and the central server aggregates these local models to obtain an improved model. However, recent attacks have demonstrated that model parameters shared by clients can leak significant amounts of information about their training data, making privacy preservation a critical concern.
In this talk, I will present a secure aggregation protocol, FastSecAgg, that enables the central server to average local models in a privacy-preserving manner while being robust to client dropouts. FastSecAgg reduces the computation cost at the server by several orders of magnitude compared to state-of-the-art schemes, and guarantees security against the server colluding with any subset of up to some constant fraction (e.g., ~10%) of clients in the honest-but-curious setting. I will highlight the main building block of FastSecAgg: a novel multi-secret sharing scheme, FastShare, powered by the (finite field) Fast Fourier Transform (FFT). Finally, I will demonstrate that FastSecAgg achieves accuracy similar to vanilla federated averaging on LEAF benchmark datasets for federated learning.
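The full FastShare construction uses FFTs over a finite field, but the core idea of secure aggregation can be illustrated with a minimal additive secret-sharing sketch (an illustrative simplification, not the FastShare scheme itself): the server recovers the sum of the client updates without ever seeing any individual update.

```python
import random

P = 2**31 - 1  # a public prime modulus (illustrative choice)

def share(secret, n):
    """Split an integer secret into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Three clients, each holding a scalar "model update".
updates = [5, 11, 27]
n = len(updates)

# Client i distributes share j of its update to client j.
all_shares = [share(u, n) for u in updates]

# Each client forwards only the SUM of the shares it received...
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]

# ...so the server learns the aggregate, but no individual update.
aggregate = sum(partial_sums) % P
assert aggregate == sum(updates) % P
```

Any single share (or partial sum) is uniformly random on its own; only the final combination reveals the aggregate, which is the property secure aggregation needs.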
Biography: Swanand Kadhe is a postdoctoral researcher in the EECS Department at the University of California Berkeley. He earned his Ph.D. degree in Electrical and Computer Engineering from Texas A&M University in 2017. He is a recipient of the 2016 Graduate Teaching Fellowship from the College of Engineering at Texas A&M University. He has been a visiting researcher at Nokia Bell Labs, Duke University, and The Chinese University of Hong Kong. From 2009 to 2012, he was an R&D engineer at the TCS Innovation Labs, Bangalore. His research interests lie broadly in federated and distributed machine learning, information and coding theory, privacy and security, and blockchains.
10/05/2021 - Machine Learning-Aided Channel Coding: Opportunities and Challenges - Presented by Prof. Hessam Mahdavifar (Michigan)
Prof. Hessam Mahdavifar
Title: Machine Learning-Aided Channel Coding: Opportunities and Challenges
Abstract: Today, channel codes are among the fundamental parts of any communication system, including cellular, WiFi, and deep space, among others, enabling reliable communications in the presence of noise. Decades of research have led to breakthrough inventions of various families of channel codes. Yet no unified approach exists for answering these two fundamental questions: Given a channel, how do we efficiently construct the best possible code? And given a channel code, how do we design an efficient and optimal decoder? In this talk, we will discuss how the remarkable advancements in data-driven machine learning (ML) can be leveraged toward answering these questions. In particular, we will focus on a class of codes rooted in the Plotkin recursive construction. This class includes Reed–Muller (RM) codes, the state-of-the-art binary algebraic codes, as well as polar codes, the first capacity-achieving codes with explicit, i.e., non-randomized, constructions. In the first part of this talk, we will present an efficient and close-to-optimal decoder obtained for RM codes by learning a pruning process applied to an exponentially complex decoder. In the second part, we will tackle the fundamental problem of designing new channel codes. In particular, we will demonstrate KO codes, a new class of channel codes designed by training neural networks while preserving Plotkin-like structures. KO codes beat both their RM and polar code counterparts, under successive cancellation decoding, in the challenging short-to-medium blocklength regime. We will also discuss various challenges that must be overcome to pave the way for adopting such ML-aided channel coding strategies in practice.
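The Plotkin recursion underlying this class of codes is compact enough to sketch directly. The following minimal illustration (not the learned decoder or KO construction from the talk) builds Reed–Muller codes from the classic (u, u+v) construction:

```python
from itertools import product

def plotkin(U, V):
    """(u, u+v) construction: combine codes U and V of length n into a code of length 2n."""
    return [u + tuple(a ^ b for a, b in zip(u, v)) for u in U for v in V]

def rm(r, m):
    """Reed-Muller code RM(r, m) via the Plotkin recursion."""
    if r == 0:                      # repetition code of length 2^m
        return [(0,) * (1 << m), (1,) * (1 << m)]
    if r == m:                      # the full space {0,1}^(2^m)
        return [tuple(w) for w in product((0, 1), repeat=1 << m)]
    return plotkin(rm(r, m - 1), rm(r - 1, m - 1))

code = rm(1, 3)                     # the [8, 4, 4] first-order RM code
dmin = min(sum(c) for c in code if any(c))  # min weight = min distance (linear code)
print(len(code), dmin)              # 16 codewords, minimum distance 4
```

Polar codes arise from the same recursive kernel with a different choice of frozen coordinates, which is why both families fit naturally into the Plotkin-structured design space the talk explores.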
Biography: Hessam Mahdavifar is an Assistant Professor in the Department of Electrical Engineering and Computer Science at the University of Michigan. He received the B.Sc. degree from Sharif University of Technology in 2007, and the M.Sc. and the Ph.D. degrees from the University of California San Diego (UCSD) in 2009 and 2012, respectively, all in Electrical Engineering. He was with the Samsung Mobile Solutions Lab between 2012 and 2016. His general research interests are in coding and information theory with applications to wireless communications, machine learning, and security. He has won several awards including the NSF CAREER award in 2020, the Best Paper Award in the 2015 IEEE International Conference on RFID, the UCSD Shannon Memorial Fellowship, and two Silver Medals at the International Mathematical Olympiad.
10/19/2021 - Overcoming Data Availability Attacks in Blockchain Systems: A Graph-Coding Perspective - Presented by Prof. Lara Dolecek (U. of California, LA)
Prof. Lara Dolecek
Title: Overcoming Data Availability Attacks in Blockchain Systems: A Graph-Coding Perspective
Abstract: Blockchain systems are already gaining popularity in a variety of applications due to their decentralized design, which is favorable in many settings. To overcome excessive storage and latency burdens, respectively, light nodes and side blockchains have been proposed as enhancements to the basic blockchain architecture. However, both light nodes and side chains are vulnerable to data availability (DA) attacks by malicious nodes. Recently, an erasure-code-based technique called the Coded Merkle Tree (CMT) was proposed by Yu et al. that enables light nodes to detect a DA attack with high probability. The CMT method relies on the use of random LDPC codes. We build on this previous work and demonstrate that graph codes specifically designed for the target applications in blockchain systems perform better than randomly constructed codes; intriguingly, the new finite-length code optimization framework unveils code properties beyond the established metrics.
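The intuition behind sampling-based DA detection can be captured with a back-of-the-envelope model (sampling with replacement; a simplification, not the exact CMT analysis): once the erasure code forces an adversary to hide at least a constant fraction f of the coded symbols to block decoding, a light node taking s random samples misses the attack only with probability (1 - f)^s.

```python
def detection_prob(f, s):
    """Probability that s uniform samples (with replacement) hit at least
    one of the hidden symbols, when a fraction f of symbols is withheld."""
    return 1 - (1 - f) ** s

# With erasure coding, undecodability forces the adversary to hide a
# constant fraction, e.g. f = 0.25, so a handful of samples suffices:
for s in (5, 10, 30):
    print(s, round(detection_prob(0.25, s), 4))
```

Without coding, the adversary could hide a single critical block (f close to 0), and sampling would almost never catch it; the erasure code is what converts "hide one block" into "hide a constant fraction", making light-node sampling effective.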
Biography: Lara Dolecek is a Full Professor with the Electrical and Computer Engineering Department and Mathematics Department (courtesy) at the University of California, Los Angeles (UCLA). She holds B.S. (with honors), M.S., and Ph.D. degrees in Electrical Engineering and Computer Sciences, as well as an M.A. degree in Statistics, all from the University of California, Berkeley. She has received several awards for her research and teaching, including the David J. Sakrison Memorial Prize from UC Berkeley, the NSF CAREER Award, an IBM Faculty Award, an Okawa Research Grant, and the Northrop Grumman Excellence in Teaching Award from UCLA. With her research group and collaborators, she has received numerous best paper awards. She currently serves as an Associate Editor for IEEE Transactions on Information Theory and as the Secretary of the IEEE Information Theory Society. Prof. Dolecek is a 2021-2022 Distinguished Lecturer of the IEEE Information Theory Society. She has served as a consultant for a number of companies specializing in data communications and storage. In her current research, she is especially excited to explore the role of channel coding methods in blockchain systems, quantum information systems, and distributed storage and computing.
11/02/2021 - Scalable and Energy Efficient IoT Networks Based on Cell-Free Massive MIMO - Presented by Dr. Alexei Ashikhmin (Bell Labs)
Dr. Alexei Ashikhmin
Title: Scalable and Energy Efficient IoT Networks Based on Cell-Free Massive MIMO
Abstract: In recent years, the Internet of Things (IoT) has gained significant momentum as a hot topic in both industry and academia. This technology promises to turn each physical object into an Internet node and to enable a whole new class of exciting applications and services for Industry 4.0, Healthcare 4.0, and Society 5.0, such as smart homes and cities, smart grids, IoT retail shops, urban security, and environmental monitoring, to name a few.
While opening new and exciting business opportunities, IoT at the same time represents an entirely new challenge for physical-layer wireless communications. IoT challenges include scalability (the number of wirelessly connected IoT devices is expected to grow exponentially), low transmit power requirements (things are anticipated to either use energy harvesting or infrequently replaced batteries), sporadic data transmission, low and ultra-low latency requirements, and others. Wireless massive MIMO systems appear to be one of the best candidates for resolving these challenges.
In this talk I will present my results, obtained jointly with my colleagues, on IoT networks supported by cell-free massive MIMO. I will consider a compressed sensing type algorithm for active things detection, random matrix theory approach for performance estimation, and transmit power optimization with machine learning for obtaining energy efficient and scalable IoT networks.
Biography: Dr. Alexei Ashikhmin is a Distinguished Member of Technical Staff in the Communications and Statistical Sciences Research Department of Nokia Bell Labs. Alexei is also an adjunct professor at Columbia University, where he teaches courses on Digital Communications, Quantum Computing and Communications, and Error Correcting Codes: Classical and Modern.
Alexei’s research interests include communications theory, massive MIMO systems, quantum information theory and error correction, theory of error correcting codes and its modern applications.
Alexei is a recipient of the 2017 SPS Donald G. Fink Overview Paper Award for the article “An Overview of Massive MIMO: Benefits and Challenges,” published in the IEEE Journal of Selected Topics in Signal Processing. In 2004, Alexei and his co-authors received the IEEE Communications Society Stephen O. Rice Prize for the best paper published in IEEE Transactions on Communications. In 2014 he received the Thomas Edison Patent Award in Telecommunications for a patent on a massive MIMO system with decentralized service antennas. In 2019 he received the Top Ten Nokia Inventors Award for the all-time highest number of granted patents. In 2011, 2010, and 2002 he was a recipient of the Bell Laboratories President Award for breakthrough research in wired and wireless communication projects.
11/16/2021 - Convertible Codes: Enabling Redundancy Tuning in Large-scale Storage Systems - Presented by Prof. Rashmi K. Vinayak (CMU)
Prof. Rashmi K. Vinayak
Title: Convertible Codes: Enabling Redundancy Tuning in Large-scale Storage Systems
Abstract: In large-scale data storage systems, erasure codes are employed to store data in a redundant fashion to protect against data loss. In this setting, a set of k data blocks to be stored is encoded using an [n, k] code to generate n blocks that are then stored on distinct storage devices. In contrast to the static redundancy configuration used in current systems, we show that the failure rates of devices vary significantly over time. Dynamically tuning the redundancy to match the observed failure rates provides more than 15% cost and energy savings (translating to savings of millions of dollars). However, traditional codes suffer from prohibitively high resource overheads when changing the code parameters on already encoded data.
In this talk, we:
- Introduce the concept of redundancy tuning and its benefits using real-world production data from large-scale cluster storage systems,
- Present a new theoretical framework to formalize the notion of “code conversion”—the process of converting data encoded using an [n, k] code into data encoded using a code with different parameters [n’, k’], while maintaining desired decodability properties,
- Introduce “convertible codes”, a new class of codes that enable resource-efficient conversion,
- Prove tight bounds on resource requirements of convertible codes and present optimal explicit constructions.
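As a toy illustration of why conversion can be cheaper than re-encoding from scratch (a hypothetical single-parity example, far simpler than the constructions in the talk), consider merging two [3, 2] single-parity stripes into one [5, 4] stripe: the new parity can be computed from the old parities alone, without reading any data blocks.

```python
from functools import reduce
from operator import xor

def parity(blocks):
    """Single-parity encode: append the XOR of the data blocks."""
    return blocks + [reduce(xor, blocks)]

# Two stripes encoded with a [3, 2] single-parity code.
stripe_a = parity([0b1010, 0b0110])
stripe_b = parity([0b1100, 0b0001])

# Convert to one [5, 4] stripe WITHOUT touching the data blocks:
# the new parity is just the XOR of the two old parities.
merged = stripe_a[:2] + stripe_b[:2] + [stripe_a[2] ^ stripe_b[2]]

# Sanity check against re-encoding all four data blocks from scratch.
assert merged == parity(stripe_a[:2] + stripe_b[:2])
```

The convertible codes of the talk pursue the same goal, minimizing the data that must be read and recomputed during conversion, but for general MDS parameters where the savings are far less obvious than in this XOR toy case.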
Biography: Rashmi Vinayak is an assistant professor in the Computer Science department at Carnegie Mellon University. Rashmi is a recipient of NSF CAREER Award 2020-25, Tata Institute of Fundamental Research Memorial Lecture Award 2020, Facebook Distributed Systems Research Award 2019, Google Faculty Research Award 2018, and Facebook Communications and Networking Research Award 2017. Her work has received USENIX NSDI 2021 Community (Best Paper) Award, UC Berkeley Eli Jury Dissertation Award 2016, and IEEE Data Storage Best Paper and Best Student Paper Awards for 2011/2012. During her Ph.D. studies, Rashmi was a recipient of Facebook Fellowship 2012-13, the Microsoft Research PhD Fellowship 2013-15, and the Google Anita Borg Memorial Scholarship 2015-16. Rashmi received her Ph.D. from UC Berkeley in 2016, and was a postdoctoral scholar at UC Berkeley’s AMPLab/RISELab from 2016-17. Webpage: http://www.cs.cmu.edu/~rvinayak/
11/24/2021 - Spatio-Temporal Inference and Learning using Multiple Hypothesis Testing and Multiple Change-Point Detection - Presented by Prof. Visa Koivunen (Aalto University, Finland)
Prof. Visa Koivunen
Title: Spatio-Temporal Inference and Learning using Multiple Hypothesis Testing and Multiple Change-Point Detection
Abstract: We address the problems of learning and inference for large-scale sensor networks observing streaming, spatio-temporally varying data. The developed methods stem from distributed learning, multiple hypothesis testing (MHT), and multiple change-point detection for multiple physical-world data streams. The proposed learning and inference methods control the False Discovery Rate (FDR) in dynamic spatio-temporal settings. In distributed sensing, different probability models may be in place at different locations; hence we learn and use empirical probability models, employing Bayesian and nonparametric techniques as well as different FDR variants. For time-critical applications and setups where the detection of multiple abrupt changes, anomalies, or adversarial activities is crucial, we develop sequential learning methods that perform multiple change-point detection for multi-stream data, taking into account the spatial nature of the phenomenon of interest while strictly controlling the FDR. Analytical results on strict FDR control and proofs on the average delay in learning and inference are established. Applications include the Internet of Things, radar systems, identifying adversarial activity in networks, agile radio spectrum use, air quality monitoring, the smart grid, and the spread of epidemics.
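For readers unfamiliar with FDR control, the classical (static) Benjamini-Hochberg procedure that such spatio-temporal methods generalize can be sketched as follows (a standard textbook procedure, not the sequential variants developed in the talk):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at FDR level q (BH procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    # Find the largest rank k with p_(k) <= k*q/m; reject the k smallest p-values.
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, q=0.05))  # → [0, 1]
```

BH guarantees that, among all rejections, the expected fraction of false discoveries is at most q for independent tests; the challenge addressed in the talk is retaining such guarantees when the tests arrive sequentially and are coupled across space and time.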
Biography: Visa Koivunen (IEEE Fellow, EURASIP Fellow) received his D.Sc. (EE) degree with honors from the Department of Electrical Engineering, University of Oulu. He was a visiting researcher at the University of Pennsylvania, Philadelphia, USA, in 1991-1995. Since 1999 he has been a full Professor of Signal Processing at Aalto University, Finland. He was appointed Academy Professor in 2010 and Aalto Distinguished Professor in 2020. From 2002 to 2013 he was one of the PIs in the SMARAD Center of Excellence in Research nominated by the Academy of Finland. From 2003 to 2006 he was also an adjunct full professor at the University of Pennsylvania, USA. During his sabbatical terms in 2006-2007 and 2013-2014 he was a visiting fellow at Princeton University. He has also been a visiting fellow at Nokia Research (2006-2012). Since 2010 he has been a part-time visiting fellow at Princeton University and has spent mini-sabbaticals there each year.
Dr. Koivunen’s research interests include statistical signal processing, wireless communications, radar, multisensor systems, and machine learning. He has published more than 450 papers in international scientific conferences and journals and holds 5 patents. He has advised 29 Ph.D. theses. He has co-authored multiple papers that received best paper awards at IEEE and other conferences. He was awarded the IEEE Signal Processing Society Best Paper Award for the years 2007 (with J. Eriksson) and 2017 (with Zoubir, Muma, and Chakhchouk). He has served on the editorial boards of IEEE Signal Processing Letters, IEEE Transactions on Signal Processing, and IEEE Signal Processing Magazine. He was awarded the 2015 EURASIP (European Association for Signal Processing) Technical Achievement Award for fundamental contributions to statistical signal processing and its applications in wireless communications, radar, and related fields. He has served on the IEEE Fourier Award, Kilby Medal, and Fellow Evaluation committees, SPS technical committees, and the AESS Radar Systems Panel, was an IEEE Signal Processing Society Distinguished Lecturer in 2015-2016, and has given more than 50 invited talks at world-leading research universities and institutes.
11/30/2021 - Privacy against Pattern Matching Attacks - Presented by Prof. Hossein Pishro-Nik (UMass Amherst)
Prof. Hossein Pishro-Nik
Title: Privacy against Pattern Matching Attacks
Abstract: The privacy of users in modern IoT applications has recently attracted attention. In this talk we focus on the following scenario: suppose we are given a large number of sequences over a given alphabet, and an adversary is interested in identifying (de-anonymizing) a specific target sequence based on its patterns. Our goal is to thwart such an adversary by obfuscating the target sequence, applying artificial (but small) distortions to its values. A key point here is that we would like to make no assumptions about the statistical model of such sequences. This is in contrast to the existing literature, where assumptions (e.g., Markov chains) are usually made regarding such sequences to obtain privacy guarantees. We relate this problem to a set of combinatorial questions on sequence construction, based on which we are able to obtain provable guarantees. This problem could be relevant to important privacy applications: from fingerprinting webpages visited by users through anonymous communication systems, to linking communicating parties on messaging applications, to inferring the activities of users of IoT devices.
Biography: Hossein Pishro-Nik is a professor of electrical and computer engineering at the University of Massachusetts, Amherst. He received a B.S. degree from Sharif University of Technology, and M.Sc. and Ph.D. degrees from the Georgia Institute of Technology, all in electrical and computer engineering. His research interests include information and coding theory, privacy & security, stochastic analysis of wireless networks, and vehicular communications. He has received an NSF Faculty Early Career Development (CAREER) Award, an Outstanding Junior Faculty Award from UMass, and an Outstanding Graduate Research Award from the Georgia Institute of Technology.