Welcome to the Signal and Information Processing Seminar Series at Rutgers!
The SIP Seminar Series at Rutgers University–New Brunswick brings together a diverse group of researchers, both from within and outside Rutgers, on (approximately) a biweekly basis to discuss recent advances in signal and information processing. The term “Signal and Information Processing,” as used within the SIP Seminars, is rather broad in nature and subsumes signal processing, image processing, statistical inference, machine learning, computer vision, control theory, harmonic analysis, information theory, etc.
Seminar Mailing List: You can subscribe to the SIP Seminars mailing list by sending an email to ECE_SIPemail@example.com with SUBSCRIBE in the subject of the email.
Spring 2018 Seminar Schedule: The SIP Seminars in Spring 2018 will take place on the following dates (typically, Wednesdays) at 2 pm in Room 203 of the Electrical Engineering Building on Busch Campus of Rutgers University–New Brunswick: Jan. 31; Feb. 7, 21; Mar. 5 (Monday), 20 (Tuesday); and Apr. 4, 18.
Spring 2018 SIP Seminars
Title: Through-The-Wall Radar Imaging Using a Distributed Quasi-Newton Method
Abstract: In this talk, we consider a distributed network of through-the-wall imaging radars and provide a solution for accurate indoor scene reconstruction in the presence of multipath propagation. A sparsity-based method is proposed for eliminating ghost targets under imperfect knowledge of interior wall locations. Instead of aggregating and processing the observations at a central fusion station, joint scene reconstruction and estimation of interior wall locations is carried out in a distributed manner across the network. Using an alternating minimization approach, the sparse scene is reconstructed using the recently proposed MDOMP algorithm, while the wall location estimates are obtained with a distributed quasi-Newton method (D-QN), which is the main focus of this talk. We conclude by demonstrating the efficacy of the proposed approach using numerical simulations.
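The quasi-Newton idea at the heart of D-QN can be illustrated with a minimal, centralized one-dimensional sketch (this is not the talk's algorithm, only the underlying principle): a quasi-Newton method replaces the exact second derivative with a cheap approximation, here a secant estimate built from two successive gradient evaluations.

```python
# Minimal (centralized, 1-D) quasi-Newton sketch: the secant method
# approximates the second derivative by a finite difference of
# gradients, avoiding exact Hessian computation.
def secant_newton(grad, x0, x1, iters=20):
    for _ in range(iters):
        g0, g1 = grad(x0), grad(x1)
        if g1 == g0:          # gradient stationary: stop
            break
        # Newton step with the secant slope (g1 - g0) / (x1 - x0)
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
    return x1

# Minimize f(x) = (x - 3)^2 + 1, i.e. find the root of f'(x) = 2(x - 3).
xmin = secant_newton(lambda x: 2 * (x - 3.0), 0.0, 1.0)
print(round(xmin, 6))  # → 3.0
```

Because the gradient here is linear, the secant slope is exact and the method converges in a single step; on the nonlinear objectives arising in wall-location estimation, the approximation is refreshed at every iteration.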
Biography: Haroon Raja is a fifth-year Ph.D. student working with Prof. Waheed Bajwa. His research interests include solving machine learning problems such as dictionary learning, principal component analysis, etc., in distributed and streaming settings.
Title: Game Theoretic Approaches for Design of Information Centric Networks (ICN)
Abstract: Future Internet designs call for increased security, performance reliability, social content distribution, mobility and distributed scalable resource allocation. In this talk, we develop an analytical framework for distribution of popular content in an Information-Centric Network (ICN) that comprises Access ICNs, a Transit ICN and a Content Provider. Using a generalized Zipf distribution to model content popularity, we devise a game theoretic approach to jointly determine caching and pricing strategies in such an ICN. Under the assumption that the caching cost of the access and transit ICNs is inversely proportional to popularity, we show that the Nash caching strategies in the ICN are 0-1 (all or nothing) strategies. Further, for the case of symmetric Access ICNs, we show that the Nash equilibrium is unique and the caching policy (0 or 1) is determined by a threshold on the popularity of the content (reflected by the Zipf probability metric), i.e., all content more popular than the threshold value is cached. We also show that the resulting threshold of the Access and Transit ICNs, as well as all prices, can be obtained by a decomposition of the joint caching and pricing problem into two independent caching-only and pricing-only problems.
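The threshold structure described in the abstract can be pictured with a toy sketch (a hypothetical instance, not the paper's model): under a generalized Zipf popularity distribution, an all-or-nothing policy caches exactly the contents whose popularity exceeds a threshold.

```python
import numpy as np

def zipf_popularity(n_items, alpha=0.8):
    """Generalized Zipf distribution over n_items ranked contents."""
    ranks = np.arange(1, n_items + 1, dtype=float)
    weights = ranks ** (-alpha)
    return weights / weights.sum()

def threshold_caching(popularity, threshold):
    """0-1 (all or nothing) policy: cache item i iff its popularity
    exceeds the threshold, mirroring the Nash strategies in the talk."""
    return popularity >= threshold

p = zipf_popularity(10)
cached = threshold_caching(p, threshold=0.08)  # threshold is illustrative
print(p.round(3))
print(cached)
```

Because Zipf popularity is decreasing in rank, the cached set under any threshold is always a prefix of the popularity ranking.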
Biography: Mohammad is a Ph.D. candidate in WINLAB working with Prof. Narayan Mandayam. His research interests include solving wireless communication problems like caching and pricing, resource allocation and security using game theoretic approaches.
Prof. Pranjal Awasthi
Title: Learning and 1-bit Compressed Sensing Under Asymmetric Noise
Abstract: Given corrupted 1-bit measurements of the form sgn(w* · x), the goal is to recover a vector w that is a good approximation to w*. This problem has been studied in both the learning theory and signal processing communities. In learning theory, this is known as the problem of learning halfspaces with noise, and in signal processing, as 1-bit compressed sensing, in which there is an additional assumption that w* is sparse. The challenge in both cases is to design computationally efficient algorithms tolerant to large amounts of noise. In this talk, I will describe recent work where we give algorithms achieving nearly optimal guarantees for both problems under two well-studied noise models, bounded (Massart) noise and adversarial noise. (Joint work with Nina Balcan, Nika Haghtalab and Hongyang Zhang.)
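The measurement model can be simulated in a few lines; the estimator below is a toy baseline for illustration only (the talk's algorithms are different and come with guarantees). For Gaussian measurement vectors, the expectation of y·x is proportional to w*, so even a plain average of y_i·x_i recovers the direction of w* when the flip probability is a constant below 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 50, 20000, 3  # dimension, measurements, sparsity (toy values)

# Sparse unit-norm signal w* (hypothetical instance).
w_star = np.zeros(d)
w_star[:k] = rng.standard_normal(k)
w_star /= np.linalg.norm(w_star)

# 1-bit measurements y = sgn(<w*, x>) with bounded (Massart-style) noise:
# each sign is flipped independently with probability eta < 1/2.
X = rng.standard_normal((m, d))
y = np.sign(X @ w_star)
eta = 0.1
flips = rng.random(m) < eta
y[flips] *= -1

# Toy estimator: for Gaussian x, E[y x] is proportional to w*,
# so averaging y_i * x_i estimates the direction of w*.
w_hat = (y[:, None] * X).mean(axis=0)
w_hat /= np.linalg.norm(w_hat)
print(abs(w_hat @ w_star))  # close to 1 when the noise is mild
```

Note that this averaging baseline does not exploit sparsity and degrades sharply under adversarial (rather than random) sign flips, which is precisely the regime the talk's algorithms address.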
Biography: Pranjal Awasthi is an Assistant Professor in the Computer Science department at Rutgers University. His research interests are in theoretical aspects of machine learning with a particular focus on the design of robust and efficient learning algorithms as well as new models for interactive machine learning.
Prof. John Paisley
Title: Some new Bayesian inference and modeling approaches to time series data
Abstract: This talk will discuss some new approaches to tracking and time series problems using ideas from Bayesian machine learning. The first part will focus on inference for the classical nonlinear Kalman filtering problem. I will discuss new methods for posterior approximation based on the Kullback-Leibler divergence. The talk will then focus on a general, multiple time series modeling problem. I will discuss a new dynamic matrix factorization approach based on Bayesian nonparametrics, with an extension to deep modeling using the variational autoencoder.
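The linear-Gaussian Kalman filter that the talk's nonlinear methods build on fits in a few lines. The scalar random-walk example below (a toy model, not from the talk) shows the classical predict/update recursion that the exact posterior follows in the linear case; it is this exactness that breaks down for nonlinear models, motivating the KL-based approximations in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
T, q, r = 100, 0.01, 0.25   # steps, process variance, observation variance

x = np.cumsum(rng.normal(0, np.sqrt(q), T))  # latent random walk
y = x + rng.normal(0, np.sqrt(r), T)         # noisy observations

m, P = 0.0, 1.0             # prior mean and variance
est = []
for yt in y:
    P += q                  # predict: variance grows by process noise
    K = P / (P + r)         # Kalman gain
    m += K * (yt - m)       # update: move mean toward the observation
    P *= (1 - K)            # update: posterior variance shrinks
    est.append(m)
est = np.array(est)

# Filtered MSE should beat the raw-observation MSE.
print(np.mean((est - x) ** 2) < np.mean((y - x) ** 2))
```

In this conjugate Gaussian setting the recursion is exact; for nonlinear dynamics the predict step no longer yields a Gaussian, and the posterior must be approximated.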
Biography: John Paisley is an assistant professor in the Department of Electrical Engineering at Columbia University, where he is also a member of the Data Science Institute. He received the B.S. and Ph.D. degrees in Electrical and Computer Engineering from Duke University in 2004 and 2010. Between 2010 and 2013 he was a postdoc in the Computer Science departments at Princeton University and UC Berkeley. His research focuses on Bayesian methods for machine learning, including Bayesian nonparametrics and variational inference techniques. He applies these techniques to several problems in signal and information processing, including compressed sensing and topic modeling.
Prof. Lalitha Sankar
Title: Privacy Challenges in the Emerging Data-driven Economy
Abstract: The emerging data-analytics-driven economy has brought to the forefront an important problem: consumer privacy. While data collection and processing have the promise of tremendous utility to both consumers and society at large, there is a growing need to ensure rigorous privacy guarantees to end users. In this talk, I will address two aspects of privacy: (a) the problem of data publishing with privacy guarantees; and (b) the economics of providing privacy guarantees for consumer-retailer interactions. In the first part of my talk, I will introduce a novel game-theoretic privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn the privacy scheme from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals’ private variables, and an adversary that tries to infer the private variables from the sanitized dataset. In the second part, I will focus on the economics of privacy: in particular, the goal is to understand the competitiveness of online service providers (SPs) that offer free services (thereby implicitly using data collected from consumers) when competing to offer privacy- and quality-of-service (QoS)-differentiated services. Building upon the classical Hotelling model for markets, I will present a parametrized model for SP profit and consumer valuation of service for the two-SP setting and highlight the practical implications of our theoretical analysis. The first part of the talk is joint work with Chong Huang at ASU, and Peter Kairouz, Mark Chen, and Ram Rajagopal at Stanford. The second part is joint work with Chong Huang.
Biography: Lalitha Sankar received the B.Tech. degree from the Indian Institute of Technology, Bombay, the M.S. degree from the University of Maryland, and the Ph.D. degree from Rutgers University. She is presently an Assistant Professor in the ECEE department at Arizona State University. Prior to this, she was an Associate Research Scholar at Princeton University. Following her doctorate, Dr. Sankar was a recipient of a three-year Science and Technology teaching postdoctoral fellowship from the Council on Science and Technology at Princeton University. Her research interests include information privacy and security in distributed and cyber-physical systems. For her doctoral work, she received the 2007-2008 Electrical Engineering Academic Achievement Award from Rutgers University. She received the IEEE Globecom 2011 Best Paper award for her work on side-information privacy. She is a recipient of the NSF CAREER award in 2014.
Prof. Javad Ghaderi
Title: Scheduling Multi-Resource Jobs in Data Center Networks
Abstract: We consider a natural scheduling problem which arises in many distributed computing frameworks. Jobs with diverse resource requirements (e.g. CPU, memory) arrive over time and must be served by a cluster of servers. To improve the throughput and delay, the scheduler can pack as many jobs as possible in the servers; however, the sum of the resource requirements of the jobs placed in a server cannot exceed its capacity. We are interested in scalable scheduling algorithms that can provide optimal or near-optimal throughput guarantees with low complexity. We first show that greedy bin packing heuristics are not throughput optimal and might even cause instability. Then we propose randomized scheduling algorithms that can provably achieve maximum throughput with low complexity. The algorithms are naturally distributed and each server needs to perform only a constant number of operations per time unit. Finally, we consider a setting where the job types are not known and the resource requirements might even come from an unknown distribution over a continuous support. We propose an algorithm that achieves at least 2/3 of the optimal throughput without the knowledge of the jobs’ resource requirement distribution. Based on joint work with Konstantinos Psychas and Mehrnoosh Shafiee.
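The greedy heuristics the abstract refers to can be sketched as follows; this is a hypothetical first-fit example with two-dimensional (CPU, memory) demands, for illustration only, of the kind of policy the talk shows is not throughput optimal.

```python
import numpy as np

def first_fit(jobs, capacity, n_servers):
    """Greedy first-fit: place each job on the first server with
    enough residual (CPU, memory) capacity; reject it otherwise."""
    residual = [np.array(capacity, dtype=float) for _ in range(n_servers)]
    placement = []
    for job in jobs:
        job = np.asarray(job, dtype=float)
        for s, r in enumerate(residual):
            if np.all(job <= r):      # job fits in this server's residual
                residual[s] = r - job
                placement.append(s)
                break
        else:
            placement.append(None)    # no server fits: job rejected/queued
    return placement

# Jobs described by (CPU, memory) demands; each server has capacity (1, 1).
jobs = [(0.6, 0.2), (0.5, 0.5), (0.3, 0.7), (0.4, 0.4)]
placement = first_fit(jobs, capacity=(1.0, 1.0), n_servers=2)
print(placement)  # → [0, 1, 0, 1]
```

The constraint checked on each placement is exactly the capacity condition in the abstract: the vector sum of demands in a server must stay componentwise below its capacity.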
Biography: Javad Ghaderi is an Assistant Professor of Electrical Engineering at Columbia University. His research interests include algorithms, control, and optimization with application to communication networks. He received his B.Sc. from the University of Tehran, Iran, in 2006, his M.Sc. from the University of Waterloo, Canada, in 2008, and his Ph.D. from the University of Illinois at Urbana-Champaign (UIUC), in 2013, all in Electrical and Computer Engineering. He held a one-year Simons Postdoctoral Fellowship at the University of Texas at Austin. He is the recipient of the Mac Van Valkenburg Graduate Research Award at UIUC, Best Student Paper Finalist at the 2013 American Control Conference, Best Paper Award at ACM CoNEXT 2016, and NSF CAREER Award in 2017.
Serge Kas Hanna
Title: Guess & Check Codes for Deletions, Insertions, and Synchronization
Abstract: We consider the problem of constructing binary codes that can correct multiple deletions. Varshamov-Tenengolts (VT) codes, dating back to 1965, are zero-error single-deletion correcting codes, and have an asymptotically optimal redundancy. Finding similar codes for more than one deletion remains an open problem. In our work, we make progress by relaxing the standard zero-error decoding requirement. To this end, we assume that the input message is uniform i.i.d. and that the positions of the deletions are independent of the codeword. Our contribution is a new family of systematic codes, which we call Guess & Check (GC) codes, that can correct multiple deletions (or insertions) with high probability. GC codes have logarithmic redundancy and deterministic polynomial-time encoding and decoding algorithms.
In this talk, I will discuss the construction of GC codes and present the theoretical results on the properties of these codes, i.e., redundancy, complexity, and probability of successful decoding. I will also present numerical simulations, which include results on the application of GC codes to file synchronization. I will also briefly discuss extensions of GC codes to correcting localized (bursty) deletions and to list decoding.
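The single-deletion case that the abstract takes as a starting point is concrete enough to sketch. A VT code fixes the weighted checksum sum_i i*x_i mod (n+1) of every codeword, and a single deletion can be corrected with zero error; brute-force reinsertion is shown here for clarity (the classical VT decoder runs in linear time).

```python
def vt_syndrome(x):
    """VT checksum: sum of i * x_i (1-indexed) over a binary word."""
    return sum(i * b for i, b in enumerate(x, start=1))

def in_vt_code(x, a):
    """Membership test for VT_a(n): checksum equals a mod (n + 1)."""
    return vt_syndrome(x) % (len(x) + 1) == a

def correct_one_deletion(y, n, a):
    """Recover the unique length-n VT_a(n) codeword from which one bit
    was deleted, by trying every possible reinsertion."""
    for pos in range(n):
        for bit in (0, 1):
            cand = y[:pos] + [bit] + y[pos:]
            if in_vt_code(cand, a):
                return cand
    return None

codeword = [1, 0, 1, 1, 0, 1, 0, 0]          # a length-8 word
a = vt_syndrome(codeword) % 9                # its VT_a(8) syndrome
received = codeword[:3] + codeword[4:]       # delete the 4th bit
recovered = correct_one_deletion(received, 8, a)
print(recovered == codeword)  # → True
```

Zero-error correction holds because no two distinct VT codewords share a common length-(n-1) subsequence, so only the original codeword survives the membership test.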
Biography: Serge Kas Hanna is a third-year Ph.D. student at Rutgers University, working with Prof. Salim El Rouayheb. He received a Diploma in computer and communications engineering from the Lebanese University, Faculty of Engineering, in 2015. He also received an M.S. degree in computer and information systems engineering from the Lebanese University, Doctoral school, in 2015. His research interests are in the broad area of information theory and coding theory with a focus on coding for synchronization.