Fall 2017 SIP Seminars

Prof. Roy Yates

Title: The Age of Information: Status updates for real-time systems and networks

Abstract: Increasingly ubiquitous network connectivity has engendered applications in which sources send updates of their status to interested recipients. These applications require timely status updates at the recipients; however, timeliness is typically constrained by limited communication and network resources. In this talk, we introduce a status-age timeliness metric for the evaluation of status update systems. We develop general methods for calculating the age metric and apply them to network models of sources and service facilities. We characterize optimal updating rates and policies under the communication constraints of the senders and systems.
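
As a toy illustration of the status-age metric (a minimal sketch under standard modeling assumptions, not code from the talk), the following Python simulation estimates the time-average age of information in a first-come-first-served M/M/1 update queue, where the age at time t is t - u(t) and u(t) is the generation time of the most recently received update:

    import random

    # Minimal sketch: estimate the time-average age of information (AoI)
    # for an M/M/1 FCFS status-update queue with Poisson update generation
    # (rate lam) and exponential service (rate mu).
    def average_age(lam=0.5, mu=1.0, n_updates=200_000, seed=1):
        rng = random.Random(seed)
        t_gen = 0.0        # generation time of the current update
        t_free = 0.0       # time at which the server becomes free
        deliveries = []    # (generation time, delivery time) pairs
        for _ in range(n_updates):
            t_gen += rng.expovariate(lam)         # Poisson arrivals
            start = max(t_gen, t_free)            # FCFS queueing
            t_free = start + rng.expovariate(mu)  # exponential service
            deliveries.append((t_gen, t_free))
        # Integrate the sawtooth age curve between successive deliveries.
        area = 0.0
        prev_g, prev_d = deliveries[0]
        for g, d in deliveries[1:]:
            lo, hi = prev_d - prev_g, d - prev_g  # age just after / just before update
            area += 0.5 * (lo + hi) * (d - prev_d)
            prev_g, prev_d = g, d
        return area / (prev_d - deliveries[0][1])

    print(average_age())  # approaches 3.5, the known M/M/1 value at lam/mu = 0.5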

Biography: Roy Yates received the B.S.E. degree in 1983 from Princeton University, and the S.M. and Ph.D. degrees in 1986 and 1990 from M.I.T., all in Electrical Engineering. Since 1990, he has been with the Wireless Information Networks Laboratory (WINLAB) and the Electrical and Computer Engineering (ECE) department at Rutgers University, where he is currently a Distinguished Professor of ECE and Associate Director of WINLAB. Elevated to IEEE Fellow in 2011, Dr. Yates is a past Associate Editor of the IEEE Journal on Selected Areas in Communications Series in Wireless Communications and a past Associate Editor for Communication Networks of the IEEE Transactions on Information Theory.

Rawad Bitar

Title: Staircase Codes for Fast and Secure Distributed Computing

Abstract: We consider the setting of a master server that possesses confidential data (genomic or medical data, for example) and wants to run intensive computations on it, say as part of a machine-learning algorithm. The master wants to distribute these computations to untrusted workers who have volunteered, or are incentivized, to help with this task. However, the data must be kept private (in an information-theoretic sense) and not revealed to the individual workers. The workers may be busy and introduce random delays in finishing the tasks assigned to them. We are interested in secure coding schemes that can guarantee privacy and reduce the aggregate delay experienced by the master. Towards that goal, I will introduce a new family of codes, called Staircase codes, which guarantee privacy and reduce delays. I will briefly discuss the construction of these codes and present theoretical guarantees on their delay performance under an exponential delay model. Then, I will present results from experiments we ran on the Amazon cloud. I will conclude with some problems that remain open.
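
The privacy ingredient can be illustrated with classic Shamir secret sharing over a prime field (a toy sketch of the general idea, not the Staircase construction itself): any k - 1 workers learn nothing about the data, while the results returned by any k workers suffice to decode, so the master only waits for the k fastest workers:

    import random

    P = 2**61 - 1  # prime modulus; all arithmetic is mod P

    def share(secret, n, k, rng):
        # Degree-(k-1) polynomial with constant term `secret`; any k-1
        # shares are statistically independent of the secret.
        coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(points):
        # Lagrange interpolation at 0 recovers the constant term.
        total = 0
        for xi, yi in points:
            num = den = 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    rng = random.Random(0)
    secret = 123456789
    shares = share(secret, n=5, k=3, rng=rng)
    # Each worker applies the same public linear map to its share (here: multiply by 7).
    results = [(x, 7 * y % P) for x, y in shares]
    # Any k = 3 returned results (e.g., from the 3 fastest workers) decode 7 * secret.
    assert reconstruct(random.sample(results, 3)) == 7 * secret % P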

Biography: Rawad Bitar is a Ph.D. student in the ECE department at Rutgers University, New Jersey, working with Prof. Salim El Rouayheb. He received the Diploma degree in computer and communication engineering from the Lebanese University, Faculty of Engineering, Roumieh, Lebanon, in 2013, and the M.S. degree from the Lebanese University, Doctoral School, Tripoli, Lebanon, in 2014. His research interests are in the broad area of information theory and coding theory, with a focus on coding for distributed storage and information-theoretic security.

Prof. Pierre Bellec

Title: How to Generalize Bias and Variance to Convex Regularized Estimators?

Abstract: Convex estimators such as the Lasso, the matrix Lasso and the group Lasso have been studied extensively over the last two decades, demonstrating great success in both theory and practice. However, basic open questions about these estimators remain, even in the simple linear regression model. We are particularly interested in the following open questions.

1) The bias and variance of a linear estimator are easy to define and provide precise insight into its performance. How can bias and variance be generalized to nonlinear convex estimators?

2) The performance guarantees for these estimators require the tuning parameter to be larger than some universal threshold, but the literature is mostly silent about what happens below it. How bad is the performance when the tuning parameter falls below the universal threshold?

3) Correlations in the design can significantly deteriorate the empirical performance of these nonlinear estimators. Is it possible to quantify this deterioration explicitly? Is there a price to pay for correlations; in particular, is the performance for correlated designs always worse than that for orthogonal designs?

4) Most theoretical results on the Lasso and its variants rely on conditions on the design matrix. These conditions greatly simplify the proofs and our understanding of these estimators, but it is still unclear whether they are truly necessary or whether they are an artifact of the proofs. Are these conditions actually necessary?

We will provide some general properties of norm-penalized estimators and propose a generalization of the bias and the variance for these nonlinear estimators. These generalizations of bias/variance will hopefully let us answer the above questions.
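
As a small numerical illustration of question 1 (my sketch, not from the talk): in an orthogonal design the Lasso acts coordinate-wise by soft thresholding, and a Monte Carlo experiment shows that while the mean squared error still splits exactly as squared bias plus variance, the split now depends on the unknown parameter itself, unlike for a linear estimator:

    import numpy as np

    rng = np.random.default_rng(0)
    theta, lam, sigma, reps = 1.0, 0.8, 1.0, 200_000

    # Orthogonal-design Lasso = soft thresholding of the noisy observation.
    y = theta + sigma * rng.standard_normal(reps)
    est = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

    bias2 = (est.mean() - theta) ** 2  # squared bias
    var = est.var()                    # variance
    mse = np.mean((est - theta) ** 2)  # mean squared error
    print(f"bias^2 = {bias2:.3f}, var = {var:.3f}, "
          f"bias^2 + var = {bias2 + var:.3f}, mse = {mse:.3f}")
    # Rerunning with a different theta changes the bias/variance split,
    # which is exactly what a useful generalization must account for.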

Biography: Pierre has been an assistant professor in the Department of Statistics and Biostatistics at Rutgers since Fall 2016. Prior to that, he obtained his PhD from ENSAE in Paris. His research interests include aggregation of estimators, shape-restricted regression, confidence sets, high-dimensional statistics and concentration inequalities.

Dr. Ankit Singh Rawat

Title: Faster Data-Processing in Cloud-based Systems

Abstract: The cloud is an increasingly popular choice for storing and analyzing vast amounts of data. Besides serving as a backbone for large institutions such as CERN, Google, Microsoft, Amazon, and Facebook, cloud systems have transformed the way most businesses operate today. It is imperative that these systems operate in a fault-tolerant and resource-efficient manner and provide fast, seamless access to the information they store. In this talk, I'll present a coding-theoretic framework to address these key challenges arising in the context of cloud systems. I'll describe a new construction of codes that ensures resource-efficient regeneration of content lost due to server failures. These codes provide practical regeneration schemes without compromising the fault tolerance of the system. Focusing on the issue of fast computation in a cloud computing setup, I'll then discuss how coded computation enables successful inference in the presence of straggling servers.
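
A toy example of the coded-computation idea (a simplified MDS sketch under my own assumptions, not the specific construction from the talk): split a matrix A into k row blocks, hand each of n workers one coded block, and recover A x from any k worker results, so the n - k slowest workers (the stragglers) can be ignored:

    import numpy as np

    rng = np.random.default_rng(1)
    k, n = 2, 4
    A = rng.standard_normal((6, 5))
    x = rng.standard_normal(5)
    blocks = np.split(A, k)  # row blocks A1, A2

    # Vandermonde generator: any k of its n rows form an invertible matrix (MDS).
    G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

    # Each worker i returns coded[i] @ x; suppose only workers 1 and 3 finish.
    done = [1, 3]
    results = np.stack([coded[i] @ x for i in done])

    # Decode by inverting the k x k sub-generator of the finished workers.
    decoded = np.linalg.solve(G[done], results)
    assert np.allclose(decoded.ravel(), A @ x)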

Biography: Ankit Singh Rawat is currently a joint postdoc at MIT and UMass Amherst. Earlier, during 2015-16, he was a postdoctoral fellow in the Computer Science Department at CMU. Ankit received his Ph.D. in Electrical and Computer Engineering from UT Austin in 2015 and his Bachelor’s degree in Electrical Engineering from the Indian Institute of Technology at Kanpur in 2010. He is a recipient of the Microelectronics and Computer Development Fellowship from UT Austin. His research interests include coding theory, information theory, statistical learning, security & privacy, and neuro-inspired computing.

Jie Shen

Title: Towards a Principled Analysis of Support Recovery from Compressed Measurements

Abstract: In machine learning and compressed sensing, it is of central importance to understand when a tractable algorithm recovers the support of a sparse signal from its compressed measurements. In this work, we present a principled analysis of the support recovery performance of a family of hard thresholding algorithms. To this end, we appeal to the recently proposed partial hard thresholding (PHT) operator. We show that under proper conditions, PHT recovers an arbitrary s-sparse signal within O(s κ log κ) iterations, where κ is an appropriate condition number. By specializing the PHT operator, we obtain the best known results for two prominent sparse recovery algorithms: hard thresholding pursuit and orthogonal matching pursuit with replacement. Our empirical study closely matches the theoretical findings.
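
For concreteness, here is a simplified member of this algorithm family (plain iterative hard thresholding rather than the PHT operator itself, and my own sketch rather than the paper's code) recovering an s-sparse signal from noiseless compressed measurements:

    import numpy as np

    def iht(A, y, s, iters=300):
        # Gradient step on ||y - Ax||^2 followed by hard thresholding to
        # the s largest entries; a conservative step size keeps it stable.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x + step * A.T @ (y - A @ x)   # gradient step
            keep = np.argsort(np.abs(x))[:-s]  # indices of all but the s largest
            x[keep] = 0.0                      # hard thresholding
        return x

    rng = np.random.default_rng(0)
    m, d, s = 100, 200, 5
    x_true = np.zeros(d)
    x_true[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
    A = rng.standard_normal((m, d)) / np.sqrt(m)  # RIP-friendly random design
    x_hat = iht(A, A @ x_true, s)
    # With these dimensions the support is typically recovered exactly.
    print(set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))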

Biography: Jie Shen is currently a fourth-year Ph.D. candidate in the Department of Computer Science at Rutgers University, co-advised by Ping Li and Pranjal Awasthi. He received his master's degree in computer science and his bachelor's degree in mathematics from Shanghai Jiao Tong University in China. From 2013 to 2014, he was a visiting scholar at the National University of Singapore. His research interests mainly include machine learning, compressed sensing and convex/non-convex optimization.

Dr. Yanina Shkel

Title: Data Compression and Learning with Logarithmic Loss

Abstract: In this talk we discuss data compression, as well as learning from data, with a specific performance criterion: the logarithmic loss (log-loss). The log-loss measures distortion in settings where the reconstructed information is soft, that is, a distribution over possible values is provided. It is natural to construct data-processing modules that interface by exchanging such soft information: consider, for example, the belief propagation algorithm, which works by passing probabilities between nodes in order to perform statistical inference on graphical models. Conceptually, log-loss is an important performance criterion as it provides a bridge between the theories of data compression and sequential prediction, and it has nice mathematical properties that allow for strong bounds in both problems. The focus of our talk will be on the non-asymptotic and universal fundamental limits of lossy compression, where we obtain simple and elegant results that generalize those of lossless compression. Particular emphasis will be placed on new connections between the problems of data compression and sequential prediction with logarithmic loss.
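
A small numeric illustration of the criterion (my example, not from the talk): under log-loss the reconstruction is a distribution q, the distortion is d(x, q) = log2(1/q(x)), and the expected loss decomposes as H(p) + D(p||q). The best soft reconstruction is therefore q = p, and the minimum achievable loss is the entropy, which is the bridge to sequential prediction:

    import math

    p = {"a": 0.5, "b": 0.25, "c": 0.25}  # true source distribution
    q = {"a": 0.4, "b": 0.4, "c": 0.2}    # a mismatched soft reconstruction

    entropy = sum(-px * math.log2(px) for px in p.values())
    loss_q = sum(px * math.log2(1.0 / q[x]) for x, px in p.items())
    kl = sum(px * math.log2(px / q[x]) for x, px in p.items())

    print(f"H(p)                = {entropy:.3f} bits")       # 1.500
    print(f"E[log-loss under q] = {loss_q:.3f} bits")        # 1.572
    print(f"H(p) + D(p||q)      = {entropy + kl:.3f} bits")  # matches the loss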

Biography: Yanina Shkel is a postdoctoral scholar in the Department of Electrical Engineering at Princeton University; before this she was a postdoctoral fellow with the NSF Center for Science of Information, where she worked with collaborators at Princeton University and the University of Illinois at Urbana-Champaign. Yanina holds B.S. degrees in Mathematics and in Computer Science, as well as a Ph.D. degree in Electrical and Computer Engineering, from the University of Wisconsin-Madison. Before attending graduate school she worked as a developer for Morningstar, Inc., where she administered databases storing and processing large amounts of financial data. More recently, she was an intern at 3M Corporate Research Labs, where she had a unique opportunity to apply her background in computation and information sciences to the materials- and product-driven needs of 3M. Yanina is broadly interested in identifying the laws that govern the behavior of information in both engineered and naturally occurring systems, and in using these laws to better understand the capabilities of such systems. Her most recent work focuses on points of connection between information theory, statistics, and machine learning.