Welcome to the Signal and Information Processing Seminar Series at Rutgers!
The SIP Seminar Series at Rutgers University–New Brunswick brings together a diverse group of researchers, both from within and outside Rutgers, on an (approximately) biweekly basis to discuss recent advances in signal and information processing. The term “Signal and Information Processing,” as used within the SIP Seminars, is broad in scope and subsumes signal processing, image processing, statistical inference, machine learning, computer vision, control theory, harmonic analysis, information theory, etc.
Seminar Mailing List: You can subscribe to the SIP Seminars mailing list by sending an email to ECE_SIPfirstname.lastname@example.org with SUBSCRIBE in the subject of the email.
Spring 2020 Seminar Schedule: The SIP Seminars in Spring 2020 will take place on the following dates (typically, Wednesdays) at 2 pm in Room 240 of the Electrical Engineering Building on Busch Campus of Rutgers University–New Brunswick: Jan 30 (Thursday); Feb 5, 19; Mar 4, 25; Apr 8, 15.
Spring 2020 SIP Seminars
Prof. Sidharth Jaggi
Title: Covert communication, or, how to whisper
Abstract: Covert communication tries to answer the following question: if Alice wishes to whisper to Bob while ensuring that the eavesdropper Eve cannot even detect whether or not Alice is whispering, how much can she whisper? Meeting such a stringent security requirement demands new ideas from information theory, coding theory, and cryptography. In this talk I will survey the state of the existing literature (recent information-theoretic capacity-style results for a variety of settings), and then discuss even more recent results. Specifically, I will highlight:
Code constructions: Computationally efficient code constructions that achieve the information-theoretic capacity bounds.
Resilience to jamming: In some settings, Eve may not just be a passive eavesdropper but may actively attempt to jam Alice’s communication, even if she isn’t sure whether or not Alice is actually whispering. I will discuss covert communication schemes that are resilient to such malicious jamming.
Impact of environmental uncertainty: Often, noise levels on the communication medium are not static but stochastically varying (for instance, in fading channels). It turns out such natural variation can dramatically impact the capacity; indeed, in general such variation hurts Eve’s detector much more than it hurts Bob’s decoder.
Biography: Sidharth Jaggi received his B.Tech. from I.I.T. Bombay in 2000, and his M.S. and Ph.D. degrees from Caltech in 2001 and 2006, respectively, all in electrical engineering. He spent 2006 as a Postdoctoral Associate at LIDS, MIT. He joined the Department of Information Engineering at the Chinese University of Hong Kong in 2007, where he is now an Associate Professor. His interests lie at the intersection of network information theory, coding theory, and algorithms. His research group thus (somewhat unwillingly) calls itself the CAN-DO-IT team (Codes, Algorithms, Networks: Design and Optimization for Information Theory). Examples of topics he has dabbled in include network coding, sparse recovery/group testing, and covert communication, and his current obsession is with adversarial channels.
Prof. Adam Charles
Title: Modern Methods for Calcium Imaging of Neural Population Activity
Abstract: Deciphering how the brain works requires new methods for recording and interpreting activity from large neural populations at single-neuron resolution. In this talk I will focus on one important and widely used optical neural recording modality: two-photon microscopy (TPM). Specifically, I will describe recent methodological advances for increasing the quality and quantity of inferred neural activity from TPM recordings, as well as novel assessment techniques.
First, I will discuss new volumetric two-photon imaging of neurons using stereoscopy (vTwINS): a computational imaging (co-designed hardware/algorithm) approach that projects an entire volume onto each image and thus increases the number of imaged neurons with no reduction in frame rate. To infer the neural locations and activities, vTwINS relies on a co-designed greedy algorithm we developed that leverages knowledge of the optics to seed an adaptive matching-pursuit-type algorithm.
Second, I will discuss the importance of accurate neural activity inference from TPM data and show that the basic model underlying all state-of-the-art algorithms can lead to critical errors. Specifically, bursts of activity, or transients, can contaminate the inferred activity of neighboring cells in ways that impact the ensuing scientific results. I will demonstrate a new algorithm that directly models unexplained activity with spatial structure and significantly removes such cross-talk, to the benefit of scientific discovery.
Finally, all advances in imaging must be understood and rigorously validated. Methods such as TPM, which image otherwise unobservable quantities, lack the ground truth needed to perform such validation. I will thus present a simulation-based approach to validation that uses known statistics of neural anatomy and optical propagation to generate realistic synthetic data. Multiple microscopy techniques and algorithms can be assessed using such data, enabling more rapid and confident development of new TPM techniques.
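The appeal of simulation-based validation is that the ground truth is known by construction. As an illustration only (not the speaker's method, and with arbitrary parameter values), the following minimal Python sketch generates a synthetic fluorescence trace from a known spike train using a standard exponential-decay calcium model, so that an activity-inference algorithm could be scored against the true spikes:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_trace(n_frames=500, rate=0.02, tau=10.0, noise_sd=0.1):
    """Simulate a fluorescence trace: a random spike train driving
    exponential calcium decay dynamics, plus Gaussian imaging noise."""
    spikes = rng.random(n_frames) < rate              # binary spike train (ground truth)
    calcium = np.zeros(n_frames)
    for t in range(1, n_frames):
        # AR(1) dynamics: decay with time constant tau, unit jump on a spike
        calcium[t] = np.exp(-1.0 / tau) * calcium[t - 1] + spikes[t]
    trace = calcium + noise_sd * rng.standard_normal(n_frames)
    return spikes, trace

spikes, trace = synthetic_trace()
print(spikes.sum(), trace.shape)
```

Richer simulators of this kind add known neural anatomy and optical propagation models on top of such temporal dynamics; the sketch above captures only the temporal component.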
Biography: Adam Charles received both a B.E. and an M.E. in Electrical and Computer Engineering in 2009 from The Cooper Union in New York City. He received his Ph.D. in Electrical and Computer Engineering in 2015, working under Dr. Christopher Rozell at the Georgia Institute of Technology, where his research was awarded a Sigma Xi Best Doctoral Thesis award as well as an Electrical and Computer Engineering Research Excellence award. After graduation, Adam joined the Princeton Neuroscience Institute, working with Dr. Jonathan Pillow on computational neuroscience and data analysis methods. Currently, Adam is joining the Biomedical Engineering Department at Johns Hopkins University, where his research includes neural imaging technologies, inference and tracking of sparse and structured signals, and mathematical modeling of neural networks.
Dr. Ruobin Gong
Title: Exact statistical inference for differentially private data
Abstract: Differential privacy (DP) is a mathematical framework that protects confidential information in a transparent and quantifiable way. I discuss how two classes of approximate computation techniques can be systematically adapted to produce exact statistical inference from DP data. For likelihood inference, we call for an importance sampling implementation of Monte Carlo expectation-maximization, and for Bayesian inference, an approximate Bayesian computation (ABC) algorithm suitable for possibly complex likelihoods. Both approaches deliver exact statistical inference with respect to the joint statistical model inclusive of the differential privacy mechanism, yet do not require analytical access to this joint specification. Highlighted is the transformation of the statistical tradeoff between privacy and efficiency into a computational tradeoff between approximation and exactness. Open research questions on two fronts are posed: 1) how to afford DP data users computationally accessible and (approximately) correct statistical analysis tools; 2) how to understand and remedy the effect of any necessary post-processing on statistical analysis.
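The key idea in the ABC approach above is that the forward model regenerates both the data and the privacy mechanism, so inference targets the joint model inclusive of the DP noise. A minimal, hypothetical Python sketch of that flavor (all names, priors, and parameter values are illustrative assumptions, not the speaker's implementation) is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Privacy mechanism: release the sample mean of binary data with
# Laplace noise calibrated to sensitivity 1/n (epsilon-DP).
def dp_release(data, eps):
    n = len(data)
    return data.mean() + rng.laplace(scale=1.0 / (n * eps))

# An "observed" private release from data with true mean 0.3.
n, eps = 200, 1.0
released = dp_release(rng.binomial(1, 0.3, size=n), eps)

# ABC rejection sampler: simulate data from candidate parameters,
# re-apply the DP mechanism, and accept candidates whose simulated
# release lands close to the observed one.
def abc_posterior(released, n, eps, n_sims=20000, tol=0.02):
    theta = rng.uniform(0, 1, size=n_sims)          # uniform prior on the mean
    sims = np.array([dp_release(rng.binomial(1, t, size=n), eps) for t in theta])
    return theta[np.abs(sims - released) < tol]     # accepted posterior draws

post = abc_posterior(released, n, eps)
print(round(post.mean(), 2), len(post))
```

Note that `dp_release` appears inside the simulator: ignoring it and treating the release as a clean sample mean is exactly the kind of shortcut the exact-inference framing avoids.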
Biography: Ruobin Gong is Assistant Professor of Statistics at Rutgers University. Her research interests lie in the theoretical foundations of Bayesian and generalized Bayesian methodologies; statistical modeling, inference, and computation with differentially private data; and the ethical implications of modern data science. Her current research on Bayesian methods for differential privacy is supported by the National Science Foundation. Ruobin received her Ph.D. in statistics from Harvard University in 2018. She is currently an associate editor of the Harvard Data Science Review.
Prof. Min Xu
Title: Inference for the History of a Randomly Growing Tree
Abstract: The spread of infectious disease in a human community or the proliferation of fake news on social media can be modeled as a randomly growing tree-shaped network. The history of the random growth process is often unobserved but contains important information, such as the source of the infection. We propose to infer aspects of the latent history through an approximate resampling framework that produces a confidence set with honest frequentist coverage and certain optimality properties. In some common models, such as preferential attachment, our sampling method is exact and has runtime linear in the number of nodes in the network.
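For readers unfamiliar with the model class, a preferential-attachment tree grows by attaching each new node to an existing node with probability proportional to its degree; the latent history is the arrival order of the nodes, which in a simulation is known and can be used to validate history-inference procedures. A minimal illustrative sketch (not the speaker's sampler):

```python
import random

random.seed(0)

def grow_pa_tree(n):
    """Grow a preferential-attachment tree on nodes 0..n-1: node t
    attaches to an existing node chosen with probability proportional
    to its current degree."""
    parents = {1: 0}           # node 1 attaches to the root, node 0
    pool = [0, 1]              # each node appears once per incident edge
    for t in range(2, n):
        parent = random.choice(pool)   # degree-proportional choice
        parents[t] = parent
        pool += [parent, t]            # both endpoints gain a degree
    return parents

parents = grow_pa_tree(50)
print(len(parents))
```

Here the dictionary `parents` records the true history (node t arrived at step t); an inference procedure would be handed only the unlabeled tree and asked to recover, say, a confidence set for the root.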
Biography: Min Xu is an assistant professor in the Department of Statistics at Rutgers University. He obtained his Ph.D. from the Machine Learning Department at Carnegie Mellon University and was a departmental postdoctoral researcher in the Statistics Department of the Wharton School at the University of Pennsylvania. His research interests include nonparametric estimation in machine learning and network data analysis.
Prof. Ying Hung
Dr. Patrick Johnstone
Dr. Fangwei Ye