Information Theory, Probability, and Statistics Unit

The Information Theory, Probability, and Statistics Unit conducts theoretical research at the intersection of Information Theory, Probability Theory, and Statistics, with applications spanning Learning Theory, Estimation Theory, and Computational Biology. Our aim is twofold: to push theoretical boundaries by establishing links across these fields, and to develop innovative, theory-driven applications.
Our main objects of study are information measures: mathematical quantities fundamental to a variety of practical problems, including compression, communication over noisy channels, privacy, and estimation.
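As a minimal, self-contained illustration of the most familiar such measure, the Python sketch below computes the Shannon entropy of a toy source distribution (the function name and the example distribution are ours, chosen purely for exposition):

```python
import math

def entropy(p, base=2.0):
    """Shannon entropy of a probability vector, in bits by default."""
    return -sum(x * math.log(x, base) for x in p if x > 0)

# A biased four-symbol source: frequent symbols are "cheaper" to describe,
# so the source carries fewer than 2 bits of information per symbol.
p = [0.5, 0.25, 0.125, 0.125]
print(entropy(p))  # 1.75 bits per symbol
```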

For example, when "compressing" symbols emitted by a source, the expected length of the compressed text can never be smaller than a well-known information measure: Entropy. Moreover, we can "compress" to almost exactly Entropy bits per symbol. Entropy is thus said to provide a complete characterisation of this problem, defining both a minimum threshold (impossibility: we cannot compress below Entropy) and a target limit (achievability: we can algorithmically approach this fundamental limit).
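In symbols, this is Shannon's source coding theorem; one standard formulation for a source $X$ with entropy $H(X)$ (notation ours) reads:

\[
\mathbb{E}[\ell(C(X))] \;\ge\; H(X) \quad \text{for every uniquely decodable code } C,
\]
\[
\text{while there exists a prefix code } C^{\star} \text{ with } \mathbb{E}\bigl[\ell(C^{\star}(X))\bigr] \;<\; H(X) + 1.
\]

Encoding blocks of $n$ symbols jointly shrinks the per-symbol overhead to less than $1/n$ bits, so the achievable rate approaches $H(X)$.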

Many more information measures have been defined over the years, all sharing similar theoretical properties. However, very few have been connected to practical problems in the way Entropy has. Looking at information measures through different mathematical lenses will allow us to better understand these objects and link them to meaningful applications relevant to modern problems. Some areas of interest are: the concentration of measure phenomenon in Probability Theory, Estimation Theory (with a focus on problems in biology, precision medicine, etc.), Learning Theory (understanding and bounding the generalization error of well-established learning algorithms; a representative result is sketched below), Hypothesis Testing, and more.
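As one representative result of this kind in Learning Theory (due to Xu and Raginsky, 2017, stated here only to illustrate the sort of connection we pursue): if the loss is $\sigma$-sub-Gaussian, the expected generalization gap of an algorithm that maps a training sample $S = (Z_1, \dots, Z_n)$ to a hypothesis $W$ is controlled by the mutual information $I(S; W)$:

\[
\bigl|\mathbb{E}[\operatorname{gen}(S, W)]\bigr| \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)}.
\]

Informally, an algorithm that leaks little information about its training sample cannot overfit by much.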