Bin Yu is Chancellor's Professor in the Departments of Statistics and Electrical Engineering & Computer Science at UC Berkeley. She chaired the Statistics Department at Berkeley from 2009 to 2012. She has published more than 100 scientific papers in leading journals and conference proceedings on statistics, EECS, remote sensing, and neuroscience. Her publications cover a wide range of research on empirical process theory, information theory (MDL), MCMC methods, signal processing, machine learning, high-dimensional data inference (e.g., sparse modeling via boosting and the Lasso, and spectral clustering), and interdisciplinary data problems. She has served on editorial boards including the Annals of Statistics, the Journal of the American Statistical Association, and the Journal of Machine Learning Research.
Prof. Yu was a Guggenheim Fellow and co-recipient of the Best Paper Award of the IEEE Signal Processing Society in 2006, and was the Tukey Memorial Lecturer for the Bernoulli Society in 2012. She is President-elect of the Institute of Mathematical Statistics (IMS) and an elected Fellow of IMS, AAAS, IEEE, and the American Statistical Association.
She serves on the Scientific Advisory Board of the Institute for Pure and Applied Mathematics and on the Board of Mathematical Sciences and Applications of the National Academy of Sciences. She previously served as co-chair of the National Scientific Committee of the Statistical and Applied Mathematical Sciences Institute and on the Board of Governors of the IEEE Information Theory Society.
Statistical inference for high-dimensional data and interdisciplinary research in neuroscience, remote sensing, and text summarization.
I am currently working on statistical methodologies and models involving large data sets from remote sensing, data networks (internet and sensor networks), neuroscience, finance, and bioinformatics. Together with my students and collaborators, I have been working on several areas of statistical machine learning, both theoretical and computational. These areas include boosting, the Lasso, support vector machines (SVM), and semi-supervised learning. On the computational side, we have developed algorithms such as BLasso and iCAP for sparse modeling. My past research areas have also included empirical processes, Markov chain Monte Carlo, signal processing, the minimum description length principle (MDL), and information theory.
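To illustrate the sparse-modeling idea behind the Lasso (this is a generic textbook sketch, not Prof. Yu's own BLasso or iCAP algorithms): the Lasso penalizes the sum of absolute coefficient values, which shrinks small coefficients exactly to zero. In the special case of an orthonormal design matrix, the Lasso solution reduces to coordinate-wise soft-thresholding of the ordinary least-squares coefficients, as sketched below (function names are hypothetical):

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator S(z, lam) = sign(z) * max(|z| - lam, 0).
    This is the per-coordinate Lasso solution under an orthonormal design."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_orthonormal(ols_coefs, lam):
    """Apply the Lasso penalty coordinate-wise to ordinary least-squares
    coefficients (valid only when the design matrix is orthonormal)."""
    return [soft_threshold(b, lam) for b in ols_coefs]

# Coefficients smaller than lam in magnitude become exactly zero;
# larger ones are shrunk toward zero by lam.
sparse = lasso_orthonormal([2.5, -0.3, 0.0, 1.1], lam=0.5)
```

The exact zeros are what make the Lasso a variable-selection method, not merely a shrinkage method; for general (non-orthonormal) designs the same soft-thresholding step is applied iteratively inside coordinate-descent solvers.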