Neyman Seminar

The Neyman Seminar is the statistics seminar in the Department. Historically, it focused on applications of statistics to other fields. Nowadays it has a much broader scope, with topics ranging from applied statistics to theory.

The seminar is held on Wednesdays from 4:00 to 5:00 PM in the Jerzy Neyman room, 1011 Evans Hall.

Details of individual seminar events are published in the campus's event system.

You can sign up for the department's seminars@stat mailing list to receive related announcements.

Add this series of events to your calendar: iCal or XML

Recent & Upcoming Neyman Seminars

Daniel M. Roy, Dept. of Statistics, University of Toronto
Apr 5, 2017 4:00pm
1011 Evans Hall
For finite parameter spaces under finite loss, every Bayes procedure derived from a prior with full support is admissible, and every admissible procedure is Bayes. This relationship already breaks down once we move to finite-dimensional Euclidean parameter spaces. Compactness and strong regularity conditions suffice to repair the relationship, but without these conditions, admissible procedures...
Rob Tibshirani, Stanford University
Apr 11, 2017 4:00pm
277 Cory Hall
In April 1995 I gave the Stanford-Berkeley seminar entitled "Regression Shrinkage and Selection via the Lasso". I will recount that day and review what has happened in this area of research since that time. I will also discuss some new developments (by others) in the computation of best subsets regression, a main competitor to the lasso, and present the results of a large scale numerical study...
(Tuesday; Berkeley-Stanford joint colloquium)
Ethan Anderes, Department of Statistics, UC Davis
Apr 20, 2017 4:00pm
105 North Gate Hall
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the precision mapping of CMB distortions due to the gravitational lensing effect of dark matter. Estimating this lensing is important for two reasons. First, lensing probes the nature of dark matter fluctuations in the sky. Second, lensing estimates can be used, in principle, to delense the observed CMB...
(Thursday; Berkeley-Davis joint colloquium)
Jing Lei, Department of Statistics, CMU
Apr 26, 2017 4:00pm
1011 Evans Hall
Cross-validation is one of the most popular model selection methods in statistics and machine learning. Despite its wide applicability, traditional cross-validation methods tend to overfit, unless the ratio between the training and testing sample sizes is very small. We argue that such an overfitting tendency of cross-validation is due to the ignorance of the uncertainty in the testing...
Mikhail Belkin, Department of Computer Science and Engineering, Ohio State University
May 3, 2017 4:00pm
1011 Evans Hall
What can we learn from big data? First, more data allows us to more precisely estimate probabilities of uncertain outcomes. Second, data provides better coverage to approximate functions more precisely. I will argue that the second is key to understanding the recent success of large scale machine learning. A useful way of thinking about this issue is that it is necessary to use many more...
Fernando Perez, University of California, Berkeley (Speaker)
May 9, 2017 12:00pm
1011 Evans Hall
The scientific traditions of physics and applied mathematics have focused mainly on defining simplified models of the world amenable to analytical and numerical approximation. Today, the flood of real-world data at unprecedented levels of resolution and diversity creates opportunity for building richer scientific descriptions that combine statistical inference with "classical" models. I will...