Neyman Seminar

The Neyman Seminar is the statistics seminar of the Department. Historically, it focused on applications of statistics to other fields; today its scope is broad, with topics ranging from applied statistics to statistical theory.

The seminar is held on Wednesdays from 4:00 to 5:00 PM in the Jerzy Neyman Room, 1011 Evans Hall.

Details of individual seminar events are published in the campus event system.

You can sign up for the department's seminars@stat mailing list to receive related announcements.


Recent & Upcoming Neyman Seminars

Seth Flaxman, Department of Statistics, Oxford
Jan 18, 2017 4:00pm
1011 Evans Hall
Abstract:
In this talk I will highlight the statistical machine learning methods that I am developing, in response to the needs of my social science collaborators, to address public policy questions. My research focuses on flexible nonparametric modeling approaches for spatiotemporal data and scalable inference methods to be able to fit these models to large datasets. Most critically, my models and...
Daniel Kowal, Cornell University
Jan 25, 2017 4:00pm
1011 Evans Hall
Abstract:
I will present a Bayesian approach for modeling multivariate, dependent functional data. To account for the three dominant structural features in the data (functional, time-dependent, and multivariate components), we extend hierarchical dynamic linear models for multivariate time series to the functional data setting. We also develop Bayesian spline theory in a more general constrained...
Yang Chen, Department of Statistics, Harvard University
Jan 30, 2017 4:00pm
1011 Evans Hall
Abstract:
Single-molecule experiments investigate the kinetics of individual molecules and thus can substantially enhance our understanding of various organisms. Analyzing data from single-molecule experiments poses a number of challenges: (a) the inherent stochasticity of molecules is usually buried in random experimental noise; (b) single-molecule behavior can be highly volatile. For both of these...
Sam Pimentel, Department of Statistics, The Wharton School, University of Pennsylvania
Feb 1, 2017 4:00pm
1011 Evans Hall
Abstract:
Every newly trained surgeon performs a first unsupervised operation. How do her patients' health outcomes compare with the patients of experienced surgeons? A credible comparison must (1) occur within hospitals, since health outcomes vary widely by hospital; (2) compare outcomes of patients undergoing the same operative procedures, since the risks differ between a knee replacement and an appendectomy;...
Alexander Franks, University of Washington
Feb 6, 2017 4:00pm
1011 Evans Hall
Abstract:
Understanding the function of biological molecules requires statistical methods for assessing covariability across multiple dimensions as well as accounting for complex measurement error and missing data. In this talk, I will discuss two models for covariance estimation which have applications in molecular biology. In the first half of the talk, I will describe the role of covariance estimation...
Aarti Singh, Machine Learning Department, Carnegie Mellon University
Feb 15, 2017 4:00pm
1011 Evans Hall
Abstract:
We investigate statistical aspects of subsampling for large-scale linear regression under label budget constraints. In many applications, we have access to large datasets (such as healthcare records, database of building profiles, and visual stimuli), but the corresponding labels (such as customer satisfaction, energy usage, and brain response, respectively) are hard to obtain. We derive...
Karl Rohe, Department of Statistics, University of Wisconsin, Madison
Feb 22, 2017 4:00pm
1011 Evans Hall
Abstract:
Web crawling, snowball sampling, and respondent-driven sampling (RDS) are three types of network-driven sampling techniques that are popular when it is difficult to contact individuals in the population of interest. This talk will show that if participants refer too many other participants, then under the standard Markov model in the RDS literature, the standard approaches do not provide "square...
Dorit S. Hochbaum, Department of Industrial Engineering and Operations Research, UC Berkeley
Mar 1, 2017 4:00pm
1011 Evans Hall
Abstract:
Many problems of fitting observations while satisfying rank-order constraints occur in the contexts of learning, Lipschitz regularization, and isotonic regression (with or without fused lasso). All these problems can be abstracted as a convex cost closure problem, which is to minimize the cost of deviating from the observations while satisfying rank-order constraints. Any feasible solution that...
Laurel Larsen, Department of Geography, UC Berkeley
Mar 8, 2017 4:00pm
1011 Evans Hall
Abstract:
One consequence of earth systems moving out of a regime of stationarity is that statistical models based on past behavior may no longer be useful for predicting the future. Rather, an understanding of the mechanisms driving dynamic earth systems is needed. The mechanisms responsible for nonlinear—even surprising—behavior often involve feedbacks between biotic and abiotic processes. Examples of...
Chiara Sabatti, Department of Biostatistics, Stanford University
Mar 15, 2017 4:00pm
1011 Evans Hall
Abstract:
Geneticists have always been aware that, when looking for signal across the entire genome, one has to be very careful to avoid false discoveries. Contemporary studies often involve a very large number of traits, increasing the challenges of “looking everywhere”. I will discuss novel approaches that allow an adaptive exploration of the data, while guaranteeing reproducible results.
Guido Imbens, Stanford Business School
Mar 22, 2017 4:00pm
1011 Evans Hall
Abstract:
When a researcher estimates the parameters of a regression function, using information on all 50 states in the United States, or information on all visits to a website, what is being estimated, and what is the interpretation of the standard errors? Researchers typically assume the sample is a random sample from a large population of interest, and report standard errors that are designed to...
Daniel M. Roy, Dept. of Statistics, University of Toronto
Apr 5, 2017 4:00pm
1011 Evans Hall
Abstract:
For finite parameter spaces under finite loss, every Bayes procedure derived from a prior with full support is admissible, and every admissible procedure is Bayes. This relationship already breaks down once we move to finite-dimensional Euclidean parameter spaces. Compactness and strong regularity conditions suffice to repair the relationship, but without these conditions, admissible procedures...