Neyman Seminar

The Neyman Seminar is the statistics seminar in the Department. Historically, it focused on applications of statistics to other fields. Nowadays it has a very broad scope, with topics ranging from applications of statistics to theory.

The seminar is held on Wednesdays from 4 to 5 PM in the Jerzy Neyman Room, 1011 Evans Hall.

Details of individual seminar events are published in the campus event system.

You can sign up for the department's seminars@stat mailing list to receive related announcements.

Add this series of events to your calendar using the iCal or XML feed.

Recent & Upcoming Neyman Seminars

Gabor Lugosi, Pompeu Fabra University
Sep 12, 2018 4:00pm
1011 Evans Hall
Abstract:
Given n independent, identically distributed copies of a random vector, one is interested in estimating the expected value. Perhaps surprisingly, there are still open questions concerning this very basic problem in statistics. The goal is to construct estimators that are close to the true mean with high probability, with respect to some given norm. In this talk we are primarily...
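As background (not part of the talk abstract), one standard construction in this area is the median-of-means estimator, which achieves high-probability guarantees even for heavy-tailed data. Below is a minimal univariate sketch in Python; the function name, the block count k, and the Student-t toy sample are illustrative choices, not taken from the abstract.

```python
import numpy as np

def median_of_means(x, k, rng):
    """Median-of-means estimate of the mean of a scalar sample x.

    Shuffle the sample, split it into k blocks, average within each
    block, and return the median of the block means.
    """
    x = rng.permutation(np.asarray(x))
    block_means = [block.mean() for block in np.array_split(x, k)]
    return np.median(block_means)

# Toy usage on a heavy-tailed sample, where the empirical mean is noisy.
rng = np.random.default_rng(0)
sample = rng.standard_t(df=2, size=10_000)
print(median_of_means(sample, k=20, rng=rng))
```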
Alex Papanicolaou, UC Berkeley
Sep 19, 2018 4:00pm
1011 Evans Hall
Abstract:
There is a source of bias in the sample eigenvectors of financial covariance matrices that, when unchecked, distorts the weights of minimum variance portfolios and leads to risk forecasts that are severely biased downward. Recent work with Lisa Goldberg and Alex Shkolnik develops an eigenvector bias correction. Our approach is distinct from the regularization and eigenvalue shrinkage methods found in the...
Jeroen P. van der Sluijs, University of Bergen and Utrecht University
Sep 26, 2018 4:00pm
1011 Evans Hall
Abstract:
Scientific assessment of many contemporary risks is plagued by controversy, persistent uncertainty, and polarized societal contexts. Decision makers often become mired in contested evidence, beset by uncertainties and contradictions. This leads to inaction on early warnings and paralysis-by-analysis, and erodes trust in science and its institutions. But why do controversies persist? A new...
Kristian Lum, Human Rights Data Analysis Group
Oct 3, 2018 4:00pm
1011 Evans Hall
Abstract:
An accurate understanding of the magnitude and dynamics of casualties during a conflict is important for a variety of reasons, including historical memory, retrospective policy analysis, and assigning culpability for human rights violations. However, during times of conflict and their aftermath, collecting a complete or representative sample of casualties can be difficult if not impossible. One...
Sebastian Schreiber, UC Davis
Oct 10, 2018 4:00pm
1011 Evans Hall
Abstract:
Two long-standing, fundamental questions in biology are "Under what conditions do populations persist or go extinct? When do interacting species coexist?" The answers to these questions are essential for guiding conservation efforts and identifying mechanisms that maintain biodiversity. Mathematical models play an important role in identifying these mechanisms and, when coupled with empirical...
Niall Cardin, Google
Oct 17, 2018 4:00pm
1011 Evans Hall
Abstract:
This talk is in two parts, both of which discuss interesting uses of experiments in Google search ads. In Part 1, I discuss how we can inject randomness into our system to get causal inference in a machine learning setting. In Part 2, I talk about experiment designs to measure how users learn in response to ads on Google.com.
Claire Tomlin, UC Berkeley
Oct 24, 2018 4:00pm
1011 Evans Hall
Abstract:
A great deal of research in recent years has focused on robot learning. In many applications, guarantees that specifications are satisfied throughout the learning process are paramount. For the safety specification, we present a controller synthesis technique based on the computation of reachable sets, using optimal control and game theory. In the first part of the talk, we will review these...
Michael W. Mahoney, UC Berkeley
Nov 7, 2018 4:00pm
1011 Evans Hall
Abstract:
Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production-quality, pre-trained models and smaller models trained from scratch. Empirical and theoretical results clearly indicate that the DNN training process itself implicitly implements a form of self-regularization, implicitly sculpting a more regularized energy or penalty...
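For context (not part of the abstract), a minimal sketch of the kind of spectral analysis described above is to compute the eigenvalue spectrum of X = WᵀW for a layer weight matrix W. The matrix below is randomly initialized as a hypothetical stand-in; the actual analysis would load the weights of a trained network.

```python
import numpy as np

# Hypothetical stand-in for a trained layer's weight matrix.
rng = np.random.default_rng(0)
n, m = 1000, 400
W = rng.normal(size=(n, m)) / np.sqrt(n)

# Eigenvalue spectrum of W^T W (the empirical spectral density).
eigs = np.linalg.eigvalsh(W.T @ W)
print(f"lambda_min={eigs.min():.3f}, lambda_max={eigs.max():.3f}")

# For a trained layer, one would compare this spectrum to the
# Marchenko-Pastur law and look for heavy-tailed deviations.
```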
Paul Grigas, UC Berkeley
Nov 14, 2018 4:00pm
1011 Evans Hall
Abstract:
Logistic regression is one of the most popular methods in binary classification, wherein estimation of model parameters is carried out by solving the maximum likelihood (ML) optimization problem, and the ML estimator is defined to be the optimal solution of this problem. It is well known that the ML estimator exists when the data is non-separable, but fails to exist when the data is separable....
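To illustrate the non-existence claim (a standard fact, sketched here independently of the talk), on a linearly separable data set the logistic log-likelihood can be driven arbitrarily close to its supremum by scaling any separating direction, so no finite maximizer exists. The toy data and the helper name below are illustrative assumptions.

```python
import numpy as np

def neg_log_likelihood(w, X, y):
    """Logistic negative log-likelihood with labels y in {-1, +1}."""
    return np.sum(np.log1p(np.exp(-y * (X @ w))))

# A linearly separable toy data set (intercept column plus one feature).
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([-1, -1, 1, 1])

# Scaling a separating direction keeps decreasing the loss toward 0,
# so the ML estimator does not exist for separable data.
w_sep = np.array([0.0, 1.0])
for scale in [1, 10, 100, 1000]:
    print(scale, neg_log_likelihood(scale * w_sep, X, y))
```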
Tyler VanderWeele, Harvard School of Public Health
Nov 26, 2018 12:00pm
1011 Evans Hall
Abstract:
Sensitivity analysis is useful in assessing how robust an association is to potential unmeasured or uncontrolled confounding. This article introduces a new measure called the “E-value,” which is related to the evidence for causality in observational studies that are potentially subject to confounding. The E-value is defined as the minimum strength of association, on the risk ratio scale, that an...
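As background on the quantity being defined (supplementing, not completing, the truncated abstract), the published closed form of the E-value for an observed risk ratio RR ≥ 1 is

```latex
\[
  \text{E-value} \;=\; \mathrm{RR} + \sqrt{\mathrm{RR}\,(\mathrm{RR} - 1)},
\]
```

with RR replaced by its reciprocal when the observed risk ratio is below 1.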