Neyman Seminar

The Neyman Seminar is the statistics seminar in the Department. Historically, it focused on applications of statistics to other fields. Today it has a much broader scope, with topics ranging from applied statistics to theory.

The seminar is held on Wednesdays from 4:00 to 5:00 pm in the Jerzy Neyman Room, 1011 Evans Hall.

Details of individual seminar events are published in the campus's event system.

You can sign up for the department's seminars@stat mailing list to receive related announcements.

Add this series of events to your calendar: ICAL or XML.

Recent & Upcoming Neyman Seminars

Niall Cardin, Google
Oct 17, 2018 4:00pm
1011 Evans Hall
Abstract:
This talk is in two parts, both of which discuss interesting uses of experiments in Google search ads. In part 1, I discuss how we can inject randomness into our system to get causal inference in a machine learning setting. In part 2, I talk about experiment designs to measure how users learn in response to ads on Google.com.
Claire Tomlin, UC Berkeley
Oct 24, 2018 4:00pm
1011 Evans Hall
Abstract:
A great deal of research in recent years has focused on robot learning. In many applications, guarantees that specifications are satisfied throughout the learning process are paramount. For the safety specification, we present a controller synthesis technique based on the computation of reachable sets, using optimal control and game theory. In the first part of the talk, we will review these...
Michael W. Mahoney, UC Berkeley
Nov 7, 2018 4:00pm
1011 Evans Hall
Abstract:
Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production-quality, pre-trained models and smaller models trained from scratch. Empirical and theoretical results clearly indicate that the DNN training process itself implicitly implements a form of self-regularization, implicitly sculpting a more regularized energy or penalty...
Paul Grigas, UC Berkeley
Nov 14, 2018 4:00pm
1011 Evans Hall
Abstract:
Logistic regression is one of the most popular methods in binary classification, wherein estimation of model parameters is carried out by solving the maximum likelihood (ML) optimization problem, and the ML estimator is defined to be the optimal solution of this problem. It is well known that the ML estimator exists when the data are non-separable, but fails to exist when the data are separable....
Tyler VanderWeele, Harvard School of Public Health
Nov 26, 2018 12:00pm
1011 Evans Hall
Abstract:
Sensitivity analysis is useful in assessing how robust an association is to potential unmeasured or uncontrolled confounding. This article introduces a new measure called the “E-value,” which is related to the evidence for causality in observational studies that are potentially subject to confounding. The E-value is defined as the minimum strength of association, on the risk ratio scale, that an...