Neyman Seminar

The Neyman Seminar is the Department's statistics seminar. Historically, it focused on applications of statistics to other fields. Today it has a much broader scope, with topics ranging from applied statistics to theory.

The seminar is held on Wednesdays from 4:00 to 5:00 pm in the Jerzy Neyman Room, 1011 Evans Hall.

Details of individual seminar events are published in the campus's event system.

You can sign up for the department's seminars@stat mailing list to receive related announcements.

Add this series of events to your calendar using the ICAL or XML feed.

Recent & Upcoming Neyman Seminars

Michael Hudgens, UNC-Chapel Hill
Nov 29, 2017 4:00pm
1011 Evans Hall
Abstract:
A fundamental assumption usually made in causal inference is that of no interference between individuals (or units), i.e., the potential outcomes of one individual are assumed to be unaffected by the treatment assignment of other individuals. However, in many settings, this assumption obviously does not hold. For example, in infectious diseases, whether one person becomes infected depends on who...
Weijie Su, University of Pennsylvania
Dec 6, 2017 4:00pm
1011 Evans Hall
Abstract:
Stochastic gradient descent (SGD) is an immensely popular approach for optimization in settings where data arrives in a stream or data sizes are very large. Despite an ever-increasing volume of works on SGD, less is known about statistical inferential properties of predictions based on SGD solutions. In this paper, we introduce a novel procedure termed HiGrad to conduct inference on predictions,...
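
For readers less familiar with SGD itself, the sketch below is a minimal, generic stochastic gradient descent loop for least-squares regression. It is purely illustrative background and is not the HiGrad inference procedure described in this talk.

# Minimal SGD for least-squares regression (illustrative only).
import numpy as np

def sgd_least_squares(X, y, n_epochs=5, lr=0.01, seed=0):
    """Fit linear regression coefficients by processing one observation at a time."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            # Gradient of the squared error on a single observation.
            grad = (X[i] @ theta - y[i]) * X[i]
            theta -= lr * grad
    return theta
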
Aaditya Ramdas, UC Berkeley
Jan 17, 2018 4:00pm
1011 Evans Hall
Abstract:
Modern data science is often exploratory in nature, with hundreds or thousands of hypotheses being regularly tested on scientific datasets. The false discovery rate (FDR) has emerged as a dominant error metric in multiple hypothesis testing over the last two decades. I will argue that both (a) the FDR error metric and (b) the current framework of multiple testing, where the scientist...
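
As background on FDR control (and not the new framework proposed in this talk), the classical Benjamini-Hochberg step-up procedure can be sketched as follows.

# Benjamini-Hochberg step-up procedure for FDR control (illustrative only).
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return a boolean array marking which hypotheses are rejected at FDR level alpha."""
    p = np.asarray(pvalues)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index meeting its threshold
        rejected[order[:k + 1]] = True    # reject the k+1 smallest p-values
    return rejected
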
Shirshendu Ganguly, UC Berkeley
Jan 22, 2018 4:00pm
1011 Evans Hall
Abstract:
Statistical mechanics models are ubiquitous at the interface of probability theory, information theory, and inference problems in high dimensions. In this talk, we will focus on sparse networks and polymer models on lattices. The study of rare behavior (large deviations) is intimately related to the understanding of such models. In particular, we will consider the rare events that a sparse...
Jacob Steinhardt, Stanford University
Jan 29, 2018 4:00pm
1011 Evans Hall
Abstract:
Deployed machine learning systems create a new class of computer security vulnerabilities where, rather than attacking the integrity of the software itself, malicious actors exploit the statistical nature of the learning algorithms. For instance, attackers can add fake training data, or strategically manipulate input covariates at test time. Attempts so far to defend against these...
Merle Behr, University of Göttingen
Jan 31, 2018 4:00pm
1011 Evans Hall
Abstract:
A challenging problem in cancer genetics is that tumors often consist of a few different groups of cells, so-called clones, where each clone has different mutations, like copy-number (CN) variations. In whole-genome sequencing, the mutations of the different clones get mixed up according to their unknown relative proportions in the tumor. However, CNs of single clones can only take values in a...
Swupnil Sahai, Tesla (Speaker), Andrej Karpathy, Tesla (Speaker)
Feb 7, 2018 4:00pm
1011 Evans Hall
Abstract:
From estimating the time to failure of battery modules for Reliability Engineering to predicting lane lines from images for Autopilot, statistics plays a vital role in building all of Tesla’s products. In this talk, we present the ways in which Tesla is changing the future of sustainable energy and discuss how statisticians will help us get there.
Guillaume Basse, Harvard University
Feb 15, 2018 4:00pm
1011 Evans Hall
Abstract:
Many important causal questions concern interactions between units, also known as interference. Examples include interactions between individuals in households, students in schools, and firms in markets. Standard analyses that ignore interference can often break down in this setting: estimators can be badly biased, while classical randomization tests can be invalid. In this talk, I present recent...
Ilias Diakonikolas, USC
Feb 21, 2018 4:00pm
1011 Evans Hall
Abstract:
Fitting a model to a collection of observations is one of the quintessential problems in machine learning. Since any model is only approximately valid, an estimator that is useful in practice must also be robust in the presence of model misspecification. It turns out that there is a striking tension between robustness and computational efficiency. Even for the most basic high-dimensional tasks,...
Tengyu Ma, Facebook AI Research
Feb 28, 2018 4:00pm
1011 Evans Hall
Abstract:
Over-parameterized models are widely and successfully used in deep learning, but their workings are far from understood. In many practical scenarios, the learned model generalizes to the test data, even though the hypothesis class contains a model that completely overfits the training data and no regularization is applied. In this talk, we will show that this phenomenon occurs in...
Roderick Little, University of Michigan
Mar 5, 2018 4:00pm
102 Moffitt Undergraduate Library
Abstract:
I recently taught a course entitled "Seminal Papers and Controversies in Statistics", and Leo Breiman's (2001) article "Statistical Modeling: The Two Cultures" was a very popular paper with students. The paper contrasts the machine learning culture, with its focus on prediction, with the more classical parametric modeling approach to statistics. I am more in the parametric modeling camp, but...