# Neyman Seminar

The Neyman Seminar is the statistics seminar in the Department. Historically it focused on applications of statistics to other fields; today it has a very broad scope, with topics ranging from applications of statistics to theory.

The seminar is held on Wednesdays from 4:00 to 5:00 p.m. in the Jerzy Neyman room, 1011 Evans Hall.

Details of individual seminar events are published in the campus's event system.

You can subscribe to the department's seminars@stat mailing list to receive related announcements.

Add this series of events to your calendar via the ICAL or XML feed.

## Recent & Upcoming Neyman Seminars

### Danny Hernandez, OpenAI
Sep 11, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: Everyone makes bets. Scientists bet years of their lives on research agendas, CEOs bet billions of dollars on new products, and world leaders bet our welfare through their policies. Their decisions often hinge on implicit, judgement-based predictions about relatively one-off events rather than on data. We'll review the most promising existing techniques for improving one's predictions. I'll...
### Hongyuan Cao, Florida State University
Sep 18, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: Extended follow-up with longitudinal data is common in many medical investigations. In regression analyses, a longitudinal covariate may be omitted, often because it is not measured synchronously with the longitudinal response. A naive approach that simply ignores the omitted longitudinal covariate can lead to biased estimators. In this article, we establish conditions under which estimation is...
### Purnamrita Sarkar, UT Austin
Sep 25, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: People belong to multiple communities, words belong to multiple topics, and books cover multiple genres; overlapping clusters are commonplace. Many existing overlapping clustering methods model each person (or word, or book) as a non-negative weighted combination of “exemplars” who belong solely to one community, with some small noise. Geometrically, each person is a point on a cone whose corners...
### Christian Borgs, Microsoft Research
Oct 7, 2019, 4:00 p.m. · BIDS Room, Doe Library

Abstract: There are many examples of sparse networks at scale, e.g., the WWW, online social networks, and large bipartite networks used for recommendations. How do we model and learn these networks? In contrast to conventional learning problems, where we have many independent samples, it is often the case for these networks that we can get only one independent sample. How do we use a single snapshot today...
### Dylan Small, University of Pennsylvania
Oct 9, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: Gun violence is a problem in America. I will briefly describe some open areas of research about gun violence prevention that statisticians might be able to contribute to. Then I will discuss two attempts at causal inference about gun violence prevention policies that I have worked on, and highlight some ideas about causal inference I have sought to use in this work.
### Emily Fox, University of Washington
Oct 16, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: We are increasingly faced with the need to analyze complex data streams; for example, sensor measurements from wearable devices have the potential to transform healthcare. Machine learning—and moreover deep learning—has brought many recent success stories to the analysis of complex sequential data sources, including speech, text, and video. However, these success stories involve a clear...
### Balint Virag, University of Toronto
Oct 23, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: The distribution of the top principal value of a random covariance matrix appears in seemingly unrelated models. These include particle systems originating in cell biology, longest increasing subsequences, and the shape of coffee spots. Random planar geometry lurks behind these phenomena. I will discuss the recently constructed common scaling limit, the directed landscape, and its...
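As a small numerical illustration of the abstract's first sentence (this sketch is ours, not the speaker's construction): for a sample covariance matrix of i.i.d. Gaussian data, the top eigenvalue concentrates at the Marchenko-Pastur edge $(1+\sqrt{\gamma})^2$, with Tracy-Widom fluctuations of order $n^{-2/3}$.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 2000, 500              # n samples, p dimensions
gamma = p / n                 # aspect ratio

# Sample covariance of i.i.d. standard Gaussian data.
X = rng.standard_normal((n, p))
S = X.T @ X / n
top = np.linalg.eigvalsh(S)[-1]   # eigvalsh returns ascending order

# Marchenko-Pastur upper edge: the top eigenvalue concentrates here,
# with Tracy-Widom fluctuations of order n^(-2/3).
edge = (1 + np.sqrt(gamma)) ** 2
print(top, edge)
```

With these dimensions the simulated top eigenvalue lands very close to the predicted edge of 2.25.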
### Chloé-Agathe Azencott, Mines ParisTech
Oct 30, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: Many problems in genomics require the ability to identify relevant features in data sets containing orders of magnitude more features than samples. One such example is genome-wide association studies (GWAS), in which hundreds of thousands of single nucleotide polymorphisms are measured for orders of magnitude fewer samples. This setup poses statistical and computational challenges, and for...
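To make the "many more features than samples" setup concrete, here is a minimal GWAS-style sketch (illustrative only; the causal index and effect size are made up): a marginal association scan computes one correlation per feature, and the truly associated SNP stands out even with 5,000 features and only 150 samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 150, 5000          # far fewer samples than features, as in GWAS
causal = 7                # hypothetical index of the one associated SNP

# SNP-like genotypes coded 0/1/2 and a phenotype driven by one SNP.
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
y = 2.0 * X[:, causal] + rng.standard_normal(n)

# Marginal association scan: one correlation (equivalently, one simple
# regression) per feature, the workhorse of a basic GWAS.
Xc = (X - X.mean(0)) / X.std(0)
yc = (y - y.mean()) / y.std()
r = Xc.T @ yc / n               # per-feature correlation with phenotype
z = r * np.sqrt(n)              # approx N(0,1) under the null; in practice
                                # compared to a Bonferroni-corrected cutoff
best = int(np.argmax(np.abs(r)))
```

Because 5,000 null correlations fluctuate only on the order of $\sqrt{2\log p / n} \approx 0.34$ here, the causal SNP (correlation near 0.8) is the clear top hit.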
### Roman Vershynin, University of California, Irvine
Nov 5, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: Deep learning is a rapidly developing area of machine learning, which uses artificial neural networks to perform learning tasks. Although the mathematical description of neural networks is simple, a theoretical explanation of the spectacular performance of deep learning remains elusive. Even the most basic questions remain open. For example, how many different functions can a neural network compute?...
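The counting question in the abstract can be made tangible in the simplest possible case (our sketch, not the speaker's argument): a one-hidden-layer ReLU network on a 1-D input is piecewise linear, each hidden unit contributes at most one breakpoint, so the network computes at most H + 1 distinct linear pieces. Counting distinct activation patterns along a grid checks this bound empirically.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 10                                  # hidden ReLU units

# One-hidden-layer ReLU network on a 1-D input: unit i switches on/off
# at the single breakpoint -b[i]/w[i], so the computed function is
# piecewise linear with at most H + 1 linear regions.
w, b = rng.standard_normal(H), rng.standard_normal(H)
xs = np.linspace(-5, 5, 10001)
active = (np.outer(xs, w) + b) > 0      # activation pattern at each input

# Each distinct activation pattern along the line corresponds to one
# linear region of the function the network computes.
patterns = {tuple(row) for row in active.astype(int)}
n_regions = len(patterns)
```

For deeper networks the region count can grow much faster with depth, which is one way of quantifying "how many functions" an architecture can represent.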
### Jason Miller, University of Cambridge
Nov 13, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: Liouville quantum gravity (LQG) is in some sense the canonical model of a two-dimensional Riemannian manifold and is defined using the (formal) metric tensor $e^{\gamma h(z)} (dx^2 + dy^2)$ where $h$ is an instance of some form of the Gaussian free field and $\gamma \in (0,2)$ is a parameter. This expression does not make literal sense since $h$ is a distribution and not a function, so...
### Giles Hooker, Cornell University
Nov 20, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: This talk develops methods of statistical inference based around ensembles of decision trees: bagging, random forests, and boosting. Recent results have shown that when the bootstrap procedure in bagging methods is replaced by sub-sampling, predictions from these methods can be analyzed using the theory of U-statistics, which have a limiting normal distribution. Moreover, the limiting variance...
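The sub-sampling idea in the abstract can be sketched in miniature (our toy version, not the talk's method): replace each "tree" with the simplest base learner, the mean of a size-k subsample drawn without replacement, so the ensemble is a U-statistic, and estimate its variance with an infinitesimal-jackknife-style formula in the spirit of Wager et al. The scaling constant below is our own derivation for this toy kernel.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, B = 200, 20, 5000     # n data points, subsample size k, B "trees"
y = rng.standard_normal(n)

# Toy subsampled ensemble: each base learner is just the mean of a
# size-k subsample without replacement (a real random forest would
# fit a tree to each subsample instead).
preds = np.empty(B)
counts = np.zeros((B, n))
for i in range(B):
    idx = rng.choice(n, size=k, replace=False)
    preds[i] = y[idx].mean()
    counts[i, idx] = 1.0

ensemble = preds.mean()

# Infinitesimal-jackknife variance estimate: covariance between how
# often each point enters a subsample and what the learner predicts.
cov = ((counts - counts.mean(0)) * (preds - ensemble)[:, None]).mean(0)
var_ij = ((n - 1) / n) * (n / (n - k)) ** 2 * np.sum(cov ** 2)
```

For this toy kernel the ensemble is essentially the sample mean, so the variance estimate should land near the classical $s^2/n$, which is a convenient sanity check on the formula.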
### Noemi Petra, UC Merced
Dec 4, 2019, 4:00 p.m. · 1011 Evans Hall

Abstract: In this talk, we introduce a statistical treatment of inverse problems constrained by models with stochastic terms. The solution of the forward problem is given by a distribution represented numerically by an ensemble of simulations. The goal is to formulate the inverse problem, in particular the objective function, to find the closest forward distribution (i.e., the output of the...