# Statistics

## Covariate-Based Diagnostics for Randomized Experiments are Often Misleading

Over the last couple of years, I’ve seen a large number of people attempt to diagnose the quality of their randomized experiments by looking for imbalances in covariates, because they expect successful randomization to rule out covariate imbalance. But balance is, in general, not guaranteed for any specific random assignment — and, as […]
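A minimal simulation (my own sketch, not taken from the post) makes the point concrete: even a perfectly valid random assignment will typically leave some covariate gap between groups, because randomization balances covariates only in expectation.

```python
import random
import statistics

random.seed(1)

# Hypothetical setup: one covariate (say, age) measured on 20 units.
ages = [random.gauss(40, 10) for _ in range(20)]

# A completely valid random assignment of 10 units to treatment.
idx = list(range(20))
random.shuffle(idx)
treat, control = idx[:10], idx[10:]

# The observed treatment/control gap in mean age. It is essentially
# never exactly zero for a single assignment, even though its
# expectation across all assignments is zero.
gap = statistics.mean(ages[i] for i in treat) - statistics.mean(ages[i] for i in control)
print(gap)
```

Repeating the shuffle many times and averaging the gaps would show it centering on zero, which is the sense in which randomization "balances" covariates.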

## Type Safety and Statistical Computing

I broadly believe that the statistics community would benefit from greater exposure to computer science concepts. Consistent with that belief, I argue in this post that the concept of type safety could be used to develop a normative theory for how statistical computing systems ought to behave. I also argue that such a normative theory would […]

## Once Again: Prefer Confidence Intervals to Point Estimates

Today I saw a claim on Twitter that 17% of Jill Stein supporters in Louisiana are also David Duke supporters. For anyone familiar with US politics, this claim is a priori implausible, although certainly not impossible. Because the claim struck me as so non-credible, I decided to look into the origin of […]
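A percentage like 17% can come from a very small subsample, in which case the uncertainty around it is enormous. As a sketch (the sample sizes below are made up for illustration, not the actual figures behind the claim), here is a normal-approximation (Wald) confidence interval for a proportion:

```python
import math

def wald_interval(successes, n, z=1.96):
    """Normal-approximation 95% CI for a binomial proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical numbers: 6 "yes" answers out of 35 respondents gives a
# point estimate of about 17%, but the interval is very wide.
lo, hi = wald_interval(6, 35)
print(round(lo, 3), round(hi, 3))  # roughly 5% to 30%
```

With samples this small, reporting the point estimate alone badly overstates what the data can support, which is the post's recommendation to prefer intervals.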

## Turning Distances into Distributions

Several of the continuous univariate distributions that frequently come up in statistical theory can be derived by transforming distances into probabilities. Essentially, these distributions differ only in how frequently they draw values that lie at a distance $$d$$ from the mode. To see how these transformations work (and unify […]
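One reading of this idea (my sketch, not the post's exact derivation): several familiar unimodal densities are a normalizing constant times a decreasing function of the distance $$d = |x - \text{mode}|$$, and they differ only in the decay function applied to $$d$$.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    d = abs(x - mu)  # distance from the mode
    # Density decays like exp(-d^2): quadratic decay in the exponent.
    return math.exp(-d**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def laplace_pdf(x, mu=0.0, b=1.0):
    d = abs(x - mu)  # same distance, different decay
    # Density decays like exp(-d): linear decay in the exponent,
    # so large distances are far more probable than under the normal.
    return math.exp(-d / b) / (2 * b)

print(normal_pdf(0.0), laplace_pdf(0.0))
```

The uniform distribution fits the same template with a decay function that is constant out to some distance and zero beyond it.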

## The Convexity of Improbability: How Rare are K-Sigma Effects?

In my experience, people seldom appreciate just how much more compelling a 5-sigma effect is than a 2-sigma effect. I suspect part of the problem is that p-values don’t convey the visceral sense of magnitude that statements of the form “this would happen 1 in K times” do. To that end, I wrote a […]
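The "1 in K times" framing is easy to compute directly from the standard normal tail; a small sketch (my own, assuming two-sided tails):

```python
import math

def two_sided_tail(k):
    """P(|Z| >= k) for a standard normal Z, via the complementary error function."""
    return math.erfc(k / math.sqrt(2))

# A 2-sigma effect happens by chance about 1 in 22 times; a 5-sigma
# effect about 1 in 1.7 million times. The gap is not linear in k.
for k in (2, 3, 5):
    p = two_sided_tail(k)
    print(f"{k}-sigma: about 1 in {round(1 / p):,}")
```

The convexity in the title shows up here: each additional sigma multiplies the rarity by a rapidly growing factor rather than adding a fixed amount.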

## Why I’m Not a Fan of R-Squared

People sometimes use $$R^2$$ as their preferred measure of model fit. Unlike quantities such as MSE or MAD, $$R^2$$ is not a function of a model’s errors alone: its definition contains an implicit comparison between the model being analyzed and the constant model that uses only the observed mean to make predictions. […]
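Writing $$R^2$$ out makes the hidden comparison explicit: the denominator is the squared error of the mean-only baseline, so the same prediction errors can produce very different $$R^2$$ values depending on how variable the targets are. A small sketch (toy data made up for illustration):

```python
def r_squared(y, yhat):
    """R^2 written explicitly as a comparison against the mean-only model."""
    ybar = sum(y) / len(y)
    sse_model = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))
    sse_mean = sum((yi - ybar) ** 2 for yi in y)  # errors of the constant model
    return 1.0 - sse_model / sse_mean

# Identical per-point errors of 0.1, but R^2 depends on the spread of y.
y = [1.0, 2.0, 3.0, 4.0]
yhat = [1.1, 1.9, 3.1, 3.9]
print(r_squared(y, yhat))
```

MSE for these predictions is a fixed 0.01 regardless of the targets' spread, while $$R^2$$ would fall toward zero if the targets were clustered tightly around their mean — that baseline-dependence is the implicit model comparison.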

## A Variant on “Statistically Controlling for Confounding Constructs is Harder than you Think”

Yesterday, a coworker pointed me to a new paper by Jacob Westfall and Tal Yarkoni called “Statistically controlling for confounding constructs is harder than you think”. I quite like the paper, which describes some problems that arise when drawing conclusions about the relationships between theoretical constructs using only measurements of observables that are, at best, […]

## Understanding the Pseudo-Truth as an Optimal Approximation

One of the things that set statistics apart from the rest of applied mathematics is an interest in the problems introduced by sampling: how can we learn about a model if we’re given only a finite and potentially noisy sample of data? Although frequently important, the issues introduced by sampling can be a distraction […]

## Some Observations on Winsorization and Trimming

Over the last few months, I’ve had a lot of conversations with people about the use of winsorization to deal with heavy-tailed data that is positively skewed because of large outliers. After a conversation with my friend Chris Said this past week, it became clear to me that I needed to do some simulation studies […]
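The distinction between the two operations is easy to show in code. A minimal sketch (my own illustration, not the post's simulation study), using rank-based cutoffs:

```python
def winsorize(xs, k):
    """Clamp the k smallest and k largest values to the nearest kept value."""
    s = sorted(xs)
    lo, hi = s[k], s[-k - 1]
    return [min(max(x, lo), hi) for x in xs]

def trim(xs, k):
    """Drop the k smallest and k largest values entirely."""
    return sorted(xs)[k : len(xs) - k]

# Positively skewed toy data with one large outlier (made up for illustration).
data = [1, 2, 2, 3, 3, 4, 5, 100]
print(winsorize(data, 1))  # the 100 is pulled in to 5; the 1 is pulled up to 2
print(trim(data, 1))       # both extremes are removed, shrinking the sample
```

Winsorization keeps the sample size fixed but replaces extreme values with less extreme ones, while trimming discards them outright — which is why the two can behave quite differently in downstream estimates of the mean.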

## What’s Wrong with Statistics in Julia?

Several months ago, I promised to write an updated version of my old post, “The State of Statistics in Julia”, that would describe how Julia’s support for statistical computing has evolved since December 2012. I’ve kept putting off writing that post for several reasons, but the most important reason is that all of my […]