Claims and Evidence: A Joke

The other day a friend posted the following old joke about the level of rigor that mathematicians usually require. (Disclaimer: if you take the joke as a serious claim about the standards of quality in astronomy and physics, it is obviously unfair to both fields.)

A Mathematician, a Physicist, and an Astronomer were travelling north by train. They had just crossed the border into Scotland, when the Astronomer looked out of the window and saw a single black sheep in the middle of a field. “All Scottish sheep are black,” he remarked. “No, my friend,” replied the Physicist, “Some Scottish sheep are black.” At which point the Mathematician looked up from his paper and glanced out the window. After a few seconds’ thought he said blandly: “In Scotland, there exists at least one field, in which there exists at least one sheep, at least one side of which is black.”

I thought the joke was particularly timely given the recent round of arguments in psychology about the field’s evidentiary standards. As a former psychologist, I am continually surprised by how few people in the field will acknowledge that psychology often behaves like the astronomer in the joke, even though its empirical tradition can only justify very weak claims of the sort that the mathematician produces.

No Juice for You, CSV Format. It Just Makes You More Awful.

I just found a minimal example of how easy it is to confuse R’s CSV parser when providing it with ill-formatted data. To make it easy to understand, I put material for reproducing the problem up on GitHub.

I’m sure one could construct many similar examples for Python and Julia. The problem here is two-fold: (1) the CSV format is just kind of awful and (2) many end-users complain when CSV parsers don’t “successfully” interpret the mis-formatted files they want to parse. To borrow language from the web, essentially all CSV parsers have been compelled to operate in quirks mode all the time.

As far as I can see, there’s very little that would need to be changed to create a less awful CSV format. Here are some minimal changes that could be used to define a new CSV++ format that would be far easier to parse and validate rigorously (a sketch of what such a file might look like follows the list):

  1. Every CSV++ file starts with a short preamble.
  2. The first section of the preamble indicates the number of rows and columns.
  3. The second section of the preamble indicates the names and types of all of the columns.
  4. The third section of the preamble indicates the special sequence that’s used to represent NULL values.
  5. The fourth section of the preamble indicates (a) the separator sequence that’s used to separate columns within each row and (b) the separator sequence that’s used to separate rows.
  6. String values are always quoted. Values of other types are never quoted.
  7. CSV++ formatted files must never contain comments or any other non-content-carrying bits.
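
To make this proposal concrete, here is a sketch of what a tiny CSV++ file might look like. The exact preamble syntax (the key=value lines and their ordering) is something I am inventing purely for illustration; the rules above do not pin down a concrete grammar.

nrows=3,ncols=3
id:int,name:string,score:float
null=NA
colsep=",",rowsep="\n"
1,"alice",3.7
2,"bob",NA
3,"carol",4.1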

From my perspective, the only problem with this proposed format is that it assumes CSV++ files are written in batch mode, because you need to know the number of rows before you can start writing data. I don’t think this is especially burdensome: in a streaming context, you could write out files of length N and then patch the preamble of the final file to correct the row count that was written during the initial pass.

I have little-to-no hope that such a format would ever become popular, but I think it’s worth documenting just how little work would be required to replace CSV files with a format that’s far easier for machines to parse while only being slightly harder for humans to write. When I think about the trade-off between human and machine needs in computing, the CSV format is my canonical example of something that ignored the machine needs so systematically that it became hard for humans to use without labor-intensive manual data validation.

Update: Ivar Nesje notes that the preamble also needs to indicate the quotation character for quoting strings and specify the rule for escaping that character within strings.

Turning Distances into Distributions

Deriving Distributions from Distances

Several of the continuous univariate distributions that frequently come up in statistical theory can be derived by transforming distances into probabilities. Essentially, these distributions only differ in terms of how frequently values are drawn that lie at a distance \(d\) from the mode. To see how these transformations work (and unify many distinct distributions), let’s make some simplifying assumptions.

First, we’ll assume that the distribution \(D\) we want to construct is unimodal. In other words, there is a typical value, which we’ll call \(m\), that occurs more frequently than any other value drawn from that distribution. For the sake of simplicity, we’re going to assume that \(m = 0\) from now on.

Second, we’ll assume that the probability that a value \(x\) is drawn from the distribution \(D\) decreases as \(x\) gets further from \(0\). To simplify things, we’ll assume that the density function is inversely proportional to a function \(f\) of \(x\)’s distance from \(0\).

Given these assumptions, we know that the density function will look like,
p(x) = \frac{c}{f(|x|)} \propto \frac{1}{f(|x|)},
where \(c\) is chosen to ensure that the density function integrates to \(1\) over \(x \in (-\infty, +\infty)\). From now on, we’ll ignore this constant as it distracts one from seeing the bigger picture.

The really big question given this setup is this: what densities do we get when we consider different transformations for \(f\)?

How Must the Function \(f\) Behave to Define a Valid Density?

What constraints must we place on \(f\) to ensure that the proposed density \(p(x)\) is actually a valid probability density function?

First, we’ll assume that \(f(|x|)\) is bounded away from zero to ensure that \(\frac{1}{f(|x|)}\) doesn’t head off towards \(\infty\) or take on negative values. To simplify things we’re going to make an even stronger assumption: we’ll assume that \(f(0) = 1\) and that \(f(|x|) > f(0)\) for all \(x \neq 0\).

Second, we’ll assume that \(f(|x|)\) increases fast enough to ensure that,
\int_{-\infty}^{+\infty} \frac{1}{f(|x|)} \, dx < \infty.

This latter requirement turns out to be the more interesting requirement. How fast must \(f(|x|)\) increase to guarantee that the probability integral is finite? To see that it’s possible for \(f(|x|)\) not to increase fast enough, imagine that \(f(|x|) = 1 + |x|\). In this case, we’d be integrating something that looks like,

\int_{-\infty}^{+\infty} \frac{1}{1 + |x|} \, dx,

which is not finite. As such, we know that \(f(|x|)\) must grow faster than the absolute value function. It turns out that the next obvious choice for growth after linear growth is sufficient to give us a valid probability density function: if we look at \(f(|x|) = 1 + |x|^k\) for \(k \geq 2\), we already get a valid density function. Indeed, when \(k = 2\) we’ve derived the functional form of the Cauchy distribution: a distribution famous for having tails so heavy that it has no finite moments. More generally, when \(k > 2\), we get something that looks a bit like the t-statistic’s distribution, although it’s definitely not the same. Our densities look like,
p(x) \propto \frac{1}{1 + |x|^k},
whereas the t-statistic’s distribution looks more like,
p(x) \propto \frac{1}{(1 + \frac{|x|^2}{2k - 1})^k}.

That said, when \(f(|x|)\) is a polynomial, we generally get a heavy-tailed distribution.

From Polynomial Growth to Exponential Growth

What happens if we consider exponential growth for \(f(|x|)\) instead of polynomial growth?

(1) If \(f(|x|) = \exp(|x|)\), we get the Laplace distribution. This is the canonical example of a sub-exponential distribution, whose tails drop off exponentially fast, but not as fast as a Gaussian’s tails.

(2) If \(f(|x|) = \exp(|x|^2)\), we get the Gaussian distribution. This is the quintessentially well-behaved distribution that places the vast majority of its probability mass at points within a small distance from zero.

(3) If \(f(|x|) = \exp(|x|^k)\) and \(k > 2\), we get a subset of the sub-Gaussian distributions, whose tails are even thinner than the Gaussian’s tails.

Essentially, the distance-to-distribution way of thinking about distributions gives us a way of thinking about how fast the tail probabilities for a distribution head to zero. By talking in terms of \(f(|x|)\) without reference to anything else, we can derive lots of interesting results that don’t depend upon the many constants we might otherwise get distracted by.

Conclusion: Ranking Distributions by Their Tails

We can recover a lot of classic probability distributions by assuming that they put mass at values inversely proportional to the value’s distance from zero. In particular, we can transform distances into distributions as follows (a minimal numerical sketch follows the list):

* Select a function \(f(|x|)\) that increases faster than \(f(|x|) = 1 + |x|\).

* Define the unnormalized density as \(p'(x) = \frac{1}{f(|x|)}\).

* Figure out the integral of \(p'(x)\) from \(-\infty\) to \(+\infty\) and call it \(c\).

* Define the normalized density as \(p(x) = \frac{p'(x)}{c} = \frac{1}{c \, f(|x|)}\).
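
Here is a minimal numerical sketch of those four steps in Julia, assuming the QuadGK package for the integration step; the growth functions and the tail cutoff at 3 are chosen purely for illustration.

using QuadGK

growth_functions = Dict(
    "Cauchy-like, f = 1 + |x|^2" => d -> 1 + d^2,
    "Laplace, f = exp(|x|)"      => d -> exp(d),
    "Gaussian, f = exp(|x|^2)"   => d -> exp(d^2),
)

for (name, f) in growth_functions
    # Steps 2 and 3: integrate the unnormalized density to get the constant c.
    c, _ = quadgk(x -> 1 / f(abs(x)), -Inf, Inf)
    # Step 4: normalize, then measure how much mass lies beyond |x| > 3.
    tail, _ = quadgk(x -> (1 / f(abs(x))) / c, 3.0, Inf)
    println(name, ": c = ", round(c, digits = 4),
            ", P(|X| > 3) = ", round(2 * tail, digits = 6))
end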

This approach gives us the following prominent distributions:

(1) If \(f(|x|) = 1 + |x|^k\) and \(k = 2\), we exactly recover the Cauchy distribution. For larger \(k\), we recover heavy-tailed distributions that seem related to the t-statistic’s distribution, but aren’t identical to it.

(2) If \(f(|x|) = \exp(|x|)\), we exactly recover the Laplace distribution.

(3) If \(f(|x|) = \exp(|x|^k)\) and \(1 < k < 2\), we recover a class of sub-exponential distributions between the Laplace distribution and the Gaussian distribution.

(4) If \(f(|x|) = \exp(|x|^2)\), we exactly recover the Gaussian distribution.

(5) If \(f(|x|) = \exp(|x|^k)\) and \(k > 2\), we recover a class of sub-Gaussian distributions.

If you want to see why this ranking of distributions in terms of distance is so helpful, imagine that the limiting distribution for a mean were Laplace rather than Gaussian: you’d find that inference would be vastly more difficult. The extreme speed with which the tails of the normal distribution head to zero does much of the heavy lifting in statistical inference.

References to the Literature?

I imagine this classification of distributions has a name. It seems linked to the exponential family, although the details of that link aren’t very clear to me yet.

Much of the material for this note is canonical, although I’m less familiar with the focus on relating metrics to distributions. One thing that got me interested in the topic long ago was this blog post by Ben Moran.

The Convexity of Improbability: How Rare are K-Sigma Effects?

In my experience, people seldom appreciate just how much more compelling a 5-sigma effect is than a 2-sigma effect. I suspect part of the problem is that p-values don’t invoke the visceral sense of magnitude that statements of the form, “this would happen 1 in K times”, do.

To that end, I wrote a short Julia script to show how often a K-sigma effect would occur if the null hypothesis were true. A table of examples for K between 1 and 12 is shown below.

K     A K-sigma effect occurs 1 in N times under the null
1     1 in 3 times
2     1 in 22 times
3     1 in 370 times
4     1 in 15,787 times
5     1 in 1,744,278 times
6     1 in 506,797,346 times
7     1 in 390,682,215,445 times
8     1 in 803,734,397,655,352 times
9     1 in 4,430,313,100,526,877,940 times
10    1 in 65,618,063,552,490,270,597,321 times
11    1 in 2,616,897,361,902,875,526,344,715,013 times
12    1 in 281,455,127,862,356,011,929,402,519,726,336 times
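
Here is a minimal sketch of the kind of script described above, assuming the Distributions package; the original script is not reproduced here, and the exact digits for large K depend on floating-point precision.

using Distributions

for k in 1:12
    # Probability of a two-sided K-sigma (or larger) effect under the null.
    p = 2 * ccdf(Normal(0, 1), k)
    println("A ", k, " sigma effect occurs about 1 in ", round(BigInt, 1 / p), " times under the null")
end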

One thing I like about showing things in this format: it’s much more obvious that increasing K by one additional unit has an ever-increasing effect on the strength of your claims. This is what I’m jokingly calling the “convexity of improbability” in the title: each additional sigma strengthens your claim by more than the previous sigma did.

Why I’m Not a Fan of R-Squared

The Big Message

People sometimes use \(R^2\) as their preferred measure of model fit. Unlike quantities such as MSE or MAD, \(R^2\) is not a function only of a model’s errors: its definition contains an implicit comparison between the model being analyzed and the constant model that uses only the observed mean to make predictions. As such, \(R^2\) answers the question: “does my model perform better than a constant model?” But we often want to answer a very different question: “does my model perform worse than the true model?”

In theoretical examples, it is easy to see that the answers to these two questions are not interchangeable. We can construct examples in which our model performs no better than a constant model, even though it also performs no worse than the true model. But we can also construct examples in which our model performs much better than a constant model, even when it also performs much worse than the true model.

As with all model comparisons, \(R^2\) is a function not only of the models being compared, but also a function of the data set being used to perform the comparison. For almost all models, there exist data sets that are wholly incapable of distinguishing between the constant model and the true model. In particular, when using a dataset with insufficient model-discriminating power, \(R^2\) can be pushed arbitrarily close to zero — even when we are measuring \(R^2\) for the true model. As such, we must always keep in mind that \(R^2\) does not tell us whether our model is a good approximation to the true model: \(R^2\) only tells us whether our model performs noticeably better than the constant model on our dataset.

A Theoretical Example

To see how comparing a proposed model with a constant model can lead to conclusions opposite to those reached by comparing the same model with the true model, let’s consider a simple example: we want to model the function \(f(x)\), which is observed noisily over an equally spaced grid of \(n\) points between \(x_{min}\) and \(x_{max}\).

To start, let’s assume that:

  • \(f(x) = \log(x)\).
  • \(x_{min} = 0.99\).
  • \(x_{max} = 1.01\).
  • At 1,000 evenly spaced values of \(x\) between \(x_{min}\) and \(x_{max}\), we observe \(y_i = f(x_i) + \epsilon_i\), where \(\epsilon_i \sim \mathbf{N}(0, \sigma^2)\).

Based on this data, we’ll attempt to learn a model of \(f(x)\) using univariate OLS regression. We’ll fit both a linear model and a quadratic model. An example realization of this modeling process looks like the following:

In this graph, we can see that \(f(x)\) is well approximated by a line and so both our linear and quadratic regression models come close to recovering the true model. This is because \(x_{min}\) and \(x_{max}\) are very close and our target function is well approximated by a line in this region, especially relative to the amount of noise in our observations.

We can see the quality of these simple regression models if we look at two binary comparisons: our model versus the constant model and our model versus the true logarithmic model. To simplify these calculations, we’ll work with an alternative \(R^2\) calculation that ignores corrections for the number of regressors in a model. Thus, we’ll compute \(R^2\) for a model \(m\) (versus the constant model \(c\)) as follows:

R^2 = \frac{\text{MSE}_c - \text{MSE}_m}{\text{MSE}_c} = 1 - \frac{\text{MSE}_m}{\text{MSE}_c}

As with the official definition of \(R^2\), this quantity tells us how much of the residual error left over by the constant model is accounted for by our model. To see how this comparison leads to different conclusions than a comparison against the true model, we’ll also consider a variant of \(R^2\) that we’ll call \(E^2\) to emphasize that it measures how much worse our model’s errors are than those of the true model \(t\):

E^2 = \frac{\text{MSE}_m - \text{MSE}_t}{\text{MSE}_t} = \frac{\text{MSE}_m}{\text{MSE}_t} - 1

Note that \(E^2\) has the opposite sense as \(R^2\): better fitting models have lower values of \(E^2\).
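
As a concrete sketch, here is how these two quantities could be computed in Julia for the example above. The noise level \(\sigma\) is not stated in this post, so the value below is a placeholder and the resulting numbers will not match the quoted ones exactly.

using Statistics

mse(y, yhat) = sum(abs2, y .- yhat) / length(y)

r_squared(mse_m, mse_c) = 1 - mse_m / mse_c   # relative to the constant model
e_squared(mse_m, mse_t) = mse_m / mse_t - 1   # relative to the true model

# Hypothetical setup mirroring the example: noisy observations of log(x).
x = collect(range(0.99, 1.01, length = 1_000))
σ = 0.1                                       # placeholder noise level
y = log.(x) .+ σ .* randn(length(x))

X = hcat(ones(length(x)), x)
β = X \ y                                     # OLS fit of the linear model

mse_linear   = mse(y, X * β)
mse_constant = mse(y, fill(mean(y), length(y)))
mse_true     = mse(y, log.(x))

r_squared(mse_linear, mse_constant), e_squared(mse_linear, mse_true)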

Computing these numbers on our example, we find that \(R^2 = 0.007\) for the linear model and \(R^2 = 0.006\) for the true model. In contrast, \(E^2 = -0.0008\) for the linear model, indicating that it is essentially indistinguishable from the true model on this data set. Even though \(R^2\) suggests our model is not very good, \(E^2\) tells us that our model is close to perfect over the range of \(x\).

Now what would happen if the gap between \(x_{max}\) and \(x_{min}\) grew larger? For a monotonic function like \(f(x) = \log(x)\), we’ll see that \(E^2\) steadily increases, but that \(R^2\) does something very strange: it increases for a while and then starts to decrease.

Before considering this idea in general, let’s look at one more specific example in which \(x_{min} = 1\) and \(x_{max} = 1000\):

In this case, visual inspection makes it clear that the linear and quadratic models are both systematically inaccurate, but their values of \(R^2\) have gone up substantially: \(R^2 = 0.760\) for the linear model and \(R^2 = 0.997\) for the true model. In contrast, \(E^2 = 85.582\) for the linear model, indicating that this data set provides substantial evidence that the linear model is worse than the true model.

These examples show how the linear model’s \(R^2\) can increase substantially, even though most people would agree that the linear model has become an unacceptably bad approximation of the true model. Indeed, although \(R^2\) seems to improve when transitioning from one example to the other, \(E^2\) gets strictly worse as the gap between \(x_{min}\) and \(x_{max}\) increases. This suggests that \(R^2\) can be misleading if we interpret it as a proxy for the generally unmeasurable \(E^2\). But, in truth, our two extreme examples don’t tell the full story: \(R^2\) actually changes non-monotonically as we transition between the two cases we’ve examined.

Consider performing a similar analysis to the ones we’ve done above, but over many possible grids of points starting with \((x_{min}, x_{max}) = (1, 1.1)\) and ending with \((x_{min}, x_{max}) = (1, 1000)\). If we calculate \(R^2\) and \(E^2\) along the way, we get graphs like the following (after scaling the x-axis logarithmically to make the non-monotonicity more visually apparent).

First, \(R^2\):

Second, \(E^2\):

Notice how strange the graph for \(R^2\) looks compared with the graph for \(E^2\). \(E^2\) always grows: as we consider more discriminative data sets, we find increasingly strong evidence that our linear approximation is not the true model. In contrast, \(R^2\) starts off very low (precisely when \(E^2\) is very low because our linear model is little worse than the true model) and then changes non-monotonically: it peaks when there is enough variation in the data to rule out the constant model, but not yet enough variation in the data to rule out the linear model. After that point it decreases. A similar non-monotonicity is seen in the \(R^2\) value for the quadratic model. Only the true model shows a monotonic increase in \(R^2\).


I wrote down all of this not to discourage people from ever using \(R^2\). I use it sometimes and will continue to do so. But I think it’s important to understand both that (a) the value of \(R^2\) is heavily determined by the data set being used and that (b) the value of \(R^2\) can decrease even when your model is becoming an increasingly good approximation to the true model. When deciding whether a model is useful, a high \(R^2\) can be undesirable and a low \(R^2\) can be desirable.

This is an inescapable problem: whether a wrong model is useful always depends upon the domain in which the model will be applied and the way in which we evaluate all possible errors over that domain. Because \(R^2\) contains an implicit model comparison, it suffers from this general dependence on the data set. \(E^2\) also has such a dependence, but it at least seems not to exhibit a non-monotonic relationship with the amount of variation in the values of the regressor variable, \(x\).


The code for this post is on GitHub.

Retrospective Edits

In retrospect, I would like to have made clearer that measures of fit like MSE and MAD also depend upon the domain of application. My preference for them rests largely on the fact that the lack of an implicit normalization makes them prima facie arbitrary numbers, which I hope makes it more obvious that they are highly sensitive to the domain of application — whereas the normalization in \(R^2\) makes the number seem less arbitrary and can let one forget how data-dependent it is.

In addition, I really ought to have defined \(E^2\) using a different normalization. For a model \(m\) compared with both the constant model \(c\) and the true model \(t\), it would be easier to work with the bounded quantity:

E^2 = \frac{\text{MSE}_m - \text{MSE}_t}{\text{MSE}_c - \text{MSE}_t}

This quantity asks where our model lies on a spectrum running from the worst defensible model’s excess error (which is \(\text{MSE}_c - \text{MSE}_t\)) down to the best possible model’s excess error (which is \(\text{MSE}_t - \text{MSE}_t = 0\)). Note that in the homoscedastic regression setup I’ve considered, \(\text{MSE}_t = \sigma^2\), which can be estimated from empirical data using the identifying assumption of homoscedasticity and repeated observations at any fixed value of the regressor, \(x\). (I’m fairly certain that this normalized quantity has a real name in the literature, but I don’t know it off the top of my head.)

I should also have noted that some additional hacks are required to guarantee this number lies in \([0, 1]\) because the lack of model nesting means that violations are possible. (Similar violations are not possible in the linear regression setup because the constant model is almost always nested in the more elaborate models being tested.)

Retrospective Edits Round 2

After waking up to see this post on Hacker News, I realized I should probably have included a graphical representation of the core idea to help readers who don’t have any background in theoretical statistics. Here’s that graphical representation, which shows that, given three models to be compared, we can always place our model on a spectrum from the performance of the worst model (i.e. the constant model) to the performance of the best model (i.e. the true model).


When measuring a model’s position on this spectrum, \(R^2\) and \(E^2\) are complementary quantities.

Retrospective Edits Round 3

An alternative presentation of these ideas would have focused on a bias/variance decomposition of the quantities being considered. From that perspective, we’d see that:

R^2 = 1 - \frac{\text{MSE}_m}{\text{MSE}_c} = 1 - \frac{\text{bias}_m^2 + \sigma^2}{\text{bias}_c^2 + \sigma^2}

Using the redefinition of \(E^2\) I mentioned in the first round of “Retrospective Edits”, the alternative to \(R^2\) would be:

E^2 = \frac{\text{MSE}_m - \text{MSE}_t}{\text{MSE}_c - \text{MSE}_t} = \frac{(\text{bias}_m^2 + \sigma^2) - (\text{bias}_t^2 + \sigma^2)}{(\text{bias}_c^2 + \sigma^2) - (\text{bias}_t^2 + \sigma^2)} = \frac{\text{bias}_m^2 - \text{bias}_t^2}{\text{bias}_c^2 - \text{bias}_t^2} = \frac{\text{bias}_m^2 - 0}{\text{bias}_c^2 - 0} = \frac{\text{bias}_m^2}{\text{bias}_c^2}

In large part, I find \(R^2\) hard to reason about because of the presence of the \(\sigma^2\) term in the ratios shown above. \(E^2\) essentially removes that term, which I find useful for reasoning about the quality of a model’s fit to data.

A Variant on “Statistically Controlling for Confounding Constructs is Harder than you Think”

Yesterday, a coworker pointed me to a new paper by Jacob Westfall and Tal Yarkoni called “Statistically controlling for confounding constructs is harder than you think”. I quite like the paper, which describes some problems that arise when drawing conclusions about the relationships between theoretical constructs using only measurements of observables that are, at best, approximations to those theoretical constructs.

Given their intended audience, I think Westfall and Yarkoni’s approach is close to optimal: they motivate their concerns using examples that depend only upon the reader having a basic understanding of linear regression and measurement error. That said, the paper made me wish that more psychologists were familiar with Judea Pearl’s work on graphical techniques for describing causal relationships.

To see how a graphical modeling approach could be used to articulate Westfall and Yarkoni’s point, let’s assume that the true causal structure of the world looks like the graph shown in Figure 1 below:

Figure 1

For those who’ve never seen such a graph before, the edges are directed in a way that indicates causal influence. This graph says that the objective temperature on any given day is the sole cause of (a) the subjective temperature experienced by any person, (b) the amount of ice cream consumed on that day, and (c) the number of deaths in swimming pools on the day.

If this graph reflects the true causal structure of the world, then the only safe way to analyze data about the relationship between ice cream consumption and swimming pool deaths is to condition on objective temperature. If highlighting represents conditioning on a variable, the conclusion is that a correct analysis must look like the following colored-in graph:

Figure 2

Westfall and Yarkoni’s point is that this correct analysis is frequently not performed because the objective temperature variable is not observed and therefore cannot be conditioned on. As such, one conditions on subjective temperature instead of objective temperature, leading to this colored-in graph:

Figure 3

The problem with conditioning on subjective temperature instead of objective temperature is that you have failed to block the relevant path in the graph, which is the one that links ice cream consumption and swimming pool deaths by passing through objective temperature. Because the relevant path of dependence is not properly blocked, an erroneous relationship between ice cream consumption and swimming pool deaths leaks through. It is this leaking through that causes the high false positive rates that are the focus of Westfall and Yarkoni’s paper.
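
For readers who prefer simulations to graphs, here is a hypothetical Julia sketch of the structure in Figure 1. The variable names and coefficients are invented; the point is only that conditioning on the noisy proxy leaves a spurious ice cream coefficient behind, while conditioning on the true confounder drives it to roughly zero.

n = 1_000_000
objective  = randn(n)                       # objective temperature (unobserved in practice)
subjective = objective .+ 0.5 .* randn(n)   # noisy subjective measurement of it
icecream   = objective .+ randn(n)          # caused only by objective temperature
deaths     = objective .+ randn(n)          # caused only by objective temperature

ols(X, y) = X \ y

# OLS coefficient on ice cream after conditioning on each temperature variable.
b_given_subjective = ols(hcat(ones(n), icecream, subjective), deaths)[2]
b_given_objective  = ols(hcat(ones(n), icecream, objective),  deaths)[2]

println("ice cream coefficient given subjective temperature: ", round(b_given_subjective, digits = 3))
println("ice cream coefficient given objective temperature:  ", round(b_given_objective,  digits = 3))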

One reason I like this graphical approach is that it makes clear that the opposite problem can occur as well: the world might also have a causal structure such that one must condition on subjective temperature, and conditioning on objective temperature is always insufficient.

To see how that could occur, consider this alternative hypothetical causal structure for the world:

Figure 4

If this alternative structure were correct, then a correct analysis would need to color in subjective temperature:

Figure 5

An incorrect analysis would be to color in objective temperature rather than subjective temperature:

Figure 6

But, somewhat surprisingly, coloring in both would be fine:

Figure 7

It’s taken me some time to master this formalism, but I now find it quite easy to reason about these kinds of issues thanks to the brevity of graphical models as a notational technique. I’d love to see this approach become more popular in psychology, given that it has already become quite widespread in other fields. Of course, Westfall and Yarkoni already advocate for something very similar in recommending the use of SEMs, but the graphical approach is strictly more general than SEMs and, in my personal opinion, strictly simpler to reason about.

Understanding the Pseudo-Truth as an Optimal Approximation


One of the things that sets statistics apart from the rest of applied mathematics is an interest in the problems introduced by sampling: how can we learn about a model if we’re given only a finite and potentially noisy sample of data?

Although frequently important, the issues introduced by sampling can be a distraction when the core difficulties you face would persist even with access to an infinite supply of noiseless data. For example, if you’re fitting a misspecified model \(m_1\) to data generated by a model \(m_2\), this misspecification will persist even as the supply of data becomes infinite. In this setting, the issues introduced by sampling can be irrelevant: it’s often more important to know whether or not the misspecified model, \(m_1\), could ever act as an acceptable approximation to the true model, \(m_2\).

Until recently, I knew very little about these sorts of issues. While reading Cosma Shalizi’s draft book, “Advanced Data Analysis from an Elementary Point of View”, I learned about the concept of pseudo-truth, which refers to the version of \(m_1\) that’s closest to \(m_2\). Under model misspecification, estimation procedures often converge to the pseudo-truth as \(n \to \infty\).

Thinking about the issues raised by using a pseudo-true model rather than the true model has gotten me interested in learning more about quantifying approximation error. Although I haven’t yet learned much about the classical theory of approximations, it’s clear that a framework based on approximation errors allows one to formalize issues that would otherwise require some hand-waving.

A General Framework for Quantifying Approximation Error

Suppose that I want to approximate a function \(f(x)\) with another function \(g(x)\) that comes from a restricted set of functions, \(G\). What does the optimal approximation look like when \(g(x)\) comes from a parametric class of functions \(G = \{g(x, \theta) | \theta \in \mathbb{R}^{p}\}\)?

To answer this question with a single number, I think that one natural approach looks like:

  1. Pick a point-wise loss function, \(L(f(x), g(x))\), that evaluates the gap between \(f(x)\) and \(g(x)\) at any point \(x\).
  2. Pick a set, \(S\), of values of \(x\) over which you want to evaluate this point-wise loss function.
  3. Pick an aggregation function, \(A\), that summarizes all of the point-wise losses incurred by any particular approximation \(g(x, \theta)\) over the set \(S\).

Given \(L\), \(S\) and \(A\), you can define an optimal approximation from \(G\) to be a function, \(g \in G\), that minimizes the aggregated point-wise loss function over all values in \(S\).

An Analytically Tractable Special Case

This framework is so general that I find it hard to reason about. Let’s make some simplifying assumptions so that we can work through an analytically tractable special case:

  1. Let’s assume that \(f(x) = x^2\). Now we’re trying to find the optimal approximation of a quadratic function.
  2. Let’s assume that \(g(x, \theta) = g(x, a, b) = ax + b\), where \(\theta = \langle a, b \rangle \in \mathbb{R}^{2}\). Now we’re trying to find the optimal linear approximation to the quadratic function, \(f(x) = x^2\).
  3. Let’s assume that the point-wise loss function is \(L(f(x), g(x, \theta)) = [f(x) – g(x, \theta)]^2\). Now we’re quantifying the point-wise error using the squared error loss that’s commonly used in linear regression problems.
  4. Let’s assume that \(S\) is a closed interval on the real line, that is \(S = [l, u]\).
  5. Let’s assume that the aggregation function, \(A\), is the integral of the loss function, \(L\), over the interval, \([l, u]\).

Under these assumptions, the optimal approximation to \(f(x)\) from the parametric family \(g(x, \theta)\) is defined as a solution, \(\theta^{*}\), to the following minimization problem:

\theta^{*} = \arg \min_{\theta} \int_{l}^{u} L(f(x), g(x, \theta)) \, dx = \arg \min_{\theta} \int_{l}^{u} [f(x) - g(x, \theta)]^2 \, dx = \arg \min_{\theta} \int_{l}^{u} [x^2 - (ax + b)]^2 \, dx.

I believe that this optimization problem describes the pseudo-truth towards which OLS univariate regression converges when the values of the covariate, \(x\), are drawn from a uniform distribution over the interval \([l, u]\). Simulations agree with this belief, although I may be missing some important caveats.

Solving the Problem Analytically

To get a feeling for how the pseudo-truth behaves in this example problem, I found it useful to solve for the optimal linear approximation analytically by computing the gradient of the cost function with respect to \(\theta\) and then solving for the roots of the resulting equations:

\frac{\partial}{\partial a} \int_{l}^{u} [x^2 - (ax + b)]^2 \, dx = 0, \\
\frac{\partial}{\partial b} \int_{l}^{u} [x^2 - (ax + b)]^2 \, dx = 0.

After some simplification, this reduces to a matrix equation:

\begin{bmatrix}
\frac{2}{3} (u^3 - l^3) & u^2 - l^2 \\
u^2 - l^2 & 2 (u - l) \\
\end{bmatrix} \begin{bmatrix}
a \\
b \\
\end{bmatrix} = \begin{bmatrix}
\frac{1}{2} (u^4 - l^4) \\
\frac{2}{3} (u^3 - l^3) \\
\end{bmatrix}

In practice, I suspect it’s much easier to solve for the optimal approximation on a computer by using quadrature to approximate the aggregated point-wise loss function and then minimizing that aggregation function using a quasi-Newton method. I experimented a bit with this computational approach and found that it reproduces the analytic solution for this problem. The computational approach generalizes to other problems readily and requires much less work to get something running.
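
For concreteness, here is a small Julia sketch that solves the 2x2 system above for a given interval and then checks it against the earlier claim about OLS; the function name and the Monte Carlo check are mine rather than code from the original analysis.

function optimal_line(l, u)
    A = [ (2/3)*(u^3 - l^3)  (u^2 - l^2);
          (u^2 - l^2)        2*(u - l) ]
    rhs = [ (1/2)*(u^4 - l^4), (2/3)*(u^3 - l^3) ]
    return A \ rhs                  # returns [a, b]
end

a, b = optimal_line(0.0, 1.0)       # on [0, 1] this gives a = 1, b = -1/6

# Monte Carlo check: with x drawn uniformly on [l, u] and y = x^2 (noise is
# irrelevant to the limit), the OLS coefficients approach (a, b).
n = 1_000_000
x = rand(n)                         # uniform draws on [0, 1]
β = hcat(x, ones(n)) \ (x .^ 2)     # β[1] ≈ a, β[2] ≈ b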

Some Optimal Linear Approximations to a Quadratic Function over Different Intervals

You can see examples of the pseudo-truth versus the truth below for a few intervals \([l, u]\):


I don’t think there’s anything very surprising in these plots, but I nevertheless find the plots useful for reminding myself how sensitive the optimal approximation is to the set \(S\) over which it will be applied.

Formally Defining Asymptotic Bias

One reason that I find it useful to think formally about the quality of optimal approximations is that it makes it easier to rigorously define some of the problems that arise in machine learning.

Consider, for example, the issues raised in this blog post by Paul Mineiro:

In statistics the bias-variance tradeoff is a core concept. Roughly speaking, bias is how well the best hypothesis in your hypothesis class would perform in reality, whereas variance is how much performance degradation is introduced from having finite training data.

I would say that this definition of bias is quite far removed from the definition used in statistics and econometrics, although it can be formalized in terms of the expected point-wise loss.

The core issue, as I see it, is that, for a statistician or econometrician, bias is not an asymptotic property of a model class, but rather a finite-sample property of an estimator. Variance is also a finite-sample property of an estimator. Some estimators, such as the ridge estimator for a linear regression model, decrease variance by increasing bias, which induces a trade-off between bias and variance in finite samples.

But Paul Mineiro is not, I think, interested in these finite-sample properties of estimators. I believe he’s concerned about the intrinsic error introduced by approximating one function with another. And that’s a very important topic that I haven’t seen discussed as often as I’d like.

Put another way, I think Mineiro’s concern is much more closely linked to formalizing an analogue for regression problems of the concepts of VC dimension and Rademacher complexity that come up in classification problems. Because I wasn’t familiar with any such tools, I found it helpful to work through some problems on optimal approximations. Given this framework, I think it’s easy to replace Mineiro’s concept of bias with an approximation error formalism that might be called asymptotic bias or definitional bias, as Efron does in this paper. These tools let one quantify the costs incurred by the use of a pseudo-true model rather than the true model.

I’d love to learn more about this issue. As it stands, I know too much about sampling and too little about quantifying the quality of approximations.

Some Observations on Winsorization and Trimming

Over the last few months, I’ve had a lot of conversations with people about the use of winsorization to deal with heavy-tailed data that is positively skewed because of large outliers. After a conversation with my friend Chris Said this past week, it became clear to me that I needed to do some simulation studies to understand the design space of techniques for dealing with outliers.

In this post, I’m going to write up my current understanding of the topic. To motivate my perspective, I’m going to describe a specific inferential problem. Trying to solve that problem will require me to explore a small portion of the design space of algorithms for dealing with outliers. It is not at all clear to me how well the conclusions I reach will generalize to other settings, but my hope is that a thorough examination of one specific problem will touch on some of the core issues that would arise in other settings. I’ve also put the code for this simulation study on GitHub so that others can easily build variants of it.

Defining Outliers

To simplify things, I’m going to define an outlier as any value greater than a certain percentile of all of the observed data points. For example, I might say that all observations above the 90th percentile are outliers. In this case, I will say that the “outlier window” is 10%, because I will consider all observations above the (100 - 10)th percentile to be outliers. This outlier window parameter is the first dimension of the design space that I want to understand. Most applications employ a very gentle outlier window, which labels no more than 5% of observations as outliers. But sometimes one sees more radical windows used. In this post, I will consider windows ranging from 1% up to 40%.

Outlier Removal

Given that we’ve flagged some observations as outliers, what do we do with them? I’ll consider three options (the latter two are sketched in code after the list):

  • Do nothing and leave the data unadjusted.
  • Discard all of the outliers. The removal of extreme values is usually called trimming or truncation.
  • Replace all of the outliers with the largest value that is not considered an outlier. The replacement of extreme values is usually called winsorization.
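
Here is a rough Julia sketch of the latter two treatments for the upper-tail-only outlier rule used in this post; the function names are mine, and window is the fraction of observations flagged as outliers.

using Statistics

# The top `window` fraction of observations is flagged as outliers.
upper_threshold(x, window) = quantile(x, 1 - window)

# Trimming: drop every flagged observation.
function trim(x, window)
    t = upper_threshold(x, window)
    return filter(v -> v <= t, x)
end

# Winsorization: replace every flagged observation with the largest value
# that was not flagged.
function winsorize(x, window)
    t = upper_threshold(x, window)
    cap = maximum(filter(v -> v <= t, x))
    return min.(x, cap)
end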

The Data Generation Process and Inferential Problem

As a further simplification, I’m only going to consider a single inferential problem: estimating the difference in means between two groups of observations. In this problem, I will generate data as follows:

  • Draw \(n\) IID observations from a distribution \(D\). Label this set of observations as Group A.
  • Draw \(n\) additional IID observations from the same distribution \(D\) and add a constant, \(\delta\), to every one of them. Label this set of observations as Group B.

In this post, the distribution, \(D\), that I will use is a log-normal distribution with mean 1.649 and standard deviation 2.161. These strange values correspond to a mean of 0 and a standard deviation of 1 on the pre-exponentiation scale used by my RNG.

Given that distribution, \(D\), I’ll consider three values of \(\delta\): 0.00, 0.05 and 0.10. Each value of \(\delta\) defines a new distribution, \(D^{\prime}\).

To contrast these distributions, I want to consider the quantity: \(\delta = \mathbb{E}[D^{\prime}] - \mathbb{E}[D]\). The outlier approaches I’ll consider will lead to several different estimators. I’ll refer to each of these estimators as \(\hat{\delta}\). From each estimator, I’ll construct point estimates and interval estimates; I’ll also conduct hypothesis tests based on the estimators using t-tests.

Selection of Outlier Windows

Given that we have two data sets, \(A\) and \(B\), applying outlier detection requires us to make another decision about our position in design space: which data sets do we use to compute the percentile that defines the outlier threshold?

I’ll consider three options (sketched in code after the list):

  • Compute a single percentile from a new data set formed by combining A and B. In the results, I’ll call this the “shared threshold” rule.
  • Compute a single percentile using data from A only. In the results, I’ll call this the “threshold from A” rule.
  • Compute two percentiles: one using data from A only and another using data from B only. We will then apply the threshold from A to A’s data and apply the threshold from B to B’s data. In the results, I’ll call this the “separate thresholds” rule.
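
Sketched concretely (again assuming an upper-tail-only window, with function names of my own choosing):

using Statistics

shared_threshold(a, b, window)    = quantile(vcat(a, b), 1 - window)   # one threshold from A and B pooled
threshold_from_a(a, b, window)    = quantile(a, 1 - window)            # one threshold from A, applied to both
separate_thresholds(a, b, window) = (quantile(a, 1 - window),          # A's threshold for A,
                                     quantile(b, 1 - window))          # B's threshold for B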

Combining this new design decision with the others already mentioned, the design space I’m interested in has three dimensions:

  • The outlier window.
  • The method for addressing outliers, which is either (a) no adjustment, (b) trimming, or (c) winsorization.
  • The data set used to compute the outlier threshold, which is a shared threshold, a separate threshold or a threshold derived from A’s observations only.


I consider the quality of inferences in terms of several metrics:

  • Point Estimation:
    • The bias of \(\hat{\delta}\) as an estimator of \(\delta\).
    • The RMSE of \(\hat{\delta}\) as an estimator of \(\delta\).
  • Interval Estimation:
    • The coverage probability of nominal 95% CI’s for \(\delta\) built around \(\hat{\delta}\).
  • Hypothesis Testing:
    • The false positive rate of a t-test when \(\delta = 0.00\).
    • The false negative rates of t-tests when \(\delta = 0.05\) or \(\delta = 0.10\).

Point Estimation

The results for the point estimation problem are shown graphically below. I begin by plotting KDE’s of the sampling distributions of the seven estimators I’m considering, which give an intuition for all of the core results shown later. The columns in these plots reflect the different values of \(\delta\), which are either 0.00, 0.05 or 0.10.


Next I plot the bias of these estimators as a function of the outlier window. I split these plots according to the true value of \(\delta\):


Finally, I plot the RMSE of these estimators as a function of the outlier window, again splitting the plots according to the true value of \(\delta\):


Interval Estimation

To assess the quality of interval estimates derived from these estimators of \(\hat{\delta}\), I plot the empirical coverage probability of nominal 95% CI’s:


Hypothesis Testing

In this last set of analyses, I plot the false positive and false negative rates as a function of the outlier window:



And, for one final analysis, I include a plot of false positive rate when \(\delta = 0.00\) vs false negative rate when \(\delta = 0.05\). This makes it easier to see which methods are strictly dominated by others:



  • Both winsorization and trimming can improve the quality of point estimates of \(\delta\). But they can make inferences worse because they can introduce bias into these point estimates. The only way to prevent bias is to compute outlier thresholds in Groups A and B separately.
  • There is sometimes an optimal outlier window that minimizes the RMSE of point estimates of \(\delta\). It is not clear how to calculate this without knowing the ground truth parameter that you are trying to infer. Perhaps cross-validation would work. Importantly, choosing a bad value for the outlier window makes both winsorization and trimming inferior to doing no adjustment.
  • Both winsorization and trimming can improve the FP/FN tradeoffs made in a system, although one must use outlier thresholds that are shared across the A and B groups to reap these benefits.
  • CI’s only provide the desired coverage probabilities when the outlier window is very small.

Putting all this together, my understanding is that there is no universally best method without a more complex approach to outlier control. Whether winsorization or trimming improves the quality of inference depends on the structure of the data being analyzed and the inferential approach being used. Some methods work well for point estimation and interval estimation, while others work well for hypothesis testing. No method universally beats the option of doing no adjustment for outliers.

A Note about Likely Failures of Generalization

If the A and B groups differ in some way other than the constant additive shift I’ve considered here, I am fairly sure that the conclusions I’ve reached will not apply.

Why Julia’s DataFrames are Still Slow


Although I’ve recently decided to take a break from working on OSS for a little while, I’m still as excited as ever about Julia as a language.

That said, I’m still unhappy with the performance of Julia’s core data analysis infrastructure. The performance of code that deals with missing values has been substantially improved thanks to the beta release of the NullableArrays package, which David Gold developed during this past Julia Summer of Code. But the DataFrames package is still a source of performance problems.

The goal of this post is to explain why Julia’s DataFrames are still unacceptably slow in many important use cases — and will remain slow even after the current dependency on the DataArrays package is replaced with a dependency on NullableArrays.

Problematic Interactions with Julia’s Compiler

The core problem with the DataFrames library is that a DataFrame is, at its core, a black-box container that could, in theory, contain objects of arbitrary types. In practice, a DataFrame contains highly constrained objects, but those constraints are (a) hard to express to the compiler and (b) still too weak to allow the compiler to produce the most efficient machine code.

The use of any black-box container creates the potential for performance problems in Julia because of the way that Julia’s compiler works. In particular, Julia’s compiler is able to execute code quickly because it can generate custom machine code for every function call — and this custom machine code is specialized for the specific run-time types of the function’s arguments.

This run-time generation of custom machine code is called specialization. When working with black-box containers, Julia’s approach to specialization is not used to full effect because machine code specialization based on run-time types only occurs at function call sites. If you access objects from a black-box container and then perform extended computations on the results, those computations will not be fully specialized because there is no function call between (a) the moment at which type uncertainty about the contents of the black-box container is removed and (b) the moment at which code that could benefit from type information is executed.

A Minimal Example

To see this concern in practice, consider the following minimal example of a hot loop being executed on values that are extracted from a black-box container:

function g1(black_box_container)
    x, y = black_box_container[1], black_box_container[2]
    n = length(x)
    s = 0.0
    for i in 1:n
        s += x[i] * y[i]
    end
    return s
end

function hot_loop(x, y)
    n = length(x)
    s = 0.0
    for i in 1:n
        s += x[i] * y[i]
    end
    return s
end

function g2(black_box_container)
    x, y = black_box_container[1], black_box_container[2]
    return hot_loop(x, y)
end

container = Any[randn(10_000_000), randn(10_000_000)];

@time g1(container)
# 2.258571 seconds (70.00 M allocations: 1.192 GB, 5.03% gc time)

@time g2(container)
# 0.015286 seconds (5 allocations: 176 bytes)

g1 is approximately 150x slower than g2 on my machine. But g2 is, at a certain level of abstraction, exactly equivalent to g1 — the only difference is that the hot loop in g1 has been put inside of a function call. To convince yourself that the function call boundary is the only important difference between these two functions, consider the following variation of g2 and hot_loop:

@inline function hot_loop_alternative(x, y)
    n = length(x)
    s = 0.0
    for i in 1:n
        s += x[i] * y[i]
    end
    return s
end

function g3(black_box_container)
    x, y = black_box_container[1], black_box_container[2]
    return hot_loop_alternative(x, y)
end

@time g1(container)
# 2.290116 seconds (70.00 M allocations: 1.192 GB, 4.90% gc time)

@time g2(container)
# 0.017835 seconds (5 allocations: 176 bytes)

@time g3(container)
# 2.250301 seconds (70.00 M allocations: 1.192 GB, 5.08% gc time)

On my system, forcing the hot loop code to be inlined throws away all of the performance gains we saw with g2: g3 is just as slow as g1. Somewhat ironically, by inlining the hot loop, we’ve prevented the compiler from generating machine code that’s specialized on the types of the x and y values we pull out of our black_box_container. Inlining removes a function call site — and function call sites are the only times when machine code can be fully specialized based on run-time type information.

This problem is the core issue that needs to be resolved to make Julia’s DataFrames as efficient as they should be. Below I outline three potential solutions to this problem. I do not claim that these three are the only solutions; I offer them only to illustrate important issues that need to be addressed.

Potential Solutions to the Under-Specialization Problem

One possible solution to the problem of under-specialization is to change Julia’s compiler. I think that work on that front could be very effective, but the introduction of specialization strategies beyond Julia’s current "specialize at function call sites" would make Julia’s compiler much more complex — and could, in theory, make some code slower if the compiler were to spend more time performing compilation and less time performing the actual computations that a user wants to perform.

A second possible solution is to generate custom DataFrame types for every distinct DataFrame object. This could convert DataFrames from black-box containers that contain objects of arbitrary type into fully typed containers that can only contain objects of types that are fully known to the compiler.

The danger with this strategy is that you could generate an excessively large number of different specializations — which would again run the risk of spending more time inside the compiler than inside of the code you actually want to execute. It could also create excessive memory pressure as an increasing number of specialized code paths are stored in memory. Despite these concerns, a more aggressively typed DataFrame might be a powerful tool for doing data analysis.
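
Here is a very rough sketch, in current Julia syntax, of what this second option could look like: a container whose column types are part of its own type, so that code that unpacks the columns inside a function body gets fully specialized. This is only an illustration of the idea, not a claim about how DataFrames would actually implement it.

struct TypedFrame{C <: Tuple}
    columns::C
end

function sum_of_products(tf::TypedFrame)
    # The concrete column types are visible to the compiler here, so this
    # loop specializes just like the hot_loop example above.
    x, y = tf.columns[1], tf.columns[2]
    s = 0.0
    for i in 1:length(x)
        s += x[i] * y[i]
    end
    return s
end

tf = TypedFrame((randn(10_000_000), randn(10_000_000)))
@time sum_of_products(tf)   # runs at g2-like speed without a helper function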

The last possible solution I know of is the introduction of a high-level API that ensures that operations on DataFrames always reduce down to operations on objects whose types are known when hot loops execute. This is essentially the computational model used in traditional databases: take in a SQL specification of a computation, make use of knowledge about the data actually stored in existing tables to formulate an optimized plan for performing that computation, and then perform that optimized computation.

I think this third option is the best because it will also solve another problem Julia’s data infrastructure will hit eventually: the creation of code that is insufficiently generic and not portable to other backends. If people learn to write code that only works efficiently for a specific implementation of DataFrames, then their code will likely not work when they try to apply it to data stored in alternative backends (e.g. traditional databases). This would trap users into data structures that may not suit their needs. The introduction of a layer of appropriate abstractions (as in dplyr and Ibis) would resolve both issues at once.


  • Making Julia’s DataFrames better is still a work-in-progress.
  • The core issue is still the usage of data structures that are not amenable to Julia’s type inference machinery. One of the two main issues is now resolved; another must be addressed before things function smoothly.
  • Several solutions to this remaining problem are possible; we will probably see one or more of these solutions gain traction in the near-term future.

Rereading Meehl

Lately, I’ve been rereading a lot of Meehl’s papers on the epistemological problems with research in psychology. This passage from “The Problem Is Epistemology, Not Statistics: Replace Significance Tests by Confidence Intervals and Quantify Accuracy of Risky Numerical Predictions” strikes me as an almost perfect summary of his concerns, although it’s quite abstract and assumes familiarity with his “crud factor principle”, which asserts that all variables in psychology are correlated with all other variables.

I said that the crud factor principle is the concrete empirical form, realized in the sciences, of the logician’s formal point that the third figure of the implicative (mixed hypothetical) syllogism is invalid, the error in purported deductive reasoning termed affirming the consequent. Speaking methodologically, in the language of working scientists, what it comes to is that there are quite a few alternative theories \(T’\), \(T”\), \(T”’\), \(\ldots\) (in addition to the theory of interest \(T\)) that are each capable of deriving as a consequence the statistical counternull hypothesis \(H^{*}: \delta = (\mu_1 – \mu_2) > 0\), or, if we are correlating quantitative variables, that \(\rho > 0\). We might imagine (Meehl, 1990e) a big pot of variables and another (not so big but still sizable) pot of substantive causal theories in a specified research domain (e.g., schizophrenia, social perception, maze learning in the rat). We fantasize an experimenter choosing elements from these two pots randomly in picking something to study to get a publication. (We might impose a restriction that the variables have some conceivable relation to the domain being investigated, but such a constraint should be interpreted very broadly. We cannot, e.g., take it for granted that eye color will be unrelated to liking introspective psychological novels, because there is evidence that Swedes tend to be more introverted than Irish or Italians.) Our experimenter picks a pair of variables randomly out of the first pot, and a substantive causal theory randomly out of the second pot, and then randomly assigns an algebraic sign to the variables’ relation, saying, “\(H^{*}: \rho > 0\), if theory \(T\) is true.” In this crazy example there is no semantic-logical-mathematical relation deriving \(H^{*}\) from \(T\), but we pretend there is. Because \(H_0\) is quasi-always false, the counternull hypothesis \(~H_0\) is quasi-always true. Assume perfect statistical power, so that when \(H_0\) is false we shall be sure of refuting it. Given the arbitrary assignment of direction, the directional counternull \(H^{*}\) will be proved half the time; that is, our experiment will “come out right” (i.e., as pseudo-predicted from theory \(T\)) half the time. This means we will be getting what purports to be a “confirmation” of \(T\) 10 times as often as the significance level \(\alpha = .05\) would suggest. This does not mean there is anything wrong with the significance test mathematics; it merely means that the odds of getting a confirmatory result (absent our theory) cannot be equated with the odds given by the \(t\) table, because those odds are based on the assumption of a true zero difference. There is nothing mathematically complicated about this, and it is a mistake to focus one’s attention on the mathematics of \(t\), \(F\), chi-square, or whatever statistic is being employed. The population from which we are drawing is specified by variables chosen from the first pot, and one can think of that population as an element of a superpopulation of variable pairs that is gigantic in size but finite, just as the population, however large, of theories defined as those that human beings will be able to construct before the sun burns out is finite. 
The methodological point is that \(T\) has not passed a severe test (speaking Popperian), the “successful” experimental outcome does not constitute what philosopher Wesley Salmon called a “strange coincidence” (Meehl, 1990a, 1990b; Nye, 1972; Salmon, 1984), because with high power \(T\) has almost an even chance of doing that, absent any logical connection whatever between the variables and the theory.

Put another way: what’s wrong with psychology is that the theories being debated make such vague predictions as to be quasi-unfalsifiable. NHST is bad for the field because it encourages researchers to make purely directional predictions — that is, to make predictions that are so vague that their success is, to use an old-fashioned word, vain-glorious.