Ben Letham
Core Data Science, Facebook.

I am in the Core Data Science group at Facebook, where I build statistics tools, develop new ways of analyzing data, and run large online experiments. Before that, I completed my PhD in operations research at MIT. I can be contacted at
News from the last year:

- May 2019: I wrote a post "In Defense of Fahrenheit" in which I analyze data from nearly 4000 weather stations to conclude that the Fahrenheit scale is just about perfect for capturing U.S. daily temperatures.

- April 2019: The Bayesian optimization software that I work on has been open-sourced as a pair of packages, Ax and BoTorch.

- April 2019: We've posted a new paper to arXiv describing how we combine online A/B tests with a naive offline simulator to tune News Feed with multi-task Bayesian optimization (arXiv) (blog post).

- December 2018: We released v0.4 of Prophet on PyPI and CRAN (site).

- December 2018: The Bayesian optimization tools that I develop are used for improving Instagram feed ranking, as described in this IG engineering blog post (post).

- November 2018: I gave a talk on Bayesian optimization in the University of California Santa Cruz statistics department seminar series.

- October 2018: I gave a guest lecture on A/B testing at CU Boulder.

- September 2018: I wrote a blog post about Bayesian optimization at Facebook for the FB research site (post).

- September 2018: I'll be on a panel at the Bayesian nonparametrics workshop at NIPS in December.

- June 2018: Our paper on transfer learning for hyperparameter optimization was accepted at the ICML AutoML workshop (arxiv).

- June 2018: I spoke at the SLDS conference for the statistical learning section of the ASA.

- June 2018: We just released v0.3 of Prophet with some exciting new features, now on PyPI and CRAN (site).

- May 2018: Our paper on Bayesian optimization for A/B testing at Facebook has been accepted at Bayesian Analysis (paper).

Bayesian optimization and experimentation

Bayesian optimization with field experiments

At Facebook we have many backend systems with parameters that could presumably be optimized to improve system performance. Often, however, the only way to measure performance is through time-consuming, noisy field experiments. In this paper we describe methodological advances for using Bayesian optimization to design a set of experiments that efficiently searches the parameter space. The paper also describes two system optimizations done with this approach: tuning a production ranking system, and optimizing HHVM compiler flags.

Benjamin Letham, Brian Karrer, Guilherme Ottoni, Eytan Bakshy (2019) Constrained Bayesian optimization with noisy experiments. Bayesian Analysis 14(2): 495-519. (pdf) (supplement) (blog post)
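The experiment-design loop described above can be sketched generically: fit a Gaussian process surrogate to the observations so far, then choose the next configuration by maximizing an acquisition function such as expected improvement. Below is a minimal 1D illustration with an RBF kernel and a synthetic objective standing in for a field experiment; it is a sketch of the generic loop, not the constrained, noise-aware method of the paper, and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel between two sets of 1D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    # GP posterior mean and standard deviation at x_test.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.clip(np.diag(Kss - Ks.T @ v), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI acquisition function for maximization.
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def f(x):
    # Stand-in for a noisy, expensive field experiment; optimum at 0.7.
    return -(x - 0.7) ** 2

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 4)          # a few initial configurations
y_train = f(x_train)
grid = np.linspace(0, 1, 200)           # candidate parameter values
for _ in range(10):
    mu, sigma = gp_posterior(x_train, y_train, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_train.max()))]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, f(x_next))
best_x = x_train[np.argmax(y_train)]    # converges toward 0.7
```

The surrogate makes each "experiment" count: EI balances sampling near the current best against probing regions the GP is still uncertain about.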

Ranking systems can be naively simulated offline by replaying sessions to a simulator that produces the ranked list, and then applying existing event prediction models to predict online outcomes. This approach provides a high-throughput way of testing new value model policies, but the estimates suffer from off-policy bias and related issues and end up being too inaccurate to be directly useful. In this paper we show that these biased offline estimates can be combined with online A/B tests through multi-task Bayesian optimization. Including the simulator in the optimization provides significant gains, and we study the empirical generalization behavior. We used this approach to successfully improve the News Feed ranking model.

Benjamin Letham, Eytan Bakshy (2019) Bayesian optimization for policy search via online-offline experimentation. Submitted. (arxiv) (blog post)

Hyperparameter optimization

Hyperparameter optimization of machine learning models is a popular application of Bayesian optimization. In a machine learning platform we may have many similar models that are being optimized, and we may want to use the results of these optimizations to warm-start others. Here we developed a method for doing this, and used it for hyperparameter optimization in Facebook's computer vision platform.

Matthias Feurer, Benjamin Letham, Eytan Bakshy (2018) Scalable meta-learning for Bayesian optimization. In: ICML AutoML Workshop. (arxiv)


The Prophet forecasting package

Forecasting is a common data science task, yet also a specialized skill outside the expertise of many data scientists. The Prophet forecasting package is designed to be flexible enough to handle a range of business time series, while still being configurable by non-experts. We developed it for a collection of important forecasting tasks at Facebook, and have since open sourced it. It is built on Stan and has R and Python versions. It has received a tremendous response, filling a real need for forecasting these types of time series. There is code, a site with documentation, and a paper.

Sean J. Taylor, Benjamin Letham (2018) Forecasting at scale. The American Statistician 72(1): 37-45. (pdf) (code) (site)
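At its core, Prophet fits a decomposable model y(t) = g(t) + s(t) + h(t) (trend, seasonality, holidays). The idea of fitting a trend plus Fourier-basis seasonality can be sketched with ordinary least squares on synthetic data; the real package fits a much richer model (trend changepoints, holidays, uncertainty intervals) in Stan, and none of the names below are Prophet's actual API.

```python
import numpy as np

def fourier_features(t, period, order):
    # Fourier basis for periodic seasonality, analogous to Prophet's s(t).
    cols = []
    for k in range(1, order + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

# Two years of synthetic daily data: linear trend + yearly cycle + noise.
rng = np.random.default_rng(1)
t = np.arange(730.0)
y = 0.05 * t + 3.0 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 0.3, t.size)

# Design matrix: intercept, linear trend, yearly Fourier terms.
X = np.column_stack([np.ones_like(t), t, fourier_features(t, 365.25, 3)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast 30 days ahead by extending the same basis into the future.
t_future = np.arange(730.0, 760.0)
X_future = np.column_stack([np.ones_like(t_future), t_future,
                            fourier_features(t_future, 365.25, 3)])
forecast = X_future @ beta
```

Because the model is a curve-fitting regression rather than an autoregression, the analyst can inspect and adjust each component (trend, seasonality) separately, which is what makes it configurable by non-experts.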

Nonlinear dynamics and computational biology

Prediction uncertainty and optimal experimental design

During the work below on modeling the immune response to HIV infection, we ran into two questions for which we couldn't find a satisfactory answer. The first was to determine the extent to which the collected data constrained the model predictions (prediction uncertainty). The second was to figure out what additional experiments would be most helpful for reducing that uncertainty (optimal experimental design). While eating a hamburger one weekend, I had an idea for how both of these questions might be answered by a particular optimization problem. After hashing out the details, running some experiments, and doing some theoretical work, we wrote it up in this paper.

Benjamin Letham, Portia A. Letham, Cynthia Rudin, Edward P. Browne (2016) Prediction uncertainty and optimal experimental design for learning dynamical systems. Chaos 26: 063110. (pdf) (erratum)

The immune response to HIV infection

I provided computational and statistical expertise for this study of the dynamics of HIV inhibition by interferon.

Edward P. Browne, Benjamin Letham, Cynthia Rudin (2016) A computational model of inhibition of HIV-1 by interferon-alpha. PLoS ONE 11(3): e0152316. (pdf) (PLOS)

Learning from sales transaction data

Estimating arrivals and demand in the presence of stockouts

Suppose we wish to use sales transaction data to estimate demand for a collection of substitutable products. When an item goes out of stock, the data no longer reflect the original demand: some customers leave without purchasing, while others substitute an alternative product for the one that is out of stock. We developed a Bayesian hierarchical model for simultaneously estimating the customer arrival rate and primary demand, and applied it to data from a local bakery.

Benjamin Letham, Lydia M. Letham, and Cynthia Rudin (2016) Bayesian inference of arrival rate and substitution behavior from sales transaction data with stockouts. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16. (pdf) (Video)
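The simplest version of the stockout problem is right-censoring: on days when the item sells out, we only know that demand was at least the stock level, so the naive average of sales understates the true rate. Below is a sketch of maximum-likelihood estimation under this censoring on simulated data; the paper's model is much richer (a Bayesian hierarchical model with customer arrivals and substitution), and all names here are illustrative.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
stock = 6
demand = rng.poisson(5.0, 300)        # true daily demand, rate 5
sales = np.minimum(demand, stock)     # a stockout censors sales at 6
censored = sales == stock

def neg_log_lik(lam):
    # Uncensored days contribute the Poisson pmf; censored days
    # contribute P(demand >= stock) = P(X > stock - 1).
    ll = poisson.logpmf(sales[~censored], lam).sum()
    ll += censored.sum() * poisson.logsf(stock - 1, lam)
    return -ll

res = minimize_scalar(neg_log_lik, bounds=(0.1, 20.0), method="bounded")
lam_hat = res.x   # recovers a rate near 5, unlike the naive mean of sales
```

Accounting for the censored days pulls the estimate back up toward the true rate, which is the core reason sales data with stockouts cannot be treated as demand data directly.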

Bundle pricing from retail transaction data

Bundle offers, when priced correctly, are often a low-risk way for retailers to increase profits. We show how historical sales data for the individual items in a bundle can be leveraged to predict how profits will be affected by introducing the bundle at a particular price. We use a copula model that captures the correlations in item demands and requires far less stringent assumptions than past work on bundling. This work was done during my summer internship at IBM Research.

Benjamin Letham, Wei Sun, and Anshul Sheopuri (2014) Latent variable copula inference for bundle pricing from retail transaction data. In: Proceedings of the 31st International Conference on Machine Learning, ICML '14, pages 217-225. (pdf) (errata)

Predictions from rules

Learning a decision list to predict stroke risk

A decision list is a particular way of organizing association rules into a classifier that can be both powerful and interpretable. We developed a Bayesian approach for fitting decision lists and used it to create a simple system for predicting stroke risk in atrial fibrillation patients (à la CHADS2).

Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, and David Madigan (2015) Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model. Annals of Applied Statistics 9(3): 1350-1371. (pdf) (Python code)
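Applying a decision list is simple: evaluate the rules in order and return the prediction of the first rule whose condition fires, falling back to a default. The rules and risk labels below are made up for illustration; the paper's contribution is the Bayesian method for learning the list from data, which is not shown here.

```python
def predict(patient, rules, default):
    # Return the label of the first rule whose condition fires.
    for condition, label in rules:
        if condition(patient):
            return label
    return default

# A hypothetical decision list; real lists are learned from data.
rules = [
    (lambda p: p["prior_stroke"], "high risk"),
    (lambda p: p["age"] > 75, "high risk"),
    (lambda p: p["hypertension"] and p["diabetes"], "medium risk"),
]

patient = {"prior_stroke": False, "age": 80,
           "hypertension": True, "diabetes": False}
risk = predict(patient, rules, default="low risk")  # second rule fires
```

The ordering is what makes the classifier readable: a clinician can trace exactly which rule produced a given prediction.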

Sequential event prediction

Here we discuss using techniques from supervised ranking to make predictions on data that are sequentially revealed, using a model that is a linear combination of rules. It makes predictions using only the set of past events, not their order, and is demonstrated with email recipient recommendation, medical symptom prediction, and an e-commerce recommender system.

Benjamin Letham, Cynthia Rudin, and David Madigan (2013) Sequential event prediction. Machine Learning 93: 357-380. (pdf)

Association rules, collaborative filtering, and recommender systems

The ECML PKDD Discovery Challenge 2013 was to develop a system for recommending given names to parents searching for the perfect baby name. For the competition, I developed a recommender system based on association rules, combined with ideas from user-based collaborative filtering. Despite its simplicity and a lack of feature engineering, it won 3rd place in the offline challenge. A major advantage of the approach was that it was fast enough to make recommendations entirely online, and it won 2nd place in the online challenge, where my recommender system was actually deployed on the nameling website.

Benjamin Letham (2013) Similarity-weighted association rules for a name recommender system. In: Proceedings of the ECML PKDD 2013 Discovery Challenge Workshop. (pdf)
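The association-rule backbone can be sketched on a toy dataset: count co-occurrences to form rules a → b with confidence count(a, b) / count(a), then score unseen names by their best rule from the user's history. The competition entry additionally weighted rules by user similarity, which is omitted here; the names and data below are invented.

```python
from collections import Counter
from itertools import permutations

# Each user history is the set of names that user viewed.
histories = [
    {"Anna", "Emma", "Mia"},
    {"Anna", "Emma", "Lena"},
    {"Anna", "Mia"},
    {"Emma", "Lena"},
]

# Count items and ordered co-occurrence pairs across histories.
pair_counts = Counter()
item_counts = Counter()
for h in histories:
    item_counts.update(h)
    pair_counts.update(permutations(h, 2))

def recommend(seen, k=2):
    # Score each unseen name by its best rule confidence from the seen set.
    scores = {}
    for a in seen:
        for b in item_counts:
            if b in seen:
                continue
            conf = pair_counts[(a, b)] / item_counts[a]
            scores[b] = max(scores.get(b, 0.0), conf)
    return sorted(scores, key=scores.get, reverse=True)[:k]

recs = recommend({"Anna"})  # names most often co-viewed with "Anna"
```

Because scoring is just counter lookups, recommendations can be served online with essentially no latency, which was the key advantage in the online challenge.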

Learning theory and association rules

We developed a learning theory framework for using association rules for classification and sequential event prediction, using both VC bounds and results from algorithmic stability. The work was first published at COLT, and an extended version then appeared in JMLR.

Cynthia Rudin, Benjamin Letham, Ansaf Salleb-Aouissi, Eugene Kogan and David Madigan (2011) Sequential event prediction with association rules. In: Proceedings of the 24th Annual Conference on Learning Theory, COLT '11, pages 615-634. (pdf)
Cynthia Rudin, Benjamin Letham, David Madigan (2013) Learning theory analysis for association rules and sequential event prediction. Journal of Machine Learning Research 14: 3385-3436. (pdf)

Information retrieval

Growing a list using the Internet

There are many situations in which important information is scattered across the Internet in many incomplete lists. Given a few seed examples of what we're interested in, can we find and combine these incomplete lists to grow a complete list in an unsupervised way? We developed an algorithm that does this well enough to be useful on a wide range of list-growing problems. This work was presented at ECML PKDD 2013.

Benjamin Letham, Cynthia Rudin, and Katherine Heller (2013) Growing a list. Data Mining and Knowledge Discovery 27: 372-395. (pdf) (supplement)

This project received some attention in the media, including a segment about the project on 89.7 WGBH Boston Public Radio. A recording of the live interview is here.


Cross-sensory interactions

While working at The Martinos Center for Biomedical Imaging at Mass. General Hospital I was involved in a study of cross-sensory interactions (auditory stimuli activate visual cortex and vice versa) using MEG and fMRI.

Tommi Raij, Jyrki Ahveninen, Fa-Hsuan Lin, Thomas Witzel, Iiro P. Jääskeläinen, Benjamin Letham, Emily Israeli, Cherif Sahyoun, Christos Vasios, Steven Stufflebeam, Matti Hämäläinen (2010) Onset timing of cross-sensory activations and multisensory interactions in auditory and visual sensory cortices. European Journal of Neuroscience, 31(10): 1772-1782. (pdf)

The difficulties of measuring the timing of weak cross-sensory interactions led to this work:

Benjamin Letham and Tommi Raij (2011) Statistically robust measurement of evoked response onset latencies. Journal of Neuroscience Methods, 194(2): 374-379. (pdf)

Data science posts

In Defense of Fahrenheit

On a scale of 0 to 100, how warm is it? I combine Census data with daily temperatures from nearly 4000 weather stations to estimate the distribution of temperatures experienced by people in the U.S. The oft-maligned Fahrenheit scale nearly perfectly captures this distribution and is a great temperature scale for U.S. weather.

Debate bingo: 2016 presidential edition

Presidential candidates tend to repeat themselves, and our two for 2016 are no exception. I extracted some of their most commonly used phrases and used them to create a bingo card generator, for use during the debates.

The best part of a lightsaber duel is the talking

Why are the original Star Wars lightsaber duels so much better than those from the Prequels? It's not the simple fight choreography; it's the talking. In this post I quantify the balance between talking and fighting in Star Wars lightsaber duels and show a clear difference between the Original Trilogy and the Prequels. Unfortunately, The Force Awakens is a typical Prequels duel.

Was 2015 Boston's worst winter yet?

When I finally had some time after spending the entire month of February shoveling snow, I analyzed snowfall data to determine exactly how unusual a winter it was. Although 2015 beat the all-time total snowfall record by only a couple of inches, a deeper look at the data revealed that this was by far the worst winter in recorded history.

Technical expositions

On NP-completeness

The theory of NP-completeness has its roots in a foundational result by Cook, who showed that Boolean satisfiability (SAT) is NP-complete and thus unlikely to admit an efficient solution. In this short paper I prove an analogous result using binary integer programming in place of SAT. The proof is notationally cleaner and more straightforward for those to whom the language of integer programming is more natural than that of SAT.

Benjamin Letham (2011) An integer programming proof of Cook's theorem of NP-completeness. (pdf)

On Bayesian Analysis

In this work I provide a somewhat rigorous yet simultaneously informal introduction to Bayesian analysis. It covers everything from the theory of posterior asymptotics to practical considerations of MCMC sampling. It assumes the reader is comfortable with basic probability and some mathematical rigor.

Benjamin Letham (2012) An overview of Bayesian analysis. (pdf)

About me

When I'm not working on research, I enjoy sailing, traveling, playing the violin, and programming. I developed (and continue to develop) the free, open source race timing software fsTimer, which has been used for timing races around the world. I am also a developer and maintainer of the Prophet time series forecasting package for R and Python.

My Erdős number is 4.