Allen B. Downey's Blog: Probably Overthinking It
May 28, 2025
Announcing Think Linear Algebra
I’ve been thinking about Think Linear Algebra for more than a decade, and recently I started working on it in earnest. If you want to get a sense of it, I’ve posted a .
In one way, I am glad I waited – I think it will be better, faster [to write], and stronger [?] because of AI tools. To be clear, I am writing this book, not AI. But I’m finding ChatGPT helpful for brainstorming and Copilot and Cursor helpful for generating and testing code.
If you are curious, here’s my . Before you read it, I want to say in my defense that I often ask questions where I think I know the answer, as a way of checking my understanding without leading too strongly. That way I avoid one of the more painful anti-patterns of working with AI tools, the spiral of confusion that can happen if you start from an incorrect premise.
My next step is to write a proposal, and I will probably use AI tools for that, too. Here’s a first draft that outlines the features I have in mind:
1. Case-Based, Code-First: Each chapter is built around a case study—drawn from engineering, physics, signal processing, or beyond—that demonstrates the power of linear algebra methods. These examples unfold in Jupyter notebooks that combine explanation, Python code, visualizations, and exercises, all in one place.
2. Multiple Computational Perspectives: The book uses a variety of tools—NumPy for efficient arrays, SciPy for numerical methods, SymPy for symbolic manipulation, and even NetworkX for graph-based systems. Readers see how different libraries offer different lenses on the same mathematical ideas—and how choosing the right one can make thinking and doing more effective (see the sketch after this list).
3. Top-Down Learning: Rather than starting from scratch with low-level implementations, we use robust, well-tested libraries from day one. That way, readers can solve real problems immediately, and explore how the algorithms work only when it’s useful to do so. This approach makes linear algebra more motivating, more intuitive—and more fun.
4. Linear Algebra as a Language for Thought: Vectors and matrices are more than data structures—they’re conceptual tools. By expressing problems in linear algebra terms, readers learn to think in higher-level chunks and unlock general-purpose solutions. Instead of custom code for each new problem, they learn to use elegant, efficient abstractions. As I wrote in , modern programming lets us collapse the gap between expressing, exploring, and executing ideas.
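To make the second feature concrete, here is a minimal sketch of the kind of comparison I have in mind – the system and numbers are made up for illustration, not taken from the book:

import numpy as np
import sympy as sp

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])

# Numerical solution with NumPy: fast, floating-point, scales to large systems
x_numeric = np.linalg.solve(A, b)      # [0.8, 1.4]

# Symbolic solution with SymPy: exact rational arithmetic, useful for insight
A_sym = sp.Matrix([[2, 1], [1, 3]])
b_sym = sp.Matrix([3, 5])
x_symbolic = A_sym.solve(b_sym)        # Matrix([[4/5], [7/5]])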
Finally, here’s what ChatGPT thinks the cover should look like:

May 22, 2025
My very busy week
I’m not sure who scheduled ODSC and PyConUS during the same week, but I am unhappy with their decisions. Last Tuesday I presented a talk and co-presented a workshop at ODSC, and on Thursday I presented a tutorial at PyCon.
If you would like to follow along with my very busy week, here are the resources:
Practical Bayesian Modeling with PyMC
Co-presented with for ODSC East 2025
In this tutorial, we explore Bayesian regression using PyMC – the primary library for Bayesian sampling in Python – focusing on survey data and other datasets with categorical outcomes. Starting with logistic regression, we’ll build up to categorical and ordered logistic regression, showcasing how Bayesian approaches provide versatile tools for developing and evaluating complex models. Participants will leave with practical skills for implementing Bayesian regression models in PyMC, along with a deeper appreciation for the power of Bayesian inference in real-world data analysis. Participants should be familiar with Python, the SciPy ecosystem, and basic statistics, but no experience with Bayesian methods is required.
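For readers who haven’t seen PyMC, here is a minimal sketch of the kind of logistic regression model the tutorial starts from – the data and variable names are placeholders I made up, not the tutorial’s materials:

import numpy as np
import pymc as pm

# Placeholder data: one predictor and a binary outcome
rng = np.random.default_rng(17)
x = rng.normal(size=100)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))

with pm.Model():
    alpha = pm.Normal("alpha", 0, 2)
    beta = pm.Normal("beta", 0, 2)
    # logit_p lets us pass the linear predictor directly
    pm.Bernoulli("obs", logit_p=alpha + beta * x, observed=y)
    idata = pm.sample()  # draw posterior samples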
The ; it includes notebooks where you can run the examples, and there’s a link to the slides.
And then later that day I presented…
Mastering Time Series Analysis with StatsModels: From Decomposition to ARIMA

Time series analysis provides essential tools for modeling and predicting time-dependent data, especially data exhibiting seasonal patterns or serial correlation. This tutorial covers tools in the StatsModels library including seasonal decomposition and ARIMA. As examples, we’ll look at weather data and electricity generation from renewable sources in the United States since 2004 – but the methods we’ll cover apply to many kinds of real-world time series data.

Outline:
Introduction to time series
Overview of the data
Seasonal decomposition, additive model
Seasonal decomposition, multiplicative model
Serial correlation and autoregression
ARIMA
Seasonal ARIMA
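If you want a preview of the tools, here is a small self-contained sketch using simulated monthly data – not the electricity dataset from the talk:

import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.arima.model import ARIMA

# Simulated monthly series with a trend and yearly seasonality
rng = np.random.default_rng(0)
index = pd.date_range("2004-01", periods=240, freq="MS")
values = (np.linspace(10, 30, 240)
          + 5 * np.sin(2 * np.pi * np.arange(240) / 12)
          + rng.normal(0, 1, size=240))
series = pd.Series(values, index=index)

# Additive decomposition into trend, seasonal, and residual components
decomposition = seasonal_decompose(series, model="additive", period=12)

# Seasonal ARIMA: nonseasonal order (p, d, q) and seasonal order (P, D, Q, s)
model = ARIMA(series, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
results = model.fit()
forecast = results.forecast(steps=12)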
This talk is based on . .
Unfortunately there’s no video from the talk, but I presented related material in this :
After the talk, Seamus McGovern presented me with an award for being, apparently, !

On Wednesday I flew to Pittsburgh, and on Thursday I presented…
Analyzing Survey Data with Pandas and StatsModels
PyConUS 2025 tutorial
Whether you are working with customer data or tracking election polls, Pandas and StatsModels provide powerful tools for getting insights from survey data. In this tutorial, we’ll start with the basics and work up to age-period-cohort analysis and logistic regression. As examples, we’ll use data from the General Social Survey to see how political beliefs have changed over the last 50 years in the United States. We’ll follow the essential steps of a data science project, from loading and validating data to exploring and visualizing, modeling and predicting, and communicating results.
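As a small taste of the methods, here is a sketch of a logistic regression with StatsModels formulas – the columns are placeholders, not the GSS variables we use in the tutorial:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder survey-like data
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "age": rng.integers(18, 90, size=500),
    "cohort": rng.choice(["1960s", "1980s", "2000s"], size=500),
})
logit = -2 + 0.02 * df["age"]
df["agrees"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Model the probability of agreeing as a function of age and cohort
results = smf.logit("agrees ~ age + C(cohort)", data=df).fit()
print(results.summary())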
Here’s the.
Sadly, the tutorial was not recorded.
Now that I have a moment of calm, I’m getting back to Think Linear Algebra. More about that soon!
April 6, 2025
Announcing Think Stats 3e
The third edition of Think Stats is on its way to the printer! You can preorder now from and (those are affiliate links), or if you can’t wait to get a paper copy, you can .
Here’s the new cover, still featuring a suspicious-looking archerfish.

If you are not familiar with the previous editions, Think Stats is an introduction to practical methods for exploring and visualizing data, discovering relationships and trends, and communicating results.
The organization of the book follows the process I use when I start working with a dataset:
For the third edition, I started by moving the book into Jupyter notebooks. This change has one immediate benefit – you can read the text, run the code, and work on the exercises all in one place. And the notebooks are designed to work on Google Colab, so you can get started without installing anything.
The move to notebooks has another benefit – the code is more visible. In the first two editions, some of the code was in the book and some was in supporting files available online. In retrospect, it’s clear that splitting the material in this way was not ideal, and it made the code more complicated than it needed to be. In the third edition, I was able to simplify the code and make it more readable.
Since the last edition was published, I’ve developed a library called that provides objects that represent statistical distributions. This library is more mature now, so the updated code makes better use of it.
When I started this project, NumPy and SciPy were not as widely used, and Pandas even less, so the original code used Python data structures like lists and dictionaries. This edition uses arrays and Pandas structures extensively, and makes more use of functions these libraries provide.
The third edition covers the same topics as the original, in almost the same order, but the text is substantially revised. Some of the examples are new; others are updated with new data. I’ve developed new exercises, revised some of the old ones, and removed a few. I think the updated exercises are better connected to the examples, and more interesting.
Since the first edition, this book has been based on the thesis that many ideas that are hard to explain with math are easier to explain with code. In this edition, I have doubled down on this idea, to the point where there is almost no mathematical notation left.
New Data, New Examples

In the previous edition, I was not happy with the chapter on time-series analysis, so I almost entirely replaced it, using as an example data on renewable electricity generation from the U.S. Energy Information Administration. This dataset is more interesting than the one it replaced, and it works better with time-series methods, including seasonal decomposition and ARIMA.

Example from Chapter 12, showing electricity production from solar power in the US.
And for the chapters on regression (simple and multiple) I couldn’t resist using the now-famous penguin dataset.

Example from Chapter 10, showing a scatter plot of penguin measurements.
Other examples use some of the same datasets from the previous edition, including the National Survey of Family Growth (NSFG) and Behavioral Risk Factor Surveillance System (BRFSS).
Overall, I’m very happy with the results. I hope you like it!
March 19, 2025
Young Adults Want Fewer Children
The most recent data from the National Survey of Family Growth (NSFG) provides a first look at people born in the 2000s as young adults and an updated view of people born in the 1990s at the peak of their child-bearing years. Compared to previous generations at the same ages, these cohorts have fewer children, and they are less likely to say they intend to have children. Unless their plans change, trends toward lower fertility are likely to continue for the next 10-20 years.
The following figure shows the number of children fathered by male respondents as a function of their age when interviewed, grouped by decade of birth. It includes the most recent data, collected in 2022-23, combined with data from previous iterations of the survey going back to 1982.

Men born in the 1990s and 2000s have fathered fewer children than previous generations at the same ages:
At age 33, men born in the 1990s (blue line) have 0.6 children on average, compared to 1.1–1.4 in previous cohorts. At age 24, men born in the 2000s (violet line) have 0.1 children on average, compared to 0.2–0.4 in previous cohorts.

The pattern is similar for women.

Women born in the 1990s and 2000s are having fewer children, and having them later, than previous generations.
At age 33, women in the 1990s cohort have 1.4 children on average, compared to 1.7–1.8 in previous cohorts. At age 24, women in the 2000s cohort have 0.3 children on average, compared to 0.6–0.8 in previous cohorts.

Desires and Intentions

The NSFG asks respondents whether they want to have children and whether they intend to. These questions are useful because they distinguish between two possible causes of declining fertility. If someone says they want a child, but don’t intend to have one, it seems like something is standing in their way. In that case, changing circumstances might change their intentions. But if they don’t want children, that might be less likely to change.
Let’s start with stated desires. The following figure shows the fraction of men who say they want a child – or another child if they have at least one – grouped by decade of birth.

Men born in the 2000s are less likely to say they want to have a child – about 86% compared to 92% in previous cohorts. Men born in the 1990s are indistinguishable from previous cohorts.
The pattern is similar for women – the following figure shows the fraction who say they want a baby, grouped by decade of birth.

Women born in the 2000s are less likely to say they want a baby – about 76%, compared to 87% for previous cohorts when they were interviewed at the same ages. Women born in the 1990s are in line with previous generations.
Maybe surprisingly, men are more likely to say they want children. For example, of young men (15 to 24) born in the 2000s, 86% say they want children, compared to 76% of their female peers. Lyman Stone .
What About Intentions?

The patterns are similar when people are asked whether they intend to have a child. Men and women born in the 1990s are indistinguishable from previous generations, but
Men born in the 2000s are less likely to say they intend to have a child – about 80% compared to 85–86% in previous cohorts at the same ages (15 to 24).

Women born in the 2000s are less likely to say they intend to have a child – about 69% compared to 80–82% in previous cohorts.

Now let’s look more closely at the difference between wants and intentions. The following figure shows the percentage of men who want a child minus the percentage who intend to have a child.

Among young men, the difference is small – most people who want a child intend to have one. The difference increases with age. Among men in their 30s, a substantial number say they would like another child but don’t intend to have one.
Here are the same differences for women.

The patterns are similar – among young women, most who want a child intend to have one. Among women in their 30s, the gap sometimes exceeds 20 percentage points, but might be decreasing in successive generations.
These results suggest that fertility is lower among people born in the 1990s and 2000s – at least so far – because they want fewer children, not because circumstances prevent them from having the children they want.
From the point of view of reproductive freedom, that conclusion is better than an alternative where people want children but can’t have them. But from the perspective of public policy, these results suggest that reversing these trends would be difficult: removing barriers is relatively easy – changing what people want is generally harder.
DATA NOTE: In the most recent iteration of the NSFG, about 75% of respondents were surveyed online; the other 25% were interviewed face-to-face, as all respondents were in previous iterations. Changes like this can affect the results, especially for more sensitive questions. And in the NSFG, there are non-negligible differences when we compare online and face-to-face responses. Specifically, people who responded online were less likely to say they want children and less likely to say they intend to have children. At first consideration, it’s possible that these differences could be due to social desirability bias.
However, people who responded online also reported substantially lower parity (women) and number of biological children (men), on average, than people interviewed face-to-face – and it is much less likely that these responses depend on interview format. It is more likely that the way respondents were assigned to different formats depended on parity/number of children, and that difference explains the observed differences in desire and intent for more children. Since there is no strong evidence that the change in format accounts for the differences we see, I’m taking the results at face value for now.
January 20, 2025
Algorithmic Fairness
This is the last in a series of excerpts from Elements of Data Science, now and online booksellers.
This article is based on the Recidivism Case Study, which is about algorithmic fairness. The goal of the case study is to explain the statistical arguments presented in two articles from 2016:
An article by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, published by ProPublica.

A response by Sam Corbett-Davies, Emma Pierson, Avi Feller and Sharad Goel, published in the Washington Post.

Both are about COMPAS, a statistical tool used in the justice system to assign defendants a “risk score” that is intended to reflect the risk that they will commit another crime if released.
The ProPublica article evaluates COMPAS as a binary classifier, and compares its error rates for black and white defendants. In response, the Washington Post article shows that COMPAS has the same predictive value for black and white defendants. And they explain that the test cannot have the same predictive value and the same error rates at the same time.
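To see why, it helps to write the positive predictive value in terms of the error rates and the prevalence. Here is a small worked example with made-up numbers (not COMPAS data): if two groups have the same false positive and false negative rates but different prevalence, their PPVs necessarily differ.

def ppv(fpr, fnr, prevalence):
    # Positive predictive value: true positives / all predicted positives
    true_pos = (1 - fnr) * prevalence
    false_pos = fpr * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same error rates, different prevalence -> different predictive values
ppv(fpr=0.3, fnr=0.35, prevalence=0.50)   # about 0.68
ppv(fpr=0.3, fnr=0.35, prevalence=0.35)   # about 0.54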
I replicated the analysis from the ProPublica article. I replicated the analysis from the WaPo article. In this article I use the same methods to evaluate the performance of COMPAS for male and female defendants. I find that COMPAS is unfair to women: at every level of predicted risk, women are less likely to be arrested for another crime.
You can run this Jupyter notebook on .
Male and female defendants

The authors of the ProPublica article published a supplementary article, which describes their analysis in more detail. In the supplementary article, they briefly mention results for male and female respondents:
The COMPAS system unevenly predicts recidivism between genders. According to Kaplan-Meier estimates, women rated high risk recidivated at a 47.5 percent rate during two years after they were scored. But men rated high risk recidivated at a much higher rate – 61.2 percent – over the same time period. This means that a high-risk woman has a much lower risk of recidivating than a high-risk man, a fact that may be overlooked by law enforcement officials interpreting the score.
We can replicate this result using the methods from the previous notebooks; we don’t have to do Kaplan-Meier estimation.
According to the binary gender classification in this dataset, about 81% of defendants are male.
male = cp["sex"] == "Male"
male.mean()
0.8066260049902967

female = cp["sex"] == "Female"
female.mean()
0.19337399500970334

Here are the confusion matrices for male and female defendants.
from rcs_utils import make_matrix

matrix_male = make_matrix(cp[male])
matrix_male

          Pred Positive  Pred Negative
Actual
Positive           1732           1021
Negative            994           2072

matrix_female = make_matrix(cp[female])
matrix_female

          Pred Positive  Pred Negative
Actual
Positive            303            195
Negative            288            609

And here are the metrics:
from rcs_utils import compute_metrics

metrics_male = compute_metrics(matrix_male, "Male defendants")
metrics_male

                 Percent
Male defendants
FPR                 32.4
FNR                 37.1
PPV                 63.5
NPV                 67.0
Prevalence          47.3

metrics_female = compute_metrics(matrix_female, "Female defendants")
metrics_female

                   Percent
Female defendants
FPR                   32.1
FNR                   39.2
PPV                   51.3
NPV                   75.7
Prevalence            35.7

The fraction of defendants charged with another crime (prevalence) is substantially higher for male defendants (47% vs 36%).
Nevertheless, the error rates for the two groups are about the same. As a result, the predictive values for the two groups are substantially different:
PPV: Women classified as high risk are less likely to be charged with another crime, compared to high-risk men (51% vs 64%).

NPV: Women classified as low risk are more likely to “survive” two years without a new charge, compared to low-risk men (76% vs 67%).

The difference in predictive values implies that COMPAS is not calibrated for men and women. Here are the calibration curves for male and female defendants.

For all risk scores, female defendants are substantially less likely to be charged with another crime. Or, reading the graph the other way, female defendants are given risk scores 1-2 points higher than male defendants with the same actual risk of recidivism.
To the degree that COMPAS scores are used to decide which defendants are incarcerated, those decisions:
Are unfair to women.

Are less effective than they could be, if they incarcerate lower-risk women while allowing higher-risk men to go free.

What would it take?

Suppose we want to fix COMPAS so that predictive values are the same for male and female defendants. We could do that by using different thresholds for the two groups. In this section, we’ll see what it would take to re-calibrate COMPAS; then we’ll find out what effect that would have on error rates.
From the previous notebook, sweep_threshold loops through possible thresholds, makes the confusion matrix for each threshold, and computes the accuracy metrics. Here are the resulting tables for all defendants, male defendants, and female defendants.
from rcs_utils import sweep_threshold

table_all = sweep_threshold(cp)
table_male = sweep_threshold(cp[male])
table_female = sweep_threshold(cp[female])
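sweep_threshold comes from the case study’s rcs_utils module, which is not shown in this excerpt. As a rough sketch (not the actual implementation, and with column names that are my assumptions about the dataset), it might look something like this:

import pandas as pd

def sweep_threshold_sketch(cp, thresholds=range(1, 10)):
    # For each threshold, classify scores above it as positive and
    # compute error rates and predictive values, in percent.
    rows = []
    for t in thresholds:
        pred = cp["decile_score"] > t        # hypothetical score column
        actual = cp["two_year_recid"] == 1   # hypothetical outcome column
        tp = (pred & actual).sum()
        fp = (pred & ~actual).sum()
        fn = (~pred & actual).sum()
        tn = (~pred & ~actual).sum()
        rows.append({
            "FPR": 100 * fp / (fp + tn),
            "FNR": 100 * fn / (fn + tp),
            "PPV": 100 * tp / (tp + fp),
            "NPV": 100 * tn / (tn + fn),
        })
    return pd.DataFrame(rows, index=list(thresholds))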
As we did in the previous notebook, we can find the threshold that would make predictive value the same for both groups.
from rcs_utils import predictive_value

matrix_all = make_matrix(cp)
ppv, npv = predictive_value(matrix_all)

from rcs_utils import crossing

crossing(table_male["PPV"], ppv)
array(3.36782883)

crossing(table_male["NPV"], npv)
array(3.40116329)

With a threshold near 3.4, male defendants would have the same predictive values as the general population. Now let’s do the same computation for female defendants.
crossing(table_female["PPV"], ppv)
array(6.88124668)

crossing(table_female["NPV"], npv)
array(6.82760429)

To get the same predictive values for men and women, we would need substantially different thresholds: about 6.8 compared to 3.4. At those levels, the false positive rates would be very different:
from rcs_utils import interpolate

interpolate(table_male["FPR"], 3.4)
array(39.12)

interpolate(table_female["FPR"], 6.8)
array(9.14)

And so would the false negative rates.
interpolate(table_male["FNR"], 3.4)
array(30.98)

interpolate(table_female["FNR"], 6.8)
array(74.18)

If the test is calibrated in terms of predictive value, it is uncalibrated in terms of error rates.
ROC

In the previous notebook I defined the ROC curve. The following figure shows ROC curves for male and female defendants:
from rcs_utils import plot_roc

plot_roc(table_male)
plot_roc(table_female)
The ROC curves are nearly identical, which implies that it is possible to calibrate COMPAS equally for male and female defendants.
Summary

With respect to sex, COMPAS is fair by the criteria posed by the ProPublica article: it has the same error rates for groups with different prevalence. But it is unfair by the criteria of the WaPo article, which argues:
A risk score of seven for black defendants should mean the same thing as a score of seven for white defendants. Imagine if that were not so, and we systematically assigned whites higher risk scores than equally risky black defendants with the goal of mitigating ProPublica’s criticism. We would consider that a violation of the fundamental tenet of equal treatment.
With respect to male and female defendants, COMPAS violates this tenet.
So who’s right? We have two competing definitions of fairness, and it is mathematically impossible to satisfy them both. Is it better to have equal error rates for all groups, as COMPAS does for men and women? Or is it better to be calibrated, which implies equal predictive values? Or, since we can’t have both, should the test be “tempered”, allowing both error rates and predictive values to depend on prevalence?
I explore these trade-offs in more detail. And I summarized these results in Chapter 9 of .
January 3, 2025
Confidence In the Press
This is the fifth in a series of excerpts from Elements of Data Science, now and online booksellers. It’s based on Chapter 16, which is part of the political alignment case study. You can read the complete example , or run the Jupyter notebook on .
Because this is a teaching example, it builds incrementally. If you just want to see the results, scroll to the end!
Chapter 16 is a template for exploring relationships between political alignment (liberal or conservative) and other beliefs and attitudes. In this example, we’ll use that template to look at the ways confidence in the press has changed over the last 50 years in the U.S.
The dataset we’ll use is an excerpt of data from the General Social Survey. It contains three resamplings of the original data. We’ll start with the first.
datafile = "gss_pacs_resampled.hdf"
gss = pd.read_hdf(datafile, "gss0")
gss.shape
(72390, 207)

It contains one row for each respondent and one column per variable.
Changes in Confidence

The General Social Survey includes several questions about confidence in various institutions. Here are the names of the variables that contain the responses.
' '.join(column for column in gss.columns if 'con' in column)
'conarmy conbus conclerg coneduc confed confinan coninc conjudge conlabor conlegis conmedic conpress conrinc consci contv'

Here’s how this section of the survey is introduced.
I am going to name some institutions in this country. As far as the people running these institutions are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?
The variable we’ll explore is conpress, which is about “the press”.
varname = "conpress"
column = gss[varname]
column.tail()
72385    2.0
72386    3.0
72387    3.0
72388    2.0
72389    2.0
Name: conpress, dtype: float64

As we’ll see, responses to this question have changed substantially over the last few decades.
Responses

Here’s the distribution of responses:
column.value_counts(dropna=False).sort_index()
1.0      6968
2.0     24403
3.0     16769
NaN     24250
Name: conpress, dtype: int64

The special value NaN indicates that the respondent was not asked the question, declined to answer, or said they didn’t know.
The following cell shows the numerical values and the text of the responses they stand for.
responses = [1, 2, 3]

labels = [
    "A great deal",
    "Only some",
    "Hardly any",
]

Here’s what the distribution looks like. plt.xticks puts labels on the x-axis.

pmf = Pmf.from_seq(column)
pmf.bar(alpha=0.7)
decorate(ylabel="PMF", title="Distribution of responses")
plt.xticks(responses, labels);
About half of the respondents have “only some” confidence in the press – but we should not make too much of this result because it combines different numbers of respondents interviewed at different times.
Responses over time

If we make a cross tabulation of year and the variable of interest, we get the distribution of responses over time.
xtab = pd.crosstab(gss["year"], column, normalize="index") * 100
xtab.head()

conpress        1.0        2.0        3.0
year
1973      22.696477  62.398374  14.905149
1974      24.846835  55.752212  19.400953
1975      23.928077  58.160443  17.911480
1976      29.323308  53.588517  17.088175
1977      24.484365  59.148370  16.367265

Now we can plot the results.
for response, label in zip(responses, labels):
    xtab[response].plot(label=label)

decorate(xlabel="Year", ylabel="Percent", title="Confidence in the press")
The percentages of “A great deal” and “Only some” have been declining since the 1970s. The percentage of “Hardly any” has increased substantially.
Political alignment

To explore the relationship between these responses and political alignment, we’ll recode political alignment into three groups:
d_polviews = {
    1: "Liberal",
    2: "Liberal",
    3: "Liberal",
    4: "Moderate",
    5: "Conservative",
    6: "Conservative",
    7: "Conservative",
}

Now we can use replace and store the result as a new column in the DataFrame.
gss["polviews3"] = gss["polviews"].replace(d_polviews)

With this scale, there are roughly the same number of people in each group.
pmf = Pmf.from_seq(gss["polviews3"])
pmf.bar(color="C1", alpha=0.7)
decorate(
    xlabel="Political alignment",
    ylabel="PMF",
    title="Distribution of political alignment",
)
Now we can use groupby to group the respondents by political alignment.
by_polviews = gss.groupby("polviews3")

Here’s a dictionary that maps from each group to a color.
muted = sns.color_palette("muted", 5)
color_map = {"Conservative": muted[3], "Moderate": muted[4], "Liberal": muted[0]}

Now we can make a PMF of responses for each group.
for name, group in by_polviews:
    plt.figure()
    pmf = Pmf.from_seq(group[varname])
    pmf.bar(label=name, color=color_map[name], alpha=0.7)
    decorate(ylabel="PMF", title="Distribution of responses")
    plt.xticks(responses, labels)


Looking at the “Hardly any” response, it looks like conservatives have the least confidence in the press.
Recode

To quantify changes in these responses over time, one option is to put them on a numerical scale and compute the mean. Another option is to compute the percentage who choose a particular response or set of responses. Since the changes have been most notable in the “Hardly any” response, that’s what we’ll track. We’ll use replace to recode the values so “Hardly any” is 1 and all other responses are 0.
d_recode = {1: 0, 2: 0, 3: 1}
gss["recoded"] = column.replace(d_recode)
gss["recoded"].name = varname

We can use value_counts to confirm that it worked.
gss["recoded"].value_counts(dropna=False)
0.0     31371
NaN     24250
1.0     16769
Name: conpress, dtype: int64

Now if we compute the mean, we can interpret it as the fraction of respondents who report “hardly any” confidence in the press. Multiplying by 100 makes it a percentage.
gss["recoded"].mean() * 100
34.833818030743664

Note that the Series method mean drops NaN values before computing the mean. The NumPy function mean does not.
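Here’s a quick illustration of that difference, with arbitrary values:

import numpy as np
import pandas as pd

values = [1.0, 2.0, np.nan]

pd.Series(values).mean()      # 1.5 -- the Series method skips NaN
np.mean(np.array(values))     # nan -- NumPy's mean does not
np.nanmean(np.array(values))  # 1.5 -- np.nanmean skips NaN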
Average by group

We can use by_polviews to compute the mean of the recoded variable in each group, and multiply by 100 to get a percentage.
means = by_polviews["recoded"].mean() * 100
means
polviews3
Conservative    44.410101
Liberal         27.293806
Moderate        34.113831
Name: conpress, dtype: float64

By default, the group names are in alphabetical order. To get the values in a particular order, we can use the group names as an index:
groups = ["Conservative", "Moderate", "Liberal"]
means[groups]
polviews3
Conservative    44.410101
Moderate        34.113831
Liberal         27.293806
Name: conpress, dtype: float64

Now we can make a bar plot with color-coded bars:
title = "Percent with hardly any confidence in the press"
colors = color_map.values()

means[groups].plot(kind="bar", color=colors, alpha=0.7, label="")
decorate(
    xlabel="",
    ylabel="Percent",
    title=title,
)
plt.xticks(rotation=0);
Conservatives have less confidence in the press than liberals, and moderates are somewhere in the middle.
But again, these results are an average over the interval of the survey, so you should not interpret them as a current condition.
Time series

We can use groupby to group responses by year.
by_year = gss.groupby("year")

From the result we can select the recoded variable and compute the percentage that responded “Hardly any”.
time_series = by_year["recoded"].mean() * 100

And we can plot the results with the data points themselves as circles and a local regression model as a line.
plot_series_lowess(time_series, "C1", label='')
decorate(
    xlabel="Year",
    ylabel="Percent",
    title=title,
)
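plot_series_lowess is a helper defined earlier in the book and not shown in this excerpt. A rough sketch of a similar helper, using the LOWESS smoother from StatsModels (my sketch, not necessarily the book’s implementation):

import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

def plot_series_lowess_sketch(series, color, label=""):
    # Plot the data points as circles and a LOWESS-smoothed curve as a line
    xs = series.index
    ys = series.values
    plt.plot(xs, ys, "o", color=color, alpha=0.5, label=label)
    smoothed = lowess(ys, xs)  # returns sorted (x, fitted y) pairs
    plt.plot(smoothed[:, 0], smoothed[:, 1], color=color)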
The fraction of respondents with “Hardly any” confidence in the press has increased consistently over the duration of the survey.
Time series by group

So far, we have grouped by polviews3 and computed the mean of the variable of interest in each group. Then we grouped by year and computed the mean for each year. Now we’ll use pivot_table to compute the mean in each group for each year.
table = gss.pivot_table(
    values="recoded", index="year", columns="polviews3", aggfunc="mean"
) * 100
table.head()

polviews3  Conservative    Liberal   Moderate
year
1974          22.482436  17.312073  16.604478
1975          22.335025  10.884354  17.481203
1976          19.495413  17.794486  14.901257
1977          22.398190  13.207547  14.650767
1978          27.176221  18.048780  16.819013

The result is a table that has years running down the rows and political alignment running across the columns. Each entry in the table is the mean of the variable of interest for a given group in a given year.
Plotting the results

Now let’s see the results.
for group in groups:
    series = table[group]
    plot_series_lowess(series, color_map[group])

decorate(
    xlabel="Year",
    ylabel="Percent",
    title="Percent with hardly any confidence in the press",
)
Confidence in the press has decreased in all three groups, but among liberals it might have leveled off or even reversed.
Resampling

The figures we’ve generated so far in this notebook are based on a single resampling of the GSS data. Some of the features we see in these figures might be due to random sampling rather than actual changes in the world. By generating the same figures with different resampled datasets, we can get a sense of how much variation there is due to random sampling. To make that easier, the following function contains the code from the previous analysis all in one place.
def plot_by_polviews(gss, varname):
    """Plot mean response by polviews and year.

    gss: DataFrame
    varname: string column name
    """
    gss["polviews3"] = gss["polviews"].replace(d_polviews)

    column = gss[varname]
    gss["recoded"] = column.replace(d_recode)

    table = gss.pivot_table(
        values="recoded", index="year", columns="polviews3", aggfunc="mean"
    ) * 100

    for group in groups:
        series = table[group]
        plot_series_lowess(series, color_map[group])

    decorate(
        xlabel="Year",
        ylabel="Percent",
        title=title,
    )

Now we can loop through the three resampled datasets and generate a figure for each one.
datafile = "gss_pacs_resampled.hdf"

for key in ["gss0", "gss1", "gss2"]:
    df = pd.read_hdf(datafile, key)
    plt.figure()
    plot_by_polviews(df, varname)


If you see an effect that is consistent in all three figures, it is less likely to be due to random sampling. If it varies from one resampling to the next, you should probably not take it too seriously.
Based on these results, it seems likely that confidence in the press is continuing to decrease among conservatives and moderates, but not liberals – with the result that polarization on this issue has increased since the 1990s.
December 20, 2024
Political Alignment and Outlook
This is the fourth in a series of excerpts from Elements of Data Science, now and online booksellers. It’s from Chapter 15, which is part of the political alignment case study. You can read the complete chapter , or run the Jupyter notebook on .
In the , we used data from the General Social Survey (GSS) to plot changes in political alignment over time. In this notebook, we’ll explore the relationship between political alignment and respondents’ beliefs about themselves and other people.
First we’ll use groupby to compare the average response between groups and plot the average as a function of time. Then we’ll use the Pandas function pivot_table to compute the average response within each group as a function of time.
Are People Fair?

In the GSS data, the variable fair contains responses to this question:
Do you think most people would try to take advantage of you if they got a chance, or would they try to be fair?
The possible responses are:
Code  Response
1     Take advantage
2     Fair
3     Depends

As always, we start by looking at the distribution of responses, that is, how many people give each response:
values(gss["fair"])
1.0    16089
2.0    23417
3.0     2897
Name: fair, dtype: int64

The plurality think people try to be fair (2), but a substantial minority think people would take advantage (1). There are also a number of NaNs, mostly respondents who were not asked this question.
gss["fair"].isna().sum()
29987

To count the number of people who chose option 2, “people try to be fair”, we’ll use a dictionary to recode option 2 as 1 and the other options as 0.
recode_fair = {1: 0, 2: 1, 3: 0}As an alternative, we could include option 3, “depends�, by replacing it with 1, or give it less weight by replacing it with an intermediate value like 0.5. We can use replace to recode the values and store the result as a new column in the DataFrame.
gss["fair2"] = gss["fair"].replace(recode_fair)

And we’ll use values to make sure it worked.
values(gss["fair2"])
0.0    18986
1.0    23417
Name: fair2, dtype: int64

Now let’s see how the responses have changed over time.
Fairness Over Time

As we saw in the previous chapter, we can use groupby to group responses by year.
gss_by_year = gss.groupby("year")

From the result we can select fair2 and compute the mean.
fair_by_year = gss_by_year["fair2"].mean()

Here’s the result, which shows the fraction of people who say people try to be fair, plotted over time. As in the previous chapter, we plot the data points themselves with circles and a local regression model as a line.
plot_series_lowess(fair_by_year, "C1")
decorate(
    xlabel="Year",
    ylabel="Fraction saying yes",
    title="Would most people try to be fair?",
)
Sadly, it looks like faith in humanity has declined, at least by this measure. Let’s see what this trend looks like if we group the respondents by political alignment.
Political Views on a 3-point Scale

In the previous notebook, we looked at responses to polviews, which asks about political alignment. The valid responses are:
Code  Response
1     Extremely liberal
2     Liberal
3     Slightly liberal
4     Moderate
5     Slightly conservative
6     Conservative
7     Extremely conservative

To make it easier to visualize groups, we’ll lump the 7-point scale into a 3-point scale.
recode_polviews = {
    1: "Liberal",
    2: "Liberal",
    3: "Liberal",
    4: "Moderate",
    5: "Conservative",
    6: "Conservative",
    7: "Conservative",
}

We’ll use replace again, and store the result as a new column in the DataFrame.
gss["polviews3"] = gss["polviews"].replace(recode_polviews)

With this scale, there are roughly the same number of people in each group.
values(gss["polviews3"])
Conservative    21573
Liberal         17203
Moderate        24157
Name: polviews3, dtype: int64

Fairness by Group

Now let’s see who thinks people are more fair, conservatives or liberals. We’ll group the respondents by polviews3.
by_polviews = gss.groupby("polviews3")

And compute the mean of fair2 in each group.
by_polviews["fair2"].mean()
polviews3
Conservative    0.577879
Liberal         0.550849
Moderate        0.537621
Name: fair2, dtype: float64

It looks like conservatives are a little more optimistic, in this sense, than liberals and moderates. But this result is averaged over the last 50 years. Let’s see how things have changed over time.
Fairness over Time by Group

So far, we have grouped by polviews3 and computed the mean of fair2 in each group. Then we grouped by year and computed the mean of fair2 for each year. Now we’ll group by polviews3 and year, and compute the mean of fair2 in each group over time.
We could do that computation “by hand� using the tools we already have, but it is so common and useful that it has a name. It is called a pivot table, and Pandas provides a function called pivot_table that computes it. It takes the following arguments:
values, which is the name of the variable we want to summarize: fair2 in this example.

index, which is the name of the variable that will provide the row labels: year in this example.

columns, which is the name of the variable that will provide the column labels: polviews3 in this example.

aggfunc, which is the function used to “aggregate”, or summarize, the values: mean in this example.

Here’s how we run it.
table = gss.pivot_table(
    values="fair2", index="year", columns="polviews3", aggfunc="mean"
)
table.head()

polviews3  Conservative   Liberal  Moderate
year
1975           0.625616  0.617117  0.647280
1976           0.631696  0.571782  0.612100
1978           0.694915  0.659420  0.665455
1980           0.600000  0.554945  0.640264
1983           0.572438  0.585366  0.463492

Reading across the first row, we can see that in 1975, moderates were slightly more optimistic than the other groups. Reading down the first column, we can see that the estimated mean of fair2 among conservatives varies from year to year. It is hard to tell looking at these numbers whether it is trending up or down – we can get a better view by plotting the results.
Plotting the Results

Before we plot the results, I’ll make a dictionary that maps from each group to a color. Seaborn provides a palette called muted that contains the colors we’ll use.
muted = sns.color_palette("muted", 5)
sns.palplot(muted)
And here’s the dictionary.
color_map = {"Conservative": muted[3], "Moderate": muted[4], "Liberal": muted[0]}

Now we can plot the results.
groups = ["Conservative", "Liberal", "Moderate"]

for group in groups:
    series = table[group]
    plot_series_lowess(series, color_map[group])

decorate(
    xlabel="Year",
    ylabel="Fraction saying yes",
    title="Would most people try to be fair?",
)
The fraction of respondents who think people try to be fair has dropped in all three groups, although liberals and moderates might have leveled off. In 1975, liberals were the least optimistic group. In 2022, they might be the most optimistic. But the responses are quite noisy, so we should not be too confident about these conclusions.
December 14, 2024
Reject Math Supremacy
The premise of Think Stats, and the other books in the Think series, is that many ideas that are commonly presented in math notation can be more clearly presented in code.
In the new edition of Think Stats there is almost no math – not because I made a special effort to avoid it, but because I found that I didn’t need it. For example, here’s how I present the binomial distribution:
Mathematically, the distribution of these outcomes follows a binomial distribution, which has a PMF that is easy to compute.
from scipy.special import comb
def binomial_pmf(k, n, p):
    return comb(n, k) * (p**k) * ((1 - p) ** (n - k))

SciPy provides the comb function, which computes the number of combinations of n things taken k at a time, often pronounced “n choose k”.
binomial_pmf computes the probability of getting k hits out of n attempts, given p.
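As a quick sanity check (my addition, not part of the book’s text), we can compare the function to SciPy’s own implementation of the binomial PMF:

from scipy.stats import binom

# Probability of 3 hits in 10 attempts with p = 0.5
binomial_pmf(3, 10, 0.5)   # 0.1171875
binom.pmf(3, 10, 0.5)      # the same value, computed by scipy.stats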
I could also present the PMF in math notation, but I’m not sure how it would help – the Python code represents the computation just as clearly. Some readers find math notation intimidating, and even for the ones who don’t, it takes some effort to decode. In my opinion, the payoff for this additional effort is too low.
But one of the people who read the draft disagrees. They wrote:
Provide equations for the distributions. You assume that the reader knows them and then you suddenly show a programming code for them – the code is a challenge to the reader to interpret without knowing the actual equation.
I acknowledge that my approach defies the expectation that we should present math first and then translate it into code. For readers who are used to this convention, presenting the code first is “sudden”.
But why? I think there are two reasons, one practical and one philosophical:
The practical reason is the presumption that the reader is more familiar with math notation and less familiar with code. Of course that’s true for some people, but for other people, it’s the other way around. People who like math have lots of books to choose from; people who like code don’t.

The philosophical reason is what I’m calling math supremacy, which is the idea that math notation is the real thing, and everything else – including and especially code – is an inferior imitation. My correspondent hints at this idea with the suggestion that the reader should see the “actual equation”. Math is actual; code is not.

I reject math supremacy. Math notation did not come from the sky on stone tablets; it was designed by people for a purpose. Programming languages were also designed by people, for different purposes. Math notation has some good properties – it is concise and it is nearly universal. But programming languages also have good properties – most notably, they are executable. When we express an idea in code, we can run it, test it, and debug it.
So here’s a thought: if you are writing for an audience that is comfortable with math notation, and your ideas can be expressed well in that form – go ahead and use math notation. But if you are writing for an audience that understands code, and your ideas can be expressed well in code – well then you should probably use code. “Actual” code.