Here’s an idea for not getting tripped up with default priors . . .

I put this in the Prior Choice Recommendations wiki awhile ago:

“The prior can often only be understood in the context of the likelihood”:

Here’s an idea for not getting tripped up with default priors: For each parameter (or other quantity of interest, qoi), compare the posterior sd to the prior sd. If the posterior sd for any parameter (or qoi) is more than 0.1 times the prior sd, then print out a note: “The prior distribution for this parameter is informative.” Then the user can go back and check that the default prior makes sense for this particular example.

I’ve not incorporated this particular method into my workflow, but I like the idea and I’d like to study it further. I think this idea, or something like it, could be important.
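As a rough sketch of how such a check might work (the function name and data structures here are hypothetical, not from any existing package):

```python
import numpy as np

def flag_informative_priors(prior_sd, posterior_draws, threshold=0.1):
    """Flag parameters whose posterior sd exceeds `threshold` times the prior sd.

    `prior_sd` maps parameter names to prior standard deviations;
    `posterior_draws` maps the same names to arrays of posterior draws.
    """
    notes = {}
    for name, draws in posterior_draws.items():
        post_sd = np.std(draws)
        informative = post_sd > threshold * prior_sd[name]
        notes[name] = informative
        if informative:
            print(f"The prior distribution for {name} is informative.")
    return notes
```

The idea is that a truly weak prior should be swamped by the data, leaving the posterior sd far smaller than the prior sd; when it isn't, the default prior deserves a second look.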

David Weakliem on the U.S. electoral college

The sociologist and public opinion researcher David Weakliem has a series of excellent posts here, here, and here on the electoral college. Here’s the start:

The Electoral College has been in the news recently. I [Weakliem] am going to write a post about public opinion on the Electoral College vs. popular vote, but I was diverted into writing about the arguments offered in favor of it.

An editorial in the National Review says “it prevents New York and California from imposing their will on the rest of the country.” Taken literally, that is ridiculous–those two states combined had about 16% of the popular vote in 2016. But presumably the general idea is that the Electoral College makes it harder for a small number of large states to provide a victory. . . . In 2016, 52% of the popular vote came from 10 states: California, Florida, Texas, New York, Pennsylvania, Illinois, Ohio, Michigan, North Carolina, and Georgia (in descending order of number of votes). In the Electoral College, those states combined had 256 electoral votes–in order to win, you would need to add New Jersey (14). Even if you think the difference between ten and eleven states is important, the diversity of the ten biggest states is striking–there’s no way a candidate could win all of them without winning a lot of others.

Good point. Weakliem continues:

The National Review also says that the Electoral College keeps candidates from “retreating to their preferred pockets and running up the score.” That assumes that it’s easier to add to your lead when you already have a lead than when you are close or behind. That may be true in some sports, but in getting votes it seems that things would be more likely to go in the other direction–if you don’t have much support in a place, you have little to lose and a lot to gain. If it made any difference, election by popular vote would probably encourage parties to look outside their “preferred pockets”–e.g., the Republicans might try to compete in California rather than write it off.

I’d not thought of that before, but that sounds right. I guess we’re assuming there’s no large-scale cheating. There could be a concern that one-party-dominant states could cheat in the vote counting, or even more simply by making it harder for voters of one party to vote. Then again, this already happens, so if cheating is a concern, I think the appropriate solution is more transparency in vote counting and in the rules for where people can vote.

Weakliem then talks about public opinion:

There is always more support for abolishing [the electoral college] than keeping it—until 2016, a lot more. . . . The greatest support for abolishing it (80%) was in November 1968, right after the third-party candidacy of George Wallace, which had the goal of preventing an Electoral College majority. The election of 2000 had much less impact on opinions than 2016, maybe because of the general increase in partisanship since 2000.

A lot of recent commentary has treated abolishing the Electoral College as a radical cause, but the public generally likes the idea. . . .


I suspect that most people don’t have strong opinions, and will just follow their party, so that if it becomes a significant topic of debate there will be something close to a 50/50 split.

And then he breaks things down a bit:

The percent in favor of electing the president by popular vote in surveys ending on October 9, 2011 and November 20, 2016:

2011 2016
Democrats 74% 77%
Independents 70% 60%
Republicans 53% 28%

Weakliem presented these numbers to a fraction of a percentage point, but that is poor form given that the variation in these numbers is much more than 1 percentage point; it would be like reporting your weight as 193.4 pounds.

One thing I do appreciate is that Weakliem just presents the Yes proportions. Lots of times, people present both Yes and No rates, which gives you twice as many numbers to wade through, and then comparisons become much more difficult. So good job on the clean display.

Anyway, he continues with some breakdowns by state:

I used the 2011 survey to look for factors affecting state-level support. I considered number of electoral votes, margin of victory, and region. Support for the electoral college was somewhat higher in small states, which is as expected since it gives their voters more weight. There was no evidence that being in a state where the vote was close made any difference . . . Finally, the only regional distinction that appeared to matter was South vs. non-South. That makes some sense, since despite the talk about “coastal enclaves” vs. “heartland,” the South is still the most regionally distinctive part, and southerners may think that the electoral college protects their regional interests . . .

Funny that support for the electoral college isn’t higher in swing states. It’s not that I think swing-state voters are so selfish that they want the electoral college to preserve their power; it’s more the opposite, that I’d think voters in non-swing states would get annoyed that their votes don’t count. But, hey, I guess not: voters are thinking at the national, not the state level.

Lots more to look at here, I’m sure; also this is an instructive example of how much can be learned by looking carefully at available data.

P.S. I’m posting this now rather than with the usual 6-month delay, not because the subject is particularly topical—if anything, I expect it will become more topical as we go forward toward the next election—but because it demonstrates this general point of learning from observational data by looking at interesting comparisons and time trends. I’d like to have this post up, so I can point students to it when they are thinking of projects involving learning from social science data.

An interview with Tina Fernandes Botts

Hey—this is cool!

What happened was, I was scanning this list of Springbrook High School alumni. And I was like, Tina Fernandes? Class of 1982? I know that person. We didn’t know each other well, but I guess we must have been in the same homeroom a few times? All I can remember from back then is that Tina was a nice person and that she was outspoken. So it was fun to see this online interview, by Cliff Sosis, from 2017. Thanks, Cliff!

P.S. As a special bonus, here’s an article about Chuck Driesell. Chuck and I were in the same economics class, along with Yitzhak. Chuck majored in business in college, Yitzhak became an economics professor, and I never took another econ course again. Which I guess explains how I feel so confident when pontificating about economics.

P.P.S. And for another bonus, I came across this page where Ted Alper (class of 1980) answers random questions. It’s practically a blog!

Surgeon promotes fraudulent research that kills people; his employer, a leading hospital, defends him and attacks whistleblowers. Business as usual.

Paul Alper writes:

A couple of times, at my suggestion, you’ve blogged about Paolo Macchiarini.

Here is an update from Susan Perry in which she interviews the director of the Swedish documentary about Macchiarini:

Indeed, Macchiarini made it sound as if his patients had recovered their health when, in fact, the synthetic tracheas he had implanted in their bodies did not work at all. His patients were dying, not thriving.

In 2015, the investigator concluded that Macchiarini had, indeed, committed research fraud. Yet the administrators [at Sweden’s Karolinska Institute] continued to defend their star surgeon — and threatened the whistleblowers with dismissal.

But then there was the fact that the leadership of the hospital and the institute had, instead of listening to the complaints, gone after the whistleblowers and had even complained [about them] to the police.

What was he thinking???

Check out this stunning exchange from the interview:

MinnPost: Did you come to any conclusion about what was motivating [Macchiarini]? It seemed at times at the documentary that he really cared about the patients. He seemed moved by them. And, yet, he then abandons them. He doesn’t follow up with them.

Bosse Lindquist [director of the documentary about this story]: I think that he feels that he deserves success in life and that he ultimately deserves something like a Nobel Prize or something like that. He thinks the world just hasn’t quite seen his excellence yet and that they will eventually. He believes that he’s helping mankind, and I think that he construes reality in such a way that he actually thinks that he was doing good with these patients, but that there were minor problems and stuff that sort of [tripped him up].

This jibes with my impressions in other, nonlethal, examples of research incompetence and research fraud: The researcher believes that he or she is an important person doing important work, and thinks of criticisms of any sort as a bunch of technicalities getting in the way of pathbreaking, potentially life-changing advances. And, of course, once you frame things in this way, a simple utilitarian calculation implies that you’re justified in all sorts of questionable behavior to derail your critics.

All of this is, in some sense, a converse to Clarke’s Law, and it also points to a general danger with utilitarianism—or, to put it another way, it points to the general value of rules and norms.

And what about the whistleblowers?

MP: And what about the whistleblowers? Have they been able to go back to their careers without any professional harm?

BL: No. Two of them have had to change cities and hospitals. Two are still there, but they have been subjected to threats from management and from some of their colleagues who were involved with Macchiarini. They have not received any new grants since this whole thing happened. It’s a crying shame.

MP: That’s quite a terrible outcome, because that may stop other people from stepping forward in similar situations.

BL: Exactly.

MP: Do you feel that everyone who was responsible for ignoring the warnings about Macchiarini has resigned or been fired?

BL: No, no, no. A number of people are still there and have their old jobs and just carry on. Some have been forced to change jobs, to get another job — but in some other function within the hospital or in the government.

And, finally . . .


MP: What has happened to the patients. One was able to successfully have the tube removed, is that correct?

BL: Yeah. One person.

MP: And everybody else has died?

BL: Yes.

The whole thing is no damn joke.

I originally called this “research-lies-allegations-windpipe update update,” but I can’t laugh about this anymore, hence the revised title above.

P.S. Alper writes:

According to the NYT’s Gretchen Reynolds, the Institute is looking into breathing again:

Two dozen healthy young male and female volunteers inhaled 12 different scents from small vials held to their noses. Some of the smells were familiar, like the essence of orange, while others were obscure. The subjects were told to memorize each scent. They went through this process on two occasions. For one, they sat quietly for an hour immediately after the sniffing, with their noses clipped shut to prevent nasal breathing; on the other, they sat for an hour with tape over their mouths to prevent oral breathing.

The men and women were consistently much better at recognizing smells if they breathed through their noses during the quiet hour. Mouth breathing resulted in fuzzier recall and more incorrect answers.

But, no numerical notion of “how much better.” And only “two dozen” subjects? Despite the defrocking of Paolo Macchiarini, the Karolinska Institute is undoubtedly still solvent so it seems strange that it undertakes a study that is more typical of a psychology professor, who has little or no funding, and seeks a publication using his students as convenient subjects. One is reminded of the famous sweaty T-shirt study.

I guess there’s always a market for one-quick-trick-that-will-change-your-life.

Most Americans like big businesses.

Tyler Cowen asks:

Why is there so much suspicion of big business?

Perhaps in part because we cannot do without business, so many people hate or resent business, and they love to criticize it, mock it, and lower its status. Business just bugs them. . . .

The short answer is, No, I don’t think there is so much suspicion of big business in this country. No, I don’t think people love to criticize, mock and lower the status of big business.

This came up a few years ago, and at the time I pulled out data from a 2007 survey showing that just about every big business you could think of was popular, with the only exception being oil companies. Microsoft, Walmart, Citibank, GM, Pfizer: you name it, the survey respondents were overwhelmingly positive.

Nearly two-thirds of respondents say corporate profits are too high, but, “more than seven in ten agree that ‘the strength of this country today is mostly based on the success of American business’ – an opinion that has changed very little over the past 20 years.”

Corporations are more popular with Republicans than with Democrats, but most of the corporations in the survey were popular with a clear majority in either party.

Big business does lots of things for us, and the United States is a proudly capitalist country, so it’s no shocker that most businesses in the survey were very popular.

So maybe the question is, Why did an economist such as Cowen think that people view big business so negatively?

My quick guess is that we notice negative statements more than positive statements. Cowen himself roots for big business, he’s generally on the side of big business, so when he sees any criticism of it, he bristles. He notices the criticism and is bothered by it. When he sees positive statements about big business, that all seems so sensible that perhaps he hardly notices. The negative attitudes are jarring to him, so they’re more noticeable. Perhaps in the same way that I notice bad presentations of data. An ugly table or graph is to me like fingernails on the blackboard.

Anyway, it’s perfectly reasonable for Cowen to be interested in those people who “hate or resent business, and they love to criticize it, mock it, and lower its status.” We should just remember that, at least from these survey data, it seems that this is a small minority of people.

Why did I write this post?

The bigger point here is that this is an example of something I see a lot, which is a social scientist or pundit coming up with theories to explain some empirical pattern in the world, but it turns out the pattern isn’t actually real. This came up years ago with Red State Blue State, when I noticed journalists coming up with explanations for voting patterns that were not happening (see for example here) and of course it comes up a lot with noise-mining research, whether it be a psychologist coming up with theories to explain ESP, or a sociologist coming up with theories to explain spurious patterns in sex ratios.

It’s fine to explain data; it’s just important to be aware of what’s being explained. In the context of the above-linked Cowen post, it’s fine to answer the question, “If business is so good, why is it so disliked?”—as long as this sentence is completed as follows: “If business is so good, why is it so disliked by a minority of Americans?” Explaining minority positions is important; we should just be clear it’s a minority.

Or of course it’s possible that Cowen has access to other data I haven’t looked at, perhaps more recent surveys that would modify my empirical understanding. That would be fine too.

P.S. The title of this post was originally “Most Americans like big business.” I changed the last word to “businesses” in response to comments who pointed out that most Americans express negative views about “big business” in general, but they like most individual big businesses that they’re asked about.

Markov chain Monte Carlo doesn’t “explore the posterior”

First some background, then the bad news, and finally the good news.

Spoiler alert: The bad news is that exploring the posterior is intractable; the good news is that we don’t need to explore all of it.

Sampling to characterize the posterior

There’s a misconception among Markov chain Monte Carlo (MCMC) practitioners that the purpose of sampling is to explore the posterior. For example, I’m writing up some reproducible notes on probability theory and statistics through sampling (in pseudocode with R implementations) and have just come to the point where I’ve introduced and implemented Metropolis and want to use it to exemplify convergence monitoring. So I did what any right-thinking student would do and borrowed one of my mentor’s diagrams (which is why this will look familiar if you’ve read the convergence monitoring section of Bayesian Data Analysis 3).

First M steps of isotropic random-walk Metropolis with proposal scale normal(0, 0.2) targeting a bivariate normal with unit variance and 0.9 correlation. After 50 iterations, we haven’t found the typical set, but after 500 iterations we have. Then after 5000 iterations, everything seems to have mixed nicely through this two-dimensional example.

This two-dimensional traceplot gives the misleading impression that the goal is to make sure each chain has moved through the posterior. This low-dimensional thinking is nothing but a trap in higher dimensions. Don’t fall for it!

Bad news from higher dimensions

It’s simply intractable to “cover the posterior” in high dimensions. Consider a 20-dimensional standard normal distribution. There are 20 variables, each of which may be positive or negative, leading to a total of 2^{20}, or more than a million orthants (generalizations of quadrants). In 30 dimensions, that’s more than a billion. You get the picture—the number of orthants grows exponentially, so we’ll never cover them all explicitly through sampling.
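A quick simulation makes the point concrete (a sketch, assuming nothing beyond NumPy): even many thousands of draws from a 20-dimensional standard normal visit only a tiny fraction of the orthants.

```python
import numpy as np

rng = np.random.default_rng(1)
d, M = 20, 10_000
draws = rng.standard_normal((M, d))  # M draws from a d-dimensional standard normal
# An orthant is identified by the pattern of coordinate signs.
orthants_visited = {tuple(signs) for signs in (draws > 0)}
fraction_covered = len(orthants_visited) / 2**d  # at most M / 2^20, under 1%
```

With M draws you can visit at most M of the 2^d orthants, so in 20 dimensions 10,000 draws cover under one percent of them, no matter how well the chain mixes.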

Good news in expectation

Bayesian inference is based on probability, which means integrating over the posterior density. This boils down to computing expectations of functions of parameters conditioned on data. This we can do.

For example, we can construct point estimates that minimize expected square error by using posterior means, which are just expectations conditioned on data, which are in turn integrals, which can be estimated via MCMC,

\begin{array}{rcl} \hat{\theta} & = & \mathbb{E}[\theta \mid y] \\[8pt] & = & \int_{\Theta} \theta \times p(\theta \mid y) \, \mbox{d}\theta \\[8pt] & \approx & \frac{1}{M} \sum_{m=1}^M \theta^{(m)},\end{array}

where \theta^{(1)}, \ldots, \theta^{(M)} are draws from the posterior p(\theta \mid y).
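In code, this estimate is just the average of the draws (a minimal sketch with simulated stand-in draws, since the point holds for draws from any sampler):

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in for MCMC output: 4000 posterior draws of a scalar theta.
theta_draws = rng.normal(1.5, 0.3, size=4000)
# The Monte Carlo estimate of E[theta | y] is the mean of the draws.
theta_hat = theta_draws.mean()
```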

If we want to calculate predictions, we do so by using sampling to calculate the integral required for the expectation,

p(\tilde{y} \mid y) \ = \ \mathbb{E}[p(\tilde{y} \mid \theta) \mid y] \ \approx \ \frac{1}{M} \sum_{m=1}^M p(\tilde{y} \mid \theta^{(m)}).
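The same draws give the posterior predictive density. Here is a sketch assuming, purely for illustration, a normal likelihood with known unit standard deviation and simulated stand-in draws:

```python
import numpy as np

rng = np.random.default_rng(4)
theta_draws = rng.normal(0.0, 0.5, size=4000)  # stand-in posterior draws

def normal_pdf(y, mu, sd=1.0):
    # Density of normal(mu, sd) evaluated at y.
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def predictive_density(y_tilde):
    # p(y_tilde | y) is approximated by averaging p(y_tilde | theta^(m))
    # over the posterior draws.
    return np.mean(normal_pdf(y_tilde, theta_draws))
```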

If we want to calculate event probabilities, it’s just the expectation of an indicator function, which we can calculate through sampling, e.g.,

\mbox{Pr}[\theta_1 > \theta_2] \ = \ \mathbb{E}\left[\mathrm{I}[\theta_1 > \theta_2] \mid y\right] \ \approx \ \frac{1}{M} \sum_{m=1}^M \mathrm{I}[\theta_1^{(m)} > \theta_2^{(m)}].
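And an event probability is just the fraction of draws in which the event occurs (again with simulated stand-in draws):

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in posterior draws for two parameters.
theta1_draws = rng.normal(1.0, 1.0, size=4000)
theta2_draws = rng.normal(0.0, 1.0, size=4000)
# Pr[theta1 > theta2 | y] is approximated by the fraction of draws
# in which the indicator theta1 > theta2 equals 1.
prob = np.mean(theta1_draws > theta2_draws)
```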

The good news is that we don’t need to visit the entire posterior to compute these expectations to within a few decimal places of accuracy. Even so, MCMC isn’t magic—those two or three decimal places will be zeroes for tail probabilities.

Jonathan (another one) does Veronica Geng does Robert Mueller

Frequent commenter Jonathan (another one) writes:

I realize that so many people bitch about the seminar showdown that you might need at least one thank you. This year, I managed to re-read the bulk of Geng, and for that I thank you. I have not yet read any Sattouf, but it clearly has made an impression on you, so it’s on my list.

In thanks, here is my first brief foray into pseudo-Gengiana. I think I’ve got the tone roughly right, but I’m way short on whimsy; this is what I managed in a sustained fifteen-minute effort. Thanks again.

My fellow Americans:

As you are no doubt aware, I have completed my investigation and report. I write this to inform you of an unfortunate mishap from Friday. Many news outlets have reported that my final report was taken by a security guard from my offices to the Justice Department. That is not true. In an attempt to maintain my obsessive secrecy, that was a dummy report, actually containing the text of an unpublished novel by David Foster Wallace that we found in Michael Cohen’s safe. We couldn’t understand it—maybe Bill Barr will have better luck.

The real one was handed to my intern, Jeff, in an ordinary interoffice envelope, and Jeff was told to drop it off at Justice on his way home. He lives nearby with six other interns. Not knowing what he had, he stopped off at the Friday Trivia Happy Hour at the Death and Taxes Pub, drank a little too much, and left the report there. We’ve gone back to look and nobody can find it.

So why not just print out another one? Or for that matter, why didn’t I just email the first report? As you’ve no doubt gleaned by now, computers and email aren’t my thing. As my successor at the FBI, Mr. Comey, demonstrated, email baffles just about all of us. And I don’t use a computer. So there isn’t another copy of the real report. I’ve got all my notes, though, so I ought to be able to cobble together a new report in a couple of months.

Apologies for the delay,
Robert Mueller

PS: Jeff has been chastised. We haven’t fired him, but in asking him about this he let slip that his parents didn’t pay taxes on the nanny who raised him and they may have strongly implied that he played on a high school curling team to get into college. His parents are going to jail and the nanny’s immigration status is being investigated. This requires a short re-opening of the investigation.

The mention of “Jeff” seems particularly Geng-like to me. Perhaps I’m reminded of “Ed.” Thinking of Geng makes me a bit sad, though, not just for her but because it reminds me of the passage of time. I associate Geng, Bill James, and Spy magazine with the mid-1980s. Ahhh, lost youth!

Yes, I really really really like fake-data simulation, and I can’t stop talking about it.

Rajesh Venkatachalapathy writes:

Recently, I had a conversation with a colleague of mine about the virtues of synthetic data and their role in data analysis. I think I’ve heard you mention this in a sermon/talk or two, and also in your blog entries. But having convinced my colleague of this point, I am struggling to find good references on this topic.

I was hoping to get some leads from you.

My reply:

Hi, here are some refs: from 2009, 2011, 2013, also this and this and this from 2017, and this from 2018. I think I’ve missed a few, too.

If you want something in dead-tree style, see Section 8.1 of my book with Jennifer Hill, which came out in 2007.

Or, for some classic examples, there’s Bush and Mosteller with the “stat-dogs” in 1954, and Ripley with his simulated spatial processes from, ummmm, 1987 I think it was? Good stuff, all. We should be doing more of it.

Postdoc in Chicago on statistical methods for evidence-based policy

Beth Tipton writes:

The Institute for Policy Research and the Department of Statistics is seeking applicants for a Postdoctoral Fellowship with Dr. Larry Hedges and Dr. Elizabeth Tipton. This fellowship will be a part of a new center which focuses on the development of statistical methods for evidence-based policy. This includes research on methods for meta-analysis, replication, causal generalization, and, more generally, the design and analysis of randomized trials in social, behavioral, and education settings.

The position will include a variety of tasks, including: Conducting simulation studies to understand properties of different estimators; performing reviews of available methods (in the statistics literature) and the use of these methods (in the education and social science literatures); the development of examples of the use of new methods; writing white papers summarizing methods developments for researchers conducting evidence-based policy; and the development of new methods in these areas.

Job Requirements

Required: Ph.D. (expected or obtained) in statistics, biostatistics, the quantitative social sciences, education research methods, or a related field; strong analytical and written communication skills; strong programming skills (R, desired) and familiarity with cluster-computing; and experience with education research, randomized trials, meta-analysis, and/or evidence-based policy.

This will be a one-year appointment beginning September 2019 (or a mutually agreed upon date), with the possibility of renewal for a second year based upon satisfactory performance.

Candidates should submit the following documents in PDF to Valerie Lyne with subject line “Post-Doc”:

· CV

· A 1-page statement regarding the candidate’s research interests, qualifications, and prior research experience relevant to this position

· Names and addresses of three references (no letters are required at this time)

We plan to begin reviewing applications on April 12th, 2019 and will continue to do so until the position is filled.

Looks fun, also this is important work.

New golf putting data! And a new golf putting model!

Part 1

Here’s the golf putting data we were using, typed in from Don Berry’s 1996 textbook. The columns are distance in feet from the hole, number of tries, and number of successes:

x n y
2 1443 1346
3 694 577
4 455 337
5 353 208
6 272 149
7 256 136
8 240 111
9 217 69
10 200 67
11 237 75
12 202 52
13 192 46
14 174 54
15 167 28
16 201 27
17 195 31
18 191 33
19 147 20
20 152 24

Graphed here:

Here’s the idealized picture of the golf putt, where the only uncertainty is the angle of the shot:

Which we assume is normally distributed:

And here’s the model expressed in Stan:

data {
  int J;
  int n[J];
  vector[J] x;
  int y[J];
  real r;
  real R;
}
parameters {
  real<lower=0> sigma;
}
model {
  vector[J] p;
  for (j in 1:J) {
    p[j] = 2*Phi(asin((R - r) / x[j]) / sigma) - 1;
  }
  y ~ binomial(n, p);
}
generated quantities {
  real sigma_degrees;
  sigma_degrees = (180 / pi()) * sigma;
}

Fit to the above data, the estimate of sigma_degrees is 1.5. And here’s the fit:
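To see what the fitted model implies, the success probability can be computed directly outside Stan. This is a sketch: sigma_degrees = 1.5 is the estimate above, and the ball and cup radii assume the standard 1.68-inch and 4.25-inch diameters, converted to feet.

```python
import math

def p_success(x, sigma_degrees=1.5, r=1.68 / 2 / 12, R=4.25 / 2 / 12):
    # Probability a putt from x feet goes in under the angle-only model:
    # the shot angle is normal(0, sigma) and must fall within
    # +/- asin((R - r) / x) of dead center.
    sigma = math.radians(sigma_degrees)
    threshold = math.asin((R - r) / x)
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    return 2 * Phi(threshold / sigma) - 1
```

At 2 feet this gives a success probability around 0.96, in the same ballpark as the observed 1346/1443 ≈ 0.93; at 20 feet it drops to roughly 0.16, close to the observed 24/152 ≈ 0.16.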

Part 2

The other day, Mark Broadie came to my office and shared a larger dataset, from 2016-2018. I’m assuming the distances are continuous numbers because the putts have exact distance measurements; the putts have been divided into bins by distance, with the numbers below representing the average distance in each bin.

x n y
0.28 45198 45183
0.97 183020 182899
1.93 169503 168594
2.92 113094 108953
3.93 73855 64740
4.94 53659 41106
5.94 42991 28205
6.95 37050 21334
7.95 33275 16615
8.95 30836 13503
9.95 28637 11060
10.95 26239 9032
11.95 24636 7687
12.95 22876 6432
14.43 41267 9813
16.43 35712 7196
18.44 31573 5290
20.44 28280 4086
21.95 13238 1642
24.39 46570 4767
28.40 38422 2980
32.39 31641 1996
36.39 25604 1327
40.37 20366 834
44.38 15977 559
48.37 11770 311
52.36 8708 231
57.25 8878 204
63.23 5492 103
69.18 3087 35
75.19 1742 24

Comparing the two datasets in the range 0-20 feet, the success rate is similar for longer putts but is much higher than before for the short putts. This could be a measurement issue, if the distances to the hole are only approximate for the old data.

Beyond 20 feet, the empirical success rates are lower than would be predicted by the old model. This makes sense: for longer putts, the angle isn’t the only thing you need to control; you also need to get the distance right.

So Broadie fit a new model in Stan. See here and here for further details.