Here’s a cautionary tale about small samples, researcher degrees of freedom, how misleading published studies can be, and how ridiculously misleading press releases can be.
First, though, here are some background facts. The U.S. General Social Survey has long included an objective measure of family income in dollars (REALINC), a subjective measure of family income from “far below average” to “far above average” (FINRELA), and a number of policy items, including one asking about whether government “ought to reduce income differences between the rich and the poor, perhaps by raising taxes on wealthy families or by giving income assistance to the poor” or whether “government should not concern itself with reducing this income difference between the rich and the poor” (EQWLTH).
Looking at the GSS data from 2002-2012 (including those individuals with data on all three items; N = 6,837): the correlation between REALINC and FINRELA is .501 (naturally, given that a big hint about whether one is richer or poorer than most people is one's actual income); the correlation between REALINC and EQWLTH is .207 (people with objectively higher incomes are more likely to oppose income redistribution); and the correlation between FINRELA and EQWLTH is .192 (people with subjectively higher incomes are more likely to oppose income redistribution).
If we predict EQWLTH simultaneously with REALINC and FINRELA, the standardized beta for REALINC is .148, the standardized beta for FINRELA is .117, and the model has an overall multiple correlation of .231. This indicates that neither REALINC nor FINRELA dominates the other in predicting EQWLTH (both carry significant coefficients, and neither coefficient is materially larger than the other), and that neither predictor adds much beyond the other (the multiple correlation of .231 is only marginally better than the two individual correlations, .207 and .192).
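For anyone who wants to check these numbers, here is a minimal sketch of the calculation. The file name is a hypothetical stand-in for however you export your GSS extract, and I'm ignoring weighting and recoding details:

```python
# Minimal sketch: pairwise correlations and standardized betas from a GSS extract.
# "gss_2002_2012.csv" is a hypothetical file with columns REALINC, FINRELA, EQWLTH,
# already restricted to the 2002-2012 surveys.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("gss_2002_2012.csv")[["REALINC", "FINRELA", "EQWLTH"]].dropna()

# Pairwise correlations (the .501, .207, and .192 figures above)
print(df.corr())

# Standardized betas: z-score every variable, then run ordinary least squares
z = (df - df.mean()) / df.std()
X = sm.add_constant(z[["REALINC", "FINRELA"]])
model = sm.OLS(z["EQWLTH"], X).fit()
print(model.params)           # standardized betas for REALINC and FINRELA
print(model.rsquared ** 0.5)  # overall multiple correlation
```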
Pretty simple so far, right? Basically, objective income and subjective income are largely redundant predictors of views on income redistribution at around a .2 correlation. And this isn’t a theory, it’s a fact about a very large and representative sample.
Bring on the small samples
In a new article in Psychological Science titled “Subjective Status Shapes Political Preferences,” a team of social psychologists looks at these matters. They include objective measures of socioeconomic status (SES), a subjective measure of social status (the MacArthur Ladder), and a scale combining various policy preferences relating to income redistribution.
Like many psychological studies, they use small samples. They include a power analysis assuming they’re looking for a .30 correlation, which leads them to samples in the 100 to 200 range. Really, though, we already know they’re looking for a correlation around .20, which means they needed a sample more in the 250-300 range.
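To see why the target correlation matters so much for sample size, here's a back-of-the-envelope calculation using the standard Fisher z approximation. The two-sided alpha of .05 and 90% power are my assumptions about the kind of power analysis involved, not figures taken from the paper:

```python
# Rough sample size needed to detect a correlation r, via the Fisher z
# approximation: n is approximately ((z_alpha/2 + z_power) / z(r))^2 + 3.
# alpha = .05 (two-sided) and power = .90 are assumptions on my part.
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.90):
    z_r = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z of the target correlation
    z_a = norm.ppf(1 - alpha / 2)            # critical value for the test
    z_b = norm.ppf(power)                    # quantile for the desired power
    return math.ceil(((z_a + z_b) / z_r) ** 2 + 3)

print(n_for_correlation(0.30))  # roughly 113, i.e., the 100-200 range
print(n_for_correlation(0.20))  # roughly 259, i.e., the 250-300 range
```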
They have two samples measuring objective and subjective SES, one with 135 participants and the other with 152. The second sample also included some false (randomly assigned) feedback on relative SES prior to measuring subjective SES. They report for the first sample (a) that, predicting views on income redistribution, neither income nor education carries a significant coefficient in a multiple regression that also includes liberal-conservative ideology and party affiliation and (b) that subjective SES does significantly predict redistribution views in a multiple regression with all the prior predictors. They report for the second sample (a) that both objective income and subjective SES have significant correlations with redistribution views and (b) that the false-feedback manipulation resulted in differences (without controlling for anything else), in separate analyses, in both subjective SES and redistribution views.
Now, I’m always a bit skeptical when I see similar samples analyzed in different ways. Why were we shown correlations without controls in sample 2 but not sample 1? Why were lib-con ideology and party affiliation used as controls in sample 1 but not sample 2? Why was objective income used as a control in testing subjective SES in sample 1 but not sample 2?
Thankfully, the researchers posted the data online, so I was able to find out the answers to these questions.
We were shown correlations in sample 2 but not sample 1 because the correlation between subjective SES and redistribution views was not significant in sample 1 (p = .237). In sample 1, the relationship between subjective SES and redistribution views remains non-significant with objective income in the model (p = .161) or with both income and education in the model (p = .124). Add lib-con ideology to the model and subjective SES is almost significant (p = .065). And, at long last, add party identification to the model and we’re finally there (p = .011).
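If you want to reproduce that model sequence from the posted data, a sketch along these lines would do it. The file name and column names (redistribution, subjective_ses, income, education, ideology, party_id) are hypothetical stand-ins for whatever the posted dataset actually calls them:

```python
# Sketch of the sample 1 sequence: add one control at a time and watch the
# p-value on subjective SES. Column and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

d = pd.read_csv("study1.csv")  # hypothetical file name for the posted data

formulas = [
    "redistribution ~ subjective_ses",
    "redistribution ~ subjective_ses + income",
    "redistribution ~ subjective_ses + income + education",
    "redistribution ~ subjective_ses + income + education + ideology",
    "redistribution ~ subjective_ses + income + education + ideology + party_id",
]
for f in formulas:
    fit = smf.ols(f, data=d).fit()
    print(f, "-> p(subjective_ses) =", round(fit.pvalues["subjective_ses"], 3))
```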
So this is the first hidden ball — in fact, in sample 1, the only way subjective SES significantly predicts redistribution views is when party identification is in the model. This is despite the fact that party identification otherwise plays no role in the theory or discussions in the paper. And this is despite the fact that party identification is at least as likely to be an effect as a cause of core views on income redistribution, and so should be viewed with real caution when thrown into a multiple regression that can’t tell the difference between causes and effects.
Contrast this with the reason given in the paper for including the ideology and party controls in sample 1: “Ideology and party affiliation were controlled because they tend to be associated with both income and attitudes toward redistribution, and we wished to reduce the possibility of third-variable explanations for any observed association between subjective SES and support for redistribution.” This is highly misleading, as it implies that subjective SES had a stand-alone relationship with redistribution views, and they were just making sure to rule out alternate explanations. In fact, the stand-alone relationship didn’t exist and required dubious “controls” to arise in the first place.
In sample 2, we see the opposite problem. Here, the stand-alone relationship between subjective SES and redistribution views is significant (p = .022), but not when objective income is entered into the model (p = .203) or when running the full model from sample 1 (p = .214). So this is the second hidden ball — they show the correlations but don’t run a model that controls for anything.
The issue here is that objective income was not a significant correlate of redistribution views in sample 1 (p = .906), but it was in sample 2 (p = .038).
Then there’s a further problem. They ran a false-feedback manipulation in sample 2, which in fact impacted subjective SES (p < .001) and redistribution views (p = .008). But then their theoretical claims imply a model they never test, namely, one in which the false-feedback condition affects subjective SES and subjective SES then affects redistribution views. In fact, when I run a multiple regression predicting redistribution views with both subjective SES and condition, subjective SES has a non-significant coefficient (p = .121) while condition is significant (p = .041). Add objective income to the model and now subjective SES drops to a dismal p = .645.
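That untested model is easy to run once you have the posted data. Here's a sketch of the regressions I'm describing; as before, the file and column names are hypothetical, and I'm assuming the feedback condition is coded as a simple 0/1 indicator:

```python
# Sketch of the implied sample 2 model: redistribution views regressed on
# subjective SES and the false-feedback condition together, then with
# objective income added. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

d2 = pd.read_csv("study2.csv")  # hypothetical file name for the posted data

# Assumes "condition" is a 0/1 dummy for the false-feedback manipulation
m1 = smf.ols("redistribution ~ subjective_ses + condition", data=d2).fit()
m2 = smf.ols("redistribution ~ subjective_ses + condition + income", data=d2).fit()

print(m1.pvalues[["subjective_ses", "condition"]])  # SES vs. condition, jointly
print(m2.pvalues["subjective_ses"])                 # SES after adding income
```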
Again, contrast this with the claim in the paper: “Because random assignment experimentally controlled for objective SES in Study 2, we did not statistically control for this variable.” This isn’t as bad as the statement regarding why they controlled for ideology and party in study 1, but it’s still pretty bad given the context of the paper. In study 1, they made a big deal out of how objective income doesn’t really predict redistribution and doesn’t interfere with subjective income in predicting redistribution views. In study 2, they had data directly contradicting those findings and obscured rather than disclosed that fact.
So there are problematic conclusions in the study. They say: “In Study 1, feeling higher in relative status was associated with lower support for redistribution.” Well, OK, but only when controlling for party identification (a variable not at all central to their theory) and not otherwise. Further: “In Study 2, feeling higher in status caused reduced support for redistribution.” Actually, the manipulation caused differences in feelings about status and separately caused differences in support for redistribution, but the feelings about status didn’t cause the support for redistribution.
Bring on the press release
In the press release, things really get out of hand. I’m not blaming the study authors here — they probably had some input, but might not have had much ultimate control over the final version of the press release.
Start with the title of the press release: “Feeling — Not Being — Wealthy Drives Opposition to Wealth Redistribution.” This statement is supported by study 1 but immediately contradicted by study 2. Further, as I showed at the outset, when we go to a big publicly available sample (N = 6,837) rather than relying on samples in the mid-100s, the statement is plainly false insofar as it implies that subjective views of income matter, but objective measures of income don’t, in predicting redistribution views.
Then the first sentence: “People’s views on income inequality and wealth distribution may have little to do with how much money they have in the bank and a lot to do with how wealthy they feel in comparison to their friends and neighbors.” Again, this contradicts study 2 and the GSS data. Further, it sort of ignores the obvious point that people’s subjective views on their SES correlate really very strongly with their, umm, actual incomes (e.g., a correlation of .501 in the GSS sample, and correlations of .472 and .613 in the two samples in the paper the press release discusses). Do they really mean to imply that how much money people have doesn’t have a strong effect on how wealthy they feel?
And then later: “Support for redistribution wasn’t related to participants’ actual household income.” Again, true of study 1 but contradicted by study 2.
It’s not all bad news
So I’ve been pretty hard on the authors of the study and the press release. Let me say a couple of things in the study authors’ defense.
First, the study does have some really good things going on. Primarily, the experimental manipulations in studies 2, 3, and 4 are way cool. The findings suggest that artificially manipulating one’s perceptions of one’s relative social status really can impact one’s policy views on redistribution as well as one’s broader political and moral principles. That’s a nice result and something a lot more people in political science should take seriously.
Second, the deep problems with the paper are as much the fault of The System as they are of these authors. Mainly: the p < .05 standard is batshit crazy (it should really be at least p < .01), samples are routinely way too small (something related to the crazy-high p-value threshold), reviewers are often statistically naïve and/or don’t really have the time or inclination to call people on questionable analyses, and so on.
So, it’s always nice to see new papers on interesting topics. But there are lots of reasons to be dubious of broad statements in papers (and especially press releases) without really scrubbing the numbers.