Putting the Moral Back in Moral Dumbfounding (Part II)

Let me briefly review my last post.

First, I assume, along with others, that a core part, even the core part, of ethics is harm. It’s unethical to harm people absent some compelling reason. Second, preventing people from doing what they want to do is one really important kind of harm. So, taking those two really simple ideas together, it’s unethical to prevent people from doing what they want absent compelling reason.

Now, from these basic assumptions, I added one piece. Judging something immoral is one way to prevent people from doing what they want. After all, when we all agree X is immoral, then people get punished for doing X, preventing them from doing it. These simple ideas lead to the conclusion that if you judge X to be immoral, then you are preventing others from doing something, which is harmful, and so whenever you do judge something immoral, you must, ethically, have a compelling reason for that judgment. If you don’t, you are unethically constraining others’ behavior.

Now, this idea is crucial, I suggested, given moral dumbfounding, the finding that people cannot always justify their moral judgments. So, if you are morally dumbfounded – you know, or have the intuition, that X is wrong but cannot provide a logical justification – you cannot, ethically, stop there; you must be able to explain and justify your judgment that it is wrong.

So then the question is, what do you do when morally dumbfounded? Suppose  you have just read Jon Haidt’s vignette, and you have the deep intuition that incest is wrong. You are willing, for argument’s sake, to grant the hypothetical that no one was harmed, that there was no coercion, and that the sex was non-reproductive. You cannot, however, provide a justification for your moral judgment.

Now, the issue doesn’t have to be incest. The point here is simply to think of anything that you think is wrong but for which you cannot provide a logical and coherent justification. It seems to me that you have a few choices, though others might be possible as well. For the sake of argument, instead of considering the incest case, read the following with something else in mind, such as same-sex marriage.

First, you can simply revise your view. If you try and fail to come up with a principle that you really believe in to justify your judgment that X is wrong, then you can decide that X is not, in fact, wrong after all. There’s no need to feel shy about this. History is replete with examples of things that fluctuate between right and wrong. Right now, marijuana is going from wrong to not wrong in many places, after having gone from not wrong to wrong not all that long ago. It happens. The list of behaviors that used to be considered morally wrong in some time or place but are not in this time and place is lengthy (homosexuality, inter-racial relationships, consuming alcohol, divorce, lending money, dancing, sex before marriage, to name a very small few from a very large set).

Second, you can try to think more deeply about your moral principles and try to discover what your principles really say about X. This process can lead you down some interesting roads. Taking the incest example, you might be left with the (morally surprising to many) view that the two characters in the story did not, in fact, do anything wrong. This conclusion will be unsettling to some – though presumably not the Lannisters – who find their intuitions about incest too powerful to allow moral reasoning to carry the day.

Now, if you do come up with a principle that justifies the judgment that X is wrong, you must road test it. State the principle as clearly as you can. Now consider all the things that your principle tells you are wrong and all those it tells you are right. If you think that the characters in Haidt’s vignette did something unethical for reason R – say, that it is “unnatural” – explore what other things R implies are also unethical (e.g., driving).

Now, bear in mind that life and morality are complicated, and many judgments will come down to tradeoffs. Remember, it’s always easy to come up with harm to someone, so restrain yourself from cheating and telling a harm story that is plausible but isn’t really driving your view. Many readers of blogs such as this one can take opposition to same-sex marriage as an example. What did you think of the argument that same-sex marriage somehow harmed marriages between the sexes? Probably not much.

Along the same lines, some situations are so vexing that it’s really hard to figure out how to think about them. Philosophers like to come up with these situations, in part to give our sense of morality a really tough road test. Thus we have the famous Trolley Problem and its variants. Most people think it’s not OK to push the poor person off the footbridge to save five, but are more OK with various other ways of causing the death of one to save five. It’s fine if your moral principles bend and break when faced with such cases. They’re there in part to vex you.

Of course, a third route you can take is to adopt or modify one of your bedrock ethical principles. If you really want to find X to be wrong, but your moral principles as they are won’t let you, well, you could change them. But beware travelling this route. Be sure to road test your new principle to be sure you don’t now condemn something you really don’t want to.

Finally, you can just declare the behavior in question to be an exception. I think this runs into the problem I discussed before – if you allow exceptions, then you can now convince yourself to declare anything immoral – but you could go that route if you wanted.

Now, all of this can be confusing, so I want to end by switching my language from ethics and morality to the simpler language of Good Guy and Bad Guy. (I intend the word “guy” here to be gender neutral. I apologize for this use, but the reason I’m choosing it is because of the role that the word plays in the idioms, Good Guy and Bad Guy. I feel that I would lose rhetorical force if I replaced “guy” with “person” in this context.)

One view is that the two people in Jon Haidt’s vignettes are, in fact, innocents. They are the Good Guys. They are exercising their freedom without harming anyone. They are, to my way of thinking, behaving ethically. In contrast, I take the position that condemning these characters is the act of a Bad Guy. Notice how this flips the script. We think ourselves Good and Noble for condemning the characters, whom we view as the Bad Guys, breaking the Rules. This bears close scrutiny.

At the end of last year, New York Times columnist David Brooks gave one of his annual Sidney awards to Helen Andrews for her piece “Shame Storm.” The whole piece – Andrews’, I mean –  is worth a careful read. Discussing online shaming, Andrews writes:

The more online shame cycles you observe, the more obvious the pattern becomes: Everyone comes up with a principled-sounding pretext that serves as a barrier against admitting to themselves that, in fact, all they have really done is joined a mob. Once that barrier is erected, all rules of decency go out the window, but the pretext is almost always a lie. 

Towards the end of her piece, she talks about solutions to the problem that she sees with online shaming:

The solution, then, is not to try to make shame storms well targeted, but to make it so they happen as infrequently as possible. Editors should refuse to run stories that have no value except humiliation, and readers should refuse to click on them. It is, after all, the moral equivalent of contributing your rock to a public stoning. We should all develop a robust sense of what is and is not any of our business. Shame can be useful—and even necessary—but it is toxic unless a relationship exists between two people first. A Twitter mob is no more a basis for salutary shaming than an actual mob is for reasoned discussion.

It’s really, really important to remember that everyone thinks they’re the good guy. If you are in favor of same-sex marriage, remember that the people who are or were on the other side of the issue aren’t thinking to themselves, “Well, I’m a bad guy, and the bad guy does whatever they can to make sure no one’s happy, so….” Of course not. They think they are motivated by the side of light for whatever reason they have or tell themselves they have. No one joins the online mob because they think that helping destroy another person’s life is the best thing to do with their time. Again, of course not. They see themselves as the virtuous moralist contributing to the common good, holding Andrews, in her case, to account for her supposed sins.

So, ask yourself a key question when you condemn others, whether to yourself, privately, or to others, out loud, whether in a hypothetical vignette or, as they say these days, IRL.

Ask yourself this: are you sure you’re being the Good Guy?

Putting the Moral Back in Moral Dumbfounding

As a kid, just like every other kid, when my parents asked me why I did something bad, occasionally I would answer “just because.” And just like all other parents, my parents did not accept this as an explanation. And imagine if they did.

“Why did you chop down the cherry tree?”

“Just because.”

“Ok, well then, I guess it’s time to get a new cherry tree!”

Parents understand that it’s illogical to accept “just because” as a reason or explanation because, of course, it isn’t one. And, more concretely, it undermines deterrence. Next time I’m contemplating mischief, I know I can fall back on the (non-)excuse that it was “just because.”

Now, a peculiar thing about the (non-)reason “just because” is that we let ourselves get away with this sometimes.

Many people are familiar with Jon Haidt’s classic work on what he called “moral dumbfounding.” I won’t reproduce it yet again here. If you’re unfamiliar, you can read about it, but the summary is that if you ask people if (harmless, consensual, non-reproductive) incest is morally wrong they’ll generally say that it is, but they won’t be able to give you a principled reason for their view. Just because.

Now, the usual lesson people take from this finding is about how people decide what is right and what is wrong. Moral dumbfounding shows that in such cases, roughly, we haven’t worked through a careful, rational explanation of why we came to that view. It’s “just because.” (If you want to read more, Jon Haidt’s book on the topic is a good place to start.)

I’m not interested in that idea per se, that people use their intuitions to make moral judgments. I’m interested in the next step, the cherry tree part: if you morally condemn something but are unable to produce a principled reason for that judgment, then continuing to condemn that thing is itself unethical.

Maybe it’s obvious that saying “just because” is as bad, or worse, in the case of morality as it is in the cherry tree case. But let’s dig in a bit deeper just to be sure it’s really clear.

When you judge some X to be wrong, you are, in essence, trying to prevent people from doing X on pain of punishment. So, in the past, when people said – and they did – that same-sex sexual relationships were “just wrong,” such relationships were prohibited and punished. (Note that in some cases punishment for doing X is and was informal rather than formal – social opprobrium as opposed to jail time – but the argument is the same.) So when you lend your voice (or vote) to the chorus of people who say X is wrong, you are preventing people who want to X from doing so. This perspective shows how important it is to get moral judgments right. Because moral judgments are tools we all use to constrain what others may do, just as in the cherry tree case, “just because” is not a sufficient reason to justify a moral judgment.

Indeed, pushing further, my view is that judging something to be wrong without a justification beyond “just because” is itself unethical. Notice that allowing a non-reason to justify a moral view allows anything to be prevented. If you say that X is “just wrong,” sticking by your view without being able to provide a reason, then you are endorsing the idea that people can be prevented from Xing, if they wish to X, without any principled reason for preventing them. Down this road lies exactly the moral world we don’t want, in which whatever practice people feel like preventing – homosexuality, inter-racial dating, dancing – can be prevented. And anyone who supports moralizing (and so preventing) these practices is complicit in being unethical: preventing people from doing what they wish just because. So, when you are morally dumbfounded and you are content to rely on your intuition that something is wrong, you are saying that you yourself have no particular duty to have an actual reason to try to constrain what other people can do. You are allowing yourself to chop down the cherry tree – and, indeed, other people’s cherry trees – “just because.” That, in my view, is Bad.

Notice that other familiar moral rules, such as those surrounding theft, don’t run into this problem. The principles at work here are the notions of property rights and harm. As a general matter, we believe (in the West) that, foundationally, people have a right to their property, physical and intellectual. Therefore, taking property harms the person – they no longer have the property – and so it is morally wrong. This principle itself lies within a more general set of principles of freedom and harm. People ought to have the freedom to do what they wish with their property (up to certain limits) and that is why theft is a kind of harm – making someone worse off – and so should be prevented.

And on that note, it’s important to bear in mind that reasons to justify moral judgment should be scrutinized. The reason that reasons should be looked at carefully is that people might say that such and such is harmful – because as we’ve just seen, harm is seen as a legitimate justification – but in many cases there is no actual harm, and this reason is simply given as an excuse to justify the moral view.

So, to summarize. First, moral judgments constrain what other people can do. When societies agree that X is morally wrong, people can’t X anymore, or are punished if they do. Second, if we decide that X is wrong and don’t feel the need to provide a principled reason, then we can prevent anyone from doing pretty much anything. Historically, this has led to all kinds of constraints on people’s freedom, as the case of homosexuality shows. This second point is why it’s important to be very skeptical of catch-all reasons, such as appeals to “nature,” religious texts, or (supposed) harm. One can nearly always come up with some plausible reason that others might believe, or find hard to challenge. (In our culture, a religious justification for a moral view is hard to challenge because it is viewed as unethical to challenge others’ religious views. This point makes moral conversations fraught because religious writings can be used to justify a very large array of moral views; religious texts can be “interpreted” in many different ways.)

I’ll discuss some consequences of these arguments in a post I’ll put up shortly.

Note: This entry also appeared on my blog at Psychology Today.

The Company Holiday After-Party

Let’s check in again on Nancy and her colleague Diane, whom we last saw in a conversation about the ethics of company holiday parties.

               It’s a chilly Thursday evening in Philadelphia, and Nancy is exploring Love Park near City Hall. Spotting Diane standing alone with a cup of mulled wine, she asks whether she might join her.

               “Of course,” Diane answers.

               “Great! I have been thinking so much about our conversation at the company holiday party. And I think I’m a better person for it, to be honest. I’ve stopped inviting any colleagues to anything outside of work, and I restrained myself from asking anyone to contribute to my son’s school’s drive to raise money for the arts program. I recognize now that it’s coercive and unethical, and I have you to thank for it. It’s weird because my team doesn’t seem all that grateful, but… oh, wait. Is it ethical for us to be having this conversation? I mean, we’re not at work, so if you feel obligated…?”

               Diane smiles wanly. “You really did digest that lesson. I’m happy to see it. And, yes, as soon as you asked if you could join me, there was an ethical issue. But, look, with the echo of the holidays and good cheer all around us, let’s pretend for the moment we’re in a Magical Moral Bubble, just the two of us, and, just for the moment, ethics don’t apply. How’s that?”

               Nancy beams a warm smile, her whole body noticeably relaxing. “That’s wonderful. I’m in.” The two clink their glasses of mulled wine, and a quiet moment of camaraderie passes.

               “To be honest,” Nancy begins, “I’ve been thinking a lot about the implications of our previous discussion. As you know, I studied ethics at Princeton, and I just love these theoretical discussions, even if they don’t really mean anything.”

               Diane’s lips form a smile, if a slightly forced one.

               Not noticing, Nancy continues, “Here is the part I’m struggling with. Last time you persuaded me that it is unethical for me, as your superior, or the company, as your employer, to ask or invite you to do anything that is not within the scope of your contract because such invitations carry implicit threats if you don’t. Fair enough. But here’s the tricky part. It seems to me that I’m entitled to my personal choices in my personal life. Your argument, as we just discussed, forced us into this bubble. How can it be right that the fact that we work for the same company, Acme Widgets, means that I can’t invite you over to my table for a drink if I see you out? Surely there is an ethical argument to be made here about limits on how working for the same company can constrain my autonomy. You and I agree that the foundation of ethics is the lack of coercion, so isn’t the company coercing me to avoid your friendship?”

               “It might seem that way,” Diane replies, “but no. I agree that individual autonomy is an important, perhaps the important ethical principle. I think we both believe that, everything else equal, people, or companies, cannot restrict what other people may do. (Now, the question of government is another issue, so let’s avoid that for the moment.) The piece that is confusing you is that the invitation constrains my autonomy. Remember our discussion last time. The invitation carries an implicit threat, and the implicit threat diminishes my autonomy because I can’t refuse in the same way that a victim can’t refuse a mugger’s “invitation” to hand over their wallet. So it’s really just a question of your autonomy set against my autonomy. But the key point that’s easy to forget is that you gave part of yours away when you chose to work for Acme Widgets. When we joined the company, we both agreed, as an ethical matter, for the reasons we discussed last time, not to have any non-work social relationships with other people who work at the company.”

               Nancy squints her eyes into tiny lines. “Now that can’t be right. Surely if you want to be my friend and you invited me over, well, that doesn’t run into the problem we discussed last time, that an invitation from me is really a threat, in at least some sense, to you.”

               “It absolutely does. If we weren’t in this bubble, consider the situation I would be in right now. Suppose you were enjoying this debate and I no longer was. I might well feel as if I couldn’t leave because of the professional consequences because you’re my boss. And that’s a problem. An ethical problem.”

               Nancy swirls the mulled wine around in her glass, her eyes shifting up and to the side in thought. “So then even if you initiate a friendship outside of work and outside of the office, for me to accept would be unethical? Is that what you’re saying?”

               “Of course. To see it even more clearly, consider what would happen if our friendship grew, but I suddenly, say, developed a new friendship that took all my time. You might come to feel rejected and take it out on me professionally, even if you didn’t do so consciously. If there’s anything our friends in psychology have taught us, it’s the power of implicit biases.”

               Nancy nods slowly. “Ok, I see your point. So I guess any two people at different levels in a company can’t ethically have any social relationship, even if it’s mutually consensual, insofar as one is always going to have some formal power over the other.”

               “That seems like a bummer,” Diane offers.

               “Oh, it is,” Nancy answers brightly. “But it’s worse than that.”

               “Worse?”

               “Oh yes,” Nancy continues. “You know Sharon, three cubicles down from mine?”

               “You know that I do. I have been grooming her for my position so that, well, you know why…” Diane trails off.

               “I do indeed. And so does everyone else. And that’s the problem. Suppose that Sharon asked me to, say, go on a nice ski trip for the weekend. Would that be ethical?”

               “I don’t see why not. She’s not your boss, at least not yet, so…”

               “Not yet,” Nancy interrupts. “But we all know that she probably will be soon. She knows that, and I know that, so…”

               “So when she invites you skiing, there’s an implicit threat there too. It’s along the lines of, if you don’t come on the ski trip with me, then I will punish you professionally when I’m your boss.” Diane nods her head sagely. “So really even people at the same level can’t have a social relationship.”

               “Exactly. It’s totally unethical because it’s coercive. You can’t have any non-work interaction with co-workers. But even if you don’t agree with that argument, there’s another ethical problem that makes non-work relationships unethical.”

               “Oh?”

               “Yes. Let’s say, just for the sake of argument, that tomorrow, everyone decides that the discussions that you and I have had over the last two blog posts are correct, and now everyone has come to believe that friendships among coworkers outside work are totally unethical.”

               “Ok. I don’t see where this is going, but ok…”

               “Fine,” Nancy continues. “Now suppose that on the day after tomorrow I go to your office and tell you that Sharon invited me skiing last week.”

               Diane shrugs noncommittally. “I don’t see the problem. You can’t complain about the invitation because when she invited you, last week, we didn’t think that such relationships were a problem. Therefore Sharon’s actions were ethical at the time under the prior standard. That is, people don’t, as a general matter, condemn or punish others for doing something that was more or less fine at the time, or nearly so, but isn’t now. That makes no sense and would be, in fact, unethical. It would be like passing a law that makes something illegal retroactively and punishing someone for doing it.”

               Nancy nods approvingly. “Exactly. And doing that is such a big no-no the founders put it in the Constitution. Governments – federal or state – can’t pass so-called ex post facto laws.”

               “So we agree,” Diane says. “No one would condemn Sharon for her skiing invitation if it came before the new view of the ethics of such invitations. I mean, nobody of any ethical fiber would condemn someone for breaking a rule that wasn’t in effect when they broke it.”

               Nancy looks Diane straight in the eyes. A moment or two passes, and Diane’s brow furrows in confusion. “Diane,” Nancy asks, “can you think of a time when someone you know condemned others for breaking rules that weren’t in effect at the time?”

               Diane begins to shake her head, then stops. “Are you talking about Robert E. Lee?”

               Diane had been a fervent activist for removing any honoring of the general because of his relationship to slavery. “Of course,” Nancy says. “Or really anyone being reviled for breaking our current moral understanding even if they complied with the ethical rules as they understood them at the time. I’m not saying there isn’t some sort of ethical argument that they should be condemned, but I am saying that you don’t know what will be considered unethical in the future. The only way to avoid acting unethically is to restrict yourself to what is in the employment contract. Everything else must be out of bounds. The skiing case is unethical both because Sharon is likely to be my boss, or at least we believe that she might be, and because future norms might make coworker socializing unethical even if it isn’t right now. So, such invitations are completely unethical today.”

               Diane takes the last sip of her mulled wine. “So last time I persuaded you that any social interaction between colleagues of different levels was unethical. And today you’re trying to convince me, with some success, that social interactions between colleagues at the same level are unethical. What’s going to happen next time we get together? Are you going to try to convince me that socializing with anyone in the same business is unethical?”

               “No, not at all.”

               “Good.”

               “I’m going to show you that socializing with anyone at all is unethical.”

               A moment of silence passes between them. “Nancy, I don’t think I like you.”

               “I get that a lot.”

               “Happy holidays.”

               “You too.”

The Ethics of a Company Holiday Party

I once had an extremely introverted friend who disliked attending parties. Each year, when her company Christmas party rolled around, she felt compelled to attend, even though she hated it. Over the years, this dilemma gave rise to a number of conversations about morality in the workplace. As the holidays are upon us, I thought it an appropriate time to share some of the questions we considered. I’ll illustrate those conversations with the following fictitious one…

By a strange coincidence, Diane and her boss Nancy had both been philosophy majors in college, and found themselves discussing the ethics of the company holiday party at, well, the company holiday party.

They had found a quiet corner of the office, shared a brief toast, and got right into it.

“Here’s the thing,” Diane began. “Now, Nancy, as you know, I’m tremendously introverted. It’s not a coincidence that in addition to my philosophy major, I also took classes in accounting and now spend most of my work day in the company of spreadsheets rather than coworkers. We’ve talked about Susan Cain’s TED talk in which she explains that for people like me, being at loud parties is draining and, to be honest, pretty aversive. Forcing me to come to these things is, therefore, unethical.”

Nancy takes a sip of her beer, but appears unimpressed. “Diane, if I forced you to come to the company holiday party, then I might be inclined to agree with you. I might! But you have misrepresented the case. The holiday party is a benefit that we provide to employees – you should try the hummus before you go – and all of the announcements for the party have indicated that the party was optional. How can inviting you to take advantage of an optional benefit be considered using force?”

Diane’s brow furrows. “You’ve made two implicit claims in your question there, Nancy, and both claims are, with all due respect, false. First, the fact that the party is offered as a benefit does not logically entail that attending is a benefit to me. This confuses the intent of the party with the effect of the party. I think you will have to concede that the two are not identical, and if we wanted we could come up with a ton of examples in which a benefit is intended but in the end the effect is costly. Remember the time you got the office tickets to see The Happytime Murders?”

Nancy gives a barely perceptible shrug and lets her lids close for an instant. Diane acknowledges the concession with a tip of her wine glass, and continues. “Now, as to whether the party is optional or not, you have assumed, incorrectly, that the fact that the party is explicitly stated as optional entails that it is, as a matter of fact, optional. To take an exaggerated version, suppose someone sticks a gun to your head and says, hey, you can give me your wallet or not. It’s up to you. Is handing over the money optional?”

Diane takes Nancy’s silence as an invitation to continue. “Saying the party is optional neglects the fact that calling something optional presumes that there is genuine freedom of choice. But choices aren’t free if they are made under threat.”

“Who threatened you?” Nancy asks, concern in her voice.

“You did.”

“I did?”

“Yes, though your threat was implicit. It is well known that department heads take pride in how many of their employees attend the party; attendance is a signal of high morale. If I don’t come, I know that you might hold it against me, even if you don’t mean to. Not only that, but relationships are built at these events that can have professional repercussions. So no matter what, if I don’t come I lose ground to colleagues who use these events to build their relationships, which puts me at a disadvantage. In some sense, whether or not I suffer some negative consequences if I don’t come to the holiday party is irrelevant because I might. Therefore, because not attending has some chance of harm, my attendance is under threat. So, just as in the case of the mugger, you’re telling me it’s optional, and in some sense it is, but it really isn’t. A choice made under implicit threat is not a free choice, and presenting someone with that choice is therefore unethical.”

Nancy pauses a moment to reflect. “Well, hold on. I’m your boss. There’s nothing wrong with me threatening you. For example, imagine I said, ‘You must finish the report by Monday, or else you’re fired!’ You would have no ethical objection to that threat, right? It’s permissible for a boss to threaten an employee, and therefore the implicit threat in the holiday party isn’t unethical.”

Diane shakes her head. “Hold on. The fact that some threats are ethical doesn’t mean that all are. For example, you couldn’t threaten to hurt me if I don’t turn over my wallet to you.” Nancy again shrugs her agreement. “In fact, let’s look at why the Monday deadline is ethical. That threat is ethical because when I took the job here, I willingly agreed to certain kinds of threats, and promises for that matter. I was saying, sure, in exchange for such and such a salary and benefits, I’m going to allow my boss to threaten me with certain costs, such as being fired, if I don’t fulfill certain duties. Entering into a work contract is to grant you, ethically, the ability to impose conditionals – threats and promises – on me.”

“But, crucially, those conditionals are circumscribed by the contract. You certainly can’t mug me. That’s clear. But let’s look at both sides of the conditional: If you don’t X then I will Y. In terms of X, the only things I’ve granted you are Xs that fall within my professional duties. And the only Ys I’ve granted you are Ys that fall within your professional purview. That’s why you can’t say, ‘if you go to the Eagles game with me, I will promote you.’ That violates X. And you can’t say ‘if you don’t get your report in by Monday I will force you to wear the Lazy Hat for a week.’ That violates Y.”

Nancy squints her eyes, reflecting. Diane presses on.

“From this it should be clear that the only ethical conditionals are those that respect X and Y. Even then, there are limits. You can’t threaten me with being fired for being 10 seconds late to a meeting, for example. The point about X and Y is that they put an upper bound on explicit or implicit threats. Attending a holiday party is not within my professional duties, so it’s not an X, so it doesn’t pass the test.”

Nancy continues to mull, and takes another sip of her beer. “Well, hold on. Now you’ve blurred a key distinction. True, your contract doesn’t say, explicitly, that you must attend the holiday party. But the employment contract is incomplete. It doesn’t say everything you must do. If the norm here is that employees attend the party, well, that’s part of your duty because following these norms is implicitly part of the contract.” Nancy folds her arms in front of her, a look of pride growing on her face.

“That’s absurd,” Diane says, Nancy’s face returning to its prior state. “The argument that agreeing to a work contract entails that you are then agreeing to threats to comply with any norm is subject to a slippery slope problem. Suppose the norm is for the team to get drinks after work on Friday, and a Jewish person such as me joins the team. Surely it’s unethical to compel compliance with that norm. Indeed, there could be any number of reasons that someone might not want – or even might not be able – to comply with a workplace norm, including ones they don’t know about when they are hired.”

“Well, they should have known when they took the job.”

“All of the norms? These things change all the time, and many of them are implicit, and some only come up occasionally… such as the holiday party.”

“True. But there’s no solution to that problem.”

“Yes, there is. Put the required norms in the contract. And, if a norm can’t be put in for whatever reason, then that’s exactly the sort of thing the employee can’t be compelled to do. The ethics are clear if we focus on the fact that the employment agreement says all the things – and only the things – that an employee is agreeing to.”

Nancy puts her fingers to her mouth in thought. “All right, that all seems logical. But if we take these arguments seriously, that I can only threaten you with Ys if you don’t X, and invitations such as the one to the holiday party violate X, well, then, I also can’t ask you to dinner, to watch a game, or, I guess, even to lunch. That doesn’t seem right.”

Diane shrugs. “I agree. It doesn’t seem right. But that’s why it’s important to work through these things. It seems to me that our choices are, first, to accept that these common practices are unethical, second, to find somewhere the argument is flawed, or, third, to make those implicit norms we want to enforce explicit in contracts. Heck, you could make attending the holiday party mandatory. This third option is a bit cumbersome, but at least it would put the company on firmer ethical footing.”

Nancy nods quietly. “Diane, I never really thought about the holiday party this way, and I’m sorry. I know these things are aversive to you, and I respect that. If you go home now, I promise I will do my absolute best not to hold it against you.” Nancy places her fingers gently on Diane’s forearm. “I’ve really learned my lesson.”

Diane looks at Nancy’s hand on her arm, and says, “That’s assault.”

Harassment With Impunity

Last week the New York Times ran an interesting piece entitled “Motherhood in the Age of Fear,” written by Kim Brooks, who left her child in a car for five minutes to run an errand and, as a consequence, wound up with a warrant for her arrest. The piece reflects on the outsized attacks from both law enforcement and everyday citizens on parents who leave their children unattended – but safe – for short periods of time.

These sentiments are relatively new. In research on this topic that is interesting in its own right, Thomas et al. (2016) trace the shift to the roughly twenty years following the 1970s: Americans once accepted that children could be left alone in parks, on their way to school, and so on, but over that period something of a moral panic emerged, and Americans in the modern era forbid their – and others’ – children from being left unattended for even short periods of time. The authors suggest that this new norm was caused by media reports of the kidnapping of children, which in turn led people to fear for the unattended child.

Fear alone, however, doesn’t really explain the panic. After all, Thomas et al. say, “[t]he fact that many people irrationally fear air travel does not result in air travel being criminalized. Parents are not arrested for bringing their children with them on airplanes. In contrast, parents are arrested and prosecuted for allowing their children to wait in cars, play in parks, or walk through their neighborhoods without an adult.” The authors continue, quoting David Pimentel, who wrote: “In previous generations, parents who ‘let their kids run wild’ were viewed with some disdain by neighbors, perhaps, but subjected to no greater sanction than head wagging or disapproving gossip in the community. Today, such situations are far more likely to result in a call to Child Protective Services, with subsequent legal intervention.”

In short, as I have recently been discussing, the reaction to leaving children unattended has some of the feel of a moral panic: even small “infractions” – which might not even be against the law – are met with harsh censure and punishment.

Brooks goes on to make an interesting point. As she puts it in the subtitle of her article: “Women are being harassed and even arrested for making perfectly rational parenting decisions.” In the piece itself, she writes:

… it occurred to me that I had never used the word harassment to describe this situation. But why not? When a person intimidates, insults or demeans a woman on the street for the way she is dressed, or on social media for the way she speaks out, it’s harassment. But when a mother is intimidated, insulted or demeaned because of her parenting choices, we call it concern or, at worst, nosiness.

There is, I think, a deep point here about morality. What distinguishes the cases she mentions here, how a woman is dressed versus parenting choices? As she points out in the article, the parenting choices in question are neither illegal nor dangerous, in parallel with the clothing choices. The key difference is that – now, unlike in the 70s – these parenting choices have been moralized. Some subset of the population morally condemns these actions. It is not a coincidence that the art that accompanies the Times article is of a person surrounded by frowning expressions in one frame and wagging fingers in another.

Here is the brief way to put this subtle but important point: as a society, we believe that it’s ok to harass people if they are doing something we morally condemn even if their action is protected by the law and is either minimally harmless or completely so. It’s important to note that whether or not we harass others probably depends a great deal on whether we think others morally condemn what they’re doing as well. The more we think others take our own moral position, the more likely we are to act.

My recent posts illustrate this point. Certain people who exercise their right to free speech – and don’t hurt anyone – are harassed to the point of distraction, and the world applauds. It’s ok to harass someone who is perceived by observers to have done wrong, even if the law is on their side and no harm has been done. (Again, witch hunts illustrate this point.)

Note that the reverse is true. If the act in question is one that we see as morally acceptable, we object to harassment of the person engaged in the behavior. Consider breastfeeding in public places. While breastfeeding in public is legally protected in many places, in others it enjoys no such protection under public indecency laws. What we find is that people who consider public breastfeeding immoral condemn mothers in the same way that Brooks describes. But others rush to the mother’s aid – instead of piling on – because they don’t moralize breastfeeding in public. Indeed, my favorite coffeeshop has a sign indicating its support, heading off any attempts to harass.

It goes still further. It’s not acceptable to harass people for doing things we don’t moralize even if they are harmful. As indicated above, we don’t wag our fingers at parents taking their children on planes, even if we think flying is dangerous. Just driving around in a car with a child is dangerous, but most people would not stand for a bystander yelling at a parent for doing so. Perhaps more cogently, consider how we would react to the person chastising a parent for feeding their child the 24 grams of fat and 60 grams of carbs in a Happy Meal. As long as the parent is doing something harmful but not immoral, finger wagging is verboten.

And it won’t do to say that it’s the danger that’s the difference here. The point of the Thomas et al. work is that the causality goes the other way. They write that “when people make a negative moral judgment about a parent who leaves her child alone, their estimate of the danger facing that child is higher than for a situation that objectively poses equal risk to the child, but does not elicit the same moral disapproval.” The moral judgment is driving the danger estimate.

The lessons for us as citizens seem clear enough. When you support the condemnation of someone who has done something within their rights but that you don’t like – especially if no harm is involved – consider whether piling on, or supporting those who do, is the ethical choice. Piling on might be fun. It might even feel right. When reflecting, consider all the people historically who piled on when others engaged in behavior that was widely considered wrong at the time: homosexuality, miscegenation, and so forth. Those people probably felt that they were on the side of the angels as well.

How Does Being Observed Influence Moral Judgment?

Just a quick note on some interesting new work coming out in Evolution and Human Behavior. A paper in press by Lee et al. asks if people’s moral judgments differ depending on whether they are being observed. To do this, the authors assigned subjects to two different conditions. In one, subjects were told that their responses to the questions they saw were being “monitored and manually recorded by two experimenters due to some technical glitch,” while subjects in the control group received no such instructions.

Their primary interest was in whether observation would change judgments in moral dilemmas. They used fifty questions in all. Dilemmas were of various types, but a key comparison relates to dilemmas such as the Trolley Problem. As most readers at this point know, in the footbridge version of the Trolley Problem, the subject must decide whether it is permissible to push one person off a footbridge in order to save five people on the trolley tracks. Pushing the person is the utilitarian answer: it’s the one that leads to the greatest good (one dead versus five). Not pushing is the deontological answer: it’s the one that corresponds to a moral imperative (in this case, one about killing a person, even to save many).

The authors find that “social observation increased the proportion of deontological judgments in moral dilemmas.” That is, if you were the person on the footbridge, you would want others to be around so that the subject didn’t push you, but if you were on the trolley tracks, you would want the subject to be unobserved, in which case they would be more likely to push the one to save you and your four friends.

Why? Why should being observed cause someone to choose the option that leads to a worse outcome? You might think that being observed would cause people to be more likely to make the choice that was most beneficial to others. The authors speculate that the reason is that “deontological decisions in moral dilemmas evoke the perception of warmth-related positive traits such as trustworthiness and sociability,” or, related, not pushing signals “their propensities to avoid harming innocent others.”

These possibilities raise the question of why choosing the more overall harmful option signals these positive attributes. Is it really “sociable” to choose the option that leads to worse outcomes? Perhaps. Another possibility seems to be that people know that pushing the person off of the footbridge will be seen by observers as immoral and the sort of thing for which one could be punished. In most legal systems, after all, not pushing, even if it leads to harm, is not punishable, but pushing the person, even to save others, is. (I don’t know if “duty to help” laws require pushing. Anyone?) This fact, that pushing might lead to punishment, might tilt psychological judgments under observation in the direction of the option that avoids punishment. This would be consistent with some of my earlier work, which shows that punishment increases under conditions of observation.

As the authors indicate, these lab results could have real world implications. As they say, “many ethical conundrums in the real world are essentially social in that they require public disclosure of one’s moral stance.” If these results do hold in the real world, then being observed could make people more likely to make moral judgments that lead to worse overall outcomes. Given that so many moral judgments are observed, this fact might have widespread implications.

 

Moral Panic Part II – A Sense of Proportion

In my last post, I discussed one of two interesting features of moral panics, the tendency for people to pile on the alleged perpetrator instead of standing up for them even when, in retrospect at least, they did little or nothing wrong.

In this post, I discuss a second feature of moral panics, that people frequently favor draconian punishments for even mild offenses. Modern Americans recoil when they hear of hands cut off to punish theft. We similarly shake our heads about Singapore, where the death sentence is mandatory for, among other things, possession of 15 grams of heroin.

But the American penal system is also panicked about drugs. Three strikes laws have been used to condemn prior offenders to life sentences for absurdly tiny offenses, like stealing a pair of socks. (The court also imposed a fine of $2,500; the issue of working wages in prisons is a topic for another time.) A high school student who sends an explicit picture of themselves to someone who has asked for just such an explicit picture – consensual sexting – often faces felony charges as well as being required to register as a sex offender.

I like this rendering by Critcher (2017), who, talking about disproportion in the context of moral panic, puts it this way:

Fundamentally, “the concept of moral panic rests on disproportion” (Goode & Ben-Yehuda, 2009, p. 41, emphasis in original). It is evident where “public concern is in excess of what is appropriate if concern were directly proportional to objective harm” (Goode & Ben-Yehuda, 2009, p. 40). Statistics are exaggerated or fabricated. The existence of other equally or more harmful activities is denied.

In short, disproportion is a key, repeating element in moral panics.

In my last post, I referred to the case of Aziz Ansari, and quoted Caitlin Flanagan at The Atlantic, writing about the case, which I render again here:

… what she and the writer who told her story created was 3,000 words of revenge porn. The clinical detail in which the story is told is intended not to validate her account as much as it is to hurt and humiliate Ansari. Together, the two women may have destroyed Ansari’s career, which is now the punishment for every kind of male sexual misconduct, from the grotesque to the disappointing.

The key point from the perspective of moral panics is the latter end of the scale, “the disappointing.” Suppose we understand Ansari’s behavior – not, to be clear, that this is how I take it – to be within the boundaries of the law but perhaps outside the boundaries of gentlemanly conduct. Is the punishment merited? Should years of work put into a profession be erased because of one “disappointing” episode?

Maybe. After all, if we agree Ansari is free to be ungentlemanly, we must also agree that people are free to tweet whatever they want.

Still, Flanagan’s remark about the “clinical detail” strikes me as insightful. The moralization of scenarios such as the one played out in Ansari’s apartment opens up a space for venom, a space the mob can inhabit, throwing shade with near impunity. The victims of the moral mobs are dehumanized, and – as befits animals – no treatment is too severe, as indicated by the stolen socks case, above.

It’s important to note that the mob mentality can penetrate organizational structures. Worries about harassment in and around the workplace were an important part of the #metoo movement. Should Justine Sacco, whose story I discussed last time, have been fired from her job? What if the employees of her firm were screaming for her firing, but cooler heads deemed the tweet an obvious (if ill-considered and offensive) joke? The urge that people have to jump on the bandwagon and join the moral mob shapes decisions made by those who determine the fate of people such as Sacco. What ought they to have done? How should they think about the “right” punishment?

One might be tempted to reply that of course they should still have fired her. The firm is a private concern, and should protect itself. If the mob is coming, throw them their victim.

This is a tempting line to take, especially since nearly everyone, statistically, will be part of the mob, rather than its victim.

But to return to Flanagan’s quote, above, again, should the punishment for a disappointing date be the end of a career and the destruction of a life? Should the punishment for one vile public statement set against countless benign private acts be global humiliation? Given that moral norms change rapidly, how does anyone know that their particular shortcomings – and we all have them – won’t be moralized and – here’s the key – weaponized?

Moral Panic & The Joy of Piling On

In the face of a panic it is the job of those who know better to stand and say… wait… this is misplaced anxiety. — Malcolm Gladwell.

The quotation above is from Gladwell’s recent podcast entitled, “The Imaginary Crimes of Margit Hamosh.” It reminds me of the famous poem by Martin Niemöller, the one that begins, “First they came for the Socialists…” and ends with “Then they came for me—and there was no one left to speak for me.” Why, indeed, do people not speak up? The podcast recounts the story of Dr. Hamosh, a scientist at Georgetown accused of scientific misconduct, at a time—the early nineties—when such accusations were sprouting like weeds. She was put through untold hours of scrutiny and reputation-destroying questioning by the NIH’s Office of Research Integrity (ORI); her offense seems to have turned on using the word “presently” – as the English do – to mean “soon,” rather than how (most but not all) Americans do, to mean “now.” (She was ultimately exonerated; Gladwell wrote about the story in the Washington Post, for those interested.) Gladwell’s point is that the diligent ORI was, in the hunt for the supposed epidemic of scientific misconduct, destroying careers and reputations – and no one stood to say, “wait, this is misplaced anxiety.”

So the panic that Gladwell has in mind is not that of patrons at a movie theater on fire, but rather a moral panic, the worry that some moral transgression is happening everywhere with dire consequences, and must be stopped, damn the cost.

Moral panics come about with some regularity whenever a sufficiently large number of humans get together, which is to say pretty much all the time. Americans are probably most familiar with the moral panic surrounding witchcraft and the subsequent Salem witch trials in the late 17th century. Twenty people (and even two dogs) were executed when all was said and done, illustrating the awesome power of moral panics to destroy. The twenty people killed by the moral mob were, of course, innocent of witchcraft, to say nothing of the poor dogs.

The worry – really, the panic – that witches and witchcraft were everywhere was, in Gladwell’s phrase, “misplaced anxiety.” Why did no one stand and say this?

That is an interesting psychological question, and one that remains timely. For instance, Gladwell links the investigations of scientific fraud to the scare in Belgium in the late nineties that led to the recall of 2.5 million bottles of Coke… which turned out to be just fine. Moral panics can seemingly break out any time anywhere about anything.

On the psychology front, moral panics have a number of shared features, but I’ll focus  on just two, one in the remainder of this post, and one in the next.

Taking the first of these two features, as Gladwell’s quote points out, in the face of these massive miscarriages of justice and lives ruined, there is often a distinct lack of individuals who know better standing up. In fact, modern experiences with moral transgressions seem to paint a different, even the opposite, picture: an eagerness to pile on. The headline the Times ran on the story about the woman who, granted, had a moment of stupidity, captures it precisely: “How One Stupid Tweet Blew Up Justine Sacco’s Life.” The Twitter mob – the current incarnation of the normal, everyday mob with their torches and pitchforks – knows no mercy.

To drift into academic matters for a moment, this seems to fit uneasily with many theories of morality. Many modern theories of morality – though by no means all – focus on notions of harm and deterrence. Why do we morally condemn and punish? To prevent future harm. But that seems hard to square with Twitter mobbing. Surely after the first critical replies Justine would never, ever so tweet again. (And this sets aside the question of whether the “harm” here is covered by theories of morality.) Why do third parties delight in jumping on the moralistic bandwagon, expressing their disapproval, heaping punishment after punishment on the perpetrator?

I don’t propose to answer this vexing question here – though readers interested in the sort of answer I favor can consult my work with my former student Peter DeScioli. As a very informal matter, one sort of (proximate, unsatisfying) answer is simply that people enjoy piling on. Having seen a moral mob or two, I can say that my sense is that people take great joy in expressing the moral failing of the victim of the day. The word that always occurs to me is gleeful. The carrion feeders picking at Aziz Ansari’s corpse – see below – seemed to me to do so with glee. This, of course, pushes the question back: why is piling on so enjoyable?

A second sort of answer is, perhaps obviously, the cost of speaking up. In research with Alex Shaw and Peter DeScioli, we have found that in certain contexts even simply remaining neutral, let alone coming in on the “wrong” side, can be costly to one’s social relationships. Others make inferences about you based on the moral judgments you make. “Can you believe that scientist had an error in her 50,000-word grant proposal?! She’s a horrible person, right?” The correct answer – as long as one isn’t the sort that “knows better” and is willing to “stand up” – is always an emphatic “right!” We burnish our moral credentials by condemning the person everyone else is standing in line to condemn. That is, piling on confers a reputational benefit: one is signaling one’s moral virtue and, so, how good a group member and individual one is.

The full answer is no doubt more complex. But whatever the reasons, the larger point here is that piling on is a key feature of moral panics, and, really, the one that Gladwell is pointing to in the quotation. We should strive to understand why it happens, and, of course, as people, we should strive to be the ones standing up instead of the ones piling on.

Do we?

When I think of modern moral panics, the case of Aziz Ansari I mentioned above comes to mind. Now, don’t misunderstand me. I absolutely believe that sexual assault, harassment, and indeed any coercion should be punished. Regarding the now-famous account of a woman’s date with Aziz Ansari, opinions about his behavior seem to vary. Was it harassment or coercion? Or was it something less than that? Whatever it was, Caitlin Flanagan at The Atlantic characterizes the result this way:

… what she and the writer who told her story created was 3,000 words of revenge porn. The clinical detail in which the story is told is intended not to validate her account as much as it is to hurt and humiliate Ansari. Together, the two women may have destroyed Ansari’s career, which is now the punishment for every kind of male sexual misconduct, from the grotesque to the disappointing.

The “clinical detail” is interesting in its own right, and I’ll return to that next week. But the other part of the hurting and humiliating of Ansari isn’t the detail per se; it’s the decision to write about the evening publicly. Ansari’s career could probably have withstood the clinical details had they been rendered only to the woman’s circle of friends. The details help, but the real attack is in the decision to go public. Given the human love of piling on, making the incident public was the key piece in cementing viral disparagement of Ansari.

Gladwell is right that those who know better ought to stand up. The psychology that underlies human morality – especially the peculiar tendency for people to enjoy joining the moral mob – explains, however, why they generally do not. I’ll return to some of the consequences of this in my next post.

World Cup Soccer, Social Adjustment, and the Origins of Hooliganism. And Nelson from the Simpsons.

How many times have you heard someone explain that a child – or an adult – acted out in anger or violence because they were insecure, had low self-esteem, or were poorly adjusted? This sort of connection, from low self-esteem to aggression – and the reverse, a link between high self-esteem and achievement – is and has been a popular one, reflected in – and maybe propagated by – portrayals in popular media. To take but one example, the authoritative Simpsons Wiki confidently asserts regarding the school bully, Nelson Muntz, that “the most likely cause of Nelson’s poor behaviour is his low self-esteem…” A key problem with this view – that low self-esteem plays a causal role in violence and aggression – is that, as Boden (2017) recently put it in the similarly authoritative Wiley Handbook of Violence and Aggression, “there is no evidence to suggest that low self‐esteem plays a causal role in violence and aggression.”

So, with the World Cup in full swing, and Brazil still in the running, this seems like a good moment to discuss a forthcoming paper in my old journal, Evolution and Human Behavior, which reports some work that looks at this connection in the context of soccer (hereafter, football, in deference to the Cup) fans. A new paper by Martha Newson and colleagues investigates if hooliganism in football is, as has been suggested, due to “social maladjustment” or, instead, to something more “positive,” the degree to which people feel part of their particular group, or what they call “identity fusion.”

So, Newson et al. surveyed 439 (male) football fans, asking them questions about their fandom, whether they had been in football-related fights, willingness to fight and die for one’s team (!), identity fusion, social adjustment, and a number of other items. In terms of their Social Adjustment Scale (SAS), they find that “none of the SAS sub-scales correlated with our main variables of interest… Nor was there evidence for social maladjustment contributing to violence [or] a willingness to fight/die” for their team. In contrast, they find that “hooligan acts (both past violence reports and endorsements of future fighting/dying for one’s club) are most likely to occur among strongly fused fans.”

In short, it doesn’t look like, in this context at least, being socially maladjusted makes one prone to violence. Instead, it’s being a super big fan of your team. Now, the usual caveats must be kept in mind. The sample here isn’t completely random. The data are self-reported. And add in there the usual concern about correlation and causation. (Having said that, if it were true that social maladjustment caused violence, then the correlation should have been there. Correlation does not logically entail causation, but usually if there is causation, you should be able to detect a correlation.)
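To make that last parenthetical concrete, here is a minimal, purely illustrative simulation in Python. This is not the authors’ analysis; the effect size, the seed, and the variable names are all invented. The point is only that if maladjustment did causally contribute to violence, even modestly, a correlation should be detectable in a sample of roughly this size – so the absence of such a correlation is at least some evidence against the causal story.

```python
# Toy illustration (not Newson et al.'s analysis; all numbers are made up):
# if social maladjustment caused violence with even a modest effect,
# a sample of ~439 should reliably show the resulting correlation.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=0)

n = 439          # roughly the size of the Newson et al. sample
effect = 0.3     # hypothetical causal effect of maladjustment on violence

maladjustment = rng.normal(size=n)            # simulated SAS-like score
noise = rng.normal(size=n)
violence = effect * maladjustment + noise     # violence partly caused by maladjustment

r, p = pearsonr(maladjustment, violence)
print(f"r = {r:.2f}, p = {p:.4f}")  # a causal effect of this size yields a detectable correlation
```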

Are there broader lessons from this work? As indicated above, my view is that this work plugs into a larger debate about where antisocial behavior comes from. In contrast to the whimsical example of Nelson from the Simpsons, recent work undermines the view that bullying is driven by low self-esteem. Conversely, the putative benefits of high self-esteem continue to be suspect.

Note that while discussions of self-esteem have often focused on educational settings, the recent work by Baumeister and Vohs (linked above) should be taken seriously outside the classroom as well, particularly in the workplace. As they put it, referring to work by Orth et al.: “Self-esteem mainly affected subjective outcomes, such as relationship satisfaction and depression. The more objective the measure was (e.g., salary, occupational attainment), the less effect self-esteem had…. Despite their large sample, there was no effect whatsoever on occupational status. Thus, high self-esteem leads to being more satisfied with your job but not with getting a better job.”

Finally, results such as these have potentially important implications for anyone trying to improve one’s own – or others’ – behavior. While the idea that increasing self-esteem will produce improved outcomes – better educational attainment, a better job, less aggression – has historically been a popular one, the present state of knowledge should make one cautious, even skeptical of this idea.

Stepping back even further, as some have been suggesting for quite some time, it might be better to stop thinking of self-esteem as a cause and instead think of it as an effect. Self-esteem might be the feeling one gets when one is doing well – professionally, socially, etc. – rather than the feeling that gets one to do the things that will help one do well. If that’s true, then interventions in the classroom and in the workplace shouldn’t focus on making people feel better about themselves but – and this really shouldn’t be a surprise – on helping people accomplish the sorts of things that will lead to success and, as a consequence, feeling good.

(Note: This entry has been cross-posted on Psychology Today)

How Should Societies Allocate Their Stuff?

One of my favorite novels is The Phoenix Guards by Stephen Brust. Brust writes this novel from the perspective of one Paarfi of Roundwood, a scholar from the fictitious world Brust created. Paarfi begins with a little preface about how he came up with the idea for the historical novel, based on some reading he was doing of a manuscript by another (also, of course, fictitious) author. He writes:

One thing that caught our eye occurred in the sixty-third or sixty-fourth chapter, where mention was made of a certain Tiassa who “declined to discuss the events” leading up to the tragedy.

From this brief phrase, Paarfi/Brust produces a story of magic and adventure that stretches to over 350 pages and bears a singular resemblance to The Three Musketeers, albeit set in a world with sorcery to go with the swords.

The only reason I mention this is that the rest of this post is a meditation on a recent editorial by Bryan W. Van Norden about free speech. I’m not going to focus on the editorial per se; rather, I was struck by two sentences in the piece, and my remarks are, like The Phoenix Guards, a lengthy reaction to a short part of the whole. Van Norden writes:

Access to the general public, granted by institutions like television networks, newspapers, magazines, and university lectures, is a finite resource. Justice requires that, like any finite good, institutional access should be apportioned based on merit and on what benefits the community as a whole. (My italics.)

I think it’s worth contemplating this claim for two reasons. First, the question of who gets invited to speak on campus – and ultimately is allowed to speak on campus, which is not always the same thing – is an important and contentious issue that bears close scrutiny. Second, there is a much broader question about how finite goods – which is basically all goods (and services) – should be apportioned, whether because justice requires it or for any other reason.

Let’s take the second piece first.

If it’s right that “institutional access” should be apportioned – just like any other good – based on merit and social welfare, then we should be able to put any other good in that sentence and it should still make sense. (I use “social welfare” as a shorthand for “what benefits society as a whole.”) Here are a few examples.

Like any finite good…

…chocolate bars should be apportioned based on merit and social welfare.

…sexual partners should be apportioned based on merit and social welfare. (I add this only because of Robin Hanson’s recent discussion about this.)

…first class seats on airplanes should be apportioned based on merit and social welfare.

…medical care should be apportioned based on merit and social welfare.

…admission to colleges should be apportioned based on merit and social welfare.

…kidneys should be apportioned based on merit and social welfare.

To most Western readers, some of these claims probably sound a lot more sensible than others, with the ones toward the bottom sounding more reasonable than the ones toward the top.

Indeed, again for most Westerners, we have a fairly strong sense of how goods (and services; hereafter just “goods”) ought to be apportioned, and far and away the basis is neither merit nor social welfare, but rather prices. Who gets the chocolate bars? Whoever is willing and able to pay for chocolate bars.

The overwhelming majority of goods are indeed allocated this way, and historically arguments have been required to justify deviating from this allocation system. (Karl Marx produced such an argument…) The current medical care system in the U.S., and the debates surrounding it, are an obvious example. Everyone agrees that medical care is finite; people disagree (strongly) about the right way to allocate it. But examples such as the medical care system illustrate the broader rule: by and large the capitalist West has decided that markets and prices will determine allocations. In cases in which prices aren’t used, the decision has to be made another way. For example, at water fountains, access is decided on a first come, first served basis. For medical care, it is the baroque system of providers, insurers, and the state, all serving up a stew of allocations all but impenetrable to us mortals.

So why does Dr. Van Norden assert that institutional access ought to be apportioned in some other way?

Well, first, I should confess I don’t really know. But second I should lay my cards on the table about how I generally approach the question of how people come to think about how scarce resources ought to be apportioned. A number of years ago, some colleagues and I conducted a study in which a scarce resource – money, in this case – was to be divided between two participants in an online experiment. The two participants were told the rules of the interaction – one person would have to work a bit harder than the other – and then asked how the money that was allocated by the experimenter ought to be divided. Before players knew whether they would have the easier or harder task, they more or less agreed on how to allocate the money. However, after they learned which role they would have, the person who worked harder came to believe that the allocation ought to be based on effort, rather than simply split evenly between the two participants. The player who worked less came to believe an even split was a more sensible idea.

In short, the answer to the question about how scarce resources ought to be allocated depended exquisitely on what allocation regime worked to the best interests of the person making the judgment.

Of course self-interest is not the only determinant of people’s views on allocation regimes. The worlds of psychology and economics are never so simple. But as the expression goes, the race is not always to the swift, but… that’s the way to bet. (Attributions vary.)

And, indeed, in some cases, what matters is not individual differences, but the good or service to be allocated. To return to the example I drew on in my last post, kidney allocation seems to most people to be best done based on factors such as urgency of need and place in line. But we would recoil at the idea that sexual partners should be allocated that way, or indeed by prices. And these views seem to be relatively broadly held.

So the moral of the story to this point is, first, that it doesn’t seem right that people broadly think merit and social welfare should dictate allocation of goods. Second, people in fact differ on how they think we should divvy things up, and at least sometimes they do so in a way that tracks their own interests. Third, intuitions depend on what the good is, exactly.

Which brings us back to the question of institutional access. How should a university allocate its finite speaking slots? The answer, to me, depends on what you think the function of those slots is. If they exist solely to serve the financial health of the institution, then those making invitations ought to invite the speakers who will maximize that health. This might mean entertaining, famous people, who seem to contribute to that end. I recall that Penn had Lin-Manuel Miranda speak at commencement in 2016. I’m as devoted a fan of Hamilton as anyone, but I’m not sure he guided or inspired the Class of 2016.

A different goal of university invitations might be to advance the institution’s educational mission, which would presumably contribute to its financial goals as well. In that case, entertaining, famous people might not be as desirable as those who contribute to education and learning.

Might those who invite speakers take merit and social welfare into account? Sure. I’ll have more to say about that down the line, but it doesn’t seem to me that those criteria ought to count as first principles. In the end, it could be that there are no general principles about how to allocate scarce resources – or, at least, that the most general answer available is the one we tend to see across the social sciences: it depends.

Social science for the pleeps