Evolutionary psychology is based on a recognition that past selection pressures have left their mark in substantial ways on modern human minds. It stems, in part, from two observations, both of which are at once obvious and profound. First, as with any current species, our genetic material was inherited disproportionately from those members of past populations who were more successful at producing genetic descendants. And second, genes have major roles in building and maintaining human limbs, livers, spleens, bladders, blood, and bones—and they also have major roles when it comes to our brains. Or, to use a recurring phrase, evolution doesn’t stop at the neck.
But how does evolution affect the mind? It’s clear that human minds are not just collections of instincts. It’s also clear that human minds aren’t generalized fitness-calculating supercomputers with a single embedded goal of maximizing future genetic representation.
It’s complicated. The detailed workings of human brains are unfathomably complex. For most of us mortals, it’s about glimpsing a range of insights and then relying on metaphors that hopefully get us in the ballpark.
Artificial intelligence
Most computer programs do what they do with wholly pre-programmed code. There are sets of elaborate instructions that fully define what the program does in various situations. If A, then do B; if C, then do D; and so on. There’s an awful lot that such programs can do. But over the years it became painfully apparent that there’s also an awful lot that such programs are awful at—managing something that looks vaguely like animal locomotion over uneven terrain, for example, or understanding natural language, or producing output that appears reasonably intelligent in freestyle interactions.
These days, the most sophisticated programs have aspects of machine learning—they’re programs that are programmed to figure stuff out, rather than programs that are limited to responses that have been built in. And some of the most intriguing ones use various forms of artificial neural networks, structures based (very roughly) on brains like ours. These systems have ways of taking inputs and processing them into outputs, ways of comparing their outputs against desired and undesired outcomes, rules to make modifications to their processing operations based on whether their current outputs are close to or far away from desired targets, and lots of time to wade through various kinds of inputs, making processing adjustments as they go. They’re not handed a solution to their problem; they figure out a solution to their problem. Often these systems work better when they’re given hints, where, instead of searching the whole of an abstract processing terrain, they’re given biases to go look over there, to try certain kinds of answers first, to pay closer attention to certain kinds of inputs, and so on.
We could call this kind of structured learning a goals-and-hints approach. There’s a basic architecture that is capable of self-modifying based on feedback, and it’s told what its goals are, given hints about how best to proceed, and interacts with its relevant environment over time in ways that lead to improved performance based on its ongoing modifications. Along the way, it develops its own sub-goals and sub-hints, moves that help it reach its higher-level goal in the specific context of its local environment.
Goals-and-hints processing has some very interesting features. For example, when two instances of the same program (with the same goals and hints) are given substantially different inputs, they’re likely to reach different processing solutions—each is making it up as it goes, and different environments might imply different ways of maximizing the same goal. Even when different instances of the same program have very similar inputs and reach very similar processing solutions, it’s important to keep in mind that each made up its own solution.
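For readers who like things concrete, the idea can be sketched in code. This is a toy illustration of my own, not anything from the machine-learning literature proper: a learner is handed a goal (a scoring function that says how far its outputs are from desired outcomes), a hint (a starting point that biases where it searches first), and an environment, and it makes small random modifications to its processing rule, keeping whichever changes move it closer to the goal. The function names and the linear processing rule are all invented for the sketch.

```python
import random

def goals_and_hints_learner(environment, goal, hint, steps=2000, seed=0):
    """Toy goals-and-hints learner (illustrative sketch only).

    environment: list of (input, target) pairs the learner experiences
    goal: function scoring output against a desired outcome (lower is better)
    hint: starting parameters biasing where the learner searches first
    """
    rng = random.Random(seed)
    params = list(hint)  # start where the hint points

    def predict(x, p):
        # The learner's processing rule: a simple linear mapping
        return p[0] * x + p[1]

    def score(p):
        # Compare current outputs against desired outcomes
        return sum(goal(predict(x, p), t) for x, t in environment)

    best = score(params)
    for _ in range(steps):
        # Propose a small random modification to the processing rule
        candidate = [w + rng.gauss(0, 0.1) for w in params]
        s = score(candidate)
        if s < best:  # keep modifications that move closer to the goal
            params, best = candidate, s
    return params

# Same program, same goal, same hint — two different environments:
squared_error = lambda out, target: (out - target) ** 2
hint = [0.0, 0.0]

env_a = [(x, 2 * x + 1) for x in range(-5, 6)]   # a world where y = 2x + 1
env_b = [(x, -3 * x + 4) for x in range(-5, 6)]  # a world where y = -3x + 4

solution_a = goals_and_hints_learner(env_a, squared_error, hint)
solution_b = goals_and_hints_learner(env_b, squared_error, hint)
```

Run in the two environments, the identical program settles on two quite different processing solutions, each worked out through its own trial-and-error history—which is the point of the paragraph above.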
In humans, we see lots of goals-and-hints processing. Learning how to drive a car, for example. There are various implicit and explicit goals—don’t drive off the road, don’t run into other cars or people, follow traffic laws, and so on—and various hints from driving instructors. But mastery is ultimately about getting behind the wheel and having one’s brain make ongoing subtle adjustments to figure out over time how to perform competently.
Humans also take goals-and-hints processing to another level. We often engage in virtual interactions with our environments, imagining how things might turn out for us under various hypothetical circumstances. We observe how others succeed or fail in achieving goals we share. We pay special attention to stories, both real and fictional, that contain possible lessons for our own lives. We use these lessons to modify our own strategic moves.
The point here is that there are various ways for evolution to leave its mark on human minds. There could be basic developmental regularities that lead, most of the time, to certain common behavioral tendencies. There could be various complex sets of if-then rules that normally get built into our cognitive architecture. There could be combinations of developmental regularities and decision rules, where various developmental contingencies tend to lead to predictably varied sets of complex decision rules.
But we’ve also got huge, flexible cortexes packed with massive neural networks just waiting to self-modify in whatever environment they happen to find themselves, guided by some set of specified goals-and-hints. Evolution can select for various social goals-and-hints (mostly housed in the non-cortical systems that handle basic emotional and motivational evaluations), letting each individual’s cortex work out its own detailed strategies in the context of that individual’s local situation. Some of these goals could be more specific and some could be more general.
Human brains are very complex amalgams of lots of different kinds of mechanisms—developmental contingencies, decision rules, goals-and-hints processing, and other variants, all inter-woven in complex ways. Further, brains are highly bureaucratic—they contain multiple departments with different tasks and competing recommendations, often operating without a clear organizational hierarchy. The various departments are themselves driven by complex amalgams of mechanisms. There’s a lot going on in there.
Modern minds with Stone Age goals-and-hints
I’ve done evolutionary work mostly on topics not to be discussed in polite company—politics, religion, and sex. Other than in my dissertation, though, I haven’t been very explicit about how I think these studies fit into the broader context of evolutionary psychology. Here it is: When it comes to complex social patterns, I’ve never thought that evolution’s impact on the human mind is only or even primarily in terms of regularly developing Pleistocene if-then decision rules (though I think we probably have lots of mechanisms that roughly fit this description). In addition to decision rules, I think about human behavior in terms of goals-and-hints processing, where we have widely shared goals and hints, but each individual works out his or her own strategies for achieving these goals given the specific features of oneself and one’s situation. When I see something like Kenrick’s pyramid of fundamental motives, for example, I take it not just as a guide to identifying very particular evolved goals, but also as a prompt to think about how some goals might be relatively broad—the sorts of goals that don’t just drive specific behaviors, but drive goals-and-hints processes that individuals use to invent and re-invent behavioral strategies based on local details.
And so, I’ve looked at people’s contrasting views on abortion and marijuana legalization, the recent rise of liberal-conservative ideology, and modern politics generally. I’ve looked at modern church attendance (explicitly saying that I don’t think the current individual-difference patterns have all that much to do with ancient individual-difference patterns). I’ve looked at the non-reproductive sex of college kids, at speed dating, at how college educations and cash incomes affect fertility.
I see these as evolutionary studies not because they are signs of Stone Age minds, but because modern minds are still organized around deep motivations regarding domains such as resources, protection, affiliation, social status, mating, and parenting. Humans confront these old social problems through complex mixtures of more specific and more general motivating mechanisms. Using goals-and-hints processing, individuals might figure out, for example, how the legality of abortion or marijuana (even if these are historically novel phenomena) relates to their self-determined context-dependent strategies in achieving old mating goals. (By “self-determined” here, I certainly don’t mean that social environments are irrelevant—far from it. An absolutely central part of humans’ goals-and-hints operations involves paying close attention to what other relevant people are doing.) They might figure out how the competition between meritocratic and discriminatory policies affects their self-determined context-dependent strategies in achieving old goals regarding social status, even if the modern concept of meritocracy didn’t arise until the 20th century. They might take into account the realistic usefulness of modern higher education, birth control, and religious participation in developing their self-determined context-dependent strategies to achieve old goals regarding mating and fertility.
This doesn’t mean, though, that any of these strategies optimize a hypothetical highest-level evolutionary goal. Even with powerfully flexible goals-and-hints processing, human evolutionary history might have set insufficient parameters—the wrong goals, or the wrong hints, or the wrong basic learning architecture—to regularly lead to optimal solutions for genetic propagation within a given environment. There are, after all, relatively clear examples of modern technologies that seem to suck lots of folks into plainly maladaptive behaviors by imitating evolutionarily valuable stimuli—OxyContin, heroin, potato chips, soft drinks, online computer games, and so on. Evolved psychological mechanisms in their more specific and more general forms might or might not be roughly adaptive in a given environment.
This post is part of my ongoing series on modern low fertility. I took this detour into deep thoughts because I wanted to give a sense of why I don’t think modern fertility patterns are (just) about how a detailed batch of ancient decision rules got tripped up by modern environments. I think that humans often deeply integrate modern (even novel) environmental conditions into their behavioral strategies. While there are old decision rules and old goals-and-hints driving the processes, individuals develop their own behavioral strategies over their lifetimes, strategies developed specifically in the context of their local (even novel) conditions. This may or may not lead on average to genetically optimal solutions, but it often means that the introduction of something like college educations or the pill or modern religion isn’t going to fundamentally alter the nature of what humans are doing—these new inventions might be tools employed or avoided in the service of ancient goals. Or, again, they might not. As always, these are empirical questions.