Principles of Culture Engineering, Part 1

This short series of posts will consist entirely of me saying really obvious things.

Not clever-obvious things; not things that will make you say “I can’t believe I never thought of that!” or “you’ve articulated something I’ve been groping towards for years!”  Obvious-obvious things.  The only reaction I expect is an eyeroll and a “No shit, Sherlock.”

So why bother?  Because when I process my own thoughts on utopia-building — or even, really, when I engage with any sort of broader discourse about culture and society — I’ve found it really helpful to have these obvious-obvious truths cached, labeled, and ready for instant application.  They’re easy things to forget, when it comes to tackling object-level issues, even if you understand them perfectly well as a general matter.  And they serve as safeguards against common conceptual failure modes.

At this point, whenever I’m seriously pondering anything pertaining to culture engineering, I try to run my thoughts past these principles and see whether doing so reveals any gross stupidities.  Often it does.  This list is meant to be like a hand-washing station at a medical clinic — reliably washing your hands won’t make you a brilliant doctor or even an effective one, and it’s a simplistic and tedious thing, but ignoring it is nonetheless probably a bad plan.

*****

The First Principle of Culture Engineering:

People are different from each other.

(See what I mean about “obvious-obvious”?)

There is a staggering amount of diversity within our species: psychological diversity, moral diversity, intellectual diversity, physical diversity.  If you subject a whole bunch of people to the same stimulus, odds are good that they won’t all respond in exactly the same way.  If you change the world on a large scale, such that it affects many humans, the changes will be better for some of those humans than for others.

This is not to say that there are no human universals.  But it’s a lot harder to find something that’s true of all people — or even “virtually all people” — than to find something that’s true of a great many people.

…in fact, due to the sheer size of the numbers involved, it’s almost trivially easy to demonstrate that something is true of a great many people.  Pretty much whatever that “something” is, so long as it’s within the range of remotely-plausible human outcomes.  In a world of seven billion people, if you’re looking at a trait so rare that it’s literally one-in-a-million, you can find seven thousand examples.  If you’re instead looking at, say, a one-in-a-thousand trait — something that’s still rare enough to be present in only a tenth of a percent of humanity, something that could be justly written off as a rounding error in terms of arranging a utopia for all mankind — you can find seven million examples.  That is a lot of individuals!  And since people with similar traits tend to cluster together in all sorts of ways, you can end up with large thriving communities whose members are all very unusual, often so much so that they find it hard to believe and harder to remember.
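
Just to make that arithmetic concrete, here is the back-of-the-envelope version as a few lines of Python, using the same population figure and trait frequencies as above:

```python
population = 7_000_000_000  # roughly the world population, as in the text above

for rate in (1e-6, 1e-3):   # one-in-a-million, one-in-a-thousand
    carriers = int(population * rate)
    print(f"trait frequency {rate:g}: ~{carriers:,} people")

# trait frequency 1e-06: ~7,000 people
# trait frequency 0.001: ~7,000,000 people
```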

Ignoring this leads to the thing that Alyssa Vance and Scott Alexander call the “Chinese Robber Fallacy.”  If you’re trying to talk about humanity in a broad or collective way, and you want to be guiding yourself towards the truth, you have to use a wide-angle lens and think in terms of proportions.  You can list examples of a phenomenon from sunup straight until sundown, and it won’t mean a thing, because even a rounding-error-sized tenth-of-a-percent kind of phenomenon will generate seven million people’s worth of examples for you to choose from.

Anyway.

This principle kicks in all the time, in all sorts of situations, but (at least for me) it’s primarily useful if you want to avoid unwarranted generalizing-from-self and generalizing-from-local-knowledge.  Something can be true of you and still be mostly false.  Something can be true of you, and everyone you know, and everyone they know, and still be mostly false.  Reasoning inductively, from experience, is the natural human way of thinking about the world of people…but it is a shit-tastic methodology for understanding a large diverse population.  And that goes double if you live in a heavily-self-selected little bubble, which you almost certainly do, dear reader.
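
If it helps to see that failure mode in miniature: below is a toy simulation of bubble-based induction.  The numbers (a trait at 5% in the population at large, 95% in your social circle) are pure inventions for illustration; the point is only what happens as the sample grows.

```python
import random

random.seed(0)

POPULATION_RATE = 0.05  # invented: trait frequency in humanity at large
BUBBLE_RATE = 0.95      # invented: trait frequency among people you actually know

def observed_share(rate, n):
    """Fraction of trait-carriers seen in n independent acquaintances."""
    return sum(random.random() < rate for _ in range(n)) / n

for n in (20, 200, 2000):  # you, your friends, friends-of-friends
    print(f"n={n:>4}: bubble says {observed_share(BUBBLE_RATE, n):.0%}, "
          f"population truth is {POPULATION_RATE:.0%}")
```

Growing n makes the bubble estimate more precise and not one bit more accurate, which is the whole problem.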

“People naturally seek out…”  “Women always have to deal with…”  “Math nerds tend to be very good at…”  “Every father worries about…”  No.  Stop.  Anything of that form is an extraordinary claim, requiring extraordinary proof.  You don’t know people, or women, or math nerds, or fathers.  You know a tiny and unrepresentative sample of those groups, plus an almost-as-tiny and probably-even-less-representative sample of internet folk claiming to represent those groups.

If you want to talk about changing the world, you have to contemplate how those changes will affect all people, not just the sort of people who spring most readily to mind.  Which is super hard, no matter who you are.  It often involves thinking with statistics; it often involves resigning yourself to working in ignorance, since not-cripplingly-flawed statistical knowledge is often unavailable; it almost always involves putting in the effort to understand the frighteningly alien communities that make up most of civilization, and to take them on their own terms, at least for the purpose of evaluating how they would deal with whatever-it-is that you have in mind.

*****

The Second Principle of Culture Engineering:

People are bad at creating desirable outcomes for themselves.

This is especially true if you have any kind of idiosyncratic definition of “desirable outcome,” since you’re then looking at a human population that isn’t particularly trying to move in the right direction.  (Humans don’t generally put much effort into living a life that I, Balioc, would find beautiful and worthy!  News at eleven!)  But even if we put on our preference-utilitarian hats, and look only at people’s own self-supplied concepts of “desirability,” it becomes painfully clear that they just do a really terrible job of optimizing their outcomes and fulfilling their values.

The rationalists are probably the people who understand this most fully, because they use a vocabulary that’s really well-suited to processing this particular idea.  “Rationality is systematized winning,” as the sages say, and most people are not especially rational.  Humans are loaded up with cognitive biases and mental errors that lead them astray from the Way of rational thought.  So how could we be surprised that, most of the time, they don’t win?  That the great collective social engine of human utility-seeking generates an awful lot of waste heat?

And if you’re not comfortable thinking in those terms…then just spend a moment in meditation on all the sorrow and suffering in the world.  Not the distant millions-of-people-starving-in-the-Third-World kind of sorrow and suffering, but the kind that you see around you every day, the kind you really understand in your bones.  All the people desperately wishing for fulfillment that they don’t know how to find.  All the people hungrily dreaming of different lives, losing themselves in happy fantasies.  All the people trapped in bad jobs or bad relationships or bad families.  All the people locked into perpetual, petty, grueling wars of the spirit.  And ask yourself: given the tools they have, are all those people really working to solve their problems in the very most effective possible way?  Are they all thriving as fully as they possibly could?

Of course they’re not.  We fuck up our lives all the time, and get trapped in ruts of self-destruction or stagnation; it’s what we do best.  Again, this is obvious-obvious, I know.

The Second Principle serves as a reminder of why we need culture engineering at all, why utopia-building is so damn hard and also so damn important.  It’s a safeguard against the kind of magical thinking that says “just give people the resources they need, and they’ll take care of themselves!”  They won’t.  They will wreck themselves, often badly, without the world-at-large providing an awful lot of guidance on how to live the good life.

(I want to be absolutely clear: this is not an indictment of some particular culture, or group, or sub-slice of the population.  This is an indictment of everyone.  Rich folk and poor folk, menfolk and womenfolk, smart folk and stupid folk, black folk and white folk and yellow folk and red folk, folk from every flavor of subculture that I’ve yet encountered — I have yet to see a single group whose members don’t reliably screw themselves out of better outcomes than they could get.)

This is the nutshell-sized version of the conclusion to my Un-Topia series.  (An actual proper conclusion is still forthcoming, I promise.)  Straight-up classical liberalism is not a sufficient solution to the world’s problems, because that would require that we actually use our liberty to make things better for ourselves, which much of the time we clearly don’t.

From a social-policy perspective, this is the counterweight to the First Principle.  Trying to push people in specific directions, for their own good, will probably involve treating them like a homogeneous mass — which they’re not — and hurting them.  Not trying to push people in specific directions, for their own good, will definitely involve leaving them to the mercy of their own outcome-optimizing capabilities — which are awful — and hurting them.  Navigating between that Scylla-and-Charybdis pair is one of the Hard Problems of utopia.

*****

Next post: the remaining Principles.


4 thoughts on “Principles of Culture Engineering, Part 1”

  1. Idomeneus says:

    These are broadly agreeable, but at least the first one has serious limits. You are making a utopia of humans after all, so there must be something in common to even make them the source of utopia (or else why not a utopia for humans… and animals, and computers, and rocks and ghosts.) [It’s possible you are exempting certain biological humans from this utopia for extreme reasons, like being brain dead. That makes the question even clearer then – what commonality do the utopia-targets share?] So, what is the value everyone can possess (even if not everyone desires it), that you are trying to maximize?


    1. I think that’s not a helpful way of processing the “utopia” concept.

      If living functional humans have anything in common, value-wise, it’s probably something like “they all really enjoy wireheading once exposed to it.” Most of us can agree that this is not the kind of value on which we want to build our concept of utopia. But when you move to values more complicated than that, you get large-scale disagreement no matter which ones you choose.

      (And of course people’s values are mutable anyway, as per the Third Principle, so in some extremely theoretical sense it doesn’t really matter what values they happen to have *now* — so long as our target values are things they *can* possess, part of the utopia-building process might be figuring out how to get them there in significant numbers.)

      Ultimately, I think, there’s a kind of ethical-existentialist bullet that has to be bitten here. As a pro-utopian, I’m trying to create a world founded on my values, because they’re my values. That’s what it means to have values at all. Other people have different [terminal] values; they are my enemies. Or perhaps my uneasy allies, depending on how much difference we’re talking about. I think that many people would be way more sympathetic to my values than they currently realize, if only I had the chance to talk to them, which is why I’m doing this kind of thing at all. I think that some aspects of my values are more widely shared than others, or more able to win converts, and because of that fact I’m inclined to prioritize them more than I otherwise would. But I’m not going to pretend that trying to satisfy everyone’s moral preferences would magically result in the things I particularly want — or that you can satisfy everyone’s preferences, period — or even that you can change people’s preferences enough to be able to satisfy them all.


      1. Idomeneus says:

        Sure, I think this got side-tracked by me mentioning maximization. I know you want to maximize things that others might not care about. But there has to be something about humans that makes them part of this utopia, rather than rocks and animals as well. Is it because humans can pursue truth, or can create art? If so, that would seem a universal trait that’s worth taking seriously.

