What Are We Fighting For?

By “we” I mean “I.” And by “fighting” I mean “advocating, in some extremely abstract sense.”

This is going to be a reasonably boring and technical post.  I’m not arguing for, or trying to demonstrate, anything.  I’m just providing a list of my terminal/root-level moral values.  I figure that, as I talk about possibilities for a better world, I’m going to end up wanting to refer back to this a lot — basically anytime someone reacts to one of my propositions with a horrified “how could you possibly call that better?”

In particular, I’m not going to be defending any of these values.  At all.  Root-level values can’t be defended, because they don’t rest on any underlying logic; they are merely asserted.  I am basically putting forth my own multi-part definition of the word “good,” and to the extent you don’t like it, you can call yourself my moral enemy and have done.

(Ultimately there are no grounds on which we could debate and try to convince one another, because as far as empirical reality is concerned, all conceptions of “good” are equally arbitrary.  There are no good particles that can be measured by a good-ometer — goodness is something that exists within our conceptions of it.  As the man said, ought cannot be derived from is.  There are some modern philosophers who think that ought can somehow be intuited from is, using some kind of special human morality-detecting sense; this probably isn’t the place to delve into that, so let’s just say that I find these particular claims incredibly unconvincing, and move on for now.)


My “moral operating system” is consequentialist. I believe that value (which is an as-yet-undefined concept) exists within reality as a whole, not within isolated actions or decisions, such that any effects of a choice are morally valent.  Indeed, I believe that it is proper to speak of possible world-states as being more or less morally valuable, and that consequentialists often cede too much to deontic thinking by speaking as though morality inhered in our choices and not in the potential worlds generated by those choices.

I acknowledge several different non-fungible root-level values, each of which (in my schema) qualifies as “good” in its own right, without being reducible to any of the others or being measurable in terms of any of the others.  This, combined with the general difficulty of quantifying value, does have the effect of making it basically impossible for me to calculate in any rigorous way which of two possible world-states is better.  (Is X amount of Value A more or less valuable than Y amount of Value B?  How much sonata-derived glory adds up to one average-happiness human life?)  I take comfort in the fact that even single-value consequentialists, like hedonic utilitarians, basically can’t do any calculation either — hedon-quantification and probability-estimation are both much too fuzzy.  I have to make my moral decisions by ear, but in that sense I’m doing no worse than anyone else.


My root-level values are listed below, in no particular order:

Sapient happiness.  Positive and pleasant feelings, as experienced by reflective consciousness systems that can perceive their positivity and pleasant-ness rather than simply reacting to them.  This value category incorporates a wide range of specific mental phenomena, which might get further sorted into subcategories like “sensory pleasure” and “emotional satisfaction” and “intellectual epiphany” and so on; to the extent that such feelings can be quantified at all, these subcategories may be assigned different amounts of moral worth per unit of quantity.  Humans definitely qualify as “sapient” in almost all cases, at least after the very earliest stages of infancy.  There are a few animals that also seem likely to qualify, enough so that I put them in a “safest to take their happiness seriously” bucket — apes, elephants, whales, maybe corvids and octopi.  Other animals and computers and very-early-in-development humans are not capable of experiencing the relevant phenomena, unless my empirical understanding of the universe is very wrong.  I’ll deal with unearthly entities when I have more information on them.

Preference fulfillment.  Sapient entities getting what they want from the world.  This is different from happiness; what we want is very often not the same as what will make us happy, and indeed those things are often in conflict.  Morality is hard, yo.  But I have enough intrinsic respect for motivation, in the abstract, that I find value in goals being achieved.

Liberty/power.  Sapient entities being able to pursue their motivations without restraint.  This is closely related to preference fulfillment, and I’m not totally sure that I should be listing it separately, but I suspect that the double-counting is actually correct; it is good that Mr. X should get what he wants, and in addition it is good that no external forces in the world should prevent him from doing so.  It’s worth noting that, unlike many liberty theorists, I see no meaningful distinction between having the “freedom” to do something and actually being able to do it.  As far as I’m concerned, the distinction between “positive” and “negative” liberty is jibber-jabber.  All constraints are relevant.

Truth/knowledge.  Sapient entities accurately apprehending the state of reality.  I value this both in an “overall state of human knowledge” sense and in an “each individual person’s mental landscape” way — clarity is good, error is bad.  My actual moral calculations suggest that I place very strong relative value on this one; in particular, I’m basically unable to countenance any large-scale world-outcomes that involve systematic deception or induced delusion, which can solve a lot of problems on other fronts.

Virtue.  Sapient entities demonstrating particular traits or behaviors that I consider inherently desirable.  Notable examples include courage, compassion, temperance, etc.  Particularly notable examples include self-awareness and honesty (both of which result in the double-counting of truth/knowledge-related phenomena), as well as self-determination (which results in the double-counting of liberty/power-related phenomena).  I’m taking this opportunity to express my bafflement at the continued popularity of classical-style virtue ethics.  Why not just fold your desire for virtue into your consequentialist value schema?

Aesthetics.  This is a cheaty catchall category, comprising a bunch of things that I think are inherently valuable for reasons I can’t adequately explain.  Grandiosity and complexity tend to win moral points from me.  So does congruence with various preexisting tropes and narratives that I find beautiful.  So does a kind of thematic straightforwardness and sincerity, which is hard to explain except by example — it’s the thing that causes me to like clunky old pieces of literature, the thing that causes me to prefer even the most misguided religious fanaticism to a “cafeteria-style” faith engineered to meet people’s needs like a consumer product.

I’m sure this is not a complete list.  I will probably update this post as introspection yields more results.


9 thoughts on “What Are We Fighting For?”

  1. Certainly a good summary of what you’re aiming for. And because it’s so existentialist, uh, it’s hardly like I can either debate or elaborate on it.

    So a question about further discussion: do you expect other interlocutors to provide their own goals, and then their worlds will be justified in reference to those? Or do you expect that most discussions about the end state society should be based around this shared base of aims?


    1. I expect nothing. I mean, I’d certainly be *interested* in other people’s abstract ethical structures. It makes for good communication. But in fact I assume that most future conversations about better-ness are going to be much, much less abstract than this.

      It’s just that I *also* assume that, every so often, someone’s going to look at my writing and say, “That’s not better!” And I’ll just be able to point to this.

      People whose starting premises are sufficiently different from mine may be unable to reach compatible conclusions, which is probably worth knowing.


  2. Savannah Fireheart says:

    These values sound cool and I’m down with trying to create a country/utopia/hypothetical construct/thing that maximizes them. Here’s how I’d like to start:

    Some of these things are measurable, or at least, people try to measure them. There are reports on which countries are the happiest, which countries have the most social freedom, which countries have the most economic freedom (to wit: The World Happiness Report 2013*, The Freedom House Annual Survey 2014** and the Economic Freedom Of The World 2014 Annual Report***).

    I’m lazy and hate analyzing data, so I’m going to try and outsource this: does anyone want to look through these lists and find which currently existing countries have the highest rankings on all three of the charts? It seems like this might provide us with a cluster of countries that are Doin’ It Right in re: Balioc’s values to some degree, and I’d certainly be intrigued to see what those countries are.
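    (The cross-referencing being asked for here is just a set intersection. A minimal sketch, with placeholder top-20 lists standing in for the actual report data, which I haven’t transcribed:)

    ```python
    # Placeholder top-20 sets -- NOT the real 2013/2014 report rankings,
    # just illustrative entries to show the mechanics.
    happiness_top20 = {"Denmark", "Norway", "Switzerland", "New Zealand", "United States"}
    social_freedom_top20 = {"Norway", "Sweden", "New Zealand", "United States", "Canada"}
    economic_freedom_top20 = {"Hong Kong", "Singapore", "New Zealand", "United States", "Switzerland"}

    # Countries appearing in all three top-20 lists:
    in_all_three = happiness_top20 & social_freedom_top20 & economic_freedom_top20
    print(sorted(in_all_three))  # → ['New Zealand', 'United States']
    ```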


    1. It’s a good thought, but — as far as I’m concerned — we are nowhere near the point where we can start fiddling with empirics.

      The next umpteen bazillion things I write are going to go some way towards explaining why I think that is. But the short-short-short version is, approximately: we’re super bad at knowing what social/cultural constructions correlate with particular values (including the ones I favor), and so long as we’re operating in that kind of philosophical ignorance, our attempts to quantify and measure things are useless or worse. Happiness surveys aren’t even that great at measuring whether people *think* they’re happy, and they’re absolute rubbish at measuring whether people are *actually* happy. I strongly suspect (for reasons I’ll be getting into at length) that measures like “economic freedom” are in fact anti-correlated with human freedom, and that this is even true of “social freedom” in a somewhat more complicated and less obvious way. Etc.


      1. Savannah Fireheart says:

        But if we aren’t fiddling with empirics, what are we doing? Fiddling with abstractions? Empiricals may be far from perfect, but they’re better than no knowledge, and if we confine ourselves to abstractions or anecdotes, well…in my experience, that quickly turns into “arguing about semantics” or “arguing which anecdotes are most illustrative of reality” or “arguing about what anything really means.”

        Empirics are imperfect. But if this conversation is to become anything other than a clash of (necessarily-imperfectly-defined) abstractions, what else can we use?


        P.S. The countries in the top 20 of all three charts are, in no particular order:

        *New Zealand
        *The US

        I may be missing one or two because I hate looking at lists.


  3. Well, the explicit purpose of this entire blog is “imagining a better world.” Which is to say, developing a much more detailed and spelled-out vision of our (my) moral/social/cultural goals, rather than jumping straight from the most abstract possible starting place to a conclusion.

    I’d much rather figure out how to build a utopia, once I have one in mind…or even how to approximate a jury-rigged version of that utopia out of existing parts…than try to operate with nothing more than my unrefined core values and a set of facts about the current state of affairs.


    1. Savannah Fireheart says:

      Valid. I’d like to see you refine your core values and explicate your moral/social/cultural goals more, if nothing else because I’m more likely to start disagreeing with you/them then.

      I’m interested in messing with the empiricals now, of course, because I’m terrible at arguing abstractions even at the best of times. Perhaps this is only impatience, but I’m also tempted, at the moment, to begin a response to this blog that starts from, as you say, unrefined core values and the concrete and imperfectly-but-empirically measured and tries to build a better world from there.

      Though perhaps that’s what the Less Wrong people are already doing.


      1. Idomeneus says:

        Our goals here are born from our intuitions. We want more happiness because we intuit that happiness is good. Then we apply these empirical tests and surveys, and find some cool results, but often we also find that the empirical surveys contradict our intuitions (say, a country we are certain is Doing It Wrong has a higher happiness ranking, etc.) Most philosophers and the LWers at this point bite the bullet and say “I am committed to the empirical findings, so we will push on in that direction.”

        Sometimes I think we ought to revisit our intuitions first, then. Perhaps we inaccurately defined them. Perhaps “happiness” isn’t really what we are going for. Only after we have established terminology we are certain of should we turn to empirical tests to see what part of the world maximizes it. One way of revisiting our intuitions does seem to be this imaginative utopianism project. So I’m still watching to see where it goes.

        An example: It is a common finding of happiness surveys that some countries with less stuff report higher happiness. That is all well and good. But would you be comfortable with a government that then made decisions based on happiness? One that censored knowledge, knowing it would only make people anxious, or denied material advancement that would only make people ungrateful? The concept seems very disturbing to me.

