By “we” I mean “I.” And by “fighting” I mean “advocating, in some extremely abstract sense.”
This is going to be a reasonably boring and technical post. I’m not arguing for, or trying to demonstrate, anything. I’m just providing a list of my terminal/root-level moral values. I figure that, as I talk about possibilities for a better world, I’m going to end up wanting to refer back to this a lot — basically anytime someone reacts to one of my propositions with a horrified “how could you possibly call that better?”
In particular, I’m not going to be defending any of these values. At all. Root-level values can’t be defended, because they don’t rest on any underlying logic; they are merely asserted. I am basically putting forth my own multi-part definition of the word “good,” and to the extent you don’t like it, you can call yourself my moral enemy and have done.
(Ultimately there are no grounds on which we could debate and try to convince one another, because as far as empirical reality is concerned, all conceptions of “good” are equally arbitrary. There are no good particles that can be measured by a good-ometer — goodness is something that exists within our conceptions of it. As the man said, ought cannot be derived from is. There are some modern philosophers who think that ought can somehow be intuited from is, using some kind of special human morality-detecting sense; this probably isn’t the place to delve into that, so let’s just say that I find these particular claims incredibly unconvincing, and move on for now.)
My “moral operating system” is consequentialist. I believe that value (which is an as-yet-undefined concept) exists within reality as a whole, not within isolated actions or decisions, such that any effects of a choice are morally valent. Indeed, I believe that it is proper to speak of possible world-states as being more or less morally valuable, and that consequentialists often cede too much to deontic thinking by speaking as though morality inhered in our choices and not in the potential worlds generated by those choices.
I acknowledge several different non-fungible root-level values, each of which (in my schema) qualifies as “good” in its own right, without being reducible to any of the others or being measurable in terms of any of the others. This, combined with the general difficulty of quantifying value, does have the effect of making it basically impossible for me to calculate in any rigorous way which of two possible world-states is better. (Is X amount of Value A more or less valuable than Y amount of Value B? How much sonata-derived glory adds up to one average-happiness human life?) I take comfort in the fact that even single-value consequentialists, like hedonic utilitarians, basically can’t do any calculation either — hedon-quantification and probability-estimation are both much too fuzzy. I have to make my moral decisions by ear, but in that sense I’m doing no worse than anyone else.
My root-level values are listed below, in no particular order.
Sapient happiness. Positive and pleasant feelings, as experienced by reflective consciousness systems that can perceive their positivity and pleasantness rather than simply reacting to them. This value category incorporates a wide range of specific mental phenomena, which might get further sorted into subcategories like “sensory pleasure” and “emotional satisfaction” and “intellectual epiphany” and so on; to the extent that such feelings can be quantified at all, these subcategories may be assigned different amounts of moral worth per unit of quantity. Humans definitely qualify as “sapient” in almost all cases, at least after the very earliest stages of infancy. There are a few animals that also seem likely to qualify, enough so that I put them in a “safest to take their happiness seriously” bucket — apes, elephants, whales, maybe corvids and octopuses. Other animals and computers and very-early-in-development humans are not capable of experiencing the relevant phenomena, unless my empirical understanding of the universe is very wrong. I’ll deal with unearthly entities when I have more information on them.
Preference fulfillment. Sapient entities getting what they want from the world. This is different from happiness; what we want is very often not the same as what will make us happy, and indeed those things are often in conflict. Morality is hard, yo. But I have enough intrinsic respect for motivation, in the abstract, that I find value in goals being achieved.
Liberty/power. Sapient entities being able to pursue their motivations without restraint. This is closely related to preference fulfillment, and I’m not totally sure that I should be listing it separately, but I suspect that the double-counting is actually correct; it is good that Mr. X should get what he wants, and in addition it is good that no external forces in the world should prevent him from doing so. It’s worth noting that, unlike many liberty theorists, I see no meaningful distinction between having the “freedom” to do something and actually being able to do it. As far as I’m concerned, the distinction between “positive” and “negative” liberty is jibber-jabber. All constraints are relevant.
Truth/knowledge. Sapient entities accurately apprehending the state of reality. I value this both in an “overall state of human knowledge” sense and in an “each individual person’s mental landscape” way — clarity is good, error is bad. My actual moral calculations suggest that I place very strong relative value on this one; in particular, I’m basically unable to countenance any large-scale world-outcomes that involve systematic deception or induced delusion, even though such deception can solve a lot of problems on other fronts.
Virtue. Sapient entities demonstrating particular traits or behaviors that I consider inherently desirable. Notable examples include courage, compassion, temperance, etc. Particularly notable examples include self-awareness and honesty (both of which result in the double-counting of truth/knowledge-related phenomena), as well as self-determination (which results in the double-counting of liberty/power-related phenomena). I’m taking this opportunity to express my bafflement at the continued popularity of classical-style virtue ethics. Why not just fold your desire for virtue into your consequentialist value schema?
Aesthetics. This is a cheaty catchall category, comprising a bunch of things that I think are inherently valuable for reasons I can’t adequately explain. Grandiosity and complexity tend to win moral points from me. So does congruence with various preexisting tropes and narratives that I find beautiful. So does a kind of thematic straightforwardness and sincerity, which is hard to explain except by example — it’s the thing that causes me to like clunky old pieces of literature, the thing that causes me to prefer even the most misguided religious fanaticism to a “cafeteria-style” faith engineered to meet people’s needs like a consumer product.
I’m sure this is not a complete list. I will probably update this post as introspection yields more results.