Mission Statement

This blog, in theory, has one primary purpose: it is to be a forum for imagining a better world.  The best world that I think we, as a civilization, could achieve in the near-to-medium term.

When I say “imagining,” I mean that seriously.  I’m not trying to discuss strategies for bringing about that better world; I’m not trying to provide practical guidance for moral living; I’m not trying to support anyone’s cause, including my own.  I am nowhere near the stage of feeling ready to do any of those things.  There is only one question I’m even trying to address — what is it that we should even want, if we could have anything we wanted?

…ok, fine, two questions.  This is meant to be an exercise in defining values and aspirations, not in writing fairy tales; there’s real value to staying within the limits of the possible, even if you’re not particularly trying to figure out how the best possibilities might be attained.  And of course the boundaries of the possible are going to be hugely controversial, and the resolution of that controversy is going to shape the nature of our best-imagined-world.  So I’ll add another question — which outcomes are even theoretically plausible, and which aren’t even worth considering?

But that’s it.  My aim is to lay out a set of goals, and to keep them within the constraints of reality.  Nothing more.

**********

Why?

Because I think our civilizational discourse has gotten hopelessly tangled up in short-term, tactical thinking.  Many people have very clear ideas about where they want society to go next, but too few of them have given any thought at all to where they want it to end up.  And pushing hard for short-term tactical progress, without even having a terminal aim by which to steer, is a good way to barrel blindly down a path that ends someplace very unpleasant.

Because I think our political coalitions and alliances have gotten ridiculously muddled.  In a sane world, people with similar terminal goals would be allies — and might argue with one another over optimal tactics, while remaining clear on their ideological unity — while people with incompatible terminal goals would be enemies.  But we’re now at a point where tactical arguments have devolved into vicious tribal-identity brawls, where people who agree about what they fundamentally want are nonetheless at each other’s throats for stupid reasons, and where people fighting for very different worlds manage to overlook their essential ideological enmity by never asking the right questions.

Because there aren’t enough people thinking seriously along utopian lines, and of the people who are, many are too caught up in small-scale political and cultural feuds to do so cleanly.

Because our dreams have gotten too small and too standardized.

Because I find it entertaining.

**********

There are two rules that I’m imposing on myself for purposes of this exercise, and on anyone who intends to play along with me.

The first one is the most important — no cheating through abstraction, or through appeal to a higher power.  The object is to imagine and define a better world, which is a difficult project, because it requires actually using our limited-bandwidth meat brains to call upon our limited pools of information and experience.  It’s much easier to call upon some unreal entity that can do the imagining and/or defining for us, which is why utopians often end up doing exactly that.  “The superintelligent AI will accurately determine everyone’s suite of utility inputs, and then recommend the outcome that maximizes the combined preference function.”  “Once we throw off the mental shackles of a hierarchical capitalist society, the post-revolutionary collective culture will be able to implement solutions we cannot even imagine.” “The important thing is to get people into heaven, where everything is wonderful in a fashion that cannot be understood by mortal minds.”  Just…no.  No to all of it.  Go specific or go home.

(In addition to being boring and lame, the “outsource our utopia to a Greater Thinker” answer tends to descend quickly into “the Greater Thinker must be real [or achievable] because we’re relying on its wisdom to make things better,” which is…not healthily reality-constrained logic.)

The other one isn’t so much a limitation as an evaluation: you lose points for calling upon nonexistent technology, and the more distant it is from our current tech level, the more points you lose.  I don’t want to use our current civilizational powers as a hard cap; it would probably be foolish to imagine that we won’t enjoy substantial tech growth even in the very near future, and if particular as-yet-unachieved technologies would open up vastly desirable cultural outcomes, that seems like an important thing to know.  But it’s as easy as it is useless to “solve” problems with science fiction, and it’s hard to know which advances won’t turn out to be science fiction.  So there should be a preference for leaning on that kind of thing as little as possible.

**********

All that said —

I doubt that any blog, ever, has stuck to its mission with perfect fidelity.  I’m quite sure that this one won’t.  The odds are good that, before long, I’ll be devoting plenty of entries to random silly ideas and to bloviation on the controversies of the day.  My resistance to that kind of thing is not strong.

I just want to be clear about what I’m supposed to be doing.

Welcome to the Baliocene Doctrine.  I hope you enjoy the time you spend hanging around here.


4 thoughts on “Mission Statement”

  1. A worthy project, yet I think there are more parameters than you admit. Off the top of my head, two important ones come to mind.

    How many humans are there? Someone might propose a hunter/gatherer-style society, and instead of debating whether this is the ideal way for a human to live, I suspect the easiest criticism is that our world could only support about a million people in that style of living. It’s easy to argue “more = better” for any number of reasons, and so an end-state society with only one million people will be dismissed out of hand.

    But, if we believe “more = better”, then our current number of seven billion is hopelessly inadequate. We should get to building an environment that can fit more people, and modifying people so they can make do with less, until we are trillions and quadrillions of people fitting in the singularity. The parameters will define the result.

    Or there’s the question of permanence. Does the ideal human society have to be one that will persist in that state? For a time? Forever? Requiring that presupposes a lot of stability in our “possible ideal.” Yet a society that does not sustain itself is in many ways unrealistic as well.

    You could answer the question of these parameters. Or you could say “provide support for that within your own argument for your utopia.” (PS: Do you hate the word “utopia” as a name for a scenario?)

    If the latter, then I think I am going with pre-supposed parameters of “Any sized human society qualifies, we are talking about average output” and “stability is not prized.” But I shall have to consider before defending them.


    1. I mean, there are *lots* of important parameters that are undefined in this mission statement. Like, for example, “what does ‘better’ actually mean?” Spoiler alert: I’m not a hedonic consequentialist, and so as far as I’m concerned that question has a non-trivial answer.

      I’m not yet clear on how much of this blog, proportionally, I want to spend on metaethics. On the one hand, metaethics is fun for me, and it’s the only thing that can provide a really solid ground-up foundation for the work I want to do; on the other hand, it’s incredibly boring for pretty much everyone who *isn’t* me, and (more importantly) it’s an excellent way to get bogged down in technical details without getting to the exciting conclusions.

      I can answer your actual questions briefly, if unsatisfyingly, by saying that

      (a) I don’t hold to an additive model of moral consequence, and therefore don’t believe that more = better; I believe in concrete valences for existing persons (and therefore concrete obligations towards them), but am willing to subject potential persons to whatever ruthless manipulations make for the best overall outcome; and

      (b) I calculate consequence over time in the intuitive way, and therefore value stability/longevity in the way you’d expect.

      “Utopia” means “nowhere.” Pretty good word for a hypothetical scenario, if you ask me.

