This blog, in theory, has one primary purpose: it is to be a forum for imagining a better world, the best world that I think we, as a civilization, could achieve in the near-to-medium term.
When I say “imagining,” I mean that seriously. I’m not trying to discuss strategies for bringing about that better world; I’m not trying to provide practical guidance for moral living; I’m not trying to support anyone’s cause, including my own. I am nowhere near the stage of feeling ready to do any of those things. There is only one question I’m even trying to address — what is it that we should even want, if we could have anything we wanted?
…ok, fine, two questions. This is meant to be an exercise in defining values and aspirations, not in writing fairy tales; there’s real value to staying within the limits of the possible, even if you’re not particularly trying to figure out how the best possibilities might be attained. And of course the boundaries of the possible are going to be hugely controversial, and the resolution of that controversy is going to shape the nature of our best-imagined-world. So I’ll add another question — which outcomes are even theoretically plausible, and which aren’t even worth considering?
But that’s it. My aim is to lay out a set of goals, and to keep them within the constraints of reality. Nothing more.
Because I think our civilizational discourse has gotten hopelessly tangled up in short-term, tactical thinking. Many people have very clear ideas about where they want society to go next, but too few of them have given any thought at all to where they want it to end up. And pushing hard for short-term tactical progress, without even having a terminal aim by which to steer, is a good way to barrel blindly down a path that ends someplace very unpleasant.
Because I think our political coalitions and alliances have gotten ridiculously muddled. In a sane world, people with similar terminal goals would be allies — and might argue with one another over optimal tactics, while remaining clear on their ideological unity — while people with incompatible terminal goals would be enemies. But we’re now at a point where tactical arguments have devolved into vicious tribal-identity brawls, where people who agree about what they fundamentally want are nonetheless at each other’s throats for stupid reasons, and where people fighting for very different worlds manage to overlook their essential ideological enmity by never asking the right questions.
Because there aren’t enough people thinking seriously along utopian lines, and of the people who are, many are too caught up in small-scale political and cultural feuds to do so cleanly.
Because our dreams have gotten too small and too standardized.
Because I find it entertaining.
There are two rules that I'm imposing on myself for purposes of this exercise, and on anyone who intends to play along with me.
The first one is the most important — no cheating through abstraction, or through appeal to a higher power. The object is to imagine and define a better world, which is a difficult project, because it requires actually using our limited-bandwidth meat brains to call upon our limited pools of information and experience. It’s much easier to call upon some unreal entity that can do the imagining and/or defining for us, which is why utopians often end up doing exactly that. “The superintelligent AI will accurately determine everyone’s suite of utility inputs, and then recommend the outcome that maximizes the combined preference function.” “Once we throw off the mental shackles of a hierarchical capitalist society, the post-revolutionary collective culture will be able to implement solutions we cannot even imagine.” “The important thing is to get people into heaven, where everything is wonderful in a fashion that cannot be understood by mortal minds.” Just…no. No to all of it. Go specific or go home.
(In addition to being boring and lame, the “outsource our utopia to a Greater Thinker” answer tends to descend quickly into “the Greater Thinker must be real [or achievable] because we’re relying on its wisdom to make things better,” which is…not healthily reality-constrained logic.)
The other one isn't so much a limitation as an evaluation — you lose points for calling upon nonexistent technology, and the more distant it is from our current tech level, the more points you lose. I don't want to use our current civilizational powers as a hard cap; it would probably be foolish to imagine that we won't enjoy substantial tech growth even in the very near future, and if particular as-yet-unachieved technologies would open up vastly desirable cultural outcomes, that seems like an important thing to know. But it's as easy as it is useless to "solve" problems with science fiction, and it's hard to know which advances won't turn out to be science fiction. So there should be a preference for leaning on that kind of thing as little as possible.
All that said —
I doubt that any blog, ever, has stuck to its mission with perfect fidelity. I'm quite sure that this one won't. The odds are good that, before long, I'll be devoting plenty of entries to random silly ideas and to bloviation on the controversies of the day. My resistance to that kind of thing is not strong.
I just want to be clear about what I’m supposed to be doing.
Welcome to the Baliocene Doctrine. I hope you enjoy the time you spend hanging around here.