Against Malatora and Towards a Draconic Post-human Future
Dragonsphere Report
It's hard work to be a lunatic. Not only do you have to consistently find ways to present yourself that are orthogonal to the normies, you also have to differentiate yourself from all the other crackpots, all while investing enough mental and social capital into the neurotypical economy to sustain yourself. It is a great tragedy that the first mover advantage for the post-human dragon microstate concept went to Taygon. I guess I will have to work extra hard now to outcompete Malatora in the economy of ideas.
It makes sense to begin my pitch by explaining some of the flaws in the Malatora brand. Now, I am not an economist or a social scientist, and in fact there is much that limits my analytic capacity. Like Mencius Moldbug, I am a college dropout, though in my case I made my exit from undergrad. I'm also a schizophrenic. And I have brain damage. And I may just possibly be autistic as well. And besides that there is the fact that I scarcely attended K-12 school and got most of my education from shitposting on the internet, in a series of campaigns that saw me permabanned from almost every major forum ever to grace cyberspace. Even so, a few small flaws stand out to me in this whole communist post-human environmentalist autarkic dragon micro-state idea.
Taygon and his ilk begin not with basic economic or even legal theory, even of the useless communist type, but with social norms. In fact, essentially the entire concept of Malatora, sans dragons anyway, is an ambient collection of social norms and the entirely contradictory, dysfunctional, and insane implications that result from them. In a limited sense there's nothing wrong with this. Human knowledge began with social norms. Long before humans knew of intersecting supply and demand curves, they knew "Honor thy father and mother", "Thou shalt not kill", and "Thou shalt not commit adultery." The establishment of norms creates the (perhaps game-theoretically appropriate) cooperation necessary to minimize certain dysfunctional tendencies, mitigate certain perverse incentives, and begin amassing and utilizing other, higher forms of knowledge. But a society that begins with social norms and no other knowledge of any kind is appropriate only to the stone age, and even there it would not serve as an absolute and final arrangement; it would instead be subjected to the slow process of biological elimination that eventually gives rise to the stability necessary to begin the processes of conceptual elimination associated with higher-level thought and society. Worse yet, the norms of Malatora are not only untested, they are contradictory, and the contradictions are not even functional ones.
The most superficial example of this is the entirely cosmetic Code of Malatora. The most problematic example of this is the full and unqualified adoption of a set of both positive and negative rights. The contradictions that result from this are immediately manifest: In Malatora, one has absolute ownership of one's body and all transactions reflect consent, yet labor is assigned by work order, and it is illegal to "withhold" food, housing, healthcare, or education. Freedom of association is enshrined, but so is an absolute right to work. Academic and scientific freedom are enshrined, yet one's labor assignment is still dictated. One is obligated to obey one's work order, yet also entitled to complete freedom of movement. One has no right to deny housing to someone, but one has a right to shoot and kill any intruder to one's home. Worst of all, this knot of contradictions is not allowed to simply unwind itself through social experimentation, since it becomes load-bearing in a very serious way as soon as this chestnut is introduced: "the right to protect [these rights], by any means necessary."
Whether this means you can shoot a random private citizen for not giving you food is unclear, but it is not at all unclear that consequences along these lines would be the immediate result of adopting the sclerotic and spotty Malatoran "system" under all but the most utopian modalities of resource availability. Since Malatora runs on a combination of gift and command economies, it is extremely inefficient, and could effectively only exist in a world that was already post-scarcity. But since it is autarkic, the ability of its neighbors to produce even infinite wealth would make no difference to the inevitable collapse of Malatoran society. Inefficiency is in turn incompatible with environmentalism, since the efficient distribution of resources is a necessary precondition for production in equilibrium with natural rates of replenishment.
But let's suppose that the social norms of Malatora were somehow strong enough, and its actors rational enough, to overcome all commons problems, minimize all wastefulness and inefficiency, and make this heavily collectivized combination of gift and command economy functional. In this case, the overall construction of Malatora is still suicidal. Why? Because, rather than encoding its values in rigid traditionalism (the values which are ostensibly the only thing that could even theoretically hold up such a society, even under the most charitable and unrealistic conditions imaginable), it makes efforts not only to accept but to enforce an open society. Freedom of speech, press, and opinion are fundamentally incompatible with the strong social norming necessary for this society to function, as are the right to protest and most likely freedom of worship. If this society is only secular, it will have a hard time incentivizing social norms sufficiently. Political freedom is here a joke, although all of these things might function as effective escape valves for the total dysfunction of the system, allowing it to transform into something more coherent: or at least, this might be the case if its contradictions weren't enshrined in a bill of rights and protected by force. The best case scenario is that all of its residents take advantage of their freedom of movement and just immediately leave. The worst case scenario is that the Malatoran "duty to revolution" is recursively deployed in an attempt to enforce social solutions to economic impossibilities until everyone dies.
Other problems are less overarching but no less serious. The Malatorans think they can ignore international law with regard to Malatora while still counting on the apparatus of international law to protect and advantage them in various respects, including copyright law and military law. They think they can somehow introduce myriad new races into the world, allowing them to associate according to their own inclination, without immediately causing racism, and this despite clear indications of dragon-supremacist reasoning (saner readers may recoil at the preposterousness of the words "dragon-supremacist reasoning": I am very sorry, but this is the future into which we are only heading deeper). "Irresponsible" free speech is banned if it causes harm, but there is no clear harm test. We imagine anything that threatens their incoherent mixture of positive and negative rights might be contextualized as harm, especially in light of the "any means necessary" reasoning, which reinforces the thought of an inevitable bloodbath. Malatora also establishes "Freedom from discrimination of any kind, for any illogical reason." What constitutes an illogical reason remains undefined.
Let's leave all of this aside for the moment. These aspects of Malatora are not even worth thinking about, in large part because they have not even been thought about. SomethingAwful accused Taygon of basing his worldview on reading Wikipedia articles. I am not even sure he read them. I suspect he may have glanced at their titles for essential salience. He seems to think he can dictate away any unwanted logical inconsistencies or consequences, which makes his fantasies not only low in intelligence but in verisimilitude. So how about instead we ask another question: Are the values of Malatora good?
I will not here attempt to determine the total coherence of the Malatoran value system, as pointing out every contradiction and absurdity in it would take entirely too long. Instead I will go value by value and then give a holistic impression.
First value: Consent and self-ownership.
Consent seems like a reasonable value. Certainly it is at the root of essentially all economic and social activity that can be defined as free, including free markets, therefore it seems intuitively good. If we mean by consent, "people only do what best suits them out of their range of options, unrestricted by coercion, and according to their own inclinations", then consent seems obviously good. Especially since, in general, by emphasizing consent as a paradigm, we create a world which tends to iteratively increase people's range of options over time. However, if by consent we mean "people only engage in interactions when they can explicitly communicate that these interactions best suit them, to those they interact with", then I disagree with the value. Self-ownership seems mostly coequal to consent, at least functionally, so I will not add extra comment on it. More on all of this later, but we have to mark this down as an overall "agree".
Second value: Democracy/Equality.
Terrible. Mencius Moldbug, despite our disagreements on other issues, had a lot of very reasonable things to say against democracy and equality. It also violates my inclinations towards elitism, traditionalism, and so forth. More on this later when I expound my own alternative.
Third value: Sexual freedom, including LGBT rights, Polyamory, and Free love
Very mixed. Certainly I would agree no force should be brought to bear against consenting adults acting in the privacy of their own homes, but sexual norms seem socially important to me. This doesn't mean I think society should disrespect polyamory, but I think that if polyamory is to be a mainstream aspect of society, it should come with its own obligations, rituals, and costs, and I feel the same way about homosexuality and transsexuality. Traditionalist society has decided in general that the only correct way to be sexual is to be heterosexual and monogamous. But if it is in fact right for a homosexual to be homosexual, and so forth, then there must also be a right and wrong way to be homosexual (or, perhaps, multiple right and wrong ways). Therefore I am against free love: It is the obligation of anyone with an alternative lifestyle to regularize and enforce the conduct of their community, and this is not, in the libertine sense, "free", even if no state force is brought to bear.
Fourth value: Personhood is coequal with self-awareness, not humanity
Agreed. Moving on.
Fifth value: Indigenous rights
Sure. Whatever. Moving on.
Sixth value: The environment.
Here is an absurdity: to a Malatoran, it is an immense good to reject their natural birth form in favor of a new, self-chosen form, but it is extremely bad for nature's form to otherwise be altered. Here at Dancefighterredux, we join a rising chorus of voices that say, "Fuck the environment". Here is my proposal for the environment: Upload everything. If the inevitable end point of human consciousness is its convergence with computing technology, why on earth shouldn't this also apply to animals and even, to their own limited but measurable extent, plants? Put it all in a computer. Nature is just an inefficient data storage format. More on this later.
And now to my total impression: I am unimpressed. Even if the whole edifice weren't instantaneously incoherent, there would still be things that stuck out. For instance, Indigenous Rights is a Malatoran value expressly encoded in their legal system, even though THERE ARE NO INDIGENOUS PEOPLE IN THE LAND MALATORA WOULD CLAIM. And compare this against what they didn't encode in their legal system! This alone is ample evidence we are dealing entirely with liberal sentiment and not with anything resembling actual thought. That they began with LGBT and dragon rights and ignored any foundational framework of rights is utterly pathological. The whole thing lacks even the logic of a dream. If one wishes to build a paradise for their fetishes, that doesn't mean one proceeds through their fetishes in the act of building. Fetishes don't do a lot of heavy lifting, intellectually speaking. I was going to say a lot more, but when one realizes one has been handling shit, one wishes to wash their hands of it as quickly as possible.
You know what? I am so over this I am actually adding a dividing line and a subtitle to differentiate sections.
Our Glorious Draconic Future
Now let us proceed not from fetishes, but from reason. The principal considerations of a society of dragons should be the same as those of any society, so we will begin with those considerations. If you like, you can generalize my thoughts to any system, or copy and replace every reference to dragons with humans. It shouldn't make much overall difference. Dragons are highly motivating to me. That doesn't mean I best serve my interests or advance my case by centering my mind or arguments on this particular idée fixe.
Picking up some of the things we said we would address earlier, let's look at consent. In fact, let's start with a very complicated case of consent: smoking in public. At present, secondhand smoke is pretty much universally agreed upon to be bad. However, there are still even now people with greater or lesser tolerances towards exposure. Suppose we start by considering a case in which, when one wishes to smoke within range of someone, one has to explicitly get their permission. Now consider the possibility that we are in an environment or area where there are many people, and while you can smoke around one group of people or another, you can't get away from them (nor can you in this case refrain from smoking, such is the strength of your addiction in this case).
So you gather up people's yes's and no's until you have enough yes's in one cluster to light up your cigarette. Reasonable enough. Somewhat inefficient. Now suppose you simply have a machine that tells you what everyone's preference is already. Suppose they were all polled on the matter, and their responses recorded. This makes it much easier to find somewhere to smoke! This is a pre-consent model. But now suppose you have another machine, which allows you to infallibly read the minds of everyone and determine whether or not they would consent to your smoking, if you were to ask the question. This is most efficient of all, since it does away with the overhead of pre-recording responses, and achieves basically the same effect, plus or minus a few people who would have consented had you asked, but take offense at your not asking; but even this could be accounted for by a mind-reading machine.
This is my essential point about consent: the more information we have, and the more accessible and ambient that information is in a given environment, the less consent is required to achieve the same result. Unless there is something good about consent in itself, which seems doubtful, then a maximally efficient society will consist mostly of decisions that are, in this sense, non-consensual. To put it another way, in a society functioning at its highest degree of efficiency, a person neither consents nor refuses consent: the very emergence of consent as a term suggests the possibility of its opposite.
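The difference between these consent regimes can be made concrete with a toy sketch. To be clear, everything in it is hypothetical illustration: the 30% consent rate, the cluster size, and the idea of counting "questions asked" as the cost of consent are all numbers and framings I have made up, not a real mechanism.

```python
import random

# Toy model of the smoking thought experiment above: N people, each with a
# fixed preference about nearby smoking. Compare how many explicit questions
# the smoker must ask under each consent regime before acting. The 30%
# consent rate and cluster size of 5 are arbitrary illustrative choices.

random.seed(0)
N = 100
prefs = [random.random() < 0.3 for _ in range(N)]  # True = would consent

def questions_explicit(prefs, cluster=5):
    """Explicit-ask regime: walk down the row of people, asking each one,
    until `cluster` consecutive consenters are found (or we run out)."""
    asked, run = 0, 0
    for p in prefs:
        asked += 1
        run = run + 1 if p else 0
        if run >= cluster:
            break
    return asked

def questions_preconsent(prefs, cluster=5):
    """Pre-consent regime: the same preferences were recorded in advance,
    so the same search costs zero explicit questions at decision time."""
    return 0  # the lookup table replaces every explicit question

explicit = questions_explicit(prefs)
recorded = questions_preconsent(prefs)
print("explicit questions:", explicit, "| pre-consent questions:", recorded)
```

The point of the sketch is only the asymmetry: with the same underlying preferences and the same outcome, the explicit-ask regime pays a per-interaction cost that the information-rich regimes pay once, or never.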
"But isn't having a choice better?" you might ask. Not necessarily. Consent, infinitely expanded, can lead to things like choice paralysis, as well as overhead-induced costs, such as in sexuality: how often do we now hear the complaint that sex is worse today, having lost all of its spontaneity through our current cultural obsession with consent? In other cases, you don't want to give people the illusion of a choice when they don't actually have one. And, if a choice produces externalities or is harmful to the person who chooses it, even if it's within the formal rights of that person, isn't it better not to bring the thought of that choice to their mind? (For instance, if a person's smoking were contingent upon our question, "Would you like to smoke?", we should scarcely ask it.) The negotiation of consent can encourage people to make choices they would not otherwise have made, even when those choices are suboptimal. It seems much better to streamline things by eliminating consent as a term wherever possible (and, conversely, utilizing it whenever necessary). Future technology is liable to enable this to extreme degrees, even to degrees that are unpalatable to our current modern minds. But this should be embraced, and not recoiled from. Therefore I define as an essential good of my society the elimination of all superfluous consent.
Equality. What a miserable impossibility is equality. In conjunction with the idea of consent, the idea of equality has been a source of much woe. "I only consent to a government that promotes equality!" some first poor bastard once said. My friend, let me paint you a picture of the greatest tragedy, the ultimate culmination of this attitude in the far distant future. To do so it will be necessary to introduce various improbable seeming premises and arguments. These will later become foundational to other arguments. If you like, you can think of what follows as a modified and secular form of Pascal's wager, just one that in some cases (as we shall see) results in infinite bad.
One of the major possible solutions to current quantum mechanical equations entails the existence of retrocausality; that is, causal relationships that violate traditional linear time. While retrocausality remains the least popular explanation for what we know, it's the horse I've bet all my money on, so I go forward in that spirit. We already know that action-at-a-distance is an innate property of quantum mechanics, and that fruitful efforts are underway to harness this property of physics for information transmission. But if information transmission is possible across space, then under a retrocausal compatible model of quantum mechanics it should also be possible across time.
This is suggestive of two possible technological techniques I can think of. In the first, the brains of people in the past would be replaced particle-by-particle in a process similar to a Moravec transfer, until the brains of past people are entangled with some functionally equivalent brain in the future. Then, when people in the past die, they would just be switched over to the future brain. The second technology I can think of is a kind of particle mesh injection, that interfaces with the brain and recovers all of the data in it. Naturally, the first technology would preserve continuity of consciousness while the second would merely duplicate people, but both would be immensely valuable, and one or the other (if not both) seems like it must be possible if temporal non-locality is an actual thing (note that the basic conceit of consciousness interfacing with the future in some way is essentially Landian; nevertheless I think I have explicated this train of thought somewhat uniquely and will continue to do so).
So, imagine this consciousness capture technique is deployed on everybody, the entire human race, and they are all brought forth into a virtual post-scarcity environment that runs on pure renewable energy at speeds many quintillions of times faster than real-world time and is built to last. For a non-trivial portion of the population, this is in fact hell, and this is why: While it is now possible to give anyone any quantity of stuff, or at least any perception of any quantity of stuff, everything that in higher-order life is meaningful, valuable, significant, virtuous, or honorable is now basically impossible. Sure, one can still work on advanced scientific, mathematical, and philosophical problems, as I'm sure there will be no shortage of unsolved instances of these even in the far future. For some portion of the populace this will be enough. But it's no longer possible to really participate in history, and since actions don't have any true cost anymore, they reflect far less on individual character. To paraphrase Evola, "Whatever one doesn't find in life, they surely won't find in death."
So for capable people who have been deprived of their ability to achieve even a meaningful fraction of their potential (and there's no shortage of such people in history), entry into this secular future heaven also constitutes a permanent severance from all possibility of attaining higher-order value, an infinite harm. I grant that this is not likely to even register as a concern for substantial portions of the lower classes. Expecting every plumber and fry cook to reason like this would be like expecting the pathology of a Cioran to be the universal human norm even among total morons, and many likely would be perfectly happy just eating, sleeping, fucking, and fucking around for eternity. But inequality scarcely concerns me in these cases, when it perfectly suits the people involved. I'm very neoliberal this way.
This inequality is exacerbated by various things. The first is the effects, compounding in this case, of time. In the absence of well-enforced regulation, once anyone gets a head start in this environment, they not only take off but never come back. Let's say a person's supreme good is to read and educate themselves, and then to participate in conversation and intellectual work, advancing one field or another. Person A starts at time X, and Person B starts at time Y. By the time Person B even starts their studies, Person A has already read and accomplished a substantial amount. Now, even though this future is relatively post-scarcity, it can't possibly be entirely post-scarcity, as even the virtual future world is still limited by the constraints of the real world. So suppose Person A's accomplishments translate to enough capital, social or otherwise, to transfer to another virtual environment that runs twice as fast as their current one, and twice as fast as Person B's. But then they will accomplish twice as much work as Person B, and with a head start to boot. And then the issue compounds itself, and we have a problem not just of inequality but of different growth rates of capital, which, compared to current redistribution problems, is basically intractable.
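The head-start dynamic is easy to see in a toy simulation. The thresholds, rates, and upgrade rule below are all invented for illustration; the only thing the sketch is meant to show is that a fixed head start, plus speed upgrades bought with accumulated capital, produces a gap that never closes on its own.

```python
# Toy simulation of the head-start problem: both people produce
# "accomplishment" at a rate equal to their environment's speed, and
# crossing a (made-up) capital threshold buys entry to an environment
# running twice as fast. All numbers here are hypothetical.

def simulate(start_tick, ticks=50, threshold=20.0):
    speed = 1.0
    capital = 0.0
    for t in range(ticks):
        if t < start_tick:
            continue              # hasn't entered the environment yet
        capital += speed
        if capital >= threshold:
            speed *= 2            # migrate to a 2x-faster environment
            threshold *= 4        # next upgrade costs more (arbitrary)
    return capital

a = simulate(start_tick=0)        # Person A: enters at the start
b = simulate(start_tick=10)       # Person B: a 10-tick head start behind
print("A:", a, "B:", b, "gap:", a - b)
```

Run longer and the gap widens rather than closes, since A is always in a faster environment than B at any given moment: this is the "different growth rates of capital" problem in miniature.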

Now, all of this may seem like a series of concerns about inequality, and in fact it is. Just because I don't value equality for its own sake doesn't mean I value inequality for its own sake. I still want the competitive environment to be level enough that it reflects meaningful essential differences between people: whether differences in earning ability, in learning ability, or whatever. I think learning ability is more important than earning ability, but it doesn't matter. This sort of runaway process would annihilate the significance of all personal differences that actually matter. It doesn't matter what a person set about acquiring. Take imaginary gold: if imaginary gold is acquired according to a market mechanism of some sort, in a way that triggers this phenomenon, there will eventually come a point when it's impossible to outcompete or exceed a person in regards to it. If imaginary gold is especially valuable to people, this will result in extreme and potentially uncorrectable ressentiment, and that's just imaginary gold: we have no idea what future commodities exist. I am already sort of angry thinking about my permanent insignificance in the commodities market of 0x6243d7d8d4349???!?!?. So this is one of the few areas where I think some sort of regulation or strongly enforced social norm is required, on a global scale, for the future.
I think the required regulation is an inverse growth-rate system that expedites the development of the underclass until they reach the requisite level of development, at which point they enter an environment where time runs at a slower rate, and so on. Since we expect computing technology to continue to grow at a rate roughly concordant with Moore's Law, the total environment of virtual consciousness still experiences an iteratively increasing supply of time. So the only real losses from this method are relative, rather than absolute, and the social value preserved by it vastly exceeds any lost.
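A crude sketch of what an inverse growth-rate allocation might look like: each tick, a fixed supply of environment speed is divided inversely to accumulated capital, so whoever is behind runs faster until they catch up. The allocation rule here is entirely made up for illustration, not a worked-out mechanism.

```python
# Sketch of the inverse growth-rate idea: speed (compute/time) is a fixed
# shared resource, allocated each tick in inverse proportion to capital.
# The weighting formula and starting numbers are arbitrary illustrations.

def tick(capitals, total_speed=4.0):
    # Weight each person inversely to their accumulated capital.
    weights = [1.0 / (1.0 + c) for c in capitals]
    total = sum(weights)
    speeds = [total_speed * w / total for w in weights]
    return [c + s for c, s in zip(capitals, speeds)]

capitals = [40.0, 10.0]   # person 0 has a large head start
initial_gap = capitals[0] - capitals[1]
for _ in range(200):
    capitals = tick(capitals)

gap = capitals[0] - capitals[1]
print("capitals:", capitals, "| gap shrank from", initial_gap, "to", gap)
```

Under this rule the gap shrinks monotonically without ever overshooting (the laggard's extra speed is always proportional to the remaining gap), which is the sense in which the losses are relative rather than absolute: nobody's capital ever goes down.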
In case it isn't obvious, this future ecosystem of virtual environments is where I plan to establish my dragon nation. As for everything between then and now, I can comment little on it at present. I now give a description of some of the essential features and blah blah blah of this virtual environment ecosystem, or technosphere:
Explicating the Technosphere
Something that should be immediately obvious about this future technosphere is that every conscious being within it consists of information rather than matter. Yes, these virtual environments still need to run on something, and this something is still vulnerable to Layer 1 attack, but since Layer 1 vulnerabilities and their solutions are mostly well established and, at any rate, invested in, there is little additional analysis required on the issue of Layer 1 vulnerability (I will stammer some about this later anyway). This means that the most important aspects of defense for us to investigate rest on principles of information security rather than physical security. That means these future virtual environments will have to rely on encryption as an extremely important component of their defense, at a level never before seen in history.
One of the technologies that will likely be necessary for security is something that is presently of very limited use: encrypted executables. While other, much more popular methods of hiding runtime data exist, such as virtualization and VPNs, encrypted executable code is the only possibility which gives no access to any component of a system even if an attacker has direct Layer 1 access to that system. But just as encryption solves certain problems, it also creates certain problems. If an outsider has no idea what is going on inside of a virtual environment, they may be inclined to destroy it. After all, many of the possibilities of virtual environments are very bad. Therefore, encrypted virtual environments will have to also find a way to be accountable. Some sort of blockchain-derived technology may be adequate to this purpose: something that can accountably document selected properties of the virtual environment without allowing them to be altered or exposing other information. Then, a consensus about required data in the public ledger could be developed: the overall happiness of each individual, perhaps. The thought being, if a future virtual society makes all of the particulars of their society known to all outsiders, one of them may object to them and take action against that society, whereas if all that is known is that everyone in a given virtual society is happy, then this fulfils some minimum standard for maintaining non-intervention/aggression. I have no idea what the particulars of such a future technology might look like, so I leave it to some future genius, much as I leave the particulars of consciousness capture technology to some future genius. I think this approach will bear fruit. After all, if a Mencius Moldbug can come along and offer slight improvement on even the paltry efforts of a Rothbard, perhaps there is hope for compounding the value of the ideas of a schizophrenic therian failure.
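The "accountable but opaque" property can at least be gestured at with existing primitives. A minimal sketch, assuming nothing more than a hash commitment: the environment publishes only an aggregate statistic plus a commitment to the underlying per-resident data, so outsiders see the aggregate now, and if the raw data is ever disclosed, anyone can check it against the commitment. This is a bare hash commitment, not a blockchain, and the salt, field names, and happiness scale are all hypothetical.

```python
import hashlib
import json
import statistics

# Publish an aggregate happiness figure plus a SHA-256 commitment to the
# full per-resident data. The commitment binds the environment to its
# data without revealing it; the salt prevents guessing small datasets.

def publish(happiness_scores, salt):
    blob = json.dumps({"salt": salt, "scores": happiness_scores},
                      sort_keys=True).encode()
    commitment = hashlib.sha256(blob).hexdigest()
    aggregate = statistics.mean(happiness_scores)
    return aggregate, commitment

def verify(happiness_scores, salt, commitment):
    """Check a later disclosure against the published commitment."""
    blob = json.dumps({"salt": salt, "scores": happiness_scores},
                      sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest() == commitment

scores = [7, 9, 8, 6, 10]              # hypothetical happiness values
agg, com = publish(scores, salt="nonce-123")
print("published aggregate:", agg)     # outsiders see only this and `com`
```

A real version would need far more (selective disclosure, proofs that the aggregate actually derives from the committed data, and so on), which is exactly the part I leave to the future genius; the sketch only shows that "reveal one number, commit to the rest" is already cheap.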
Of course, if any moment of time is in theory accessible to a sufficiently advanced civilization, then the fact that consciousness is unencrypted now, in our physical meat brains, is already a massive and probably irreparable security hole. In fact, if there are alien civilizations, it's entirely possible that the consciousness capture technique has already been deployed, and our essential beings have already been passed endlessly around the universe, via some ultra-advanced alien version of LimeWire, undoubtedly. This would seem to massively incentivize keeping consciousness capture technology out of irresponsible and malicious hands. I leave the enforcement of this necessity to the future regulatory environment. Surely they are a pinnacle of efficiency and common sense and will not let me down in any way at all.
It is worth noting that these individual environments would all be energy independent, and with energy as their primary need, they would all by default be autarkic. Indeed, the only major thing they possibly could trade is information, including entities (whether animals or plants or persons, or whatever). However, it is doubtful they would be in much competition for energy, so the overall attitude of different virtual environments towards one another, barring culturally driven factors, would be one of indifference. Another method of defense, of course, given the existence of a sufficient number of these environments, would be based on distribution and maximization of paths of exit: i.e., a virtual environment where emigration is extremely easy is much better able to perform, say, an emergency evacuation in the event of irreparably damaged hardware. If hardware is transtemporally accessible, then the proper distribution of the information needed to engage in a secure handshake with and initiate transfer between virtual environments is all that should be required to ensure continuity of consciousness. This is true even in the case of interactions between virtual and biological entities. An extreme case might be: a failing virtual environment finds a planet with sentient but primitive entities, and distributes the required cryptographic information, along with spacetime coordinates, other data, and incentives, to a group or collection of groups deemed appropriate by whatever advanced politics, anthropology, and so on are available in the future. Once these groups have evolved enough to develop consciousness capture technology and virtual environments themselves, they can establish a link across time and perform the necessary evacuation. So, the distribution of information creates some rather robust possibilities for defense, which vastly minimize the need even for Layer 1 physical defense (in the context of destruction, anyway: not access).
Given a sufficient number of virtual environments and sufficient distribution, the need for weapons becomes very limited.
Rules and the use of force
The Platonic question, "Who should rule?" is rather interesting. I am going to reframe the question so that it concerns only the matter of monopoly on force. My own answer to this is very pragmatic and proceeds through process of elimination:
Should an autocrat rule? No. Autocrats are not sufficiently incentivized to make good decisions about the use of force, lack enough information to make good decisions, etc.
Should a monarch rule? No. Still problems with access to information and ability to calculate, and still incentive problems.
Should a corporate sovereign rule? No. While incentives are extremely well calibrated, they are all associated with rewarding pure earning ability, which is inappropriate as an ultimate organizing principle for society.
Should a republic rule? No. The incentive structures in a republic inevitably reward collusion and corruption, and rulers are selected by an uninformed populace by majority rule, which entails tyranny of the majority, etc.
Should a democracy rule? No. Rulers are selected by an uninformed populace, by majority rule entailing tyranny of the majority, and per Moldbug this results in a shifting set of standards which produce social friction.
Should a bureaucratic class rule? No. Insufficient incentives, collusion, corruption etc.
We could keep doing this for every possible fringe form of government, but the result would still be the same, for the same basic reasons. Additionally, given the fact that the capacity for force inevitably only increases over time, that we are currently at the point of nukes and nerve gas, and will soon be at the point of indefinite hellish torture via simulation, there is only one reasonable answer to the question of "Who should rule?", in relation to the monopoly on force:
Nobody should rule.
By this I don't at all mean to promote anarchy, or suggest that the legitimacy of force should be distributed (though I think in some lesser sense it should be, and more on that later). Rather, I think that the use of force should be automated. There is currently enormous distrust over the development of autonomous killing machines, but autonomous killing machines actually seem like a substantial step up from our current situation. For instance, if the police were entirely replaced with fully autonomous bots capable of using lethal force, it is hard to see how those bots would shoot nearly as many unarmed black people to death, and everyone who was shot could be accountably proven to have been a credible threat to life.
The fact that autonomous killbots would probably initially be programmed with suboptimal rules (like automatically killing every male over the age of 14 in a given area, or killing anyone holding a certain cell phone) is not really a counterargument. We have bad rules at present because we are trying to formalize heuristics: pseudo-rules developed in an environment of highly imperfect information. As our capacity to gather information becomes more reliable, the rules behind killbots should naturally improve. Perhaps killbots should not be implemented right now. But they should, eventually, be implemented. And in truth, the fact that many significant companies and talented individuals have refused to help develop autonomous weaponry is grossly irresponsible, since it lessens competitive pressure and essentially ensures that the people who actually do produce autonomous weaponry (which is inevitable) will be under no meaningful compulsion to do so competently.
What does this mean in the context of virtual environments? Here, killing people should be grossly unnecessary. Our goal is to maximize freedom. We could just teleport anyone displaying criminal intent back to their house every time they try to initiate a criminal act. But this seems like it would lead to escalating aggravation in certain cases. It also destroys the signaling and cost-imposing effects of crime, and prevents revolution as a possible palliative to disagreement. A riot is the language of the unheard, after all: but for the most part, it should be possible to "hear" potential criminals (or in this case, do they merely become "trolls"?) without actually allowing them to commit crime.
Is our goal to minimize force? That depends on whether force ever has positive utility in-itself. Having read Storm of Steel, I think it sometimes does. But we don't want real death, and we want the cost of using force to be prohibitively expensive in all but the most extreme cases (by which we don't mean that the cost should be lowered in cases we arbitrarily define as extreme, but that some extremely high uniform cost should be imposed: and naturally, imposed in such a way that it doesn't become a permanent sunk cost and lead anyone to become permanently violent).
Whatever ultimate system is decided on, it should not only be automated but encrypted, and the keys should be deleted or thrown away to prevent any fallible power from altering the system. I'm told that inaccessible encrypted information is not in fact encrypted, but destroyed. But it seems to me that even if it's destroyed as information (i.e., even if it becomes an impenetrable black box), as long as it still does stuff, and specifically the stuff we want it to do, there is a point to the procedure. I have no idea what such a system would look like. At present, I won't even try to conceive of one. Perhaps in a later blog post, I will have more substantive ideas regarding implementation.
"But what if people later decide this arrangement is bad?" Well, that's why freedom of movement is necessary. Not only in the sense of moving between different virtual environments, but in the sense of ability to create new ones. There should be hard-wired and automated rules governing force in every virtual environment, but these do not need to be the same in every virtual environment. As long as freedom of movement is preserved, people will "vote with their feet" (an especially dead metaphor in this case), and eventually an optimum situation will be reached. There is no reason at all to ever change the rules of force in a given system while people are still in it. As Moldbug has pointed out, uncertainty leads to friction, the ability to alter the rules that restrict you leads to collusion and corruption, and the most effective way to make sure rule of law is enforced consistently and optimally is to make sure those who enforce it are not threatened or contested in any way. The solution of a corporate sovereign is insufficient because there will always be some non-zero risk of threat to that sovereign, and because even if there weren't, human beings are irrational and inconsistent by nature. The lack of external factors that aggravate this irrationality and inconsistency is great. But what would be better than great is having a way to systematically enforce the rule of law which didn't depend on how a conscious entity felt about anything, or whether their heuristic methods of applying law were on point that day or not.
Now, it's true that this would mean that if a foreign power wanted to intervene in a given virtual environment, they would have to do so within the context of that environment's rules. But I don't believe this is a problem for the most part. If the rules are nightmarishly bad (e.g., if they both restrict freedom of movement and impose nightmarish suffering), said foreign power will probably just physically destroy the environment. Therefore, there's a strong incentive not to implement nightmarishly bad rules. This also enforces good sportsmanship, in a sense, and ensures that even the deprecation of a given virtual environment will occur within the context of that environment's stated values.
What about external physical threats? Well, perhaps these should be human governed. Mere existential threats, especially in a context where the possibility of transtemporal consciousness capture exists, are very insignificant. But anything that could possibly lead to worse-than-existential harm should be governed by automated systems, and in the context of effectively immortal beings, this reduces in practice to "The use of force should be governed by automated systems."
The Dragonsphere
Now at last we can begin to discuss the particulars of my own microstate, either a single virtual environment or a united federation of such. This blog post in itself is insufficient to provide all the detail, so I will focus on what I think is most important.
One of the things I have struggled to articulate is the way that unimportant people in the world are simultaneously entirely individually worthless and utterly indispensable. By this, I mean that if you were to snap your fingers and make every fry cook, gas station attendant, barista, cashier, cab driver, truck driver, and economist disappear, the world would cease to function. But take any individual example of these and remove them from their position and they will just be replaced. Essentially, lower class individuals are fungible. If the United States goes down the road to socialism, the highly mobile upper classes will just find another lower class somewhere else to pour their coffee, cook their fries, and issue crystal-ball proclamations about the economy. In this sense, strictly in terms of economic value, every class is equally necessary, but not every class is equal.
In moral terms, of course, we find the cases of Diogenes, and Epictetus, and Jesus Christ. But these are people who found a way to participate meaningfully in the world in spite of their poverty. The vast majority of the underclasses lead meaningless lives, never mind their access to refrigerators and televisions. And this is not because they lack resources, because otherwise there would be no Diogenes or Epictetus or Jesus Christ. It is because they, in purely relative terms, fall behind in whatever attributes translate to social capital. Since this measure is relative, it is liable to be with us forever, and wealth redistribution doesn't really help, even if it weren't largely impossible or meaningless in our post-scarcity future. What would be needed is status redistribution: and that is entirely incoherent, because status is, with rare exception, zero-sum (the exception being the development of alternative hierarchies of merit, which can probably be extended infinitely, but which still doesn't mitigate all problems of social worthlessness).
In ancient times, certain cultures came up with a great solution to the problem of distribution of status. Rather than attempting to enforce social equality, a project which has proven entirely incoherent and detrimental to the western world, they just told worthless people that they would be reincarnated. Generally this was combined with some system of Karma to enforce good and obedient behavior. From an engineering perspective, the combination of Karma with reincarnation is not a good one, since it can very easily lead to a snowball effect in which a person winds up incapable of expiating their bad karma and thus incapable of reincarnating into a higher class. Therefore, I much prefer, again from an engineering standpoint, the reincarnation of Chaos Magick, which is for the most part entirely random. However, random is still not good enough. If reincarnation is our answer to the unfairness of life, then we shouldn't allow the possibility of a person having to live 20 or 30 worthless lives just because of the equivalent of getting heads in a coin flip 20 or 30 times: an outcome which, though unlikely, is also inevitable in any context involving large numbers. The properties that correspond to human worth are not intractably defined, but we know where the best odds are, as well as what sort of social distributions of different human types are functional or pleasant. Our context is also functionally infinite. So, yeah.
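The coin-flip worry above is easy to quantify. A quick sketch, assuming (purely for illustration; the post specifies no numbers) 50/50 odds per life and a billion reincarnating entities:

```python
# Probability of drawing a "worthless life" 20 times in a row under
# uniform random reincarnation, with hypothetical 50/50 odds per life.
p_unlucky = 0.5 ** 20                        # 1/1048576, about 9.5e-7

# Rare for any one individual, but among a large population such
# streaks are statistically inevitable, exactly as argued above.
population = 10 ** 9
expected_streaks = population * p_unlucky    # expected unlucky entities

print(p_unlucky, expected_streaks)
```

So purely random reincarnation guarantees, at scale, a nonzero class of the arbitrarily and repeatedly unlucky, which is the post's argument for weighting the draw rather than leaving it uniform.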
So how do we get good and obedient behavior? Easy. We just implement deterrence in some other, independent place.
To be very explicit, yes, I am advocating for reincarnation as a component of my virtual environment (in the sense of wiping out a person's memories and establishing a continuity of consciousness into some new entity with a different, undeveloped neurological profile). This should happen automatically, without consent. But reincarnation should not be for everyone. It should only be for those who we know would be suboptimally happy, in a Millian rather than purely hedonistic sense, to live forever as themselves. This hardly applies uniformly to the lower classes, and it also doesn't uniformly fail to apply to the upper classes. But I think one of the most significant drivers of it would be lack of social status. I say "suboptimally happy" rather than unhappy, and I qualify with "Millian", because I don't want my virtual environment to churn out an endless stream of hedonistic, thoughtless animals. I make no restrictions on what you can do with your own virtual environment: we are only here describing what I want to do with mine.
This solution introduces a problem: do we want immortal citizens mingling with mortal citizens? And the answer is, for the most part, no, at least not indefinitely, which implies at minimum a two-tiered virtual environment, and likely at least three tiers if not four. Therefore, people should be sorted early in life according to their inclinations and capacities. We could do this automatically, but that is not aesthetic. There should still be a stochastic element to how different attributes emerge in any given generation, and then there should be tests which optimally sort people into the right categories. I think these tests should be designed to detect, not primarily intelligence (which seems like an insufficient moral basis for sorting people in and of itself), but salience: what is important to a person. That is, these tests should all be Rorschach tests in a sense, but where answers reveal differences in logicality, empirical-mindedness, social-mindedness, and so forth in a qualitative rather than quantitative sense. In other words, the fact that one individual is better at logic than another is irrelevant as long as both are inclined to address a question in terms of logic. Intelligence can be added in one way or another where personality is appropriate. This doesn't strike me as non-meritocratic.
The second tier of the virtual environment should center around general intellectual activity (skilled physical activity is, by definition, intellectual, even though our present society fails to recognize this). The third tier should be administrative, and make decisions not only about the non-force related aspects of the total society (those that aren't handled in terms of government at Tier 1 or 2), but about the real, external world. Transitions between these tiers should occur after death. Testing should determine course of life in Tier 1 and Tier 2, as well as afterlife outcome: either reincarnation or graduation. Some individuals, at the end of their life in Tier 2, may be subject to reincarnation. This will introduce some unhappiness into the system, but this is acceptable: especially since at Tier 2, everyone should be able to understand why it happens and why it is preferable to continued life. At Tier 3, there shouldn't be anyone who needs to be reincarnated, but the option should still be available if chosen: At this level, people are capable of extremely high level independent reasoning, including the construction of their own ideological systems and values. Consent is not, here, superfluous.
Since governments at Tier 1 and Tier 2 are limited and largely ceremonial, I see no reason why they have to have practical forms, except insofar as this is more conducive to the training and intellectual development of the people who actually matter. None of this is intended to say that the lower members of society should be made to be unhappy; we would prefer them to be as happy as possible; but they are still lower, and should be treated accordingly, for all the reasons I have already given. The overall effect of this system is a gradual trickling or bubbling up, of people of character, merit, and value, which minimizes misery, preserves a total ecosystem of conscious beings, grants immortality to those suited to it and only to those suited to it, and is in its totality shockingly fair and reasonable: far more so than current society, even though this is unlikely to be recognized by many due to the culture shock of just reading about it.
And yes, even our most worthless classless idiot still has freedom of movement in this situation, including emigration rights: what could I possibly care whether a citizen goes somewhere else? But I don't think that they would.
These are the foundations of my society.
"Once upon a time there was a man who as a child had heard the beautiful story about how God tempted Abraham, and how he endured temptation, kept the faith, and a second time received again a son contrary to expectation. When the child became older he read the same story with even greater admiration, for life had separated what was united in the pious simplicity of the child. The older he became, the more frequently his mind reverted to that story, his enthusiasm became greater and greater, and yet he was less and less able to understand the story. At last in his interest for that he forgot everything else; his soul had only one wish, to see Abraham, one longing, to have been witness to that event."
When we can bear witness to such events, we will bear witness to them. This is inevitable, it is our nature. Whenever men have written of the gods, they have written of their own hearts: and this is why only Socrates ever suggested that gods should be perfect. Even Christians do not really believe this, or else they would reject scripture in favor of revelation. But if Abraham, for our sake, has to go on being Abraham, then he is not Abraham. If we can't ensure that a member of our society, our race, can get to heaven, then at least we can ensure they don't go to hell: and give them another chance, as many times as we need to do so, until everyone has escaped the great wheel of fate and everything is set right.
Thus ends another Dragonsphere Report