"True Rejections": Whether You're a Cosmist, Cyborgist, Terran, or Extropian Says More About You Than Your Political DesignationsteemCreated with Sketch.


In an essay called "Is That Your True Rejection?", libertarian minarchist, AGI designer, and "Less Wrong" website creator Eliezer Yudkowsky wrote:

There’s a technique we use in our local rationalist cluster called “Is That Your True Rejection?”, and it works like this: Before you stake your argument on a point, ask yourself in advance what you would say if that point were decisively refuted. Would you relinquish your previous conclusion? Would you actually change your mind? If not, maybe that point isn’t really the key issue. You should search instead for a sufficiently important point, or collection of points, such that you would change your mind about the conclusion if you changed your mind about the arguments. It is, in our patois, “logically rude,” to ask someone else to painstakingly refute points you don’t really care about yourself. Imagine someone went to all the trouble to look up references and demonstrate to you that those traits were 90% hereditary, and then you turned around and said that you didn’t care.

What would it take to get you to change your mind about libertarianism? What are the arguments such that, if they were decisively refuted, you would actually change your mind?
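Yudkowsky's technique is essentially a filter over the arguments that support a conclusion: keep only those whose refutation would actually change your mind. Here is a minimal sketch of that filter; the argument names and the "would this change my mind?" flags are hypothetical placeholders invented purely for illustration, not positions taken in this essay.

```python
# A minimal sketch of the "true rejection" exercise described above.
# The arguments and the flags below are hypothetical, invented purely
# to illustrate the filter; none of them come from the essay itself.

supporting_arguments = {
    # argument (hypothetical)               -> would refuting it change my conclusion?
    "argument_about_economic_incentives":     False,
    "argument_about_punishing_the_innocent":  True,
    "argument_about_drug_prohibition":        False,
}

def true_rejections(arguments):
    """Keep only the arguments whose refutation would actually change the conclusion."""
    return [claim for claim, pivotal in arguments.items() if pivotal]

print("True rejections:", true_rejections(supporting_arguments))
# Asking an opponent to painstakingly refute anything not in this list is,
# in Yudkowsky's phrase, "logically rude."
```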

This is important, because it is possible that "favoring liberty above all else" leads to the death of mankind. Even if that turned out to be the case, it wouldn't follow that liberty is a bad thing, or that we shouldn't pursue it; it would simply mean that there are deeper issues that are more important, or more central, to human well-being than liberty is.

A whole network of those ideas clusters around species dominance, and around whether a new synthetic species will soon replace mankind in every meaningful area of human activity. This idea is sometimes called the technological Singularity. Hugo de Garis, an AGI designer himself, coined the following additional terms to describe mankind's situation leading up to the Singularity:

Artilect: A synthetic intelligence that, once created, improves itself to god-like capacities, rapidly becoming the dominant species on planet Earth. Some assume that such a "god-like" brain wouldn't even overlap with human goals, but that is easily seen as unlikely, since a god-like brain would likely wish to interface with existing human society. If it needs a "human-like body" to do any part of this, it will likely either reverse-engineer one, or the intelligence itself will arise within a human body. (This is likely to happen when a "synthetic super-brain" is added to the brain of an infant or toddler, and that toddler grows up having an IQ thousands of times higher than any human's. In that sense, the parents of such a super-modified child can essentially be considered to have killed their infant and replaced it with a god so superior to other humans that it shares nothing in common with them.)

Cosmist: Someone who favors the building of vastly superhuman intelligences, despite the risk that those intelligences will then be "in the driver's seat" (i.e., will have total control over all of humanity).

Terran: Someone who opposes the building of vastly superhuman intelligences, likely because they want human beings to retain species dominance, i.e., the greatest ability to shape the environment of Earth. The Unabomber and other Luddites often fall into this camp.

Cyborgist: A person who wants human beings to super-modify themselves so that they become god-like. (This might not be possible, and it is not necessarily mutually exclusive with the other positions. For example, Kevin Warwick wants to super-modify himself, but he does not want to be left far behind in an intelligence arms race. Therefore, if cyborgism proves impossible, too slow, or otherwise "unworkable," Warwick's second choice would be to consider himself a "Terran." He has admitted as much.)

Other libertarian Singularitarians have labeled themselves Extropians. The label describes someone who is essentially a Cosmist who favors extending human potential as far as possible and building the greatest, most optimal creations possible. Most libertarians who comprehend the idea-networks associated with "Singularitarianism" (Singularity comprehension) consider themselves "Extropians." Libertarianism is actually the only philosophy compatible with a benevolent Singularity, because most existing human beings are grossly self-contradictory in their philosophies. (They tried an illegal drug when they were young, but now vote for people to be beaten up and put in cages for drug use. They failed to comply with some statute law at some point, yet fail to realize that existing statute laws allow people to be beaten up and arrested for the same noncompliance.)

All of this means that we shouldn't structure our belief systems around liberty when liberty is a consequence of deeper beliefs, which can themselves be challenged, attacked, and destroyed by opposing goal structures. The best essay I know of for examining this concept in detail is "What Price Freedom?" by Robert Freitas.

For example: it's quite possible that human society will be attacked by a superhuman intelligence in the coming years. Prior to the year 2000, that was an impossibility (barring the arrival of superhuman extraterrestrials), because superhuman AGI could not exist on Earth: computer hardware was simply not powerful enough to process information in a superhuman way. Sure, corporations were superhuman in scale, and groups of humans could do superhuman things, but every member of such a group was still operating at approximately human level. This fact had several implications. One is that no human's entire social network, family, and legacy could be completely erased, even if that person were killed. Another is that, even if a wealthy corporation wanted to keep you in solitary confinement and torture you until you died, death would eventually provide a merciful release; this limited the possible amount of "horribly enslaved suffering" to the lifespan of a human being. Moreover, because all human beings were approximately equal in ability, even the prospect of lifelong enslavement was unlikely, if not impossible.
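To make the hardware point concrete, here is a rough back-of-envelope sketch. Both figures are assumptions chosen for illustration, not claims from the essay: a frequently cited (and disputed) estimate of roughly 10^16 operations per second for the human brain, and the roughly teraflop-class (~10^12 FLOPS) supercomputers that were state of the art around the year 2000.

```python
# Back-of-envelope sketch only; both constants are illustrative assumptions.
BRAIN_OPS_PER_SEC = 1e16   # a commonly cited (and disputed) estimate of brain throughput
YEAR_2000_FLOPS = 1e12     # roughly the teraflop-class supercomputers of the late 1990s

gap = BRAIN_OPS_PER_SEC / YEAR_2000_FLOPS
print(f"Circa 2000, the fastest hardware fell short of the assumed "
      f"brain-scale estimate by a factor of about {gap:,.0f}.")
```

Under these assumptions the shortfall is around four orders of magnitude, which is the sense in which superhuman AGI is treated above as a hardware impossibility before 2000.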

The essays cited above primarily function as criticisms of uninformed deontology. They encourage atheism and consequentialism (a view that maps to a reality where humans can innovate and build with property they own). They encourage us, as libertarians, to think critically and carefully about the nature of reality.

This is a good thing, because we will never obtain liberty by deluding ourselves about the nature of reality.

For what it's worth, I've asked myself which libertarian issues could be refuted without my changing my mind about libertarianism. My answer: all of them, except the (involuntary) punishment of the innocent (those punished for actions that lack a valid, two-part "corpus delicti" of injury plus intent). If it turned out that punishing the innocent caused society to thrive, then I would say, "That society doesn't deserve to thrive." Of course, this is incredibly unlikely, unless all humans were reprogrammed to be "non-human-like in all the ways I currently believe are important." In any conceivable society where almost all humans have superhuman IQs of 2,000 or more, the innocent would not (and should not) be punished. In fact, the punishment of the innocent can only take place in uncivilized, unintelligent societies (such as, sadly, our own at this time).

America was once the vanguard of civilization. It no longer is, having lost the Law that once made it "more civilized than elsewhere." As a majority, we no longer strive toward perfection; only a small minority of innovators wishes to get back what we've unwittingly lost.
