Aubrey de Grey Discussion: Epistemology and Arbitration

This is an email discussion with Aubrey de Grey about epistemology and cryonics. I edited it to delete a couple of mild personal remarks. This is my fifth email in the discussion. Click here to read the rest of the discussion. Posted with permission. The yellow block quotes are from Aubrey de Grey, and the regular text is mine.
I’ve been completely unable to get my head around what [David Deutsch] says about explanations, and you’ve reawakened my confusion.

Essentially, I think I agree that there are no probabilities in the past, which I think is your epistemological point, but I don’t see how that matters in practice - in other words, how we can go wrong by treating levels of confidence as if they were probabilities.
That thing about the past isn't my point. My point is there are probabilities of events (in physics), but there are no probabilities that ideas are true (in epistemology). E.g. there is a probability a dice roll comes up 4, but there isn't a probability that the Many-Worlds Interpretation in physics is true – we either do or don't live in a multiverse.

So a reference to "probability" in epistemology is actually a metaphor for something else, such as my confidence level that the Many-Worlds Interpretation is true. This kind of metaphorical communication has caused confusion, but isn't a fundamental problem. It can be understood.

The bigger problem is that using confidence levels is also a mistake.

Below I write brief replies, then discuss epistemology fundamentals after.
The ultimate purpose of any analysis of this kind - whether phrased in terms of probabilities, parsimony of hypotheses, quality of explanations, whatever - is surely to determine what one should actually do in the face of incomplete information.
I agree with decision making as a goal, including decisions about mental actions (e.g. deciding what to think about a topic).
So, when you say this:
I'm guessing you may have in mind an explanation something like, "We don't know how much brain damage is too much, and can model this uncertainty with odds." But someone could say the same thing to defend straight freezing or coffins, as methods for later revival, so that can't be a good argument by itself.
I don’t get it. The amount of damage is less for vitrification than for freezing and less for freezing than for burial. So, the prospect of revival by a given method is less plausible (why not less “probable”?) for burial than freezing than vitrification.
I explain more about my intended point here at footnote [1] below.

I agree that changing "probable" to "plausible" doesn't change much. My position is a different epistemology, not a terminology adjustment.
But, when we look at a specific case (e.g. reviving a vitrified person by melting, or a frozen person by uploading), we need to look at all the evidence that we may think bears on it - the damage caused by fracturing, for example, and on the other side the lack of symptoms exhibited by people whose brain has been electrically inactive for over an hour due to low temperature. Since we know we’re working in the context of incomplete information, and since we need to make a decision, our only recourse is to an evaluation of the quality of the explanations (as you would say it - I rather prefer parsimony of hypotheses but I think that’s pretty nearly the same thing).
I actually wouldn't say that.

My approach is to evaluate explanations (or more generally ideas) as non-refuted or refuted. One or the other. This is a boolean (two-valued) evaluation, not a quantity on a continuum. Examples of continuums would be amount of quality, amount of parsimony, confidence level, or probability.

These boolean evaluations, while absolute (or "black and white") in one sense, are tentative and open to revision.

In short: either there is (currently known) a criticism of an idea, or there isn't. This categorizes ideas as refuted or not.

Criticisms are explanations of flaws ideas have – explanations of why the idea is wrong and not true. (The truth is flawless.)

Issues like confidence level aren't relevant. If you can't refute (explain a problem with) either of two conflicting ideas, why would you be more confident about one than the other?

When dealing with a problem, the goal is to get exactly one non-refuted idea about what to do. Then it's clear how to act. Act on the idea with no known flaws (criticisms) or alternatives.

Since this idea has no rivals, amount of confidence in it is irrelevant. There's nothing else to act on.

There are complications. One is that criticisms can be criticized, and ideas are only refuted by criticisms which are, themselves, non-refuted. Another is how to deal with the cases of having multiple or zero non-refuted ideas. Another is that parsimony or anything else is relevant again if you figure out how to use it in a criticism in order to refute something in a boolean way.
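The boolean evaluation described above can be sketched as a toy program. This is my own illustration, not the author's formalism: an idea is refuted if and only if at least one of its criticisms is itself non-refuted, and criticisms are themselves ideas open to criticism.

```python
# Toy sketch of boolean (refuted / non-refuted) idea evaluation.
# An idea is refuted iff it has at least one non-refuted criticism.

class Idea:
    def __init__(self, name):
        self.name = name
        self.criticisms = []  # criticisms are themselves Ideas

    def is_refuted(self):
        # Boolean, not a score: refuted or not, nothing in between.
        return any(not c.is_refuted() for c in self.criticisms)

# Hypothetical example ideas:
theory = Idea("vitrification preserves enough brain structure")
objection = Idea("fracturing destroys required structure")
rebuttal = Idea("that criticism overlooks repairable damage")

theory.criticisms.append(objection)
objection.criticisms.append(rebuttal)

# The objection is itself refuted by the rebuttal, so the theory
# currently stands as non-refuted.
print(theory.is_refuted())  # False
```

Note how the evaluation is tentative: adding a new non-refuted criticism to `theory` would flip its status, matching the point that these boolean judgments are open to revision.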
And the thing is, you haven’t proposed a way to rank that quality precisely, and I don’t think there is one. I think it is fine to assign probabilities, because that’s a reflection of our humility as regards the fidelity with which we can rank one explanation as better than another.
I think there's no way to rank this, precisely or non-precisely. Non-refuted or refuted is not a ranking system.

I don't think rankings work in epistemology. The kind of rankings you're talking about would use a continuum, not a boolean approach.

I provide an explanation about rankings at footnote [2], with cryonics examples.

The fundamental problem in epistemology is: ideas conflict with each other. How should people resolve these conflicts? How should people differentiate and choose between ideas?

One answer would be: whenever two ideas conflict, at least one of them is false. So resolve conflicts by rejecting all false ideas. But humans are fallible and have incomplete information. We don't have direct access to the truth. So we can't solve epistemology this way.

The standard answer today, accepted by approximately everyone, is so popular it doesn't even have a name. People think of it as epistemology, rather than as a particular school of epistemology. It involves things like confidence levels, parsimony, or other ranking on continuums. I call it "justificationism", because Popper did, and because of the mistaken but widespread idea that "knowledge is justified, true belief".

Non-justificationist epistemology involves differentiating ideas with criticism (a type of explanation) and choosing non-refuted ideas over refuted ideas. Conflicts are resolved by creating new ideas which are win/win from the perspectives of all sides in the conflict.

Standard "Justificationism" Epistemology

This approach involves choosing some criteria for amount of goodness (on a continuum) of ideas, then resolving conflicts by favoring ideas with more goodness (a.k.a. justification).

Example criteria of idea goodness: reasonableness, logicalness, how much sense an idea makes, Occam's Razor, parsimony, amount and quality of supporting evidence, amount and quality of supporting arguments, amount and quality of experts who agree, degree of adherence to scientific method, how well it fits with the Bible.

The better an idea does on whichever criteria a particular person accepts, the higher goodness he scores (a.k.a. ranks) that idea as having. If he's a fallibilist, this scoring is his best but fallible judgment using what he knows today; it can be revised in the future.

There are also infallibilists who think some arbitrary quantity of goodness (justification) irreversibly changes an idea from non-good (non-justified) to good (justified). In other words, once you prove something, it's proven, the end. Then they say it's impossible for it to ever be refuted. Then when it's refuted, they make excuses about how it was never really proven in the first place, but their other ideas still really are proven. I won't talk about infallibilism further.

This goodness scoring is discussed in many ways like: justification, probability, confidence, plausibility, status, authority, support, verification, confirmation, proof, rationality and weight of the evidence.

Individual justificationists vary in which of these they see as good. Some reject the words "authority" or even "justification".

So both the criteria of goodness, and what they think goodness is, vary (which is why I use the very generic term "goodness"). And justificationists can be fallibilists or infallibilists. They can also be inductivists or not, and empiricists or not. For example, they could think inductive support should raise our opinion of how good (justified) ideas are, but alternatively they could think induction is a myth and only other methods work.

So what's the same about all justificationists? What are the common points?

Justificationists, in some way, try to score how good ideas are. That is their method of differentiating ideas and choosing between ideas.

One more variation: justificationists don't all use numerical scores. Some prefer to say e.g. "pretty confident" instead of "60% confident", perhaps because they think 60% is an arbitrary number. If someone thought the 60% was literal and exact, that'd be a mistake. But if it's understood to be approximate, then using an approximate number makes no fundamental difference over an approximate phrase. Using a number can be a different way to communicate "pretty confident".

Popper refuted justificationism. This has been mostly misunderstood or ignored. And even most Popperians don't understand it very well. It's a big topic. I'll briefly indicate why justificationism is a mistake, and can explain more if you ask.

Justificationism is a mistake because it fundamentally does not solve the epistemology problem of conflicts between ideas. If two ideas conflict, and one is assigned a higher score, they still conflict.

Other Justificationism Problems

Justificationism is anti-critical because instead of answering a criticism, a justificationist can too easily say, "OK, good point. I've lowered my goodness (justification) score for this idea. But it had a lead. It's still winning." (People actually say it less clearly.) In this way, many criticisms aren't taken seriously enough. A justificationist may have no counter-argument, but still not change his mind.

Justificationism is anti-explanatory, because scores aren't explanations.

Another issue is combining scores from multiple factors (such as parsimony and scientific evidence, or evidence from two different kinds of experiments) to reach a single final overall score. This doesn't work. A lot about why it doesn't work is explained here: http://www.newyorker.com/magazine/2011/02/14/the-o...

One might try using only one criterion to avoid combining scores. But that's too limited. And then you have to ignore criticism. For example, if the one single criterion is parsimony, the score can't be changed just because someone points out a logical contradiction, since that isn't a parsimony issue. This single criterion approach isn't popular.

There are more problems; I just wanted to indicate a couple.

Popper Misunderstandings

A common misunderstanding is that Popper was proposing new criteria for goodness (justification) such as (amount of) testability, severity of tests passed, how well an idea stands up to criticism, (amount of) corroboration, and (amount of) explanatory power. This is then dismissed as not making a big difference over the older criteria. DD's (David Deutsch's) "hard to vary" can also be misinterpreted as a criterion of goodness (justification).

That's not what Popper was proposing.

Another misunderstanding is that Popper proposed replacing positive justifying criteria with a negative approach. In this view, instead of figuring out which ideas are good by justifying, we figure out which ideas are bad by criticizing (anti-justifying).

This would not be a breakthrough. Some justificationists already viewed justification scores as going both up and down. There can be criteria for badness in addition to goodness. And it makes more sense to have both types of criteria than to choose one exclusively.

This wasn't Popper's point either.

Non-Justificationist Epistemology

This is very hard to explain.

Fundamentally, the way to (re)solve a conflict between ideas is to explain a (win/win) (re)solution.

This may sound vacuous or trivial. But it isn't what justificationism tries to do.

It's similar to BoI's point that what you need to solve a problem is knowledge of how to solve it.

How are (re)solutions found? There's many ways to approach this which look very different but end up equivalent. I'm going to focus on an arbitration model.

Think of yourself as the arbiter, and the conflicting ideas as the different sides in the arbitration. Your goal is not to pick a winner. That's what justificationism does. Your goal as arbiter, instead, is to resolve the conflict – help the sides figure out a win/win outcome.

This arbitration can involve any number of sides. Let's focus on two for simplicity.

Both sides in the conflict want some things. Try to figure out a new idea so that they both get what they want. E.g. take one side's idea and modify it according to some concerns of the other side. If you can do this so everyone is happy, you have a non-refuted idea and you're done.

This can be hard. But there are techniques which make solutions always possible using bounded resources.

DD would call this arbitration "common preference finding", and has written a lot about it in the context of his Taking Children Seriously. He's long said and argued e.g. that "common preferences are always possible". A common preference is an outcome which all sides prefer to their initial preference – wholeheartedly with no regrets, downsides, compromises or sacrifices. It's strictly better than alternatives, not better on balance.

In BoI, DD writes about problems being soluble – and what he means by solutions is strictly win/win solutions which satisfy all sides in this sort of arbitration.

An arbitration tool is new ideas (which are usually small modifications of previous ideas). For example, take one side's idea but modify a few parts to no longer conflict with what the other side wants.

As long as every side wants good things, there is a solution like this to be found. Good things don't inherently conflict.

Sometimes sides want bad things. This can either be an honest mistake, or they can be evil or irrational.

If it's an honest mistake, the solution is criticism. Point out why it seems good but is actually bad. Point out how they misunderstood the implications and it won't work as intended. Or point out a contradiction between it and something good they value. Or point out an internal contradiction. Analyze it in pieces and explain why some parts are bad, but how the legitimate good parts can be saved. When people make honest mistakes, and the mistake is pointed out, they can change their mind (usually only partially, in cases where only part of what they were saying was mistaken).

How can a side be satisfied by a criticism/refutation? Why would a side want to change its mind? Because of explanations. A good criticism points out a mistake of some kind and explains what's bad about it. So the side can be like, "Oh, I understand why that's bad now, I don't want that anymore." Good arguments offer something better and make it accessible to the other side, so they can see it's (strictly) better and change their mind with zero regrets (conflict actually resolved).

If there is an evil or irrational mistake, things can go wrong. Short answer: you can't arbitrate for sides which don't want solutions. You can't resolve conflicts with people who want conflict. Rational epistemology doesn't work for people/sides/ideas who don't want to think rationally. But one must be very careful to avoid declaring one's opponents irrational and becoming an authoritarian. This is a big issue, but I won't discuss it here.

Arbitration ends when there's exactly one win/win idea which all sides prefer over any other options. There are then no (relevant to the issue) conflicts of ideas. (DD would say no "active" conflicts). Put another way, there's one non-refuted idea.

Arbitration is a creative process. It involves things like brainstorming new ideas and criticizing mistakes. Creative processes are unpredictable. A solution could take a while. While a solution is possible, what if you don't think of it?

Reasonable sides in the arbitration can understand resource limits and lower expectations when arbitration resources (like time and creative energy) run low. They can prefer this, because it's the objectively best thing to do. No reasonable party to an arbitration wants it to take forever or past some deadline (like if you're deciding what to do on Friday, you have to decide by Friday).

When the sides in a conflict are different people, the basic answer is the more arbitration gets stuck, the less they should try to interact. If you can't figure out how to interact for mutual benefit, go your separate ways and leave each other alone.

With a conflict between ideas in one person, it's trickier because they can't disengage. One basic fact is it's a mistake to prefer anything that would prevent a solution (within available resources) – kind of like wanting the impossible. The full details of always succeeding in these arbitrations, within resource limits, are a big topic that I won't include here.

How do justificationists handle arbitrations? They hear each side and add and subtract points. They tally up the final scores and then declare a winner. The primary reason the loser gets for losing is "because you scored fewer points in the discussion". The loser is unsatisfied, still disagrees, and there's still a conflict, so the arbitration failed.

Here's a different way to look at it. Each side in arbitration tries to explain why its proposal is ideal. If it can persuade the other side, the conflict is resolved, we're done. If it can't, the rational approach is to treat this failure to persuade as "huh, I guess I need better ideas/explanations" not as "I have the truth, but the other guy just won't listen!"

In other words, if either side has enough knowledge to resolve the conflict, then the conflict can be resolved with that knowledge. If neither side has that, then both sides should recognize their ideas aren't good enough. Both sides are refuted and a new idea is needed. (And while brilliant new ideas to solve things are hard to come by, ideas meeting lowered expectations related to resource limits are easier to create. And it gets easier in proportion to how limited resources are, basically because it's a mistake to want the impossible.)

Justificationism sees this differently. It will try to pick a winner from the existing sides, even when (as I see it) they aren't good enough. As I see it, if the existing sides don't already offer a solution (and only a fully win/win outcome is a solution), then the only possible way to get a solution is to create a new idea. And if any side doesn't like it (setting aside evil, irrationality, not wanting a solution, etc), then it isn't a solution, and no amount of justifying how great it is could change that.


To relate this back to some of the original topics:

The arbitration model doesn't involve confidence levels or probabilities. Ideas have boolean status as either win/win solutions (non-refuted), or not (refuted), rather than a score or rank on a continuum. Solutions are explanations – they explain what the solution is, how it solves the problem(s), what mistakes are in all attempted criticisms of this solution, why it's a mistake to want anything (relevant) that this solution doesn't offer, why the things the solution does offer should be wanted, and so on. Explanation is what makes everything work and be appealing and allows conflicts to be resolved.

Final Comments

I don't expect you to understand or agree with all of this. Perhaps not much, I don't know. To discuss hard issues well requires a lot of back-and-forth to clear up misunderstandings, answer questions and objections, etc. Understanding has to be created iteratively (Popper would say "gradually" or "piecemeal").

I am open to discussing these topics. I am open to considering that I may be wrong. I wouldn't want a discussion to assume a conclusion from the start. I tried to explain enough to give some initial indication of what my epistemology is like, and some perspective about where I'm coming from.

Footnotes

[1]

My point was, whatever your method for preserving bodies, you could assign it some odds, arbitrarily. You could say cremation causes less damage than shooting bodies into the sun, so it has better revival odds. And then pick a small number for a probability. You need to have an argument regarding vitrification that couldn't be said by someone arguing for cremation, burial or freezing.

There should be something to clearly, qualitatively differentiate cryonics from alternatives like cremation. It should differentiate vitrification not as better than cremation to some vague degree, but as actually on a different side of a reasonably explained might-work/doesn't-work line.

Here's an example of how I might argue for cryonics using scientific research.

Come up with a measure of brain damage (hard) which can be applied to both living and dead people. Come up with a measure of functionality or intelligence for living people with brain damage (hard). Find living brain-damaged people and measure them. Try to work out a bound, e.g. people with X or less brain damage (according to this measure of damage) can still think OK, remember who they are, etc.

Vitrify some brains or substitutes and measure damage after a suitable time period. Compare the damage to X.

Measure damage numbers for freezing, burial and cremation too, for comparison. Show how those methods cause more than X damage, but vitrification causes less than X damage. Or maybe the empirical results come out a different way.
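The comparison described in this example method can be sketched in a few lines. All numbers here are invented for illustration; the hard scientific work is creating the damage measure and establishing the threshold X in the first place.

```python
# Hypothetical sketch: compare each preservation method's measured damage
# against a threshold X, to place it on one side of a qualitative
# might-work / doesn't-work line. All values are made up.

X = 40.0  # hypothetical bound: at or below X, people can still think OK

# Made-up damage scores on the same scale as X.
damage = {
    "vitrification": 25.0,
    "freezing": 55.0,
    "burial": 90.0,
    "cremation": 99.0,
}

might_work = {method: d <= X for method, d in damage.items()}

for method, ok in might_work.items():
    verdict = "might work (below threshold)" if ok else "doesn't work (above threshold)"
    print(f"{method}: {verdict}")
```

The point of the sketch is that the output is qualitative (which side of the line each method falls on), not a ranking by vague degrees, and the empirical numbers could come out either way.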

Be aware that when doing all this, I'd be using many explanations as unconscious assumptions, background knowledge, explicit premises, and so on. Every part of this should be exposed to criticism, and for each criticism I'd write an explanation addressing it or modify my view.

Then someone would be in a position to make a non-arbitrary claim favorable to cryonics.

This is not the only acceptable method, it's one example. If you could come up with some other method to get some useful answers, that's fine. You can try whatever method you want, and the only judge is criticism.

But something I object to is assigning probabilities, or any kind of evaluations, without a clear method and explanation of it. (E.g. where does your 10% for cryo come from? Where does anyone's positive evaluation come from?)

I don't think it's reasonable for Alcor or CI to ask people to pay 5-6 figures without first having a good idea about how to judge today's cryonics (like my example method). And from a decision making perspective, I expect people asking for lots of money – and saying they can perform a long term service for me in a reliable way – should have some basic competence and reasonable explanations about their stuff. But instead they put this on their website:

http://www.alcor.org/Library/html/CaseForWholeBody...

It offers a variation on Pascal's Wager to argue for full-body cryo over neuro (basically, get full body just in case it's necessary for cryo to work). No comment is made on whether we should also believe in God due to Pascal's Wager. And it states:
Now, what if we would relax our assumptions a little and allow for some degree of ischemia or brain damage during cryopreservation? It strikes us that this further strengthens the case for whole body cryopreservation because the rest of the body could be used to infer information about the non-damaged state of the brain, an option not available to neuropatients.
No. I'm guessing you also disagree with this quote, so I won't argue unless you ask.

There are some complications like maybe Alcor is confused but today's cryonics works anyway. I won't go into that now.


[2]

We can, whenever we want, create ranking systems which we think will be useful for some purpose (somewhat like defining new units of measurement, or defining new categories to categorize stuff with).

The judge of these inventions is criticism. E.g. someone might criticize a ranking system by pointing out why it isn't effective for its intended purpose.

Concretely, we could rank body preservation methods by the amount of brain damage after 10 years. Then, in that system, we'd rank vitrification > freezing > burial > cremation.

Whether this is useful depends on context (which Popper calls the problem situation). What problem(s) are we trying to solve? Do we have a non-refuted idea for how to use the ranking in any solutions?

Our example ranking system has some relevance to people who consider brain damage important, but not to people who believe the goal should be to preserve the soul by using the most holy methods. They'd want to rank by holiness, and might rank vitrification last.
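The point above can be illustrated with a toy sketch (all numbers invented): a ranking system is just a sort keyed by some criterion, and people with different purposes pick different criteria, yielding different rankings.

```python
# Toy sketch: different criteria produce different rankings.
# All scores are hypothetical, invented for illustration.

# Hypothetical brain damage after 10 years (lower = less damage).
damage = {"vitrification": 25, "freezing": 55, "burial": 90, "cremation": 99}

# A hypothetical "holiness" score someone with different goals might use.
holiness = {"burial": 10, "cremation": 7, "freezing": 2, "vitrification": 1}

rank_by_damage = sorted(damage, key=damage.get)
rank_by_holiness = sorted(holiness, key=holiness.get, reverse=True)

print(rank_by_damage)    # ['vitrification', 'freezing', 'burial', 'cremation']
print(rank_by_holiness)  # ['burial', 'cremation', 'freezing', 'vitrification']
```

Neither sort is "the" ranking; which one matters depends on an explanation of what problem the ranking is supposed to help solve.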

This is important because the rankings only matter in the context of some explanations of how they matter and for what (which must deal with criticism).

So ranking is secondary to explanation. It can't come first. This makes ranking unsuited for dealing with epistemology issues such as how to decide which explanations to accept in the first place.

In summary, we can make something up, argue why it's effective for a purpose, and if our argument is successful then we can use it for that purpose. This works with rankings and many other things.

But this is different than epistemology rankings, like trying to rank how good ideas are, how probable they are, or how high-quality their explanations are.

Or put another way: to rank those things, you would have to specify how that ranking system worked, and explain why the results are useful for what. That's been tried a lot. I don't think those attempts have succeeded, or can succeed.

Elliot Temple | Permalink | Comments (0)

Discussion with Aubrey de Grey About Cryonics and Epistemology

I am discussing cryonics and epistemology with Aubrey de Grey. Although I like cryonics in principle, I don't think current technology and institutions are good yet (he does). After the start, the discussion focuses more on epistemology than cryonics.

Aubrey de Grey is the driving force behind SENS – Strategies for Engineered Negligible Senescence. What that means is medicine to deal with the problems caused by aging. If you ever donate money to any kind of charity, you should look at SENS and seriously consider redirecting all your donations there.

For the details, besides their website you should look at Aubrey de Grey's book Ending Aging. I read it and think it's a good book with good arguments (something I don't say lightly, as you can see from the critical scrutiny I've subjected Ann Coulter and others to).

Beginning of discussion on Yahoo Groups website:

Me: https://groups.yahoo.com/neo/groups/fallible-ideas...
Aubrey de Grey : https://groups.yahoo.com/neo/groups/fallible-ideas...
Me: https://groups.yahoo.com/neo/groups/fallible-ideas...
Aubrey de Grey : https://groups.yahoo.com/neo/groups/fallible-ideas...
Me: https://groups.yahoo.com/neo/groups/fallible-ideas...
Me (fully quoting Aubrey de Grey's third reply): https://groups.yahoo.com/neo/groups/fallible-ideas...

Continued in blog posts with nice formatting:

Epistemology and Arbitration

(I will be updating this post with more parts in the future)

Like this? Want to read more philosophical discussions? Join the Fallible Ideas email list.


Endorsements vs. Integrity

In a recent Center for Industrial Progress newsletter, Alex Epstein bragged about the prestigious people he'd gotten to sanction his upcoming book The Moral Case for Fossil Fuels.

Alex writes that they "endorsed" the book. I think that's accurate. They're siding with him. You understand.

One endorsement reads:
"Alex Epstein has written an eloquent and powerful argument for using fossil fuels on moral grounds alone. A remarkable book.”

--Matt Ridley, author of The Rational Optimist
Today I saw an article by Ridley about global warming. Note this is the same person from the book endorsement. His article takes roughly the same side as Epstein: it disagrees with the "settled science" of the "climate consensus" (scare quotes, not article quotes).

The article was OK, but at the end something stood out to me:
... concentrate on more pressing global problems like war, terror, disease, poverty, habitat loss and the 1.3 billion people with no electricity.
"[H]abitat loss" is not a pressing global problem in the same company as war, disease, etc...

This is not just my view. It's Epstein's view. Epstein disagrees with environmentalist views like this. He values people over animals. He's really strongly at odds with this kind of thinking.

Ridley endorsed Epstein's book, but actually disagrees in a huge way with Epstein's worldview.

What good are endorsements like that? Shouldn't Epstein reject endorsement by his philosophical opponents? Agreeing on a few particular conclusions about fossil fuels isn't enough. Epstein's book is fairly philosophical, and he says he cares about principles and philosophical reasoning (in line with his Objectivist philosophy). He shouldn't gloss over major philosophical differences to get dishonest but prestigious book promotion.


Fountainhead Comments

Rereading The Fountainhead by Ayn Rand. Some notes:
He remembered his last private conversation with her – in the cab on their way from Toohey’s meeting. He remembered the indifferent calm of her insults to him – the utter contempt of insults delivered without anger.
“Shut up, Alvah, before I slap your face,” said Wynand without raising his voice.
“Pipe down, Sweetie-pie,” said Toohey without resentment.
There's a theme here involving negative comments without negative emotions.
It was not sarcasm; he wished it were; sarcasm would have granted him a personal recognition – the desire to hurt him.
Negative comments due to negative emotions are easier to take. "Oh, you hate me, so you're being mean." But when it's impersonal, it's harder to dismiss the negative comments. If there's no motive besides the person thinks the negative comments are true, it's hard to ignore them without considering whether they're true or false (with objective reasons).

The position on sarcasm is notable too. I independently came to the same position. But few people are aware of this. Sarcasm is generally seen as more harmless than it is.
There’s an interesting question there. What is kinder-to believe the best of people and burden them with a nobility beyond their endurance-or to see them as they are, and accept it because it makes them comfortable? Kindness being more important than justice, of course.”
(This is a villain speaking, which is why the last sentence states a bad position.)

This issue is really important. You might expect people to like material such as The Beginning of Infinity. That book explains that problems can be solved, and people can make unbounded, unlimited progress. That's good, right? A better life is possible. The future can be awesome.

But people don't flock to ideas like these. It's not that they have counter-arguments. They can't refute it. They just don't actually like or want it. It burdens them with a nobility they don't want to deal with trying to live up to. It's easier if a bad life is all that's possible to man, so then they can live badly without feeling guilty.

With people like this, what could get through to them and help them become rational thinkers? What would get their interest so they'd (happily) try to live better?
“The worst thing about dishonest people is what they think of as honesty,” he said. “I know a woman who’s never held to one conviction for three days running, but when I told her she had no integrity, she got very tight-lipped and said her idea of integrity wasn’t mine; it seems she’d never stolen any money. Well, she’s one that’s in no danger from me whatever. I don’t hate her. I hate the impossible conception you love so passionately, Dominique.”
People lie. All the time. Especially to themselves.

And, what Rand's talking about: they lie to themselves about what lying is, so that they can believe they aren't liars!

Elliot Temple | Permalink | Comments (0)

Success Isn't the Same Thing as Quality

"If you build it (so it's good), they will come" is a brag of popular and successful people. It's saying "people came to my thing because it was good, and didn't go to my rival's thing because it was bad". It's saying whatever the status quo is, that's how things should be. It's saying whatever is popular, it's popular because it's good.

Notice the passivity, you build it, they come automatically. You don't make them come, it just happens by itself. (And if it doesn't? Way more things are built than gain audiences. Well, then you didn't deserve success. Because the meaning here is you build it, you sit around passively to be judged, and then whoever has success deserves it and is good.)

Saying, "Don't worry about getting the word out, just make it really good and success will follow" is the same message. It's defending the status quo, and saying everyone's place in the world is where they deserve to be. It's the elite asserting the rationality of the world that made them the elite.

People also mix up making something people want and making something good. Lots of people have bad preferences. Pleasing people makes it easier to get attention/customers/fans/etc, but it's different than making something good. Again the issue is a claim about how great and rational the status quo is. There are lots of people who devote their lives to pleasing others, and want that to have been good.

The idea that quality ensures success is wrong. And it flatters successful people.

Elliot Temple | Permalink | Comments (0)

Gaming Propaganda

Live comments for Die Noobs gaming "documentary" that i saw part of on Twitch.

Here's some info about it.

http://blog.twitch.tv/2014/07/gaming-documentary-d...

I don't have an ideal link, but I'm guessing it'll be easy to find on YouTube or Google in a few days.

Comments:

lol twitch is showing propaganda video about how playing video games is no different than what kids always did playing football

some guy just got a clip saying “e-sports. take away the e. it’s sports.”

now ppl saying league is just a different type of “athleticism”

and it’s just as much a “team game” as team sports

now someone is saying watching competitive games is just like watching boxing and MMA

now they have a guy saying what he loves about e-sports is the HUMAN STORYLINES

and the EMOTION

it’s soooooo blatant social manipulation

to me it reads like super blatant begging for social legitimacy

but i think it’s got just barely enough subtlety for ppl not to see it that way

now it’s saying gaming = family bonding

oh look dancing and music

and glorifying IRL fighting

and injuries, violence

and ppl trying to be cool and memey

lol “are they bad enough dudes to become progamers?”

see instead of saying “progamers are badass dudes” they imply it

now ppl are fooled

it’s an assertion disguised as a question about something else

i pasted this to HS players, wonder if anyone will say anything

now they’re at a gym

literal gym

presenting this as progaming training

“i want to dive headfirst into this thing and murder your whole facebook network”

now after the gym warmup they are playing CS

or some FPS, might not be CS actually, idk

oh i think they said it’s CoD

most of the shots of them “practicing”

are of the ppl not the screen

and them talking

and now they are practicing trash talk at a bar

walking in with a girl

some people might think the time we spent gaming was wasted. but it’s like Tom said, we didn’t waste anything. – pure unargued assertion

explaining he’s competitive “we wanna kill everybody”

“obviously we’re not gonna throw a fist at other bands, but we wanna destroy everybody”

i find it funny he had to qualify it like "obviously i don't mean what i keep saying"

it's like funnier cuz it WAS obvious. all the violence is metaphorical. we all know they aren't actually gonna go murder someone. but he still was like scared and had to disown violence in the midst of all the glorifying of it

there’s contradictions there

it breaks the mood pretty badly. like a rockstar promoting "sex, drugs and rock and roll" and then in the middle he's like "but don't actually do drugs or have sex before you're married"

lol now they are saying rather literally that progamers don’t live in basements and their friends are jealous

it’s such propaganda

after more comparisons with sports, here are two live comments from other people on twitch:
Kookoomaloo: starcraft = golf
Idely: so starcraft is pianogolf?
lol wtf these ppl are interrupting their APM to chug beer while playing FFA starcraft?

"i was impressed, but i still wasn't impressed at the same time"

now they got a literal MMA fighter saying how he played Atari, Nintendo, and arcades

the MMA guy is asked about ever fighting a game version of himself, and answers that he doesn't because he's superstitious

there was some sexism, insulting wrestling moves for being "pretty". the whole video adheres to social memes about guys should be strong/violent/macho (pretty is for girls). and the basic point is to repeatedly claim gamer guys are high status

now there's material about how gaming impresses someone's mother. they're really trying to go through whatever people care about and say gaming is good at that.

Elliot Temple | Permalink | Comments (0)

Unconstrained by Reality

https://twitter.com/TimJGraham/status/504042666636...
Sarah Silverman on NBC says her purse contents are "fun and pot and gum." Missing Noel Shepherd.
This is a common thing celebrities and other popular people do. This answer is not meant to be taken literally. Silverman is kind of joking, but also kind of serious – there is an actual meaning here. What purpose do these non-literal statements serve?

If you aren't speaking literally, you can speak in terms of 100% pure unfiltered social vibrations, unconstrained by reality (or drug laws).

By speaking non-literally, she can say exactly what will be most popular, without worrying about whether it's true.

She's communicating that she knows what's popular, and approves, and is willing to play the part of complying with social expectations to please others.

"Fun" is something pretty much everyone approves of. "Pot" is popular with her fanbase. And "gum" is a silly answer, meaning she's not too serious, not too worried about important things. It means she won't disapprove of others who spend their time chewing gum or otherwise having unimportant lives.

What's actually in her purse? No one cares.

Elliot Temple | Permalink | Comments (0)

They had never seen his buildings; they did not know whether his buildings were good or worthless; they knew only that they had never heard of these buildings

The Fountainhead by Ayn Rand:
When [Roark] went up to his office, the elevator operators looked at him in a queer, lazy, curious sort of way; when he spoke, they answered, not insolently, but in an indifferent drawl that seemed to say it would become insolent in a moment. They did not know what he was doing or why; they knew only that he was a man to whom no clients ever came. He attended, because Austen Heller asked him to attend, the few parties Heller gave occasionally; he was asked by guests: “Oh, you’re an architect? You’ll forgive me, I haven’t kept up with architecture—what have you built?” When he answered, he heard them say: “Oh, yes, indeed,” and he saw the conscious politeness of their manner tell him that he was an architect by presumption. They had never seen his buildings; they did not know whether his buildings were good or worthless; they knew only that they had never heard of these buildings.

It was a war in which he was invited to fight nothing, yet he was pushed forward to fight, he had to fight, he had no choice—and no adversary.
This is how most people treat my philosophy.

Elliot Temple | Permalink | Comments (0)

Rule Breaking

people routinely break rules on purpose in games. for example basketball. people foul to stop the clock near the end.

in hockey you can get thrown out of the rest of the game for fighting. but people still fight on purpose sometimes.

if you have any way for people to break rules on purpose and get any advantage, they will.

why do they make rules with weak enough penalties that any good can come of rule breaking?

i think a big part of the issue is the fans want to see a good, competitive game. if you penalize a team a ton for breaking a rule, resulting in a very lopsided game and making the ending result no longer in doubt for the rest of the game, then the fans won't like that. it'll be boring to watch.

so there's this tension. on the one hand, they want to stop people from doing certain things. but on the other hand, no matter what anyone does, they don't really want to mess up the game. they want to play on and have it still be exciting, not have one team (or individual player) too handicapped to compete.

plus, the bigger the penalties are, the harder it gets to call a penalty. the more effect calling penalties has, the more referees will have to let small stuff slide. so then players figure out where the line is. and now you have players trying to get as close to the major penalty line as they can without crossing it, and if they slip up just slightly then they get a BIG penalty for doing something slightly on the wrong side of the line. tiny change in behavior, big change in consequences. that's a really bad system.

and it's much worse given the human error factor – people are trying to play just up to the limit of what the ref won't call a penalty on, but they have to account for the ref's judgment on each play being randomly wrong by a significant factor in either direction. so it's not just pure skill to go near the line without crossing it, it's also luck. you have to figure out the normal range for ref judgments (e.g. the ref judges stuff as between 20% more and 20% less severe than it actually is) and then account for that, but then you can still be screwed by a random outlier judgment. (i'm thinking of the ref's judgment as basically what you actually did, modified by a random factor that's on a bell curve.)
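that bell-curve idea can be sketched as a quick simulation. everything here is an illustrative assumption, not from any real sport: the penalty line, the 20% noise, and the severity values are all made up to show the shape of the problem – players who stay under the line still get called sometimes, purely by luck, and the closer they play to the line the more often it happens.

```python
import random

random.seed(0)

PENALTY_LINE = 1.0   # severity at which the ref calls a penalty (assumed)
REF_NOISE = 0.20     # ref misjudges severity by roughly +/-20% (assumed)

def penalty_rate(actual_severity, trials=100_000):
    """Fraction of plays called as penalties when the ref's perceived
    severity is the actual severity times a bell-curve random factor."""
    calls = 0
    for _ in range(trials):
        perceived = actual_severity * random.gauss(1.0, REF_NOISE)
        if perceived > PENALTY_LINE:
            calls += 1
    return calls / trials

# playing closer to the line means more unlucky calls, even though
# every one of these plays is actually on the legal side of the line
for severity in (0.7, 0.9, 0.99):
    print(f"actual severity {severity}: called {penalty_rate(severity):.1%} of the time")
```

the point of the sketch is the steep gradient near the line: a small change in how aggressively you play produces a big change in how often you eat a penalty, and none of that difference is skill.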

oh and to make matters worse, most people don't draw a clear line between violating the spirit of the game (good sportsmanship) and the explicit written rules of the game. so there's fan pressure to judge things like intentions of actions, and whether coming near violating a rule repeatedly without actually violating it is bad sportsmanship that should be punished, and so on.

most people see all kinds of misbehavior as on a continuum and don't actually care all that much about the written rules. refs and court judges are supposed to be better than that and go by the actual rules, and do so with very variable success. (even the supreme court is pretty crap at it, especially the lefties)

this kinda stuff affects all types of games, including board games and video games. it varies though, e.g. if there's no fans watching to worry about.

some of it's also an issue for social news sites. for example, reddit tries to have rules to limit ways of getting upvotes. people who do everything they can to get upvotes while staying just shy of breaking the rules will be most effective at getting upvotes.

Elliot Temple | Permalink | Comments (0)

They simply did not care to find out whether he was good

The Fountainhead, by Ayn Rand:
The architects he saw differed from one another. Some looked at him across the desk, kindly and vaguely, and their manner seemed to say that it was touching, his ambition to be an architect, touching and laudable and strange and attractively sad as all the delusions of youth. Some smiled at him with thin, drawn lips and seemed to enjoy his presence in the room, because it made them conscious of their own accomplishment. Some spoke coldly, as if his ambition were a personal insult. Some were brusque, and the sharpness of their voices seemed to say that they needed good draftsmen, they always needed good draftsmen, but this qualification could not possibly apply to him, and would he please refrain from being rude enough to force them to express it more plainly.

It was not malice. It was not a judgment passed upon his merit. They did not think he was worthless. They simply did not care to find out whether he was good.
This is how most people treat my philosophy.

Elliot Temple | Permalink | Comments (0)

Ambiguous Feminism

Look at this tweet:
if a dude sleeps with hella women YEAH BRO. if a girl shows her shoulder in public WHORE.

double standards. IT STOPS TODAY.
it stops which way? which standard should change in what way? what do you actually want me to do?

do you think women who sleep with hella people should be cheered? or that men who sleep with hella people should be booed?

or that women who show their shoulders in public shouldn't be booed, just change that? but booing women who sleep with hella men is fine?

or maybe men should be booed for showing skin in public?

why didn't it occur to the author to say what change he wanted? is it implied or obvious specifically what change he advocates? i don't think so. there are lots of competing popular ideas about how to change this stuff.

if this double standard ends, what single standard should replace it? people agree there shouldn't be a double standard, but disagree about what the right single standard is.

people who want change today, but don't care to say what to change to, are not reformers. they are idiots.

The tweet has 217 favorites, 81 retweets, but only 2 replies. lots of people think they liked it, none of them noticed that it's ambiguous. what did they like? what do they think it says?

they seem to want reform of some kind.

"make it better."
"how? what would be better?"
"i don't know, but we're fixing it TODAY!"

this is very immoral.

Elliot Temple | Permalink | Comments (0)

All Authority is Social Authority

People think there's different types of authority. One guy might have high social status, be a leader of a social group. He has social authority. Another guy might be a "leading intellectual" with "intellectual authority".

But "intellectual authority" is a contradiction. Reason doesn't work by authority.

What's actually going on is that all authority is social authority.

That "leading intellectual" has a type of social status. It comes from his socially-accepted reputation, which comes from things like socially-accepted reputation-deciders. Like the people who are socially anointed as legitimately able to decide who is worthy of a Ph.D. or a (socially) prestigious award.

(Similarly, there is no intellectual prestige. All prestige is social prestige.)

Elliot Temple | Permalink | Comments (0)