Analyzing Quotes Objectively and Socially

stucchio and Mason

stucchio retweeted Mason writing:

"Everything can be free if we fire the people who stop you from stealing stuff" is apparently considered an NPR-worthy political innovation now, rather than the kind of brain fart an undergrad might mumble as they come to from major dental work https://twitter.com/_natalieescobar/status/1299018604327907328

There’s no substantial objective-world content here. Basically “I disagree with whatever is the actual thing behind my straw man characterization”. There’s no topical argument. It’s ~all social posturing. It’s making assertions about who is dumb and who should be associated with what group (and, by implication, with the social status of that group). NPR-worthy, brain fart, undergrad, mumble and being groggy from strong drugs are all social-meaning-charged things to bring up. The overall point is to attack the social status of NPR by associating it with low status stuff. Generally smart people like stucchio (who remains on the small list of people whose tweets I read – I actually have a pretty high opinion of him) approve of that tribalist social-political messaging enough to retweet it.

Yudkowsky

Eliezer Yudkowsky wrote on Less Wrong (no link because, contrary to what he says, someone did make the page inaccessible. I have documentation though.):

Post removed from main and discussion on grounds that I've never seen anything voted down that far before. Page will still be accessible to those who know the address.

The context is my 2011 LW post “The Conjunction Fallacy Does Not Exist”.

In RAZ, Yudkowsky repeatedly brings up subculture affiliations he has. He read lots of sci fi. He read 1984. He read Feynman. He also refers to “traditional rationality”, of which Feynman is a leader. (Yudkowsky presents several of his ideas as improvements on traditional rationality. I think some of them are good points.) Feynman gets particular emphasis. I think he got some of his fans via this sort of subculture membership signaling and by referencing stuff they like.

I bring this up because Feynman wrote a book titled "What Do You Care What Other People Think?": Further Adventures of a Curious Character. This is the sequel to the better known "Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character.

Yudkowsky evidently does care what people think and has provided no indication that he’s aware that he’s contradicting one of his heroes, Feynman. He certainly doesn’t provide counter arguments to Feynman.

Downvotes are communications about what people think. Downvotes indicate dislike. They are not arguments. They aren’t reasons the post is bad. They’re just opinions. They’re like conclusions or assertions. Yudkowsky openly presents himself as taking action because of what people think. He’s also basically just openly saying “I use power to suppress unpopular ideas”. Yudkowsky also gave no argument himself, nor did he endorse/cite/link any argument he agreed with about the topic.

Yudkowsky is actually reasonably insightful about social hierarchies elsewhere, btw. But this quote shows that, in some major way, he doesn’t understand rationality and social dynamics.

Replies to my “Chains, Bottlenecks and Optimization”

https://www.lesswrong.com/posts/Ze6PqJK2jnwnhcpnb/chains-bottlenecks-and-optimization

Dagon

I think I've given away over 20 copies of _The Goal_ by Goldratt, and recommended it to coworkers hundreds of times.

Objective meaning: I took the specified actions.

Social meaning: I like Goldratt. I’m aligned with him and his tribe. I have known about him for a long time and might merit early adopter credit. Your post didn’t teach me anything. Also, I’m a leader who takes initiative to influence my more sheep-like coworkers. I’m also rich enough to give away 20+ books.

Thanks for the chance to recommend it again - it's much more approachable than _Theory of Constraints_, and is more entertaining, while still conveying enough about his worldview to let you decide if you want the further precision and examples in his other books.

Objective meaning: I recommend The Goal.

Social meaning: I’m an expert judge of which Goldratt books to recommend to people, in what order, for what reasons. Although I’m so clever that I find The Goal a bit shallow, I think it’s good for other people who need to be kept entertained and it has enough serious content for them to get an introduction from. Then they can consider if they are up to the challenge of becoming wise like me, via further study, or not.

This is actually ridiculous. The Goal is the best known Goldratt book, it’s his best seller, it’s meant to be read first, and this is well known. Dagon is pretending to be providing expert judgment, but isn’t providing insight. And The Goal has tons of depth and content, and Dagon is slandering the book by condescending to it in this way. By bringing up Theory of Constraints, Dagon is signaling he reads and values less popular, less entertaining, less approachable non-novel Goldratt books.

It's important to recognize the limits of the chain metaphor - there is variance/uncertainty in the strength of a link (or capacity of a production step), and variance/uncertainty in alternate support for ideas (or alternate production paths).

Objective meaning (up to the dash): Goldratt’s chain idea, which is a major part of your post, is limited.

Social meaning (up to the dash): I’ve surpassed Goldratt and can look down on his stuff as limited. You’re a naive Goldratt newbie who is accepting whatever he says instead of going beyond Goldratt. Also calling chains a “metaphor” instead of “model” is a subtle attack to lower status. Metaphors aren’t heavyweight rationality (while models are, and it actually is a model). Also Dagon is implying that I failed to recognize limits that I should have recognized.

Objective meaning continued: There’s some sort of attempt at an argument here but it doesn’t actually make sense. Saying there is variance in two places is not a limitation of the chain model.

Social meaning continued: saying a bunch of overly wordy stuff that looks technical is bluffing and pretending he’s arguing seriously. Most people won’t know the difference.

Most real-world situations are more of a mesh or a circuit than a linear chain, and the analysis of bottlenecks and risks is a fun multidimensional calculation of forces applies and propagated through multiple links.

Objective meaning: Chains are wrong in most real world situations because those situations are meshes or circuits [both terms undefined]. No details are given about how he knows what’s common in real world situations. And he’s contradicting Goldratt, who actually did argue his case and knew math. (I also know more than enough math so far, and Dagon never continued with enough substance to potentially strain either of our math skill sets.)

Social meaning: I have fun doing multidimensional calculations. I’m better than you. If you knew math so well that it’s a fun game to you, maybe you could keep up with me. But if you could do that, you wouldn’t have written the post you wrote.

It’s screwy how Dagon presents himself as a Goldratt superfan expert and then immediately attacks Goldratt’s ideas.

Note: Dagon stopped replying without explanation shortly after this, even though he’d said how super interested in Goldratt stuff he is.

Donald Hobson

I think that ideas can have a bottleneck effect, but that isn't the only effect. Some ideas have disjunctive justifications.

Objective meaning: bottlenecks come up sometimes but not always. [No arguments about how often they come up, how important they are, etc.]

Social meaning: You neglected disjunctions and didn’t see the whole picture. I often run into people who don’t know fancy concepts like “disjunction”.

Note: Disjunction just means “or” and isn’t something that Goldratt or I had failed to consider.

Hobson then follows up with some math, socially implying that the problem is I’m not technical enough and if only I knew some math I’d have reached different conclusions. He postures about how clever he is and brings up resistors and science as brags.

I responded, including with math, and then Hobson did not respond.

TAG

What does that even mean?

Objective meaning: I don’t understand what you wrote.

Social meaning: You’re not making sense.

He did give more info about what his question was after this. But he led with this, on purpose. The “even” is a social attack – that word isn’t there to help with any objective meaning. It’s there to socially communicate that I’m surprisingly incoherent. It’d be a subtle social attack even without the “even”. He didn’t respond when I answered his question.

abramdemski

There is another case which your argument neglects, which can make weakest-link reasoning highly inaccurate, and which is less of a special case than a tie in link-strength.

Objective meaning: The argument in the OP is incomplete.

Social meaning: You missed something huge, which is not a special case, so your reasoning is highly inaccurate.

The way you are reasoning about systems of interconnected ideas is conjunctive: every individual thing needs to be true.

Objective meaning: Chain links have an “and” relationship.

Social meaning: You lack a basic understanding of the stuff you just said, so I’ll have to start really basic to try to educate you.

But some things are disjunctive: some one thing needs to be true.

Objective meaning: “or” exists. [no statement yet about how this is relevant]

Social meaning: You’re wrong because you’re an ignorant novice.

(Of course there are even more exotic logical connectives, such as implication or XOR, which are also used in everyday reasoning. But for now it will do to consider only conjunction and disjunction.)

Objective meaning: Other logic operators exist [no statement yet about how this is relevant].

Social meaning: I know about things like XOR, but you’re a beginner who doesn’t. I’ll let you save face a little by calling it “exotic”, but actually, in the eyes of everyone knowledgeable here, I’m insulting you by suggesting that for you XOR is exotic.

Note: He’s wrong, I know what XOR is (let alone OR). So did Goldratt. XOR is actually easy for me, and I’ve used it a lot and done much more advanced things too. He assumed I didn’t in order to socially attack me. He didn’t have adequate evidence to reach the conclusion that he reached; but by reaching it and speaking condescendingly, he implied that there was adequate evidence to judge me as an ignorant fool.

Perhaps the excess accuracy in probability theory makes it more powerful than necessary to do its job? Perhaps this helps it deal with variance? Perhaps it helps the idea apply for other jobs than the one it was meant for?

Objective meaning: Bringing up possibilities he thinks are worth considering.

Social meaning: Flaming me with some rather thin plausible deniability.

I skipped the middle of his post btw, which had other bad stuff.

johnswentworth

I really like what this post is trying to do. The idea is a valuable one. But this explanation could use some work - not just because inferential distances are large, but because the presentation itself is too abstract to clearly communicate the intended point. In particular, I'd strongly recommend walking through at least 2-3 concrete examples of bottlenecks in ideas.

This is an apparently friendly reply but he was lying. I wrote examples but he wouldn’t speak again.

There are hints in this text that he actually dislikes me and is being condescending, and that the praise in the first two sentences is fake. You can see some condescension in the post, e.g. in how he sets himself up like a mentor telling me what to do (note the unnecessary “strongly” before “recommend”). And how does he know the idea is valuable when it’s not clearly communicated? And his denial re inferential distance is actually both unreasonable and aggressive. The “too abstract” and “could use some work” are also social attacks, and the “at least 2-3” is a social attack (it means do a lot) with a confused objective meaning (if you’re saying do >= X, why specify X as a range? You only need one number).

The objective world meaning is roughly that he’s helping with some presentation and communication issues and wants a discussion of the great ideas. But it turns out, as we see from his following behavior, that wasn’t true. (Probably. Maybe he didn’t follow up for some other reason like he died of COVID. Well not that because you can check his posting history and see he’s still posting in other topics. But maybe he has Alzheimer’s and he forgot, and he knows that’s a risk so he keeps notes about stuff he wants to follow up on, but he had an iCloud syncing error and the note got deleted without him realizing it. There are other stories that I don’t have enough information to rule out, but I do have broad societal information about them being uncommon, and there are patterns across the behavior of many people.)

MakoYass

I posted in comments on different Less Wrong thread:

curi:

Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is?

MakoYass:

I am evidently interested in discussing it, but I am probably not the best person for it.

Objective meaning: I am interested. My answer to your question is “yes”. I have agreed to try to have a discussion, if you want to. However, be warned that I’m not very good at this.

Social meaning: The answer to your question is “no”. I won’t discuss with you. However, I’m not OK with being declared uninterested in this topic. I love this topic. How dare you even question my interest when you have evidence (“evidently”) that I am interested, which consists of me having posted about it. I’d have been dumb to post about something I’m not interested in, and you were an asshole to suggest I might be dumb like that.

Actual result: I replied in a friendly, accessible way attempting to begin a conversation, but he did not respond.

Concluding Thoughts

Conversations don’t go well when a substantial portion of what people say has a hostile (or even just significantly different) social (double) meaning.

It’s much worse when the social meaning is the primary thing people are talking about, as in all the LW replies I got above. It’s hard to get discussions where the objective meanings are more emphasized than the social ones. And all the replies I quoted re my Chains and Bottlenecks post were top level replies to my impersonal article. I hadn’t said anything to personally offend any of those people, but they all responded with social nastiness. (Those were all the top level replies. There were no decent ones.) Also, it was my first post back after 3 years, so this wasn’t carrying over from prior discussion (afaik – possibly some of them were around years ago and remembered me. I know some people do remember me, but only because they mentioned it. Actually, TAG said later, elsewhere, to someone else, that he knew about me from being on unspecified Critical Rationalist forums in the past).

Even if you’re aware of social meanings, there are important objective meanings which are quite hard to say without getting offensive social meaning. This comes up when talking about errors people make, especially ones that reveal significant weaknesses in their knowledge. Talking objectively about methodology errors and what to do about them can also be highly offensive socially. Also, objective, argued judgments of how good things are can be socially offensive, even if correct (actually it’s often worse if it’s correct and high quality – the harder it is to plausibly argue back, the worse it can be for the guy who’s wrong).

The main point was to give examples of how the same sentence can be read with an objective and a social meaning. This is what most discussions look like to me on rationalist forums, where explicit knowledge of social status hierarchies is common. It comes up a fair amount on my own forums too (less often than at LW, but it’s a pretty big problem IMO).

Note: The examples in this post are not representative of the full spectrum of social behaviors. One of the many things missing is needy/chasing/reactive behavior where people signal their own low social status (low relative to the person they’re trying to please). Also, I could go into more detail on any particular example people want to discuss (this post isn’t meant as giving all the info/analysis, it’s a hybrid between some summary and some detail).


Update: Adding (on same day as original) a few things I forgot to say.

Audiences pick up on some of the social meanings (which ones, and how they see them, varies by person). They see you answer and not answer things. They think some should be answered and some are ignorable. They take some things as social answers that aren’t intended to be. They sometimes ignore literal/objective meanings of things. They judge. It affects audience reactions. And the perception of audience reactions affects what the actual participants do and say (including when they stop talking without explanation).

The people quoted could have used less social meaning. They’re all amplifying the social. There’s some design there; it’s not an accident. It’s not that hard to be less social. But even if you try, it’s very hard to avoid all problematic social meanings, especially when you consider that different audience members will read stuff differently, according to different background knowledge, different assumptions about context, different misreadings and skipped words, etc.



Mathematical Inconsistency in Solomonoff Induction?

I posted this on Less Wrong 10 days ago. At the end, I summarize the answer they gave.


What counts as a hypothesis for Solomonoff induction? The general impression I’ve gotten in various places is “a hypothesis can be anything (that you could write down)”. But I don’t think that’s quite it. E.g. evidence can be written down but is treated separately. I think a hypothesis is more like a computer program that outputs predictions about what evidence will or will not be observed.

If X and Y are hypotheses, then is “X and Y” a hypothesis? “not X”? “X or Y”? If not, why not, and where can I read a clear explanation of the rules and exclusions for Solomonoff hypotheses?

If using logic operators with hypotheses does yield other hypotheses, then I’m curious about a potential problem. When hypotheses are related, we can consider what their probabilities should be in more than one way. The results should always be consistent.

For example, suppose you have no evidence yet. And suppose X and Y are independent. Then you can calculate P(X or Y) in terms of P(X) and P(Y). You can also calculate the probabilities of all three based on their lengths (that’s the Solomonoff prior). These should always match but I don’t think they do.

The non-normalized probability of X is 1/2^len(X).

So you get:

P(X or Y) = 1/2^len(X) + 1/2^len(Y) - 1/2^(len(X)+len(Y))

and we also know:

P(X or Y) = 1/2^len(X or Y)

since the left hand sides are the same, that means the right hand sides should be equal, by simple substitution:

1/2^len(X or Y) = 1/2^len(X) + 1/2^len(Y) - 1/2^(len(X)+len(Y))

Which has to hold for any X and Y.

We can select X and Y to be the same length and to minimize compression gains when they’re both present, so len(X or Y) should be approximately 2len(X). I’m assuming a basis, or choice of X and Y, such that “or” is very cheap relative to X and Y, hence I approximated it to zero. Then we have:

1/2^2len(X) = 1/2^len(X) + 1/2^len(X) - 1/2^2len(X)

which simplifies to:

1/2^2len(X) = 1/2^len(X)

Which is false (since len(X) isn’t 0). And using a different approximation of len(X or Y) like 1.5len(X), 2.5len(X) or even len(X) wouldn’t make the math work.
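
Here’s a quick numeric check of the arithmetic, as a minimal Python sketch. The specific numbers are just illustrative assumptions on my part: len(X) = len(Y) = 10 bits, and len(X or Y) approximated as 2len(X).

```python
# Numeric check of the equation above, assuming len(X) = len(Y) = 10 bits
# and len(X or Y) ~= 2 * len(X). The specific lengths are arbitrary choices.

length = 10                 # assumed length of X (and of Y), in bits
p_x = 2 ** -length          # non-normalized Solomonoff prior of X
p_y = 2 ** -length          # non-normalized Solomonoff prior of Y

# P(X or Y) computed from P(X) and P(Y), using independence:
p_or_from_probability_theory = p_x + p_y - p_x * p_y

# P(X or Y) computed directly from the length prior, with len(X or Y) ~= 2 * length:
p_or_from_length_prior = 2 ** -(2 * length)

print(p_or_from_probability_theory)  # ~0.00195
print(p_or_from_length_prior)        # ~0.00000095
# The two disagree by orders of magnitude, which is the
# 1/2^2len(X) = 1/2^len(X) mismatch derived above.
```

Changing the assumed lengths changes the numbers but not the disagreement.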

So Solomonoff induction is inconsistent. So I assume there’s something I don’t know. What? (My best guess so far, mentioned above, is limits on what is a hypothesis.)

Also here’s a quick intuitive explanation to help explain what’s going on with the math: X is both shorter and less probable than “X or Y”. Think about what you’re doing when you craft a hypothesis. You can add bits (length) to a hypothesis to exclude stuff. In that case, more bits (more length) means lower prior probability, and that makes sense, because the hypothesis is compatible with fewer things from the set of all logically possible things. But you can also add bits (length) to a hypothesis to add alternatives. It could be this or that or a third thing. That makes hypotheses longer but more likely rather than less likely. Also, speaking more generally, the Solomonoff prior probabilities are assigned according to length with no regard for consistency amongst themselves, so it’s unsurprising that they’re inconsistent unless the hypotheses are limited in such a way that they have no significant relationships with each other that would have to be consistent. That sounds hard to achieve and I haven’t seen any rules specified for achieving it (note that there are other ways to find relationships between hypotheses besides the one I used above, e.g. looking for subsets).


Less Wrong's answer, in my understanding, is that in Solomonoff Induction a "hypothesis" must make positive predictions like "X will happen". Probabilistic positive predictions – assigning probabilities to different specific outcomes – can also work. Saying X or Y will happen is not a valid hypothesis, nor is saying X won't happen.

This is a very standard trick by so-called scholars. They take a regular English word (here "hypothesis") and define it as a technical term with a drastically different meaning. This isn't clearly explained anywhere and lots of people are misled. It's also done with e.g. "heritability".

Solomonoff Induction is just sequence prediction. Take a data sequence as input, then predict the next thing in the sequence via some algorithm. (And do it with all the algorithms and see which do better and are shorter.) It's aspiring to be the oracle in The Fabric of Reality but worse.
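
Here’s a toy sketch of that kind of length-weighted sequence prediction, just as an illustration: the candidate predictors and their bit lengths are made up by me, and real Solomonoff induction enumerates all programs, which is uncomputable.

```python
# Toy length-weighted sequence prediction. Each hand-picked predictor stands in
# for a "program" and gets a prior weight of 2^-(assumed length in bits).

def repeat_last(seq):
    # Predict that the previous symbol repeats.
    return seq[-1]

def alternate(seq):
    # Predict that a 0/1 alternation continues.
    return 1 - seq[-1]

def always_zero(seq):
    # Predict 0 no matter what.
    return 0

# (predictor, assumed length in bits) -- the lengths are made up for illustration.
candidates = [(repeat_last, 8), (alternate, 10), (always_zero, 5)]

def predict_next(seq):
    # Keep only predictors consistent with the observed sequence so far,
    # weighted by 2^-length, then mix their predictions for the next symbol.
    weights = {}
    for predict, length in candidates:
        consistent = all(predict(seq[:i]) == seq[i] for i in range(1, len(seq)))
        if consistent:
            weights[predict] = 2 ** -length
    if not weights:
        return {}
    total = sum(weights.values())
    mixture = {}
    for predict, weight in weights.items():
        guess = predict(seq)
        mixture[guess] = mixture.get(guess, 0) + weight / total
    return mixture

print(predict_next([0, 1, 0, 1]))  # {0: 1.0} -- only the alternation predictor survives
```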



Eliezer Yudkowsky Is a Fraud

Eliezer Yudkowsky tweeted:

EY:

What on Earth is up with the people replying "billionaires don't have real money, just stocks they can't easily sell" to the anti-billionaire stuff? It's an insanely straw reply and there are much much better replies.

DI:

What would be a much better reply to give to someone who thinks for example that Elon Musk is hoarding $100bn in his bank account?

EY:

A better reply should address the core issue whether there is net social good from saying billionaires can't have or keep wealth: eg demotivating next Steves from creating Apple, no Gates vaccine funding, Musk not doing Tesla after selling Paypal.

Eliezer Yudkowsky (EY) frequently brings up names (e.g. Feynman or Jaynes) of smart people involved with science, rationality or sci-fi. He does this throughout RAZ. He communicates that he's read them, he's well-read, he's learned from them, he has intelligent commentary related to stuff they wrote, etc. He presents himself as someone who can report to you, his reader, about what those books and people are like. (He mostly brings up people he likes, but he also sometimes presents himself as knowledgeable about people he's unfriendly to, like Karl Popper and Ayn Rand, who he knows little about and misrepresents.)

EY is a liar who can't be trusted. In his tweets, he reveals that he brings up names while knowing basically nothing about them.

Steve Jobs and Steve Wozniak were not motivated by getting super rich. Their personalities are pretty well known. I guess EY never read any of the biographies and hasn't had conversations about them with knowledgeable people. Or maybe he doesn't connect what he reads to what he says. (I provide some brief, example evidence at the end of this post in which Jobs tells Ellison "You don’t need any more money." EY is really blatantly wrong.)

EY brings up Jobs and Wozniak ("Steves") to make his assertions sound concrete, empirical and connected to reality. Actually he's doing egregious armchair philosophizing and using counterexamples as examples.

Someone who does this can't be trusted whenever they bring up other names either. It shows a policy of dishonesty: either carelessness and incompetence (while dishonestly presenting himself as a careful, effective thinker) or outright lying about his knowledge.

There are other problems with the tweets, too. For example, EY is calling people insane instead of arguing his case. And EY is straw manning the argument about billionaires having stocks not cash – while complaining about others straw manning. Billionaires have most of their wealth in capital goods, not consumption goods (that's the short, better version of the argument he mangled), and that's a more important issue than the incentives that EY brings up. EY also routinely presents himself as well-versed in economics but seems unable to connect concepts like accumulation of capital increasing the productivity of labor, or eating the seed corn, to this topic.

Some people think billionaires consume huge amounts of wealth – e.g. billions of dollars per year – in the form of luxuries or other consumption goods. Responding to a range of anti-billionaire viewpoints, including that one, by saying basically "They need all that money so they're incentivized to build companies." is horribly wrong. They don't consume anywhere near that much wealth per year. EY comes off as justifying them doing something they don't do that would actually merit concern if they somehow did it.

If Jeff Bezos were building a million statues of himself, that'd be spending billions of dollars on luxuries/consumption instead of production. That'd actually somewhat harm our society's capital accumulation and would merit some concern and consideration. But – crucial fact – the real world looks nothing like that. EY sounds like he's conceding that that's actually happening instead of correcting people about reality, and he's also claiming it's obviously fine because rich people love their statues, yachts and sushi so much that it's what inspires them to make companies. (It's debatable, and there are upsides, but it's not obviously fine.)


Steve Jobs is the authorized biography by Walter Isaacson. It says (context: Steve didn't want to do a hostile takeover of Apple) (my italics):

“You know, Larry [Ellison], I think I’ve found a way for me to get back into Apple and get control of it without you having to buy it,” Jobs said as they walked along the shore. Ellison recalled, “He explained his strategy, which was getting Apple to buy NeXT, then he would go on the board and be one step away from being CEO.” Ellison thought that Jobs was missing a key point. “But Steve, there’s one thing I don’t understand,” he said. “If we don’t buy the company, how can we make any money?” It was a reminder of how different their desires were. Jobs put his hand on Ellison’s left shoulder, pulled him so close that their noses almost touched, and said, “Larry, this is why it’s really important that I’m your friend. You don’t need any more money.

Ellison recalled that his own answer was almost a whine: “Well, I may not need the money, but why should some fund manager at Fidelity get the money? Why should someone else get it? Why shouldn’t it be us?”

“I think if I went back to Apple, and I didn’t own any of Apple, and you didn’t own any of Apple, I’d have the moral high ground,” Jobs replied.

“Steve, that’s really expensive real estate, this moral high ground,” said Ellison. “Look, Steve, you’re my best friend, and Apple is your company. I’ll do whatever you want.”

(Note that Ellison, too, despite having a more money-desiring attitude, didn't actually prioritize money. He might be the richest man in the world today if he'd invested heavily in Steve Jobs' Apple, but he put friendship first.)



Less Wrong Banned Me

habryka wrote about why LW banned me. This is habryka’s full text plus my comments:

Today we have banned two users, curi and Periergo from LessWrong for two years each. The reasoning for both is bit entangled but are overall almost completely separate, so let me go individually:

The ban isn’t for two years. It’s from Sept 16 2020 through Dec 31 2022.

They didn’t bother to notify me. I found out in the following way:

First, I saw I was logged out. Then I tried to log back in and it said my password was wrong. Then I tried to reset my password. When I submitted a new password, it gave an error message saying I was banned and until what date. Then I messaged them on intercom and 6 hours later they gave me a link to the public announcement about my ban.

That’s a poor user experience.

Periergo is an account that is pretty easily traceable to a person that Curi has been in conflict with for a long time, and who seems to have signed up with the primary purpose of attacking curi. I don't think there is anything fundamentally wrong about signing up to LessWrong to warn other users of the potentially bad behavior of an existing user on some other part of the internet, but I do think it should be done transparently.

It also appears to be the case that he has done a bunch of things that go beyond merely warning others (like mailbombing curi, i.e. signing him up for tons of email spam that he didn't sign up for, and lots of sockpupetting on forums that curi frequents), and that seem better classified as harassment, and overall it seemed to me that this isn't the right place for Periergo.

Periergo is a sock puppet of Andy B. Andy harassed FI long term with many false identities, but left for months when I caught him, connected the identities, and blogged it. But he came back in August 2020 and has written over 100 comments since returning, and he made a fresh account on Less Wrong for the purpose of harassing me and disrupting my discussions there. He essentially got away with it. He stirred up trouble and now I’m banned. What does he care that his fresh sock puppet, with a name he’ll likely never use again anywhere, is banned? And he’ll be unbanned at the same time as me in case he wants to further torment me using the same account.

Curi has been a user on LessWrong for a long time, and has made many posts and comments. He also has the dubious honor of being by far the most downvoted account in all of LessWrong history at -675 karma.

I started at around -775 karma when I returned to Less Wrong recently and went up. I originally debated Popper, induction and cognitive biases at LW around 9 years ago and got lots of downvotes. I returned around 3 years ago when an LW moderator invited me back because he liked my Paths Forward article. That didn’t work out and I left again. I returned recently for my own reasons, instead of because someone incorrectly suggested that I was wanted, and it was going better. I knew some things to expect, and some things that wouldn’t work, and I'd just read LW's favorite literature, RAZ.

BTW, I don’t know how my karma is being calculated. My previous LW discussions were at the 1.0 version of the site where votes on posts counted for 10 karma, and votes on comments counted for 1 karma. When I went back the second time, a moderator boosted my karma enough to be positive so that I could write posts instead of just comments. LW 2.0 allows you to write posts while having negative karma and votes on posts and comments are worth the same amount, but your votes count for multiple karma if you have high karma and/or use the strong vote feature. I don’t know how old stuff got recalculated when they did the version 2.0 website.

Overall I have around negative 1 karma per comment, so that’s … not all that bad? Or apparently the lowest ever. If downvotes on the old posts still count 10x then hundreds of my negative karma is from just a few posts.

In general, I think outliers should be viewed as notable and potentially valuable, especially outliers that you can already see might actually be good (as habryka says about me below). Positive outliers are extremely valuable.

The biggest problem with his participation is that he has a history of dragging people into discussions that drag on for an incredibly long time, without seeming particularly productive, while also having a history of pretty aggressively attacking people who stop responding to him. On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community, but then stopped, in a way that is very hard to read as anything but a public attack. It's first sentence is "This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc.", and in-particular the framing of "quit/evaded/lied" sure sets the framing for the rest of the post as a kind of "wall of shame".

I consider it strange to ban me for stuff I did in the distant past but was not banned for at the time.

I find it especially strange to ban me for 2 years over stuff that’s already 3 or 9 years old (the evaders guest post by Alan is a year old, and btw "evade" is standard Objectivist philosophy terminology). I already left the site for longer than the ban period. Why is a 5 year break the right amount instead of 3? habryka says below that he thinks I was doing better (from his point of view and regarding what the LW site wants) this time.

They could have asked me about that particular post before banning me, but didn’t. They also could have noted that it’s an old post that only came up because Andy linked it twice on LW with the goal of alienating people from me. They’re letting him get what he wanted even though they know he was posting in bad faith and breaking their written rules.

I, by contrast, am not accused of breaking any specific written rule that LW has, but I’ve been banned anyway with no warning.

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it's better for curi to find other places as potential discussion venues.

I didn’t threaten anyone. I’m guessing it was a careless wording. I think habryka should retract or clarify it. Above habryka used “attack[]” as a synonym for criticize. I don’t like that but it’s pretty standard language. But I don’t think using “threat[en]” as a synonym for criticize is reasonable.

“threaten” has meanings like “state one's intention to take hostile action against someone in retribution for something done or not done” and “express one's intention to harm or kill“ (New Oxford Dictionary). This is the thing in the post that I most strongly object to.

I do really want to make clear that this is not a personal judgement of curi. While I do find the "List of Fallible Ideas Evaders" post pretty tasteless, and don't like discussing things with him particularly much, he seems well-intentioned, and it's quite plausible that he could me an amazing contributor to other online forums and communities. Many of the things he is building over on his blog seem pretty cool to me, and I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities.

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from), but they still don't strike me as great contributions to the LessWrong canon, are all low-karma, and I assign too high of a probability that old patterns will repeat themselves (and also that his presence will generally make people averse to be around, because of those past patterns). He has also explicitly written a post in which he updates his LW commenting policy towards something less demanding, and I do think that was the right move, but I don't think it's enough to tip the scales on this issue.

So I came back after 3 years, posted in a way they liked significantly better … I’m building cool things and am plausibly an amazing contributor, while also making major progress at compatibility with LW … but they’re banning me anyway, even though my old posts didn’t get me banned.

More broadly, LessWrong has seen a pretty significant growth of new users in the past few months, mostly driven by interest in Coronavirus discussion and the discussion we hosted on GPT3. I continue to think that "Well-Kept Gardens Die By Pacifism", and that it is essential for us to be very careful with handling that growth, and to generally err on the side of curating our userbase pretty heavily and maintaining high standards. This means making difficult moderation decision long before it is proven "beyond a reasonable doubt" that someone is not a net-positive contributor to the site.

In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site, and banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of the net-negative, and the worst-case outcomes are bad-enough, that on-net I think this is the right choice.

I don’t see why they couldn’t wait for me to do something wrong to ban me, or give me any warning or guidance about what they wanted me to do differently. I doubt this would have happened this way if Andy hadn’t done targeted harassment.

At least they wrote about their reasons. I appreciate that they’re more transparent than most forums.

In another message, habryka clarified his comment about others not updating their views of me based on this ban:

The key thing I wanted to communicate is that it seems quite plausible to me that these patterns are the result of curi interfacing specifically with the LessWrong culture in unhealthy ways. I can imagine him interfacing with other cultures with much less bad results.

I also said "I don't want others to think this is much evidence", not "this is no evidence". Of course it is some evidence, but I think overall I would expect people to update a bit too much on this, and as I said, I wouldn't be very surprised to see curi participate well in other online communities.

I’m unclear on what aspect of LW culture I’m a mismatch for. Or, put another way: what particular characteristics would another culture need to have, or lack, compared to LW, for me to interface with it better?


Also, LW didn't explain how they decided on ban lengths. 2.3 year bans don't correspond to solving the problems raised. Andy or I could easily wait and then do the stuff LW doesn't want. They aren't asking us to do anything to improve or to provide any evidence that we've reformed in some way. Nor are they asking us to figure out how we can address their concerns and prevent bad outcomes. They're just asking us to wait and, I guess, counting on us not to hold grudges. Problems don't automatically go away due to time passing.

Overall, I think LW’s decision and reasoning are pretty bad but not super unreasonable compared to the general state of our culture. I wouldn’t expect better at most forums and I’ve seen much worse. Also, I’m not confident that the reasoning given fully and accurately represents the actual reasons. I'm not convinced that they will ban other people using the same sort of reasoning – that someone didn't break any particular rules but might be a net-negative for the site – especially considering that "the moderators of LW are the opposite of trigger-happy. Not counting spam, there is on average less than one account per year banned." (source from 2016, maybe they're more trigger-happy in 2020, I don't know).

