I quit the Effective Altruism forum due to a new rule requiring that all new posts and comments basically be put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.
This post covers some of my earlier time at EA but doesn’t discuss some of the later articles I posted there and the response.
I have several ideas about how to increase EA’s effectiveness by over 20%. But I don’t think they will be accepted immediately. People will find them counter-intuitive, not understand them, disagree with them, etc.
In order to effectively share ideas with EA, I need attention from EA people who will actually read and think about things. I don’t know how to get that, and I don’t think EA offers any list of steps that I could follow to get it, nor any policy guarantees like “If you do X, we’ll do Y” that I could use to bring up ideas. One standard way to get it, which has various other advantages, is to engage in debate (or critical discussion) with someone. However, only one person from EA (who isn’t particularly influential) has been willing to try to have a debate or serious conversation with me. By a serious conversation, I mean one that’s relatively long and high effort, which aims at reaching conclusions.
My most important idea about how to increase EA’s effectiveness is to improve EA’s receptiveness to ideas. This would let anyone better share (potential) good ideas with EA.
EA views itself as open to criticism and it has a public forum. So far, no moderator has censored my criticism, which is better than many other forums! However, no one takes responsibility for answering criticism or considering suggestions. It’s hard to get any disagreements resolved by debate at EA. There’s also no good way to get official or canonical answers to questions in order to establish some kind of standard EA position to target criticism at.
When one posts criticism or suggestions, there are many people who might engage, but no one is responsible for doing it. A common result is that posts do not get engagement. This happens to lots of other people besides me, and it happens to posts which appear to be high effort. There are no clear goalposts to meet in order to get attention for a post.
Attention at the EA forum seems to be allocated in pretty standard social hierarchy ways. The overall result is that EA’s openness to criticism is poor (objectively, but not compared to other groups, many of which are worse).
John the Hypothetical Critic
Suppose John has a criticism or suggestion for EA that would be very important if correct. There are three main scenarios:
- John is right and EA is wrong.
- EA is right and John is wrong.
- EA and John are both wrong.
There should be a reasonable way so that, if John is right, EA can be corrected instead of just ignoring John. But EA doesn’t have effective policies to make that happen. No person or group is responsible for considering that John may be right, engaging with John’s arguments, or attempting to give a rebuttal.
It’s also really good to have a reasonable way so that, if John is wrong and EA is right, John can find out. EA’s knowledge should be accessible so other people can learn what EA knows, why EA is right, etc. This would make EA much more persuasive. EA has many articles which help with this, but if John has an incorrect criticism and is ignored, then he’s probably going to conclude that EA is wrong and won’t debate him, and lower his opinion of EA (plus people reading the exchange might do the same – they might see John give a criticism that isn’t answered and conclude that EA doesn’t really care about addressing criticism).
If John and EA are both wrong, it’d also be a worthwhile topic to devote some effort to, since EA is wrong about something. Discussing John’s incorrect criticism or suggestion could lead to finding out about EA’s error, which could then lead to brainstorming improvements.
I’ve written about these issues before with the term Paths Forward.
Me Visiting EA
The first thing I brought up at EA was asking if EA has any debate methodology or any way I can get a debate with someone. Apparently not. My second question was about whether EA has some alternative to debates, and again the answer seemed to be no. I raised the question again, pointing out that the “debate methodology” plus “alternative to debate methodology” issues form a complete pair, and if EA has neither, that’s bad. This time, I think some people got defensive about the title, which caused me to get more attention than when my post title didn’t offend people (the incentives there are really bad). The title asked how EA was rational. Multiple replies seemed focused on the title, which I grant was vague, rather than the body text which gave details of what I meant.
Anyway, I finally got some sort of answer: EA lacks formal debate or discussion methods but has various informal attempts at rationality. Someone shared a list. I wrote a brief statement of what I thought the answer was and asked for feedback if I got EA’s position wrong. I got it right. I then wrote an essay criticizing EA’s position, including critiques of the listed points.
What happened next? Nothing. No one attempted to engage with my criticism of EA. No one tried to refute any of my arguments. No one tried to defend EA. It’s back to the original problem: EA isn’t set up to address criticism or engage in debate. It just has a bunch of people who might or might not do that in each case. There’s nothing organized and no one takes responsibility for addressing criticism. Also, even if someone did engage with me, and I persuaded them that I was correct, it wouldn’t change EA. It might not even get a second person to take an interest in debating the matter and potentially being persuaded too.
I think I know how to organize rational, effective debates and reach conclusions. The EA community broadly doesn’t want to try doing that my way nor do they have a way they think is better.
If you want to gatekeep your attention, please write down the rules you’re gatekeeping by. What can I do to get past the gatekeeping? If you gatekeep your attention based on your intuition and have no transparency or accountability, that is a recipe for bias and irrationality. (Gatekeeping by hidden rules is related to the rule of man vs. the rule of law, as I wrote about. It’s also related to security through obscurity, a well known mistake in software. Basically, when designing secure systems, you should assume hackers can see your code and know how the system is designed, and it should be secure anyway. If your security relies on keeping some secrets, it’s poor security. If your gatekeeping relies on adversaries not knowing how it works, rather than having a good design, you’re making the security through obscurity error. That sometimes works OK if no one cares about you, but it doesn’t work as a robust approach.)
I understand that time, effort, attention, engagement, debate, etc., are limited resources. I advocate having written policies to help allocate those resources effectively. Individuals and groups can both do this. You can plan ahead about what kinds of things you think it’s good to spend attention on, write down decision making criteria, share them publicly, etc., instead of just leaving it to chance or bias. Using written rationality policies to control some of these valuable resources would let them be used more effectively instead of haphazardly. The high value of the resources is a reason in favor of, not against, governing their use with explicit policies that are put in writing then critically analyzed. (I think intuition has value too, despite the higher risk of bias, so allocating e.g. 50% of your resources to conscious policies and 50% to intuition would be fine.)
“It’s not worth the effort” is the standard excuse for not engaging with arguments. But it’s just an excuse. I’m the one who has researched how to do such things efficiently, how to save effort, etc., without giving up on rationality. They aren’t researching how to save effort and designing good, effort-saving methods, nor do they want the methods I developed. People just say stuff isn’t worth the effort when they’re biased against thinking about it, not as a real obstacle that they actually want a solution to. They won’t talk about solutions to it when I offer, nor will they suggest any way of making progress that would work if they’re in the wrong.
LW Short Story
Here’s a short story as an aside (from memory, so may have minor inaccuracies). Years ago I was talking with Less Wrong (LW) about similar issues. LW and EA are similar places. I brought up some Paths Forward stuff. Someone said basically he didn’t have time to read it, or maybe didn’t want to risk wasting his time. I said the essay explains how to engage with my ideas in time-efficient, worthwhile ways. So you just read this initial stuff and it’ll give you the intellectual methods to enable you to engage with my other ideas in beneficial ways. He said that’d be awesome if true, but he figures I’m probably wrong, so he doesn’t want to risk his time. We appeared to be at an impasse. I have a potential solution with high value that addresses his problems, but he doubts it’s correct and doesn’t want to use his resources to check if I’m right.
My broad opinion is someone in a reasonably large community like LW should be curious and look into things, and if no one does then each individual should recognize that as a major problem and want to fix it.
But I came up with a much simpler, more direct solution.
It turns out he worked at a coffee shop. I offered to pay him the same wage as his job to read my article (or I think it was a specific list of a few articles). He accepted. He estimated how long the stuff would take to read based on word count and we agreed on a fixed number of dollars that I’d pay him (so I wouldn’t have to worry about him reading slowly to raise his payment). The estimate was his idea, and he came up with the numbers and I just said yes.
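The deal described above was simple arithmetic: estimate reading time from word count, then convert it into a fixed dollar amount at his usual hourly wage. Here’s a minimal sketch of that calculation; all the specific numbers (word count, reading speed, wage) are hypothetical, since the story doesn’t give the actual figures.

```python
# Hypothetical sketch of the fixed-payment estimate from the story.
# None of these numbers come from the actual agreement.

def fixed_payment(word_count, words_per_minute, hourly_wage):
    """Estimate reading time from word count, then convert it
    to a fixed dollar amount at the reader's hourly wage."""
    hours = word_count / words_per_minute / 60
    return round(hours * hourly_wage, 2)

# e.g. 12,000 words at 200 words/minute is one hour of reading;
# at a $15/hour coffee-shop wage, that's a $15 fixed payment.
payment = fixed_payment(12_000, 200, 15)
print(payment)  # 15.0
```

Fixing the payment up front, based on the estimate, is what removed the incentive to read slowly: the dollar amount no longer depended on time actually spent.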
But before he read it, an event happened that he thought gave him a good excuse to back out. He backed out. He then commented on the matter somewhere that he didn’t expect me to read, but I did read it. He said he was glad to get out of it because he didn’t want to read it. In other words, he’d rather spend an hour working at a coffee shop than an hour reading some ideas about rationality and resource-efficient engagement with rival ideas, given equal pay.
So he was just making excuses the whole time, and actually just didn’t want to consider my ideas. I think he only agreed to be paid to read because he thought he’d look bad and irrational if he refused. I think the problem is that he is bad and irrational, and he wants to hide it.
My first essay criticizing EA was about rationality policies, how and why they’re good, and it compared them to the rule of law. After no one gave any rebuttal, or changed their mind, I wrote about my experience with my debate policy. A debate policy is an example of a rationality policy. Although you might expect that conditionally guaranteeing debates would cost time, it has actually saved me time. I explained how it helps me be a good fallibilist using less time. No one responded to give a rebuttal or to make their own debate policy. (One person made a debate policy later. Actually two people claimed to, but one of them was so bad/unserious that I don’t count it. It wasn’t designed to actually deal with the basic ideas of a debate policy, and I think it was made in bad faith because the person wanted to pretend to have a debate policy. As one example of what was wrong with it, they just mentioned it in a comment instead of putting it somewhere that anyone would find it or that they could reasonably link to in order to show it to people in the future.)
I don’t like even trying to talk about specific issues with EA in this broader context where there’s no one to debate, no one who wants to engage in discussion. No one feels responsible for defending EA against criticism (or finding out that EA is mistaken and changing it). I think that one meta issue has priority.
I have nothing against decentralization of authority when many individuals each take responsibility. However, there is a danger when there is no central authority and also no individuals take responsibility for things and also there’s a lack of coordination (leading to e.g. lack of recognition that, out of thousands of people, zero of them dealt with something important).
I think it’s realistic to solve these problems and isn’t super hard, if people want to solve them. I think improving this would improve EA’s effectiveness by over 20%. But if no one will discuss the matter, and the only way to share ideas is by climbing EA’s social hierarchy and becoming more popular with EA by first spending a ton of time and effort saying other things that people like to hear, then that’s not going to work for me. If there is a way forward that could rationally resolve this disagreement, please respond. Or if any individual wants to have a serious discussion about these matters, please respond.
I’ve made rationality research my primary career despite mostly doing it unpaid. That is a sort of charity or “altruism” – it’s basically doing volunteer work to try to make a better world. I think it’s really important, and it’s very sad to me that even groups that express interest in rationality are, in my experience, so irrational and so hard to engage with.