I quit the Effective Altruism forum due to a new rule requiring that all new posts and comments basically be put in the public domain, without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.
Meta criticism is potentially more powerful than direct criticism of specific flaws. Meta criticism can talk about methodologies or broad patterns. It’s a way of taking a step back, away from all the details, to look critically at a bigger picture.
Meta criticism isn’t very common. Why? It’s less conventional, normal, mainstream or popular. That makes it harder to get a positive reception for it. It’s less well understood or respected. Also, meta criticism tends to be more abstract, more complicated, harder to get right, and harder to act on. In return for those downsides, it can be more powerful.
On average, or as some kind of general trend, is the cost-to-benefit ratio for meta criticism better or worse than for regular criticism? I don’t really know. I think neither one has a really clear advantage, and we should try some of both. Plus, to some extent, they do different things, so again it makes sense to use both.
I think there’s an under-exploited area with high value, which is some of the most simple, basic meta criticisms. These are easier to understand and deal with, yet can still be powerful. I think these initial meta criticisms tend to be more important than concrete criticisms. Also, meta criticisms are more generic, so they can be re-used across different discussions and topics; that’s especially true for the more basic meta criticisms you would start with (whereas more advanced meta criticism may depend more on the details of a topic).
So let’s look at examples of introductory meta criticisms which I think have a great cost-to-benefit ratio (given that people aren’t hostile to them, which is a problem sometimes). These examples will help give a better sense of what meta criticisms are in addition to being useful issues to consider.
Do you act based on methods?
“You” could be a group or individual. If the answer is “no” that’s a major problem. Let’s assume it’s “yes”.
Are the methods written down?
Again, “no” is a major problem. Assuming “yes”:
Do the methods contain explicit features designed to reduce bias?
Again, “no” is a major problem. Examples of anti-bias features include transparency, accountability, anti-bias training or ways of reducing the importance of social status in decision making (such as some decisions being made in random or blinded ways).
Many individuals and organizations in the world have already failed within the first three questions. Others could technically say “yes” but their anti-bias features aren’t very good (e.g. I’m sure every large non-crypto bank has some written methods that employees use for some tasks which contain some anti-bias features of some sort despite not really even aiming at rationality).
But, broadly, those with “no” answers or poor answers don’t want to, and don’t, discuss this and try to improve. Why? There are many reasons but here’s a particularly relevant one: They lack methods of talking about it with transparency, accountability and other anti-bias features. The lack of rational discussion methodology protects all their other errors like lack of methodology for whatever it is that they do.
One of the major complicating factors is how groups work. Some groups have clear leadership and organization structures, with a hierarchical power structure which assigns responsibilities. In that case, it’s relatively easy to blame leadership for big picture problems like lack of rational methods. But other groups are more of a loose association without a clear leadership structure that takes responsibility for considering or addressing criticism, setting policies, etc. Not all groups have anyone who could easily decide on some methods and get others to use them. EA and LW are examples of groups with significant voids in leadership, responsibility and accountability. They claim to have a bunch of ideas, but it’s hard to criticize them because of the lack of official position statements by them (or when there is something kinda official, like The Sequences, the people willing to talk on the forum often disagree with or are ignorant of a lot of that official position – there’s no way to talk with a person who advocates the official position as a whole and will take responsibility for addressing errors in it, or who has the power to fix it). There’s no reasonable, reliable way to ask EA a question like “Do you have a written methodology for rational debate?” and get an official answer that anyone will take responsibility for.
So one of the more basic, introductory areas for meta criticism/questioning is to ask about rational methodology. And a second is to ask about leadership, responsibility, and organization structure. If there is an error, who can be told who will fix it, and how does one get their attention? If some clarifying questions are needed before sharing the error, how does one get them answered? If the answers are things like “personally contact the right people and become familiar with the high status community members” that is a really problematic answer. There should be publicly accessible and documented options which can be used by people who don’t have social status within the community. Social status is a biasing, irrational approach which blocks valid criticism from leading to change. Also, even if the situation is better than that, many people won’t know it’s better, and won’t try, unless you publicly tell them it’s better in a convincing way. To be convincing, you have to offer specific policies with guarantees and transparency/accountability, rather than saying a variant of “trust us”.
Guarantees can be expensive, especially when they’re open to the general public. There are costs/downsides here. Even non-guaranteed options, such as a suggestion box for unsolicited advice, have a cost, even if you never reply to anything. If you promised to reply to every suggestion, that would be too expensive. Guarantees need to have conditions placed on them. E.g. “If you make a suggestion and read the following ten books and pay $100, then we guarantee a response (limit: one response per person per year).” That policy would result in a smaller burden than responding to all suggestions, but it still offers a guarantee. Would the burden still be too high? It depends how popular you are. Is a response a very good guarantee? Not really. You might read the ten books, pay the money, and get the response “No.” or “Interesting idea; we’ll consider it.” and nothing more. That could be unsatisfying. Some additional guarantees about the nature of the response could help. There is a ton of room to brainstorm how to do these things well. These kinds of ideas are very under-explored. An example stronger guarantee would be to respond with either a decisive refutation or else to put together an exploratory committee to investigate taking the suggestion. Such committees have a poor reputation and could be replaced with some other method of escalating the idea to get more consideration.
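The conditional guarantee described above can be sketched as a simple policy check. This is a hypothetical illustration, not a proposal for any specific organization; the class name, thresholds, and per-year limit are just the example numbers from the text (ten books, $100, one guaranteed response per person per year).

```python
from datetime import date

# Hypothetical sketch of a conditional response guarantee: a response is
# guaranteed only if the requester met the stated conditions and hasn't
# already used their one guaranteed response this year.
REQUIRED_BOOKS = 10
FEE_DOLLARS = 100

class GuaranteePolicy:
    def __init__(self):
        # person -> set of years in which they used their guaranteed response
        self.used_years = {}

    def qualifies(self, person, books_read, fee_paid):
        """Return True if this person is owed a guaranteed response now."""
        year = date.today().year
        if books_read < REQUIRED_BOOKS or fee_paid < FEE_DOLLARS:
            return False
        if year in self.used_years.get(person, set()):
            return False  # limit: one guaranteed response per person per year
        self.used_years.setdefault(person, set()).add(year)
        return True
```

The point of writing it down like this is that every condition is explicit and checkable in advance, which is what makes it a guarantee rather than a “trust us.”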
Guarantees should focus on objective criteria. For example, saying you’ll respond to all “good suggestions” would be a poor guarantee to offer. How can someone predictably know in advance whether their suggestion will meet that condition or not? Design policies to not let decision makers use arbitrary judgment which could easily be biased or wrong. For example, you might judge “good” suggestions using the “I’ll know it when I see it” method. That would be very arbitrary and a bad approach. If you say “good” means “novel, interesting, substantive and high value if correct” that is a little better, but still very bad, because a decision maker can arbitrarily judge whatever he wants as bad and there’s no effective way to hold him accountable, determine his judgment was an error, get that error corrected, etc. There’s also poor predictability for people considering making suggestions.
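The contrast between objective and arbitrary criteria can be made concrete. In this hypothetical sketch (the specific conditions, like a word count and a required section header, are invented for illustration), the objective condition depends only on facts the submitter can verify themselves before submitting, whereas an arbitrary condition like “is it good?” depends on a decision maker’s private judgment and can’t be audited.

```python
# Hypothetical objective acceptance condition: deterministic, and checkable
# by the submitter in advance, so the outcome is predictable and auditable.
def objective_condition(suggestion: str) -> bool:
    words = suggestion.split()
    return len(words) >= 200 and "Proposed change:" in suggestion

# By contrast, an arbitrary condition ("respond to all good suggestions")
# would be a function of the reviewer's mood and biases, not of the text;
# there's no way to predict it in advance or show a particular judgment
# was an error. That's the failure mode criticized above.
```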
From what I can tell, my main disagreement with EA is that I think EA should have written, rational debate methods, and EA doesn’t think so. I don’t know how to make effective progress on resolving that disagreement because no one from EA will follow any specific rational debate methods. Also, EA offers no alternative solution that I know of to the problem that rational debate methods are meant to solve. Without rational debate methods (or an effective alternative), no other disagreements really matter because there’s nothing good to be done about them.