EA risks falling into a "meta trap". But we can avoid it. (EA is effective altruism. It is favored by people who believe they are into reason and logic, similar to the Less Wrong community. The standard type is an atheist who rejects superstition, loves science, talks about thinking fallacies and biases, and reads non-fiction books.)
While donating to AMF all our lives is great, if we can spend our effort to get two people to donate to AMF instead of us, we’ve doubled our impact.
The author means: instead of person A donating $100 to charity, he can spend $100 on marketing/outreach/persuasion to get persons B and C to each donate $100 to the charity. He claims that, if that works out, then it means person A has doubled his impact.
Is that good? I think it means person A doubled his impact at triple the total cost. Now $300 was spent instead of $100.
This is the same issue as the broken window fallacy, also known as the fallacy of the seen and unseen. (The seen is the window repair guy getting paid and then spending that money at a bakery and then the baker buying shoes and so on. The unseen is that the window owner would have spent the money on something else if he didn't have to buy a window repair, e.g. he would have bought a suit and then the tailor would have bought bread and then the baker would have bought shoes and so on. So breaking the window did not stimulate the economy by creating demand for window repairs and thus make people better off. Breaking windows is bad.) The seen here is persons B and C donating $100 each to the charity. The unseen is what they would have used that money for otherwise. That $200 would have been spent elsewhere, and might have provided more value than an additional $100 for the charity.
(If you don't recognize my explanation of the broken window fallacy, and want to learn more, read Economics in One Lesson. I'm just repeating economists like Bastiat and Hazlitt, not saying anything new. The book description at the link states 'this is the book that made the idea of the "broken window fallacy" so famous'. It's a great introductory book which doesn't require doing math.)
In the author's math, the $200 spent elsewhere has zero impact. It's worthless. That's not a considered opinion, it's because he forgot to count it for anything, he didn't think about it, just like the broken window fallacy forgets to consider what the window repair money could have been spent on instead.
Suppose the $200 would have been spent quite badly, so it would have had maybe $50 of impact (25% effectiveness on a scale where the charity is 100% effective). That's generous and lets his meta strategy come out ahead, but not by double. Let's do the math on how much person A actually helped anything. If he donates $100, and the other people spend their money badly, the total impact is $150. If person A does marketing and gets B and C to donate, the total impact is $200. The increase in impact in this generous scenario is 33%, not 100%. (100% more impact means double, which is the claimed impact.)
If B and C would have spent their money at 50% effectiveness, then everything comes out equal. If they would have spent their money at 75% effectiveness, then person A hasn't doubled his impact; he's made the world worse.
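The arithmetic above can be sketched in a few lines. The effectiveness percentages are the same illustrative assumptions, not data:

```python
# Person A has $100. He can donate it directly, or spend it on marketing
# to get B and C to donate $100 each (money they would otherwise have
# spent at some baseline effectiveness).

def impact_if_a_donates(baseline_effectiveness):
    """A donates $100 at 100% effectiveness; B and C spend their $200 at the baseline."""
    return 100 + 200 * baseline_effectiveness

def impact_if_a_markets(baseline_effectiveness):
    """A's $100 is consumed by marketing; B and C donate $200 at 100% effectiveness."""
    return 200

for eff in (0.25, 0.50, 0.75):
    print(f"baseline {eff:.0%}: donate={impact_if_a_donates(eff):.0f}, "
          f"market={impact_if_a_markets(eff):.0f}")
```

At a 25% baseline the meta strategy wins by 33%, at 50% it's a wash, and at 75% it makes the world worse.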
Also, charities can handle their own marketing. If you donate $100, the charity itself can then use that for marketing and bring in $200 of donations. If they don't think more marketing is the best use of that $100, there is a reason.
Some charities seem happy to spend $100 asking for donations in order to bring in $101 of additional donations. This makes the world a worse place! A lot more wealth gets spent on mailing letters and other things that don't help people.
The author thinks spending $100 to bring in $200 of donations is a $100 win. By the same logic, spending $100 to bring in $101 of donations is a $1 win. He'd see it as a positive thing because he forgets that those $101 of additional donations would have been spent on something else that would have been a larger win than the $1 benefit he sees.
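The accounting error can be sketched with hypothetical numbers (the 50% alternative effectiveness is an assumption for illustration):

```python
# A charity has $100 and is deciding whether to spend it on fundraising
# letters that will bring in $101 of additional donations.
charity_effectiveness = 1.0      # value per dollar the charity spends on its mission
alternative_effectiveness = 0.5  # assumed value per dollar the donors' money gets otherwise

# Option 1: no fundraising. The charity spends its $100 on the mission;
# the would-be donors spend their $101 elsewhere.
no_fundraising = 100 * charity_effectiveness + 101 * alternative_effectiveness

# Option 2: fundraising. The $100 is consumed mailing letters (~0 value);
# the $101 of donations goes to the mission.
fundraising = 101 * charity_effectiveness

print(no_fundraising, fundraising)  # 150.5 vs 101.0 — the fundraising made things worse
```

The "seen" $1 win only looks like a win if the $101 is counted as worth zero anywhere else.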
Conclusion: The EA community is grossly incompetent. It's not just this one writer (who participates in EA discussions a ton), it's the whole community, or else he would have been corrected (the post was high effort and got significant attention, and there are a bunch of very positive comments). They are literally doing broken-window-fallacy level thinking while believing they are cleverly improving charity, and the whole big community of "smart" people do not see and correct the error.
> Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
> Which is more probable? (1) Linda is a bank teller or (2) Linda is a bank teller and is active in the feminist movement.
> When asked by Tversky and Kahneman, the majority of people picked #2. However, this isn’t possible, since the probability of two events both occurring cannot be greater than the probability of one of those two events occurring.
> This is called the conjunction fallacy, and it is a classic bias of human rationality. However, it’s also a classic bias of meta-charity.
How have none of them figured out yet that that study is terrible? The people weren't doing fallacious mathematical analysis, they were interpreting a communication in a non-literal way – which is how most communication in our society works. They thought they were being asked which is a better story about Linda, something like that.
All the study shows is that you can create misunderstandings if you ask hyper-literal questions of people who don't do much hyper-literal thinking in their lives, and who don't even realize you want the kind of answer they'd give if they were autistic and wanted to be mocked by their friends (giving that type of answer is, admittedly, something they aren't very good at doing, because they haven't practiced it).
Here's the paper. It has plenty of the usual problems with this sort of research, in addition to the above. https://apps.dtic.mil/dtic/tr/fulltext/u2/a131801.pdf
The paper does have some better arguments than the one people remember and quote. They asked the people some other stuff. But it still has broad problems, like not doing enough to get people to even try to be in hyper-literal thinking mode. Also, not having practiced arguing like an autistic hair-splitter (so even if they tried their best, which they didn't in this study, they might not do a great job) is different than having a cognitive bias. It's the same as how they would suck at chess the first time they played, having not practiced before. But by the methods of this study, you could prove they have e.g. a "letting your opponent capture your pieces with his pawns" cognitive bias, b/c, when asked what move they would play in a chess position, they gave an incorrect answer. Most people aren't logicians or mathematicians or anything like that. That doesn't mean their brains are broken or deficient or whatever. It just means they don't have certain skills they haven't ever learned but could learn if they wanted to (they don't) (and maybe also if the available educational materials weren't so bad and their experience attending school hadn't alienated them from learning).
Normally when you persuade someone to buy something instead of any alternatives, they only prefer it by a small margin. If you tell them about a better way to use $100, it might only be $3 better than what they were going to do before. If you spend an average of $50 per person to tell them how to do something which is slightly ($3) better, that's bad! Charities try to get around this by asking you to *sacrifice yourself* (altruism), and actually come out behind, in order to provide a larger win to other people.
The model many charities use is kinda like: You could spend $100 on clothes with $150 of value to you, or you could donate $100 to our charity for $10 worth of feeling good and we'll use it to provide clothes to orphans who will get $300 of value out of those clothes. Thus you come out $90 behind instead of $50 ahead, but the world comes out ahead overall (the $100 led to $310 of value instead of $150). This model is different than telling me about some better clothes so that I can get $160 of value of clothes for my $100 (a small improvement).
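The sacrifice model's arithmetic, using the same illustrative dollar figures:

```python
# Option 1: spend $100 on clothes for yourself.
value_to_you_clothes = 150
# Option 2: donate $100; you get $10 of feeling good, orphans get $300 of clothes.
value_to_you_donation = 10
value_to_orphans = 300

# From your perspective (value received minus the $100 spent):
your_net_clothes = value_to_you_clothes - 100    # +50: you come out ahead
your_net_donation = value_to_you_donation - 100  # -90: you sacrifice

# From the world's perspective (total value the $100 created):
world_clothes = value_to_you_clothes                        # 150
world_donation = value_to_you_donation + value_to_orphans   # 310

print(your_net_clothes, your_net_donation, world_clothes, world_donation)
```

The model asks you to take the -$90 personally so the world gets $310 instead of $150.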
Even if you accept this model, it changes the math compared to how the EA author did the calculation. He didn't make a competent effort to do the math correctly.
The value people get from buying products is roughly proportional to the sale price. The value the company gets from selling them is related to profit margin. Those are different things. This limits how much marketing for-profit companies do.
A for-profit company with 10% profit margins will only spend $100 on marketing if they can bring in $1000+ of sales revenue (which translates to $100+ profit). But a charity will spend $100 on marketing to bring in only $110 because they view their profit margins on donations as 100%.
How much benefit do the customers get? Let's say customers have 20% margins on average: they pay $100 and get $120 of value. Why? Well, the max they would pay is $120, and the minimum the store would sell for is $90 if they were negotiating. The price needs to be somewhere in the middle so both the store and customer benefit. The store sets a price of $100 in order to make some profit while also providing an attractive deal to enough customers (both in terms of them getting enough value from the product to pay that much, and also because if the store charged $110 while another company charges $100, customers won't pay $110 even if the product is worth $200 to them, cuz they can get it for $100).
So basically on $1000 of sales, the company makes $100 profit and the customers make $200 of profit (they pay $1000 and get $1200 of value). So $100 of marketing happened to a group of customers and, in return for opening their mail, or seeing billboards, or whatever, the group got $200 of value.
If charities have a similar ratio – a $100 donation gets the donor $120 of value – then $110 of donations creates $22 of value for the donors to make up for putting up with $100 of marketing.
The marketing-to-value-creation ratio is far worse with charities because of their 100% profit margins. Lower profit margins require more total sales per marketing dollar, and the benefit to the public is proportional to total sales.
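A sketch of the margin math above (all numbers illustrative; the 20% buyer/donor surplus is an assumption carried over from the earlier example):

```python
def sales_needed(marketing_spend, profit_margin):
    """Revenue required for marketing to pay for itself at a given profit margin."""
    return marketing_spend / profit_margin

# For-profit with 10% margins: $100 of marketing must drive $1000+ of sales.
forprofit_sales = sales_needed(100, 0.10)

# Charity treating donations as 100% margin: $100 need only bring in $100+.
charity_sales = sales_needed(100, 1.00)

# If buyers/donors capture ~20% surplus on what they pay, the public gets:
forprofit_customer_value = forprofit_sales * 0.20  # ~$200 per $100 of marketing
charity_donor_value = charity_sales * 0.20         # ~$20 per $100 of marketing

print(forprofit_customer_value, charity_donor_value)
```

The tenfold margin difference translates directly into a tenfold difference in how much public benefit each marketing dollar has to generate before it gets spent.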
This is a reason non-profits are dangerous: they have bad incentives and advertise too much.
This line of argument assumes basically the first model, not the second (sacrificial) model, from #12248. I'll leave considering how to modify the analysis for the charity-is-sacrifice model as something for you to think about.
Pointing out the broken window fallacy thinking in EA is a big deal and the sort of thing that should cause lots of people to stop and reanalyze.
they would just deny that it’s in other EA materials. it's a one-off error! bad luck that the ppl who read that particular article didn’t happen to notice it while paying attention to the actual main points of the article!
well, some ppl would deny it’s the broken window fallacy. but the better ones would do the other thing, and it’d be a big distraction and hard to get a quick, decisive debate result.
they'd pretend it was just this one minor comment in this one article that isn’t super important. they would not, however, supply us with a link to their most important article, for which any error really would be a big deal...
why don't you post a comment on the EA article and see how they react?
I think we should all read their books and analyse them. What say you?
#12251, I think the seen/unseen distinction in the broken window fallacy is another way of saying "consider the counterfactual" (what would have happened anyway in a given situation without the intervention). I think this is very common in EA and already considered in most EA analyses. Does that seem correct to you, or am I missing something that differentiates the two arguments?
My impression is that measuring the impact of an intervention against its counterfactual is exactly the kind of analysis that made GiveWell unique and valuable in the charity evaluation space, and is now much more common in international development in the form of RCTs as well. It seems to me like this is an expected part of an EA analysis already. Do you see it the same/differently?
"In the author's math, the $200 spent elsewhere has zero impact. It's worthless. That's not a considered opinion, it's because he forgot to count it for anything, he didn't think about it, just like the broken window fallacy forgets to consider what the window repair money could have been spent on instead."
This seems wrong to me, like the most uncharitable reading that misses the context. EA's whole claim is that people should pay a lot of attention to what their money and time could do if spent intentionally and with impact in mind. They don't claim that people should donate the $200 to cause X because that money will have zero impact elsewhere. They have written articles and books showing that they looked at the counterfactual impact and think that it's less valuable to spend the $200 elsewhere. That might be a point of legitimate disagreement, but it seems odd to assume they haven't thought about this.
The standard EA argument is that most people either don't give to charity at all or they give to ineffective charity. A reasonable reading to me of what the author was likely thinking when he wrote that was that the $200 spent elsewhere would have gone to (a) personal consumption instead of charity and he thinks this is less good for the world, (b) a less effective charity, which would be less good for the world, or (c) an actively damaging charity, which would be net negative for the world.
(If it's relevant, the article is also 4 years old).
#12292 Double impact is a *mathematical* claim. Your interpretations don't even try to make the math work out.
If you want to link the best EA article – one where any important mistake would shake your confidence in EA being any good – go ahead. I don't think talking non-specifically about what EA is like, without quotes, will get anywhere. The previous blog post, right before this one, actually talks about that: https://curi.us/2194-discussion-policy-quotes-or-youre-presumed-wrong