I quit the Effective Altruism forum due to a new rule requiring that posts and comments essentially be put in the public domain, without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.
Altruism means (New Oxford Dictionary):
the belief in or practice of disinterested and selfless concern for the well-being of others
Discussions of altruism are often vague about a specific issue: is this selfless concern self-sacrificial? Is it bad for the self or merely neutral? This definition doesn’t specify.
The second definition does specify but isn’t for general use:
Zoology behavior of an animal that benefits another at its own expense
Multiple dictionaries fit this pattern: they don’t specify self-sacrifice (or its absence) in the main definition, then bring it up in an animal-focused definition.
New Oxford’s thesaurus is clear. Synonyms for altruism include:
unselfishness, selflessness, self-sacrifice, self-denial
Webster’s Third suggests altruism involves lack of calculation, and doesn’t specify whether it’s self-sacrificial:
uncalculated consideration of, regard for, or devotion to others' interests sometimes in accordance with an ethical principle
EA certainly isn’t uncalculated. EA does things like mathematical calculations and cost/benefit analyses. The dictionary may have meant something more like shrewd, self-interested, Machiavellian calculation; if so, it really shouldn’t pack so much meaning into one fairly neutral word without explaining what it means.
Another definition does specify:

a way of thinking or behaving that shows you care about other people and their interests more than you care about yourself
Caring about other people’s interests more than your own suggests self-sacrifice, a conflict of interest (where decisions favoring you or them must be made), and a lack of win-win solutions or mutual benefit.
Does EA have any standard, widely read and accepted literature which:
- Clarifies whether it means self-sacrificial altruism or whether it believes its “altruism” is good for the self?
- Refutes (or accepts!?) the classical liberal theory of the harmony of men’s interests?
Harmony of Interests
Is there any EA literature regarding altruism vs. the (classical) liberal harmony of interests doctrine?
EA believes in conflicts of interest between men (or between individual and total utility). For example, William MacAskill writes in The Definition of Effective Altruism:
Unlike utilitarianism, effective altruism does not claim that one must always sacrifice one’s own interests if one can benefit others to a greater extent. Indeed, on the above definition effective altruism makes no claims about what obligations of benevolence one has.
I understand EA’s viewpoint to include:
- There are conflicts between individual utility and overall utility (the impartial good).
- It’s possible to altruistically sacrifice some individual utility in a way that makes overall utility go up. In simple terms, you give up $100 but it provides $200 worth of benefit to others.
- When people voluntarily sacrifice some individual utility to altruistically improve overall utility, they should do it in (cost) effective ways. They should look at things like lives saved per dollar. Charities vary dramatically in how much overall utility they create per dollar donated.
- It’d be good if some people did some effective altruism sometimes. EA wants to encourage more of this, but it doesn’t want to be too pressuring, so it does not claim that large amounts of altruism are a moral obligation for everyone. If you want to donate 10% of your income to cost-effective charities, EA will say that’s great, instead of calling you a sinner for still deviating from maximizing overall utility. (EA also has elements which encourage some members to donate a lot more than 10%, but that’s another topic.)
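The cost-effectiveness reasoning in the list above can be sketched numerically. This is a minimal illustration with entirely made-up charity names and cost figures (EA sources report widely varying real estimates), showing why comparing lives saved per dollar matters:

```python
# Hypothetical cost-effectiveness comparison. All names and figures
# below are invented for illustration, not real charity data.
charities = {
    "Charity A": 4_500,   # assumed cost in dollars per life saved
    "Charity B": 50_000,  # assumed cost in dollars per life saved
}

donation = 1_000  # dollars donated to each charity

for name, cost_per_life in charities.items():
    lives_saved = donation / cost_per_life
    print(f"{name}: {lives_saved:.3f} lives saved per ${donation:,} donated")

# On these assumed numbers, Charity A is about 11x as cost-effective
# as Charity B, which is the kind of gap EA argues donors should act on.
```

The point of the sketch is only that identical donations can differ enormously in impact depending on where they go, which is the comparison EA says altruists should make.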
MacAskill also writes:

Finally, unlike utilitarianism, effective altruism does not claim that the good equals the sum total of wellbeing. As noted above, it is compatible with egalitarianism, prioritarianism, and, because it does not claim that wellbeing is the only thing of value, with views on which non-welfarist goods are of value.
EA is compatible with many views on how to calculate overall utility, not just the view that you should add up every individual utility. In other words, EA is not based on a specific overall/impersonal utility function. Nor does EA advocate that individuals adopt any particular individual utility function, or claim that the world population currently has a certain distribution of individual utility functions.
All of this contradicts the classical liberal theory of the harmony of men’s (long-term, rational) interests, yet doesn’t engage with it. EA’s writers seem unaware of the literature they’re disagreeing with (or they’re aware and refusing to debate it on purpose?), even though some of it is well known and easy to find.
Total Utility Reasoning and Liberalism
I understand EA to care about total utility for everyone, and to advocate that people altruistically do things which have lower utility for themselves but create higher total utility. One potential argument for this is that if everyone did it, then everyone would have higher individual utility.
A different potential approach to maximizing total utility is the classical liberal theory of the harmony of men’s interests. It says, in short, that there is no conflict between following self-interest and maximizing total utility (for rational men in a rational society). When there appears to be a conflict, so that one or the other must be sacrificed, there is some kind of misconception, distortion or irrationality involved. That problem should be addressed rather than accepted as an inherent part of reality that requires sacrificing either individual or total utility.
According to the liberal harmony view, altruism claims there are conflicts between the individual and society which actually don’t exist. Altruism therefore stirs up conflict and makes people worse off, much like the Marxist class warfare ideology (which is one of the standard opponents of the harmony view). Put another way, spreading the idea of conflicts of interest is an error that lowers total utility. The emphasis should be on harmony, mutual benefit and win/win solutions, not on altruism and self-sacrifice.
It’s really bad to ask people to make tough, altruistic choices if such choices are unnecessary mistakes. It’s bad to tell people that getting a good outcome for others requires personal sacrifices if it actually doesn’t.
Is there any well-known, pre-existing EA literature which addresses this, including a presentation of the harmony view that its advocates would find reasonably acceptable? I take it that EA rejects the liberal harmony view for some reason, which ought to be written down somewhere. (Or they’re quite ignorant, which would be very unreasonable for the thought leaders who developed and lead EA.) I searched the EA forum and it looks like the liberal harmony view has never been discussed, which seems concerning. I also did a web search and found nothing regarding EA and the liberal harmony of interests theory. I don’t know where or how else to do an effective EA literature search.