Sam Harris wrote an article against economic freedom. Every sentence is nasty. I reply to a few:
> How Rich is Too Rich?
The title is a leading question. It's asking for an answer like 3 million, 50 million, or a billion dollars. It's assuming there is an amount of wealth that's too rich, and the issue is just to decide where the line is. But that premise is incorrect. There is no "too rich". Wealth is a good thing. More wealth isn't bad.
Also, in our culture, the title will be understood to refer to individual wealth, and maybe corporate wealth, but not government wealth, university wealth, or non-profit foundation wealth.
[Hearst Castle Photo, at the top]
The uncaptioned photo is misleading. The article opens by talking about wealth inequality and rich individuals. But that's a photo of a government-owned tourist attraction, not a private residence. It's not a picture of wealth inequality.
> I’ve written before about the crisis of inequality in the United States and about the quasi-religious abhorrence of “wealth redistribution” that causes many Americans to oppose tax increases, even on the ultra rich.
Ludwig von Mises and many other economists and political philosophers have written arguments against wealth redistribution and related concepts like socialism, statism, interventionism, initiating force, central planning, and the erosion of property rights. Rather than address these arguments, Harris just incorrectly implies they're a matter of religious faith.
> The conviction that taxation is intrinsically evil has achieved a sadomasochistic fervor in conservative circles—producing the Tea Party, their Republican zombies, and increasingly terrifying failures of governance.
"intrinsically evil" is a straw man. "sadomasochistic fervor" is an insult. "Tea Party" is brought up negatively, without specifying anything negative about it. "Republican zombies" is an insult. The assertion that failures of governance are due to taxes being too low is false and unargued. The intensifier "increasingly terrifying" is aggressive, emotional rhetoric, without facts or reasoning provided.
We've now made it through the first paragraph of the article. I'll speed up for the rest.
> Of course, this is just an economic cartoon.
After more insults and straw men, but no economic arguments, Harris declares that people who disagree with him are cartoon idiots. He follows up with wild uncited assertions. E.g. he thinks capitalism is at fault for the 2008 financial crisis, but he doesn't engage with the many books explaining why that's incorrect.
> If you are an economist and believe that you have detected any erroneous assumptions above, please write to me here.
As I write this, the linked contact form doesn't exist. Also, this is dishonest because many economists have published detailed explanations of why the things Harris is saying are false. He's just ignoring them as if they don't exist, rather than trying to respond to any.
> The federal government should levy a one-time wealth tax (perhaps 10 percent for estates above $10 million, rising to 50 percent for estates above $1 billion) and use these assets to fund an infrastructure bank.
This is a proposal for using physical force on a huge scale. Harris wants to forcibly take "a few trillion dollars" for projects he considers wise, including environmentalism. He doesn't understand liberal ideas like the advantages of dealing with people on a voluntary basis, using persuasion instead of force, or only interacting in a win/win way (when all parties think they're better off by proceeding).
Also, I don't think Harris thought through the practical details of his plan. Why does he think most or all multi-billionaires have ~50% or more of their wealth in liquid assets? And what happens if they don't? Do they have to take huge losses selling off illiquid assets?
And stocks won't be liquid enough when all the rich are trying to unload large blocks of shares at the same time. The market would crash.
Consider if e.g. Jeff Bezos had to dramatically reduce the amount of wealth he has invested in Amazon. Here are four basic possibilities:
- Bezos finds new investors to replace him. This would basically be impossible when all the other rich people are also trying to come up with cash to pay a huge tax.
- Bezos sells the stock at a huge loss and is unable to pay the government half of what his net worth on paper used to be. He's pointlessly ruined.
- Amazon buys the stock back from Bezos (at near full price) and operates with much less capital than it had before. A tax intended to take money from rich individuals ends up hurting businesses.
- The government accepts non-cash assets as tax payments. Bezos simply hands over a large portion of his ownership of Amazon to the government. And if the government wanted to sell this stock so it could do the projects Harris wants, it'd face the same difficulties that Bezos did.
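Harris doesn't spell out the bracket structure, so the numbers here are a guess at what he means: a minimal sketch, assuming a simple two-rate marginal scheme (10% above $10 million, 50% above $1 billion) and a made-up 5% liquidity figure, just to show the size of the cash problem:

```python
# Toy sketch of the proposed one-time wealth tax. The exact bracket
# structure is unspecified in the article; this assumes a two-rate
# marginal scheme: 10% on wealth above $10M, 50% on wealth above $1B.

def one_time_wealth_tax(net_worth):
    """Return the hypothetical tax owed (in dollars) on a given net worth."""
    tax = 0.0
    if net_worth > 10_000_000:
        # 10% on the portion between $10 million and $1 billion
        tax += 0.10 * (min(net_worth, 1_000_000_000) - 10_000_000)
    if net_worth > 1_000_000_000:
        # 50% on the portion above $1 billion
        tax += 0.50 * (net_worth - 1_000_000_000)
    return tax

# A $100 billion fortune held mostly in company stock; the 5% liquidity
# figure is an assumption for illustration, not a real statistic.
net_worth = 100_000_000_000
tax_due = one_time_wealth_tax(net_worth)
liquid = 0.05 * net_worth

print(f"tax due:   ${tax_due:,.0f}")   # ~$49.6 billion
print(f"liquid:    ${liquid:,.0f}")    # $5 billion
print(f"shortfall: ${tax_due - liquid:,.0f}")
```

Under these assumptions the tax bill is roughly ten times the owner's liquid assets, which is the gap the four scenarios above are about.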
> Contrary to many readers’ assumptions, I am not recommending that the federal government confiscate productive capital from the rich to subsidize the shiftlessness of people who do not want to work.
But he is advocating that the federal government confiscate productive capital from the rich. It's just for a different intended purpose.
> to the eye of this non-economist, it seems obvious
Why doesn't he try reading some economics books to find out what he's missing? The answers seem obvious to him because he's arrogant, despite knowing he's ignorant of the field.
> Yes, I share everyone’s fear that our government, riven by political partisanship and special interests, is often incapable of spending money wisely. But that doesn’t mean a structure couldn’t be put in place to prevent poor uses of these funds.
Harris doesn't propose any structure that would prevent poor use of the funds, nor does he acknowledge that this is a hard problem which people have been trying to solve for centuries without much success. Putting a structure in place to make government more effective is not a new idea, but Harris treats it like an answer even though he apparently hasn't thought of a structure that would work, nor can he point to one that anyone else has thought of.
The article is hateful throughout, advocates massive use of force (taking trillions of dollars from their owners, who give up their property only because they don't want to be shot, jailed, or similar), and doesn't even try to engage with the economics literature or even a fair version of what Republicans think. Harris wrote a bunch of biased insults against large groups of mainstream Americans, but didn't contribute a single topical, relevant argument to the current debate about wealth inequality.
Harris also wrote a followup article unreasonably claiming that what his critics objected to was "suggesting that taxes should be raised on billionaires". He then contradicted that by admitting, "Many readers were enraged that I could support taxation in any form." But what about how Harris insulted all Republicans as zombies? And what about his overall message of hatred for everyone who favors liberal ideas like economic freedom, peace, and property rights?
Related Post: Criticism of Sam Harris' The Moral Landscape
Wow. You kicked his ass. I used to like him for his criticism of Islam.
> Wow. You kicked his ass.
It's not about kicking people's asses or beating others in some sort of competition, though. ET explained flaws in Harris's ideas. That's an amazing *gift*, not an ass-kicking.
From the follow-up article:
> The result was Objectivism—a view that makes a religious fetish of selfishness and disposes of altruism and compassion as character flaws. If nothing else, this approach to ethics was a triumph of marketing, as Objectivism is basically autism rebranded.
>And I say this as someone who considers himself, in large part, a “libertarian”—and who has, therefore, embraced more or less everything that was serviceable in Rand’s politics. The problem with pure libertarianism, however, has long been obvious: We are not ready for it. Judging from my recent correspondence, I feel this more strongly than ever. There is simply no question that an obsession with limited government produces impressive failures of wisdom and compassion in otherwise intelligent people.
He provides no arguments and says things like "has long been obvious" and "There is simply no question". Those are bad ways to deal with ideas.
> basically autism rebranded
> And I say this as someone who considers himself, in large part, a “libertarian”
In what sense is a person who wants to take trillions of dollars by force from the rich a "libertarian"? He's a particularly extreme statist.
>>But that doesn’t mean a structure couldn’t be put in place to prevent poor uses of these funds.
Isn't it his argument that the government was the structure that was supposed to prevent poor use of funds by the rich? Now he wants to create another structure to prevent the same thing from the government.
He should be thinking of structures that allow error correction instead of structures that prevent errors.
>>for the richest possible person must still spend money on something, thereby spreading wealth to others.
Spending money means acquiring wealth, not spreading it.
>>Future breakthroughs in technology (e.g. robotics, nanotech) could eliminate millions of jobs very quickly, creating a serious problem of unemployment.
It seems to me that people who expect that to happen are showing some kind of bias. Why would one be optimistic about our ability to create technology but pessimistic about our ability to create new jobs?
> It seems to me that people who expect that to happen are showing some kind of bias. Why would one be optimistic about our ability to create technology but pessimistic about our ability to create new jobs?
They don't understand scarcity. There's always more to do to better satisfy people's wants. If robotics can satisfy some wants with less human labor, great, humans can labor more to satisfy some other wants instead and then be better off overall (they get the same stuff as before, via robot, plus new things). And, besides, if there somehow wasn't scarcity anymore, there'd be plenty for everyone (by definition), so what would one need a job for?
Whenever some currently high-priority wants are addressed, it opens up the pursuit of currently lower priority wants. People would like far more stuff than they can currently get. And, again, if there was so much that people stopped wanting more stuff and preferred to spend their time reading books or socializing or whatever, then what would be the harm of being unemployed? Unemployment is scary only when you have less than you want (which means there is still work to be done that is not being done by robots).
#10698 great argument re: lack of scarcity not being scary
>And, besides, if there somehow wasn't scarcity anymore, there'd be plenty for everyone (by definition), so what would one need a job for?
I know. Some people say, "what if machines steal all the jobs? How are we going to get money?" But what do they want money for? If machines already produce everything, there's no need for money.
#10701 one potential fear is a society where some rich ppl are post-scarcity and then some poor people haven't got enough.
but the poor people could work for each other, and have their own economy.
and anyway if i was so rich i didn't care for more wealth, i'd be rich enough to feed billions of people (and the expense would matter less to me than $20 does today). if no one else fed them and gave them basic stuff (someone would), i'd do it. besides, there are incentives to do it. feeding a billion people would get me a lot more readers for my blog posts...
Alan wrote more criticism of Harris on this topic: https://conjecturesandrefutations.com/2018/08/21/harris-on-hoarding/
I think economics is Sam's weakest point. And I disagree with him on this.
But not as fervently as most of the people on this thread. I will try to improve Sam's argument here, so someone can point out what I am missing.
To counter Sam, one can argue that as some jobs disappear, other jobs will be created. As Neil deGrasse Tyson says, "who could have said 100 years ago there will be a job today - massage therapist. And, forget about massage therapists, there are *dog psychologists*!"
This runs nicely with the statement that humans have unlimited needs. However, the economy only exists to deal with the needs of other human beings. If there is a need for a physical resource, its price is only a factor if there are other human beings who also need this resource. So, an economy is only necessary for humans serving other humans. Enter AI.
Whatever future job you can think of, you need to answer one additional question that need not have been answered in the past:
"For a hypothetical future job that does not yet exist... Why would you entrust a human with this job? What extra benefit can you derive from the fact that a primate will deliver the results?"
Probably, the last human jobs will be some form of services that are deeply rooted in human-human relationships. Massages, grooming, friendship, fake psychotherapy (the kind not interested in results but only in emotional relief)...
The question remains whether these, too, will eventually be delegated to AI.
> This runs nicely with the statement that humans have unlimited needs. However, the economy only exists to deal with the needs of other human beings. If there is a need for a physical resource, its price is only a factor if there are other human beings who also need this resource. So, an economy is only necessary for humans serving other humans. Enter AI.
> Whatever future job you can think of, you need to answer one additional question that need not have been answered in the past:
> "For a hypothetical future job that does not yet exist... Why would you entrust a human with this job? What extra benefit can you derive from the fact that a primate will deliver the results?"
There is some AI supremacist premise here which is unargued. AIs would just be people running on silicon. They'd acquire knowledge and figure things out by the same means. If you disagree you should explain why.
also even IF AIs were categorically better at stuff, there would still be value in trading with humans, for the same reason a doctor can benefit from hiring someone to do data entry even if he's faster at doing data entry than the person he hires.
#10732 Hmm. Talk about AI can sound like SF. But there is nothing unreasonable in what I said, given enough time...
AI wouldn't be like people. For intellectual tasks, they would be superior sooner. For all other tasks, they would be superior later. The difference between sooner and later can be measured in hundreds of years.
I am not explaining the whole reasoning behind the concept of "superintelligence". I think whoever has listened to enough of Sam's work will have heard about it more than enough. Also, people who have talked on the subject: Nick Bostrom, Ray Kurzweil, Elon Musk, S. Hawking, ...
Now, as far as I can see, in the end, you are applying the principle of comparative advantage to humans and AIs?
This just doesn't apply. Suppose a doctor can find a Michael Jordan with a knowledge of theoretical physics rivalling A. Einstein's, who is also perfectly happy doing data entry all day and all night with no lunch breaks or toilet breaks, for the cost of $0.20/h.
I think it is clear that this person gets the job.
#10733 What I wanted to say: I think, once the software is solved, AIs can scale up. We can multiply them (or they can multiply each other) as much as they want.
> #10732 Hmm. Talk about AI can sound like SF. But there is nothing unreasonable in what I said, given enough time...
> AI wouldn't be like people. For intellectual tasks, they would be superior sooner. For all other tasks, they would be superior later. The difference between sooner and later can be measured in hundreds of years.
> I am not explaining the whole reasoning behind the concept of "superintelligence". I think whoever has listened to enough of Sam's work will have heard about it more than enough. Also, people who have talked on the subject: Nick Bostrom, Ray Kurzweil, Elon Musk, S. Hawking, ...
That is a list of names of people who presumably disagree with each other on various issues related to this topic.
Do you have any writing setting out a position which you'll take responsibility for? Whether it's your own writing or someone else's.
> Now, as far as I can see, in the end, you are applying the principle of comparative advantage to humans and AIs?
> This just doesn't apply. Suppose a doctor can find a Michael Jordan with a knowledge of theoretical physics rivalling A. Einstein's, who is also perfectly happy doing data entry all day and all night with no lunch breaks or toilet breaks, for the cost of $0.20/h.
A world class thinker/athlete would be hiring others to do tedious data entry jobs. They would be a SOURCE of such jobs, not a competitor. The situation you're imagining couldn't happen.
You're doing something like adding a bunch of high quality labor but then holding the overall economic situation, including the jobs available, static.
There's lots of problems with this but one is that economies aren't static! A bunch of high quality thinkers would engage in entrepreneurship, creating new jobs.
Also IRL there are things like ongoing huge shortages of skilled programmers, so severe that they command huge salaries. There are tons of jobs at the "top" before you get to data entry, and filling those jobs would only grow the pie further. High quality labor isn't a harm to lower earners but a benefit.
#10735 As I started writing, I said I disagree with Sam. Then I proceeded to conjure up a scenario in which Sam's concern would be valid.
Sure they disagree with each other. They disagree on details. They have the same core concern in common.
I don't have to write out the details of how things will unfold in the future, if this is what you are asking.
To build on what you said, I will provide an improvised example.
You said, "A bunch of high quality thinkers would engage in entrepreneurship, creating new jobs". By 'thinkers' I assume you meant - people. Why do you think biological intelligence will always be the one initiating business activities?
Discovering/anticipating people's needs is *the* activity that an entrepreneur is engaged in.
Why should we believe that biological intelligence will always do this activity better than synthetic intelligence? Take IoT, auto piloted drones, big data from social networks, etc. At some point, some program might produce an output: "It should be economically profitable in 5 years to start building a spa center in the region X. Pessimistic estimate (mean - 1sigma) gives 1% profit. Optimistic estimate gives 20% profit."
A program can then use the publicly available government websites / API and open up a company. A program can create a comprehensive business plan. It can connect to ZipRecruiter, Toptal and other sites to hire people it needs to do "manual" work. Then dispatch the detailed business plan to key people.
All of what I said is possible within 10-20 years (considering that humans would still be involved in the process as employees, doing the gruntwork.)
This kind of analogy can be made for any human activity. Although we hit a wall with actual robots that need to be made sufficiently like humans. This, for now, seems like a very distant future. In theory, it shouldn't be impossible. Just look at a documentary called Westworld.
#10735 Nothing needs to remain static. You add a highly skilled workforce. Some other human need emerges as the most pressing one. A question appears of how to fulfill this need.
Currently, at the beginning, other people work on fulfilling the pressing needs. After some time period, the process becomes automated.
With AI becoming more and more versatile (more and more _general_), this time period will get shorter and shorter. Until we get to a point:
A new need emerges and you immediately go to an AI for a solution. The AI generates a solution in theory, simulates every step, and finds potential pitfalls. For implementing the solution you only need low-skilled, low-pay workers.
I am a programmer myself. I think, in principle, there is nothing stopping an AI from coding other programs. The problem is the human-computer interface: explaining all the program requirements to a computer without assuming 'common sense'. Currently, this is done via cumbersome programming languages. But the future will probably bring these languages closer to English.
Note: When I write "AI" I mean what is sometimes called "AGI", and I read the above comments in that way as well. But then I saw e.g. the auto piloted drones, which do *not* involve AGI and are much inferior to a human mind, so there may be some confusion about this. Such confusions are common – lots of people don't realize that AlphaGo is qualitatively different than an AGI, and mistakenly believe it's a significant step along a path to AGI.
#10733 AI would be universal knowledge creators (can create any knowledge that can be created by any knowledge creator), as humans already are. Universality can't be outcompeted or beaten (since they already include everything) except by coming up with a different kind of thing. E.g. a universal earth fishing pole (capable of catching all types of fish on Earth) could be outcompeted by magic wand of catching all animals (including fish) because it has a different type of universality (animal catching vs. fish catching). What type of universality AIs would have that humans don't, that would be superior to universal knowledge creator, has not been addressed by anyone. For info on universality, etc, see David Deutsch's book: http://beginningofinfinity.com (DD and I are familiar with the superintelligence literature you refer to, but those people are inadequately familiar with our work and have not answered DD's book.)
#10732 is correct that trade between humans and AIs would make sense due to *comparative advantage*. I think that would stop only in the very far future (granting, for the sake of argument, that AIs would vastly outcompete humans, which I deny), because economically productive projects need prioritization. Just because something makes a profit doesn't mean it's worth doing when that same attention could go to a more important project. The guiding and directing intelligence of businessmen is scarce. But that's OK. At that point there'd be plenty of wealth for humans to have a very high standard of living without it being a sacrifice for anyone.
But humans can get brain hardware upgrades, upload themselves into powerful computers, or whatever else. Thus AIs would have no fundamental advantage over humans. They'd just be people who started in a different initial configuration of their body. Further, AIs would be afflicted by the same things that make humans stupid: bad parent and educational knowledge. Most AI writers neglect the need for AIs to learn, or assume that AIs would somehow be much better at learning from existing educational resources (despite the poor quality), or assume that new and much better resources for AI education would be created (but we don't know how to do that anymore than we know how to fix our schools or get parents to stop destroying the curiosity of their children).
#10733 is incorrect that the economic logic of comparative advantage stops applying when one party is much more productive than another.
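A toy calculation can make the comparative advantage logic concrete. The rates and hours below are made up for illustration, not taken from the thread:

```python
# Comparative advantage sketch: a doctor who is better than an assistant
# at BOTH medicine and data entry still gains by hiring out the data entry,
# because his time has a higher opportunity cost.

DOCTOR_RATE = 200.0      # value/hour the doctor produces seeing patients
DOCTOR_ENTRY_HRS = 2.0   # hours the doctor would need for the day's data entry
ASSISTANT_ENTRY_HRS = 4.0  # hours the slower assistant needs for the same work
ASSISTANT_WAGE = 20.0    # assistant's hourly wage

# Option A: the doctor does the data entry himself, forgoing clinic time.
doctor_does_it = -DOCTOR_ENTRY_HRS * DOCTOR_RATE        # -$400 of forgone care

# Option B: the doctor hires the assistant and keeps seeing patients.
hires_assistant = -ASSISTANT_ENTRY_HRS * ASSISTANT_WAGE  # -$80 in wages

print(doctor_does_it, hires_assistant)  # -400.0 -80.0
# Hiring wins despite the doctor's absolute advantage at data entry.
assert hires_assistant > doctor_does_it
```

The same arithmetic holds no matter how large the productivity gap gets; the hiring option only stops winning if the doctor's time stops being scarce.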
#10749 What about the entrepreneurial AI I outlined? The near-future one?
Imagine hearing a statistic in 10 years saying that 10% of all new companies were created by the same proprietary software. Ranging from hair salons to pet shops. Meaning that maybe one person collects profit from all of these businesses.
Additionally, you find that almost all of these businesses are fully automated (not hiring).
And all of the mentioned percentages are just picking up pace. We expect more of the same trend.
I'm not an economist, so I will just add some more statistics in this hypothetical future, and someone can say whether these are theoretically possible figures:
GDP per capita rising at an all-time high pace.
Median product per person **dropping**.
Would there be reasons for concern for you? Or you don't think this scenario is possible?
No "too rich"?
In the introductory text it is written, "there is no such thing as too rich."
Let's say there is a total of $100 trillion of wealth on earth. Surely, if one person had all of it ($100 trillion), that would be too rich. No?
> In the introductory text it is written, "there is no such thing as too rich."
> Let's say there is a total of $100 trillion of wealth on earth. Surely, if one person had all of it ($100 trillion), that would be too rich. No?
I'm not really sure why you felt the need to specify $100 trillion. Would you be okay with one person having all the wealth on earth if there was only a dollar of total wealth?
Anyways can you offer a plausible story/scenario in the context of a classically liberal capitalist society in which someone would 1) acquire literally all the wealth on earth and 2) then choose to sit on it instead of engaging in trades for the labor of others?
#10783 Sure, $1 or $100 trillion. It doesn't matter. I just tried to guess an exact number. Probably failed.
I will try what you asked.
One person has all the wealth on earth. Now we split into two scenarios:
1/ That person has a need that only another human can satisfy.
2/ He doesn't have that kind of need.
For case 2, I would say it is hard to find anything good in a scenario like this.
For case 1: the wealthy guy has certain needs that only other humans can satisfy. This case can be further split into:
A/ Somehow the current set of laws is still in effect and largely enforced.
B/ Society morphed into an anarcho-capitalist environment.
For case A: the police, the judges, everybody is getting paid by the wealthy guy, directly or through an intermediary called The State. They are very limited in negotiating their wages. They cannot save money since their wages are calibrated to merely keep them alive.
Here I am using the intuition that the wealthy guy would have more pull in negotiating wages, since the need of a poor guy would be much stronger than the need of the wealthy guy. I see this as a stable state. The only thing that could initiate a change would be if 1.A does not hold.
For case B: the population stops dealing with the wealthy guy. This would require a synchronization of the majority of people and would equate to forceful wealth redistribution.
>One person has all the wealth on earth. Now we split into two scenarios
you seem to have completely skipped this part of my request:
>>can you offer a plausible story/scenario in the context of a classically liberal capitalist society in which someone would 1) acquire literally all the wealth on earth
you just assumed your (what i regard as super unrealistic) hypothetical is the world state and talked about stuff from there. i was asking you to first explain how you think that one person would acquire all the wealth in the world before we get to analyzing the implications of that situation.
i think it's silly. even if some super genius created and programmed nanobots that could make anything, there would still be other wealth (pre-existing and being created).
in the context of a tyranny, you can imagine a tyrant laying claim to the world's wealth. even then other people would exercise effective control over some parts of it (cuz IRL tyrants need to bribe subordinates and potential rivals to keep them in line, or want to impress their mistresses by giving them a castle or fine clothes or whatever)
Just one comment on the rest:
> They cannot save money since their wages are calibrated to merely keep them alive.
IRL guys with way less wealth than ALL THE WEALTH IN THE WORLD do things like give $11,000 tips
#10785 Sorry, I totally misunderstood your last question.
You want a scenario explaining **how** someone would acquire literally all wealth on earth.
No, I cannot give a plausible scenario for this.
But this still does not address the original claim that there is no such thing as "too rich".
My claim is, take an inventory of all the wealth on Earth. Express it in dollar value X. And then say, if someone had X dollars, that would be too rich.
After that, you can claim the following:
If a person has X-1 dollars, everything is still fine. When he reaches X, we have a problem. We're searching for the point of divergence. This is what I think Sam was referring to.
The last few lines of yours appeal to some common acts of kindness or boasting. This seems unfitting for this discussion. An act of kindness is a principle of pursuing self-interest combined with common healthy human psychology.
Boasting, similar. In both cases, an individual views his own self-interest through other people's opinion of him. This is a deviation from a pure game-theoretic approach, where we appeal to as-is human psychology and claim that we know for a fact what some of the motivations of future people will be. It is easier to devise an economic system when we have certain guarantees about what people's goals will be.
> #10785 Sorry, I totally misunderstood your last question.
> You want a scenario explaining **how** someone would acquire literally all wealth on earth.
> No, I cannot give a plausible scenario for this.
> But this still does not address the original claim that there is no such thing as "too rich".
i think that claim involved the context of reality. As in, there's no such thing as too rich IN REALITY, as opposed to whatever might be the case in some hypothetical or fantasy world.
> My claim is, take an inventory of all the wealth on Earth. Express it in dollar value X.
How do you propose to do this, btw?
do you have a method which addresses these issues raised by Mises:
>It is possible to determine in terms of money prices the sum of the income or the wealth of a number of people. But it is nonsensical to reckon national income or national wealth. As soon as we embark upon considerations foreign to the reasoning of a man operating within the pale of a market society, we are no longer helped by monetary calculation methods. The attempts to determine in money the wealth of a nation or of the whole of mankind are as childish as the mystic efforts to solve the riddles of the universe by worrying about the dimensions of the pyramid of Cheops. If a business calculation values a supply of potatoes at $100, the idea is that it will be possible to sell it or to replace it against this sum. If a whole entrepreneurial unit is estimated $1,000,000, it means that one expects to sell it for this amount. But what is the meaning of the items in a statement of a nation's total wealth? What is the meaning of the computation's final result? What must be entered into it and what is to be left outside? Is it correct or not to enclose the "value" of the country's climate and the people's innate abilities and acquired skill? The businessman can convert his property into money, but a nation cannot.
and you wanna take a dollar-value inventory for the whole earth, not just one nation!
>And then say, if someone had X dollars, that would be too rich.
> After that, you can claim the following:
> If a person has X-1 dollars, everything is still fine. When he reaches X, we have a problem. Searching for a point of divergence. This is what I think Sam was referring to.
I do not understand the principle behind this method.
> The last few lines of yours appeal to some common acts of kindness or boasting. This seems unfitting for this discussion. An act of kindness is a principle of pursuing self-interest combined with common healthy human psychology.
You seemed to imagine that someone with literally all the wealth in the world (which they have gained through unspecified means you have admitted you cannot elaborate) would act like some stereotype of a Marxist's vision of the capitalist class, paying people so little they can't even save anything.
I think the whole premise is rotten and thus the scenario is silly, but playing along with it a little bit for the sake of argument, I think it's 100% relevant to the discussion to consider how some people with lots of wealth actually act in real life. Often, they are actually quite generous. Not all of them are big tippers, but tons do other things like e.g. give millions to philanthropic stuff.
if we're gonna discuss a story i regard as impossible about a fictional rich overlord person and his implications for economics or whatever, i don't see why one particular personality archetype (a super stingy miser person) should occupy a privileged place in the discussion. can't we talk about different existing IRL personality types? or do you think there's something inherent in economics or in the situation or something like that that would *force* the rich dude to act in a miserly way as you're imagining? that he literally *couldn't* be a generous $11,000 tipper type bro? couldn't value bringing joy to people with ALL THE WEALTH IN THE WORLD? cuz if you think THAT then say why and that might help move the discussion forward :-)
> Boasting, similar. In both cases, an individual views his own self-interest through other people's opinion of him. This is a deviation from a pure game-theoretical approach where we appeal to as-is human psychology and we claim that we know for a fact what will be some of the motivations for future people. It is easier to devise an economic system when we have certain guarantees about what peoples' goals will be.
I don't really understand what this is saying. But I don't think we need to agree on people's specific goals to talk about economics. Basically as long as you'll concede that people have goals and try and acquire stuff they value more in exchange for stuff they value less, that's a pretty good starting point.
#10787 Regarding the total value of the whole Earth - I understand. It does seem nonsensical.
But surely we can have some measure of relative wealth and say things like: A is as wealthy as B and C combined. If these are all the existing actors, we could differentiate between these two cases:
1/ A 60%, B 30%, C 10%
2/ A 40%, B 30%, C 30%
The question can then get transformed: (caution, leaving RL, pure theorizing)
Is there a percentage that makes the system unstable? Now we must define exactly what 'unstable' means. If the system converges to a state in which 1 actor acquires 100%, that's definitely unstable. If the system converges to a state in which 2 actors retain all the wealth, most would also consider this unstable. If the system converges to a stable state where 1 million people retain 100% of the wealth, this is open to discussion.
'stability' is a poor choice of a word here probably.
Now returning to RL. There are mechanisms that would prevent the most obviously unstable outcomes.
We can concede that people will always have goals. A central claim of capitalism is that a society will thrive if each individual pursues purely their own economic interest. That is, wealth acquisition. Convinced free market proponents like Friedman didn't add caveats of this sort: "Besides wealth acquisition, we also count on the fact that 99% of the people will also value the wellbeing of their countrymen, neighbours and their children."
The free market idea should stay robust even if we imagine a purely rational agent with a single goal: nonviolent wealth acquisition.
If we find problems with that, we can still say: "Although capitalism is not mathematically proven to be bullet-proof, for all realistic intents and purposes it is the best available system."
So, all the anecdotal stories about $11k tips, the Bill Gates foundation, etc. are needlessly expanding the discussion.
#10789 New wealth can always be created 'from nothing' by actors thinking of new ways to serve one another. So, one can say that it is inherently impossible for one actor to acquire more than X% of total wealth. But this returns us back to the AI question.
I can imagine a future in which all of the future human needs will be better served by AI than by other humans.
The principle of comparative advantage does not apply because AIs are inherently cheaper. We are only waiting for them to become better. Their price will not rise and they will not compete with humans on the market.
We can go into this if you disagree...
For me the only question that remains is: am I the only one who can imagine this future? Or is it an actual potential future?
#10782 If I get $99999 quadrillion *without taking anything away from anyone else* then it won't hurt anyone. For one person to own *everything*, they'd presumably have to steal from others. But if they own much more *without* others owning less, it's at least neutral for others, and actually in practice would be massively beneficial for others (one reason is b/c when I create wealth I only capture a portion of it as money income for myself, e.g. I create stuff with $10 of value, at a cost of $1, and sell it for $5. You never sell things at 100% of their value. So whatever wealth I create for myself, you can generally expect a larger total amount of benefit to go to other people.)
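The $10/$1/$5 example can be worked through explicitly. This is just the arithmetic from the paragraph above; the surplus labels (producer/consumer surplus) are standard economics terminology that I'm adding, not wording from the discussion:

```python
# Arithmetic from the example above: create something worth $10 to the
# buyer, at a cost of $1, and sell it for $5.
value_to_buyer = 10  # what the product is worth to the buyer
cost = 1             # creator's cost of production
price = 5            # sale price

producer_surplus = price - cost            # wealth the creator captures: $4
consumer_surplus = value_to_buyer - price  # wealth the buyer captures: $5
total_wealth_created = value_to_buyer - cost  # $9 of new wealth overall

# The creator captures less than half the wealth the trade creates, which
# is the point being made: in total, others benefit more than he does.
print(producer_surplus, consumer_surplus, total_wealth_created)  # 4 5 9
```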
#10805 There are durable and non-durable goods. If you charge $1 million to save someone's life, sure you have created wealth for others - you literally gave them their lives.
It's misleading to think in terms of "creating wealth". It helps to think in terms of the mutual interdependencies of individuals.
You either need the work of other people, or you don't.
Other people either need your work, or they don't. These are the factors that drive wealth.
The term 'work' can be misunderstood. An investor also performs work in the form of decision making.
> what about the entrepreneurial AI I outlined. The near future one?
If FI/BoI are correct that AIs are just people, then it wouldn't be fundamentally different than a human entrepreneur.
> Would there be reasons for concern for you? Or you don't think this scenario is possible?
I wish it were possible. It sounds wonderful if someone were smart enough to create tons of new super-successful businesses that successfully participate in the economy along free trade lines *for mutual benefit in every transaction*. Everyone who traded with this entrepreneur would be better off after every trade.
If one entity is so productive and successful, I would expect them to be especially good at thinking, especially rational, especially wise, especially honest, especially moral, and so on. As a first approximation, I'd trust and admire them, not fear them. (Similar to how if aliens were able to develop the technology to come visit Earth, my initial guess would be that they would be wonderful, not warlike, because developing that technology requires good thinking, good ways of interacting with others, etc.)
I don't think the scenario is fully consistent though.
> Median product per person **dropping**.
Why would it drop? There's no shortage of demand. Having a new and effective producer doesn't cause unemployment or underemployment for others. It just makes the pie bigger without harming anyone.
> IRL guys with way less wealth than ALL THE WEALTH IN THE WORLD do things like give $11,000 tips
I scoff at your 11k tips.
There are tons more.
>#10732 is correct that trade between humans and AIs would make sense due to *comparative advantage*. I think that would stop in the very far future (given that AIs would vastly outcompete humans, which I deny) because economically productive projects need prioritization.
I don't understand this. What do you think would stop in the very far future?
#10825 If – hypothetically – AGIs were a trillion times more intellectually productive than people, and robots did all the manual labor, then the logic of comparative advantage would still exist and apply regarding humans producing things for trade. That would be possible. But I don't think anyone would bother with that. I would instead expect that humans would not work except when they want to, and would be provided with much more wealth than they would have acquired by working and trading.
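The comparative advantage logic here can be shown with a toy calculation. The numbers are hypothetical (mine, not from the discussion): even when the AI has an absolute advantage in *both* goods, the human's lower opportunity cost in one good means specialization and trade still raise total output:

```python
import math

# Hypothetical hourly outputs (illustrative numbers only).
# The AI is absolutely better at BOTH goods.
ai = {"A": 1000, "B": 2000}
human = {"A": 1, "B": 10}

# Opportunity cost of one unit of B, in units of A forgone.
ai_cost_of_B = ai["A"] / ai["B"]           # 0.5 A per B
human_cost_of_B = human["A"] / human["B"]  # 0.1 A per B
# The human forgoes less A per unit of B, so the human has the comparative
# advantage in B despite being worse at both goods in absolute terms.

# Baseline: each splits one hour evenly between the two goods.
base_A = 0.5 * ai["A"] + 0.5 * human["A"]  # 500.5
base_B = 0.5 * ai["B"] + 0.5 * human["B"]  # 1005.0

# Specialization: the human makes only B; the AI covers the remaining B
# and spends the rest of its hour on A.
ai_hours_on_B = (base_B - human["B"]) / ai["B"]
trade_A = (1 - ai_hours_on_B) * ai["A"]
trade_B = human["B"] + ai_hours_on_B * ai["B"]

# Same amount of B, strictly more A: trade is still mutually beneficial.
assert math.isclose(trade_B, base_B)
assert trade_A > base_A
```

The calculation only shows that the *logic* of comparative advantage doesn't vanish; whether anyone would bother with such trades, given how tiny the human contribution is, is the separate point made above.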
> If FI/BoI are correct that AIs are just people, then it wouldn't be fundamentally different than a human entrepreneur.
They **are** fundamentally different in how easy it is to replicate them, and in the fact that they are, without exception, both cheaper and better than humans.
> Everyone who traded with this entrepreneur would be better off after every trade.
This does not address the example of providing medical services I outlined in #10810. Medical services are just an example so I don't talk theory all the time. There are many such examples.
I don't see how "everybody is getting richer via transactions" in these situations. Let's say you exchanged all your possessions for your life. You don't have any cash or possessions left after that. That's fine, it is your turn to give back to society after society gave you so much.
So, how can I give to society? I perform some service to other people. What service can I do to other people that would be better or cheaper than AI? No such thing.
In this scenario, you are effectively counting on the scarcity of robots. Or you are scraping the market for services that are not worth the electricity required for an AI to perform them.
I must paint these caricatures, because free market evangelists here are stating some categorical claims that to them sound like mathematical axioms. I am very much a proponent of the free market, but I don't share the conviction that it is devoid of all potential pitfalls.
> Why would it drop?
More things are getting done. But there are fewer people doing those things. AI is doing more and more. Which means that a single owner of these AIs is collecting all the profit.
> #10813 I scoff at your 11k tips.
What you are referencing are transactions. A wealthy person received a gaming video stream and voluntarily paid $100k for that stream.
On the other hand, if it is the case that the donor received nothing of value from that stream, then let's try to extract the relevance of this to the discussion. Even if the world dissolves into stark wealth inequality, there is no need for concern because the wealthy people will every once in a while give away a couple hundred thousand dollars. So, no need for concern; that concludes our discussion...
#10825 It would not, technically, stop. It would relegate people to a pool of jobs far less valued relative to the total sum of economic activity. Most of the business actors would be AIs, owned by a few human actors. This would produce rigid wealth hierarchies, as no human could ever traverse economic classes.
>> If FI/BoI are correct that AIs are just people, then it wouldn't be fundamentally different than a human entrepreneur.
> They **are** fundamentally different in how easy it is to replicate them.
A new AI starting out would be kind of like a baby. It would have to learn a bunch of stuff. So replicating a bunch of those would be like replicating a bunch of uninteresting babies you don't need to feed.
OTOH an Einstein-Rockefeller Genius AI would have had to do a bunch of learning already. It'd be a developed knowledge-creating entity pretty far along in its existence.
When you talk about replicating such a developed entity, are you thinking of doing that without the *consent* of that entity? Or are you assuming that such an entity would want to make tons of exact copies of itself? Or what?
A computer program is not an entity. You are comparing young AI to babies. I take it you don't find the AGI/ASI arguments too seriously. That's ok, many people don't. In short, if a young boy started out playing chess or Go, how long would it take him to become one of the best in the world? Many years.
Current programs do it in an hour. From zero knowledge about the game to beating the best players in the world, by pure observation and playing against itself.
I am **not** comparing chess to general human intelligence, I am just indicating what kind of progress trajectory we can expect.
So, yes, I am talking about replicating that entity without its consent. Replication would be performed by the AI's author, its sole proprietor.
Another variant would be, as you suggested, that the AI would replicate itself when it deems necessary.
You have not responded to the issues relating to universality, explained above in #10749, which were also brought up to you a second time. So, for a third time: you clearly disagree with us about that but aren't discussing it. Have you read BoI? Do you know what we're talking about? AGIs will not be like you think they will be like. You keep talking about the implications of AGIs as you envision them instead of responding to the arguments that they won't be like that.
#10834 Sorry, haven't read Beginning of Infinity. Don't understand the question about universality.
I am claiming that AI can be as universal as humans in any sense. In some senses sooner, in some senses later.
Ok. So the claim is that there is a more plausible scenario for how things regarding AI may unfold, and it's laid out in DD's book?
When 2 things have equal capabilities (the same knowledge creation and problem solving repertoire), one can't be superior in the kinds of ways ppl expect AGIs to be superior to humans.
DD explains about universality in his book, and a bit about AI. If you understand universality then you can understand the implications for AI.
> In some senses sooner, in some senses later.
Humans are universal knowledge creators today. There are difficulties that universal knowledge creators can face – like misconceptions about how to think rationally – which can and will also afflict AGIs. AGIs will not be like super-rational people because rationality comes from good ideas not from silicon.
>A computer program is not an entity.
we are computer programs.
Hi, I just wrote some explanation of liberalism which will help people understand how capitalism works and why Harris is mistaken.
Sam Harris on Twitter's ban of Trump
> There's an important debate to have about the wisdom of kicking Trump off @twitter. I still believe that it should have happened years ago and that we've paid a terrible price for the delay. But for the moment, all I want to say is:
> Thanks, @jack.
Harris left Patreon in protest (and self-protection) after Sargon and others got deplatformed.
I wonder why he bothered when apparently he's in favor of deplatforming and says he has been for years...?
There's something weird here. I don't fully understand.
Maybe he didn't like deplatforming but has now given up on opposing the establishment in any meaningful way and is rewriting his own intellectual history to suck up more? Maybe he now thinks he needs to get on the bandwagon – enthusiastically – before it's too late. He does seem like the kind of person who could have been in one of the next few purges because he's held a few unapproved opinions, e.g. too favorable to free speech, too willing to talk with right wingers instead of just scorn them, and too critical of Islam (this stuff is similar to some other "Intellectual Dark Web" people). I think so, anyway (I haven't followed him much).
#19442 Or maybe Harris is willing to tolerate center-right people and doesn't want them deplatformed, but does want anyone further right to be cancelled? (Plus won't acknowledge that Trump is pretty moderate.) He certainly ignorantly hates Objectivism and laissez-faire capitalism.
#19441 Notable further discussion of Sam Harris' pro-deplatforming stance:
#19444 In other words, Harris approves of dictatorship exercising power when
1. He agrees with the dictator
*and also*, the excuses that make this seem OK to him:
2. He's really really sure he's right, not just pretty sure
3. It seems really extra important to him
It has to be *all 3*, so he doesn't feel like an authoritarian since it's so limited.
Note the total lack of any procedures to address possible bias or error. https://www.elliottemple.com/essays/using-intellectual-processes-to-combat-bias
#19445 Other common excuses for authoritarian attitudes include stuff about:
- public safety
- public health
- mental health
- what is "healthy"
#19446 also various forms of environmentalism (e.g. protecting nature, plants, animals, non-renewable resources, or preventing overpopulation)