Division of Labor and Experts: Generally Great but Sometimes Overrated

Our society has had great success due to the division of labor. People economically specialize. We have farmers, lawyers, barbers, bakers, security guards, inventors, novelists, architects, engineers, programmers, managers, marketers, etc.

Division of labor is far more efficient than everyone living independently and doing a little of each job. It lets people focus a large portion of their time, attention and learning on one area. As a result, they get better at it.

Trade is what makes division of labor work. I don't farm or bake but I can trade for corn and bread. Trade is how I benefit from other people doing something.

This system plays a big role in our lives. We're all familiar with it even if we don't really think about it or study economics. Besides providing material prosperity, it has led to certain psychological attitudes.

I don't know how most things work and I get them from specialists. I don't know how to repair a car so I rely on a mechanic. I don't know how to write a good poem so I get poems from poets. I don't know how to paint so I get my paintings from people who do. I do know how to create software, but I still get most of my software from other people who specialize in that particular type of software.

People have developed an attitude of not knowing how most things work and not needing to. Someone else will do it better than I would, anyway. For almost everything, there is a specialist who's better at it than me. Anything I do outside my career is just a hobby, done for fun.

This attitude is partly reasonable but partly dangerous. People can overestimate experts.

Good specialists don't actually exist for every speciality. Some areas have too few people working in them, e.g. life extension, AGI or epistemology. In some areas, tons of experts are wrong, e.g. Keynesian economists and Kantian philosophers.

People overestimate medicine's ability to fix their problems. Many surgeries and medications are cruder and less effective than people think. It's good that they exist. They're good options to have. But they aren't just safe, perfectly effective and wonderful. They're risky and doctors downplay the risks and side effects. Doctors can't fix everything and lots of the fixes have a meaningful chance of breaking something.

Worse are experts for mental, not physical, problems. Sad? Go talk to a "professional" to get help for your "depression". Marriage problems? There's an expert for that. Kid doesn't listen in school? There are experts for that. But these people don't know much about ideas. They are neither philosophers nor scientists. They can give some basic self-help advice and they can use social pressure to manipulate people. The whole field isn't merely largely ineffective, it's dangerous, with its brain-disabling drugs, its imprisonment without trial ("involuntary commitment"), and its misdirection of people away from solving their own problems through self-improvement, studying better ideas, and other productive activities.

Experts encourage people to be irresponsible. Don't worry about it, the expert is responsible for getting a good outcome. But people are often disappointed by the outcome the expert provides. It's your life. You have to live with the outcome. You need to judge which experts are effective enough and when you need to take matters into your own hands.

Many types of experts are fine. People who produce material goods for sale are broadly OK. People who provide relatively simple or easily evaluated services are broadly OK. The longer-term the issue, and the more ongoing interaction with the specialist it requires, the more careful you should be.

The most dangerous experts that people consult directly are "mental health" experts. That whole industry is poison.

The capabilities of physical doctors are overestimated but they're basically on your side, try to help, and mostly make things better. Mental doctors make a lot of things worse. Many people regret interacting with them, and others are brainwashed/indoctrinated/pressured to the point they have trouble thinking critically about it and forming an independent opinion of their psychology or psychiatry experiences.

The most dangerous experts that people deal with indirectly are philosophers. Most people don't read philosophy books but they pick up ideas, here and there, about learning, knowledge, critical thinking, reason, morality, political principles, the metaphysical nature of reality, etc. Many of those ideas are badly wrong. They lead to people being irrational, unreasonable, bad at learning, biased, etc., which makes things worse throughout their lives. Economists also spread a ton of really bad ideas to people indirectly.


Elliot Temple | Permalink | Messages (9)

Childing a Child

Many people think "gendering" a child might be bad.

That means: Teaching the boy role to a boy might be bad. Teaching the girl role to a girl might be bad.

No one considers that teaching the child role to a child might be bad.

I think the child role is more impactful (for good or bad) than a gender role. There's a much larger difference between young children (e.g. a 5 year old) and adults than between men and women. I'd estimate that under 20% of the adult/child difference is due to the child's learned social role, but the social role is still a big factor.

The child social role has a lasting impact after childhood. People don't just forget being children. And people transition in stages where they e.g. put effort into differentiating themselves from a child. When reacting to a former role is a significant part of one's life, then that role is still impactful.

Adults put substantial ongoing effort into avoiding being childlike. It prevents them from doing much learning. They don't want to be beginners or learners because that's for children. It also prevents adults from having some types of fun. Curiosity is another childish trait that adults heavily suppress.

Most people are mostly respectful of both genders. They don't have a significant grudge against either gender. By contrast, people are frequently hateful of the child role. They dislike and mistreat children routinely. Teaching your offspring a social role that you don't respect, then disrespecting them for years, is a nasty system. People think children are stupid and use the fact that they do child social role behaviors – while adults don't – as proof.


Elliot Temple | Permalink | Messages (2)

Research and Discussions About Animal Rights and Welfare

I made Discussion Tree: State of Animal Rights Debate. My tree diagram summarizes pro-animal-rights arguments from Peter Singer and asks some questions about major issues he didn’t cover. It reveals that his arguments were incomplete. The incompleteness I’ve focused on is that they don’t address issues related to computers and software. Maybe animals are like self-driving cars with some extra features, not like humans. Self-driving cars aren’t intelligent, conscious or capable of suffering. Singer doesn’t try to address that issue.

I did additional research to find arguments to add to my discussion tree. I found no answers to basic computer science questions from the animal welfare advocates.

I posted to five pro-animal-rights forums asking for links to written material (like books, articles, or blog posts) making arguments that Singer didn’t make, so I could read about why they’re right. I received no relevant responses and almost zero interest.

Later, I and others posted to eleven more places. Although this resulted in a bunch of discussion, I was not referred to a single piece of relevant literature. No one had a single piece of evidence to differentiate animals from fancy self-driving cars, nor any substantive argument. Many people insulted me. None of them had a scientific, materialist worldview incorporating computer science principles, and none could give any argument against my position compatible with that type of worldview. Nor did they give arguments that that kind of worldview is false. No one said anything that could plausibly have changed my mind. And people didn’t quote from my discussion tree and respond, nor suggest text for a new node. I linked and documented lots of the discussion on this page.

I was referred to dozens of pieces of literature, but none were relevant. In general, searching for terms like “software”, “hardware”, “algorithm” and “compu” immediately showed the source was irrelevant.

I also went to a vegan Discord for a YouTube debater to ask if they could help me improve my discussion tree diagram. I streamed what happened. Summary: They laughed at my view, then asked me to debate in voice chat (instead of giving literature), then banned me for not responding in 30 seconds while they knew I was busy fixing an audio issue.

This illustrates several things. First, my discussion tree shows how you can begin researching a topic in an organized way. You can pick a topic and create something similar. If you want to learn, it’s a great approach.

Second, there’s a serious lack of interest in discussion or debate in the world, and most people are quite ignorant and don’t even know of sources which argue why their beliefs are correct. They have some sources for why they’re right and rival views X and Y are wrong, but no answer to view Z, and will just keep giving you their answers to X and Y. Are you better or do you know of anyone who is better? Speak up.

Third, animal rights advocates broadly don’t know anything about computers and software and haven’t tried to update their thinking to take that stuff into account. Sad!

I encourage people to try creating a discussion tree on a topic that interests them, then ask for help finding sources and adding arguments to it. See what people, with what conclusions, have anything they’re willing to contribute, or not. You’ll learn a lot about the topic and about the rationality of the advocates of each viewpoint. It’ll help you judge issues yourself instead of deferring to the conclusions of experts (rather than their arguments). Even if you were happy to defer to expert opinions, it’s hard because experts disagree with each other; a discussion tree can help you organize those expert arguments.

You can also use discussion trees to organize and keep track of debates/discussions you have – as the conversation goes along, keep notes in a tree diagram.

I made a video covering these events and more. It’s from when I’d gotten almost no answers, rather than a bunch of bad answers. And I streamed a bunch of my discussions when I got bad answers.

While discussing, I wrote several additional blog posts, including a second discussion tree.


This content was borrowed from my free email newsletter. Sign up here.


Elliot Temple | Permalink | Messages (16)


Human and Animal Differences

In the comments below, reply saying which is the first sentence you disagree with, and why you disagree.

Minds are software. Suffering is a state of mind. Physical information signals, whether from the eyes or from pain nerves, have to be processed by the software before they can cause suffering, be liked or disliked, etc. Before that they're just raw data and no meaning has been determined yet by the conscious mind.

Brains (both human and animal) are universal classical computers. The hardware of humans and some animals is similar. Hardware similarity doesn't tell you about software similarity. Computation is hardware independent. Similar or even identical hardware can run totally different computations. Studying hardware and comparing hardware similarities is a red herring.

All animal behavior follows algorithms specified by their genes. Human genes specify a different type of algorithm – general intelligence – which involves the ability to create/design new knowledge, just as biological evolution created/designed the knowledge of optics in our eyes, the knowledge for how to build a computer out of neurons, the knowledge for what situations a rabbit should run away in, etc. General intelligence is the ability to evolve new knowledge. It’s the ability to replicate ideas with variation and selection, just as biological evolution proceeds by replicating genes with variation and selection. With animals, all the knowledge comes from biological evolution. Humans can do evolution of ideas inside their brains to create new knowledge, animals can’t.
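The replicate-vary-select loop described above can be sketched as a toy program (an illustrative sketch only; the names `evolve`, `fitness` and `mutate` are invented for the example, and nothing here models intelligence, just the bare evolutionary algorithm of replication with variation and selection):

```python
import random

def evolve(population, fitness, mutate, generations=100, keep=10):
    """Toy evolutionary loop: replicate candidates with variation,
    then select the best. Purely illustrative."""
    for _ in range(generations):
        # Replication with variation: each candidate spawns mutated copies.
        offspring = [mutate(c) for c in population for _ in range(3)]
        # Selection: keep only the highest-scoring candidates.
        population = sorted(population + offspring, key=fitness, reverse=True)[:keep]
    return population[0]

# Example: evolve a number toward a target value of 42.
best = evolve(
    population=[0.0],
    fitness=lambda x: -abs(x - 42),
    mutate=lambda x: x + random.uniform(-1, 1),
)
```

The same replicate/vary/select structure applies whether the replicators are genes or ideas; what differs is what's being varied and what does the selecting.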


Elliot Temple | Permalink | Messages (0)

The Cambridge Declaration on Consciousness

The Cambridge Declaration on Consciousness (2012):

The field of Consciousness research is rapidly evolving. Abundant new techniques and strategies for human and non-human animal research have been developed. Consequently, more data is becoming readily available, and this calls for a periodic reevaluation of previously held preconceptions in this field.

ok

Studies of non-human animals have shown that homologous brain circuits correlated with conscious experience and perception can be selectively facilitated and disrupted to assess whether they are in fact necessary for those experiences. Moreover, in humans, new non-invasive techniques are readily available to survey the correlates of consciousness.

No. Wrong just in this summary, unsourced, and focusing on correlation instead of causation.

You can’t tell what is “necessary” by turning some things on and off. You turn off X and then Y doesn’t happen. Does that mean X is necessary to Y? No, some Z you didn’t consider could cause or allow Y. So they’re making a basic logic error.

And how can you do a correlation study involving “conscious experience” in non-human animals? How do you know if or when they have any conscious experience at all?

The neural substrates of emotions do not appear to be confined to cortical structures.

These people don’t seem to understand the hardware independence of computation. Or they think emotions are non-computational or something. But they don’t explain what they think and address the computer science issues.

In fact, subcortical neural networks aroused during affective states in humans are also critically important for generating emotional behaviors in animals.

Wait lol, after they brought up emotions the next sentence (this one) switches from emotions to “emotional behaviors”. Emotional behaviors are behaviors which look emotional according to some cultural intuitions of some researchers. This ain’t science.

The rest is more of the same crap that doesn’t address the issues or give sources, so I’m stopping now.


Elliot Temple | Permalink | Messages (3)

Animal Rights Issues Regarding Software and AGI

Claim: Animal rights may be refuted by advanced Critical Rationalist (CR) epistemology, including the jump to universality, but most people (pro or anti animal rights) haven’t read and understood The Beginning of Infinity and have a different view of epistemology. Given that ignorance of CR, their belief in animal rights is reasonable. And their failure to understand my questions and challenges of their beliefs is also reasonable. (This claim is based on a comment by TheRat on Discord.)

I disagree with that claim. The purpose of this post is to restate my main question/challenge for animal rights and then to argue that it should be understandable, and be seen as an issue worth answering, by someone who has never heard of CR. The issue is related to software not CR. I will further claim that a non-programmer should be able to understand the question/problem/issue and see that it matters (even though he’ll have a hard time reaching a conclusion about the answer without being able to understand code).

Note: I do have other arguments against animal rights which rely on CR.

The Programmer’s Challenge to Animal Rights

Claim: Animals are complex robots. Humans are different because they have general intelligence – the thing that AGI (Artificial General Intelligence) researchers are trying to program but haven’t yet been able to. All known and documented animal behavior is compatible with animals lacking general intelligence (example).

Animals are built with different materials (more carbon, less metal). This difference is irrelevant. Similarly, the “artificial” in Artificial General Intelligence doesn’t matter either.

Animals are fundamentally similar to a self-driving car, to board game playing software in a robot with (or without) an arm that can move the pieces around the board, and to “AI” controlled video game characters. Those, like all human-written software that exists today, are all examples of non-AGI (non-general intelligence) algorithms. And the lack of a physical body in some cases isn’t important (a robot body could be built and added without changing the intelligence of the software).

Brains of both animals and humans are universal classical computers (Turing complete), just like Macs and iPhones, which run software. The relevant differences are software algorithm differences. People who deny this are ignorant and/or unscientific.

Further Explanation

All software we know how to write today is inadequate to achieve general intelligence. So to claim animals have moral rights like humans, people should argue that animals do things which fundamentally differ from current software. So far I have been unable to find any serious attempt to do this.

Alternatively, someone could come up with a distinguishing feature of software algorithms other than having or lacking general intelligence, show that some animals have that feature, and explain why that feature has moral relevance. I’ve also been unable to find any serious attempt to do this.

Whether general intelligence has moral relevance is non-obvious. Regardless, a reasonable person should agree it might have major moral relevance and therefore this is an issue worth investigating for those curious about animal rights. If there is no animal rights literature trying to do this sort of analysis, and addressing these issues, that’s a significant gap in their arguments.

People denying that general intelligence has moral relevance should specify what else humans have, which robots lack, which they think has moral relevance. A common answer to that is the capacity to suffer. I have been unable to find any animal rights literature that tries to differentiate humans or AGIs from self-driving cars and non-AGI software in terms of ability to suffer. What is it about a human’s software, what trait matters other than general intelligence, that grants the capacity to suffer? If they answered that, then we could investigate whether animal software has that trait or not.

I think capacity to suffer is related to general intelligence because suffering involves making value judgments like not wanting a particular outcome or thinking something is bad. Suffering involves having preferences/wants which you then don’t get. I don’t think it’s possible without the ability to consider alternatives and make value judgments about which you prefer, which requires creative thought and the ability to create new knowledge, think of new things. This is a very brief argument which I’m not going to elaborate on here. My main goal is to challenge animal rights advocates. What is their position on this matter and where are their arguments?

What I’ve mostly found is that people don’t want to think about computer algorithms. They don’t know how to program and they aren’t scientists. They don’t know (or deny without educated arguments) that brains are literally universal classical computers (Turing complete), that information and computation are part of physical reality and physics, that human minds are literally equivalent to some sort of software, and other things like that. That’s OK. Not everyone is an expert.

That’s why I’ve been asking (see the comments in addition to the post) to be referred to literature from someone who does know how to program, understands some of these basic issues, and then makes a case for animal rights. Where are the people with relevant expertise about computers and AGI who favor animal rights and write arguments? I can’t find any. That’s bad for the case for animal rights!

Note: My relevant views on AGI are mainstream for the field. I disagree with the mainstream views in the AGI field on some advanced details, but the basic stuff I’m discussing here is widely agreed on. That doesn’t prove it’s true or anything, but a mainstream view merits some analysis and argument rather than being ignored. (Even obscure views often merit a reply, but I won’t get into that.) If animal rights advocates have failed to consider mainstream AGI ideas, that’s bad.

Consciousness

Besides suffering and general intelligence, the other main trait brought up in animal rights discussions is consciousness. If animals are conscious, that gives them moral value. These three traits are related, e.g. consciousness seems to be a prerequisite of suffering, and consciousness may be a prerequisite or consequence of general intelligence.

What computations, what information processing, what inputs or outputs to what algorithms, what physical states of computer systems like brains indicates or is consciousness? I have the same question for suffering too.

Similar questions can be asked about general intelligence. My answer to that is we don’t entirely know. We haven’t yet written an AGI. So what should we think in the meantime? We can look at whether all animal behavior is consistent with non-AGI, non-conscious, non-suffering robots with the same sorts of features and design as present day software and robots that we have created and do understand. Is there any evidence to differentiate an animal from non-AGI software? I’m not aware of any, although I’ve had many people point me to examples of animal behavior that are blatantly compatible with non-AGI programming algorithms. Humans are different because lots of their behavior is not explainable in terms of current software algorithms. Humans create new knowledge, e.g. about spaceships and vaccines, that isn’t programmed in their genes. And humans do that regarding many different topics, seemingly all, hence the idea of “general” intelligence. I have yet to see evidence that any animal does that on even one topic, let alone generally.

Many of the arguments about consciousness involve the rejection of what I regard as science. E.g. they advocate dualism – they claim that there is something other than the material world. They claim that consciousness is a fundamental, non-physical part of reality. They deny that physics can explain and account for everything that exists.

I regard dualism as bad philosophy but I won’t go into that. I’ll just say that if the case for animal rights relies on the rejection of modern physics and the scientific-materialist view of the world, they’ve got a serious problem which they should address. Where can I read literature telling me why I should change my view of science and accept claims like theirs, which addresses the kind of doubts an atheist who believes in objective physical reality would have? I haven’t gotten any answers to that so far. Instead I’m told assertions which I regard as factually false, e.g. that information is not physical. People who say things like that seem to be unfamiliar with standard views in physics (example paper).

The Argument for Conservatism

Animal rights advocates claim that, if in doubt, we should err on the side of caution. If the science and philosophy of mind isn’t fully figured out, then we should assume animals have moral value just in case they do. Even if there’s only a 1% chance that animals have rights, it’s a bad idea to slaughter them by the millions. I agree.

Pro-life (anti-abortion) advocates make the same argument regarding human fetuses. The science and philosophy aren’t fully settled, so when in doubt we should avoid the chance of murdering millions of human beings, even if it’s a low chance. I agree with that too. I think most animal rights advocates disagree with that or refuse to take it into account so that they can favor abortion. I think this indicates some political bias and double standards. I imagine there are some pro-life animal rights activists, but I think most aren’t, which I think is screwy.

Despite agreeing with these arguments, I’m pro-abortion and pro-slaughtering-farm-animals. The reason I favor abortion is I don’t have any significant doubt about whether a 3 month old fetus, which does not yet have a brain with electrical activity, is intelligent. I haven’t carefully researched the scientific details about abortion (I would if I was actually deciding the law), but from what I’ve seen, banning third trimester abortions is a reasonable and conservative option.

The reason I favor slaughtering cows is that I have no significant doubt about whether a cow has general intelligence. I’ve seen zero indicators that it does, and I’ve debated many people about this, asked many animal rights advocates for things to read which argue their case, asked for examples of animals doing things which are different than what a non-AGI robot could do, and so on. The total lack of relevant counter-argument from the other side is just the same as with abortion and is about equally conclusive. When all the arguments go one way, one can reasonably reach a conclusion and act on it instead of endlessly doubting. (When argument X has logical priority over Y, then Y is excluded from “all the arguments”. And when argument P is conclusively refuted by argument Q, then P is excluded from “all the arguments”.)

My Expertise

Because I’m asking for arguments from someone familiar with software and AGI rather than from just anyone, I think it’s fair that I share my own background.

I’m a philosopher and programmer. My speciality is epistemology (the philosophy of knowledge, including how to think, learn and reason, and how to evaluate ideas and arguments). I study and contribute to the Critical Rationalist epistemology of Karl Popper and David Deutsch, which I believe is important to making progress on AGI. David Deutsch, a physicist, philosopher and programmer, was my mentor and taught me a lot about philosophy and physics. He’s an award-winning pioneer of quantum computing, a Fellow of the Royal Society, and an author.

I’m a professional programmer with over a decade of work experience, but the software I work on isn’t related to AGI. I’ve read books about AI, watched talks, learned and coded some of the algorithms, talked with people in the field, etc.

Conclusion

Non-programmer animal rights advocates ought to be able to see that someone, some expert, should address the issue of whether humans and animals are differentiated by general intelligence. They should argue that animals have general intelligence (or argue that humans don’t have it) or explain some other sort of software/algorithm/code difference between animals and present day, non-AGI robots and software. If no one can do that and address the computational issues, the remaining option in favor of animal rights is to reject science.

I’m seeking thoughtful, competent written arguments addressing these issues. Blog posts are OK, not just academic material. I challenge anyone who favors animal rights to refer me to such literature in the comments below.


Elliot Temple | Permalink | Messages (10)

Discussion about Animal Rights and Popper

This discussion is from the Fallible Ideas Discord. Join link.

Context: Discussion Tree: State of Animal Rights Debate and in the comments you'll see that I went to some animal rights forums and asked for responses. And, after they had no literature to refer me to, I got banned from the Ask Yourself vegan debate Discord for not responding fast enough while troubleshooting an audio issue.

TheRat: curi, re the vegan thing. How could science demonstrate that animals can suffer (interpret pain as bad etc...) or how could we falsify that animals are not robots? Would this not require us to understand consciousness first? Would this not be in the realm of philosophy vs science? btw I think you're right but I don't know what would change my mind.
curi: knowledge creating animals. humans routinely do things we can't explain as non-AGI algorithms. let's see an animal do one. it's clearer if you get several different sorts of things, e.g. poetry, engineering, art, chess.
curi: you have to be careful about what counts cuz e.g. beavers do something that could be called engineering. but only a specific type that is encoded in their genes, they don't do it more generally.
curi: i'm not aware of any animal researcher with a halfway sophisticated understanding of what non-AGI software can do who has carefully observed and documented animals to try to show they do anything intelligent.
curi: i am aware of ppl observing carefully and noticing animals being much more algorithmic (or simpler algos) than ppl would naively, unscientifically expect: http://curi.us/272-algorithmic-animal-behavior
curi: i think the reason ppl don't care about this is they assume intelligence is a matter of degree and/or suffering is possible without intelligence
curi: so they consider it an uncontroversial non-issue that e.g. a dolphin is somewhere between 0.1% and 70% as intelligent as a human
curi: rather than understanding there at least might be a jump to universality for intelligence and so you can't just safely assume stuff has medium intelligence any more than a computer can have a medium computational repertoire
curi: the jump to universality is what polarizes the issue into a binary intelligent or non-intelligent. but ppl don't know about it. so they aren't even trying to show a single thing that any animal has ever done which is incompatible with non-intelligence.
curi: so they haven't.
curi: alternatively they could argue for dualism, animal souls, non-intelligent suffering and differentiate that from information processing and computation, or several other things.
curi: i haven't seen anything that understands software stuff which tries to differentiate suffering from information processing in general without intelligence.
TheRat: Is it possible for animals to suffer without having that universality?
curi: there are no arguments to establish some way that would be possible afaik. i think suffering is related to preference, opinions, values, judgments. i think you have to want, prefer or value X, and be able to form judgments about better and worse, in order to suffer. something along those lines.
curi: if you never consider alternatives, like a Roomba algorithm doesn't, then how can you be bothered by the outcome?
TheRat: I've read about Dolphins in captivity that seem to "go insane" and commit suicide. What do you think is going on there?
curi: chess algorithms consider alternative moves in some sense but it's mechanistic, it isn't a value judgment, they just do math about each outcome on the board and play the move that leads to the highest evaluation (or sometimes use a random algorithm among the top few moves to avoid predictability).
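The mechanistic move selection curi describes could be sketched like this (an illustrative toy, not any real engine's code; `evaluate` stands in for a board-evaluation function):

```python
import random

def pick_move(legal_moves, evaluate, top_n=3):
    """Mechanistic selection: score every legal move, then play the
    best-scoring one, randomizing among the top few to avoid
    predictability. No value judgment involved, just arithmetic."""
    scored = sorted(legal_moves, key=evaluate, reverse=True)
    return random.choice(scored[:top_n])

# Toy example: "moves" are numbers and the evaluation is the number itself.
move = pick_move([1, 5, 9, 2, 7], evaluate=lambda m: m)
```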
curi: re dolphins: sounds like algorithm bugs. animals have plenty of those. it's probably an evolutionary useful thing in some scenarios, like a failsafe where it tries to stop repeating the same actions that aren't working.
curi:

Only after thirty or forty repetitions will the wasp finally drag the caterpillar into its nest without further inspection.

curi: even digger wasps have failsafes where they change behavior after 30-40 repetitions.
curi: (whether an action works being defined in some algorithmic way, not as a value judgment or opinion, and in particular not as something where the creature can create new knowledge and new opinions that aren't in its genes)
curi: my position on animals is awkward to use in debates because it's over 80% background knowledge rather than topical stuff.
curi: that's part of why i wanted to question their position and ask for literature that i could respond to and criticize, rather than focusing on trying to lay out my position which would require e.g. explaining KP and DD which is hard and indirect.
curi: if they'll admit they have no literature which addresses even basic non-CR issues about computer stuff, i'd at that point be more interested in trying to explain CR to them.
TheRat: Yes. I've had that issue when trying to debate people. I'll say something and it flies right past them because they don't have the CR background. Most of the time they don't even realize there's a disagreement there.
curi: it's worse for me in general b/c it's CR and Objectivism and Austrian econ/classical liberalism as major background knowledge ppl don't have. and sometimes other stuff but especially those 3.
curi: i should perhaps add my own additions to CR, especially debating methodology stuff, as an additional thing.
curi: they are within the CR tradition so could go either way on separating. i don't like to separate DD from CR.
curi: programming is another big background knowledge which is relevant in this case but doesn't come up tooooo often.
TheRat: Yes, I have no programming knowledge at all, so I struggle with the computation stuff from CR and DD.
curi: i don't think it's realistic to have serious opinions about animal rights without knowing how to code, knowing how various video game "AI" algorithms work, stuff like that. also some physics knowledge is important like about what information is and some conception of how computation aka information processing is part of reality.
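For context on what video game "AI" typically amounts to, here's a hypothetical sketch (the states and events are invented for illustration): a small finite state machine mapping (state, observation) pairs to next states by fixed rules.

```python
# Typical game "AI" sketch: a finite state machine driven by a lookup
# table. Nothing here wants, prefers, or judges; fixed rules map the
# current state and an observation to the next state.

TRANSITIONS = {
    ("patrol", "sees_player"): "chase",
    ("chase", "lost_player"): "patrol",
    ("chase", "low_health"): "flee",
    ("flee", "healed"): "patrol",
}

def step(state, observation):
    """Advance the FSM; unrecognized inputs leave the state unchanged."""
    return TRANSITIONS.get((state, observation), state)

state = "patrol"
state = step(state, "sees_player")  # -> "chase"
state = step(state, "low_health")   # -> "flee"
```

Knowing that this kind of mechanism is what produces "lifelike" behavior in games is part of the background needed to compare animals to robots.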
curi: i don't even know good sources for that physics stuff. i kinda got bits here and there over time from DD. his information flow in the multiverse paper is both technical and largely off topic or unnecessary cuz of the multiverse focus.
TheRat:

i don't think it's realistic to have serious opinions about animal rights without knowing how to code

That sucks. Every time I have attempted to learn how to code, I give up after a day. I get bored.
curi: you can't really compare animals to robots if you don't know how robots work. harsh but i don't know a good workaround.
curi: i don't even know where to find one animal rights writer who knows how to code and tries to analyze that stuff.
curi: i don't think most animal rights advocates know of one either...
curi: i imagine i would have gotten replies by now somewhere if ppl actually had answers.
curi: ppl like answering reasonable-seeming opponents who ask for a particular thing and they totally have that covered.
curi: it's like if you go to a Popperian forum and ask if anyone knows any Popper chapters that refute induction, ppl will be happy to answer.
curi: or if you ask for anyone other than Popper with good anti-induction args, someone will want to recommend DD.
curi: but if no one knows any answers then you may be ignored.
curi: like if you go to a Popper forum and ask for his arguments against capitalism and why he rejected Mises, you may not get an answer b/c no1 has an easy or good answer to give. the answer, afaik, is Popper was wrong and actually irrational about that.
curi: if you don't bring up Mises they may point you to some non-technical kinda vague comments here and there that he made, but if you do bring up Mises' treatises Popper certainly made no attempt to answer those and nevertheless formed opinions in contradiction to them, so that's awkward, so it'll be hard to get ppl to engage with that issue.
curi: someone might try claiming that maybe Popper didn't know about Mises or didn't have time to read every possibly-dumb idea and it wasn't his speciality. but that kind of thing is dangerous and in this case will actually get you rekt by documented facts about Popper's awareness of Mises and exposure to ideas of that nature.
curi: so safer not to respond.
TheRat: Popper was friends with Hayek right? Did he disagree with Hayek too? I am very unfamiliar with Popper's political views. What I've read in OSE is actually more epistemology than poli sci or econ.
curi: yes he disagreed with Hayek significantly re capitalism/econ stuff. But Hayek was also somewhat of a statist and socialist sympathizer, whereas Mises wasn't.
curi: Hayek was the leader of the Mont Pelerin Society meetings which Mises and Popper both went to.
curi: there's a comment in a book by a popper student about Popper disliking and dismissing libertarian-type arguments like Mises, but it doesn't give arguments, nor did Popper. but he wasn't just unaware.
curi: his irrationality on these issues was enough to contradict himself, IMO quite blatantly. advocated freedom ... and TV censorship. advocated freedom and peace ... and the government forcibly taking 51% of all public companies.
curi: he says milder stuff in that direction in OSE. haven't read for ages but he talks about social technology by which he means something along the lines of governments improving at figuring out how to be effective at their policy goals. which sure aren't freedom.
curi: he's of course right that governments do tons of counterproductive and inefficient actions, and that's a big problem, and there's tons of room for improvement there. but he was also making some statist assumptions.


Elliot Temple | Permalink | Messages (0)