One Criticism Is Decisive

I'm sharing two answers I gave in 2019 explaining why we should reject an idea if we know one criticism of it. In short, a criticism is an explanation of why an idea fails at its purpose. It never makes sense to act on or accept an idea you think won't work.


https://curi.us/2124-critical-rationalism-epistemology-explanations#13292

I will also add that we don't reject a theory just from 1 failed observation. We must also have a better theory in place. One that explains what the previous theory successfully explained, and accounts for the mismatch in observation.

If it's a universal theory (X), and you (tentatively) accept one failed observation, and accept the arguments about why it's a counter-example, then you must reject the theory, immediately. It is false. You may temporarily accept a replacement, e.g. "X is false but I will keep using it as an approximation in low-risk, low-consequences daily situations for now until I figure out a better replacement. A replacement could be a new theory in the usual sense, but could also e.g. be a new combination of X + additional info which more clearly specifies the boundaries of when X is a good approximation and when it's not."

For a non-universal theory Y which applies to a domain D, then the same reasoning applies for one failed relevant observation – a counter-example within D.
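To illustrate the "X + boundaries" kind of replacement with a worked example (a minimal sketch, assuming Newtonian mechanics as the false-but-useful X): relativistic kinetic energy agrees with the Newtonian formula extremely well at everyday speeds and diverges badly near the speed of light, which is exactly the boundary information the replacement meta theory adds.

```python
# Toy comparison: Newtonian mechanics (X) is strictly false, but the meta
# theory "X is a good approximation when v << c" survives criticism.
import math

C = 299_792_458.0  # speed of light, m/s

def ke_newton(m, v):
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

# car speed, fast spacecraft, half light speed, 90% of light speed
for v in (30.0, 3e5, 0.5 * C, 0.9 * C):
    n, r = ke_newton(1.0, v), ke_relativistic(1.0, v)
    print(f"v = {v:9.3g} m/s: Newtonian KE relative error = {abs(n - r) / r:.2e}")
```

At car speed the discrepancy is around 10^-14 (below what this float math even resolves), at 0.5c it's about 19%, and at 0.9c about 69% – a sharp, explicit boundary for when X remains a safe approximation.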


https://curi.us/2124-critical-rationalism-epistemology-explanations#13300

As I understood it before, we don't reject it until we have a better explanation. Like for the theory of relativity, we have "failed observations" at the quantum level, right? But we don't reject it because we don't have another better theory yet. What am I missing?

If you know something is false, you should never accept it because it's false.

The theory of relativity is accepted as true by (approximately) no one. Call it R. What people accept is e.g. "R is a good approximation of the truth (in context C)." This meta theory is not known to be false. I call it a meta theory because it contains R within it, plus additional commentary governing the use of R.

This meta theory, which has no known refutation, is better than R, which we consider false.

KP and DD did not make this clear. I have.

If you believe a theory is false, you must find a variant which you don't know to be false. You should never act on known errors. Errors are purely and always bad and known errors are always avoidable and best to avoid. Coming up with a great variant can be hard, but a quick one like "Use theory T for purposes X and Y but not otherwise until we know more." is generally easy to create and defend against criticism (unless the theory actually shouldn't be used at all, in any manner).

This is fundamentally the same issue as fixing small errors in a theory.

If someone points out a criticism C of theory T and you think it's small/minor/unimportant (but not wrong), then the proper thing to do is create a variant of T which is not refuted by C. If the variant barely changes anything and solves the problem, then you were correct that C was minor (and you can see that in retrospect). Sometimes it turns out to be harder to create a viable variant of T than you expected (it's hard to accurately predict how important every criticism is before you've come up with a solution; that can be done only approximately, not reliably).

It's easy to make a variant if you allow arbitrary exceptions. "Use T except in the following cases..." That is in fact better than "Always use T" for a T with known exceptions. It's better to state and accept the exceptions than accept the original theory with no exceptions. (It's a different matter if you are doubtful of the exceptions and want to double check the experiments or something. That's fine. I'm just talking from the premises that you accept the criticism/exception.) You can make exceptions for all kinds of issues, not just experiments. If someone criticizes a writing method for being bad for a purpose, let's say when you want to write something serious, then you can create the variant theory consisting of the writing method plus the exception that it shouldn't be used for serious writing. You can take whatever the criticism is about and add an exception that the theory is for people in situations where they don't care about that issue.
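Here's a minimal sketch of that structure (all the names are hypothetical – a toy representation, not a serious model of theories): a variant bundles the original theory with an explicit exception list, and checking whether it applies means checking the exceptions first.

```python
# Toy model of "Use T except in the following cases...": the variant carries
# the accepted criticisms as explicit exceptions, so you never act on known
# counter-examples.
class Variant:
    def __init__(self, name, rule, exceptions=()):
        self.name = name
        self.rule = rule                    # what the theory says to do
        self.exceptions = list(exceptions)  # predicates naming known failure cases

    def applies(self, situation):
        return not any(exc(situation) for exc in self.exceptions)

# hypothetical example: a writing method criticized as bad for serious writing
method = Variant(
    "outline-first writing",
    rule="write a bullet outline, then expand each bullet",
    exceptions=[lambda s: s.get("serious")],  # the accepted criticism
)

print(method.applies({"serious": False}))  # True: use the method
print(method.applies({"serious": True}))   # False: the exception covers this case
```

The point of the sketch is that "T plus stated exceptions" is a different, better theory than "always use T", not a grudging weakening of it.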

Relativity is in the situation or context that we know it's not universally true but it works great for many purposes so we think there's substantial knowledge in it. No one currently has a refutation of that view of relativity, that meta theory which contains relativity plus that commentary.


Elliot Temple | Permalink | Messages (0)

Human Problems and Abstract Problems

I originally wrote this in 2012. Single quotes are DD. One nesting level (single black line indenting the quote) is DD's friend Demosthenes who was involved with TCS a lot.


David Deutsch wrote in 2001 on TCS list regarding "Are common preferences always possible?":

Demosthenes wrote on 10/2/01 5:16 am:

On Tue, 16 Jan 2001 11:09:21 +0100, Sarah Lawrence wrote:

On Thu, 6 Feb 1997 at 10:32:03 -0700, Susan Ramirez asked:

Why do you believe that it is always possible to create a common preference?

This question is important because it is the same as

  • Are there some problems which in principle cannot be solved?

Or, when applied to human affairs:

  • Is coercion (or even force, or the threat of force) an objectively inevitable feature of certain situations, or is it always the result of a failure to find the solution which, in principle, exists?

I think that both Sarah and Demosthenes (below) somewhat oversimplify when they identify 'avoiding coercion' with 'problem-solving'. For instance, Sarah says "This question ... Is the same as[:] Are there some problems

Let's watch out for different uses of the word "problem".

which in principle cannot be solved?" Well, in a sense it is the same issue. But due to the imprecision of everyday language, this also gives the impression that avoiding coercion depends on everyone adopting the same theory (the solution, the common preference) about whatever was at issue. In fact, that is seldom literally the case, because the parties' conceptions of what is 'at issue' typically change quite radically during common-preference finding. All that is necessary is that the participants change to states of mind which (1) they prefer to their previous states, and (2) no longer cause them to hurt each other.

In other words, common preferences can often be much narrower than it may first appear. You needn't agree about everything, or even everything relevant, but only enough to proceed without hurting (TCS-coercing) each other (or oneself in the case of self-conflicts).

I agree that this question is important, though I would offer instead the following two elucidating questions:

In the sphere of human affairs:

  1. Are there any problems that would remain unavoidably insoluble even if they could be worked on without any time and resource limits?

  2. Are there any problems that are unavoidably insoluble within the time and resource limits of the real life situations in which they arise?

The word "problem" in both of these is ambiguous.

Problem-1: (we might call it "human problem"): "a matter or situation regarded as unwelcome or harmful and needing to be dealt with and overcome"

Problem-2: (we might call it an "abstract problem"): "a thing that is difficult to achieve or accomplish"

There are problems, notionally, like going to the moon. But no one gets hurt unless a person has the problem of going to the moon. Problem-1 involves preferences, and the possibility of harm and TCS-coercion. And it is the type of problem which is solved by common preferences.

Problem-2, inherently, does not have time or resource limits, because the universe is not in a hurry, only people are.

So, are there any problems which are insoluble with the time and resource limits of real life situations? Not problem-2 type, because those do not arise in people's life situations, and they do not have time or resource limits.

And as for problem-1 type problems, those are always soluble (within time/resource constraints), possibly involving changing preferences. (BTW, as a general rule of thumb, in non-trivial common preference finding, all parties always change their initial preferences.)

An example:

problem-2: adding 2+2 (there is no time limit, no resource limit -- btw time is a type of resource)

problem-1: adding 2+2 within the next hour for this math test (now there are resource issues, preferences are involved)

Another way to make the distinction is:

problem-1: any problem which could TCS-coerce (hurt) someone

problem-2: any problem which could not possibly ever TCS-coerce (hurt) anyone

Problem-2s are not bad. Not even potentially. Problem-1s are bad if and only if they TCS-coerce anyone. A problem like 2+2=? cannot TCS-coerce anyone, ever. There's just no way. It takes a different problem like, "A person asked me what 2+2 is, and I wanted to answer" to have the potential for TCS-coercion.

Notice solving this different problem does not necessarily require figuring out what 2+2 is. Solving problem-1s never requires solving any associated problem-2s, though that is often a good approach. But it's not necessary. So the fact that various problem-2s won't be solved this year need not hurt anyone or cause any problem-1s -- with their time limits and potential for harm -- to go unsolved.

I believe that the answer to question (1) is, no -- there are no human problems that are intrinsically insoluble, given unbounded resources.

This repeated proviso "given unbounded resources" indicates a misconception, I think. The answer to (2) is, uncontroversially, yes. Of course there exist disagreements -- both between people and within a person -- that take time to resolve, and many will not be resolved in any of our lifetimes.

I think this is unclear about the two types of problems. While it agrees with me in substance, it defers to ambiguous terminology that basically uses unsolved problem-2s to say there are insoluble problems and tries to imply it's now talking about problem-1s.

There is a mix-up between failure to solve an abstract problem, like figuring out the right theory of physics (which two friends might disagree about), and failure to solve human problems, like the type that makes those friends hurt each other.

It's harmless to have some disagreements that you "agree to disagree" about, for example. But if you can't agree to disagree, then the problem is more dangerous and urgent.

It's uncontroversial that people have unsolved abstract problems for long periods of time, e.g. they might be working on a hard math problem and not find the answer for a decade. And their friend might disagree with them about the best area to look for a solution.

But so what?

Human problems are things like, "I want to solve the problem this week" (maybe you should change your preference?) or "I want to work on the math problem and find good states of mind in regard to it, and enjoy making progress" (this human problem can easily be solved while not solving the harmless abstract problem).

But that has nothing to do with the question being discussed here.

Right because of the confusion over different meanings of "problem".

The fact that after 25 years of almost daily attention to the conflict between quantum theory and general relativity I have failed to discover a theory that I prefer to both (or indeed to either), does not indicate that I have "failed to find a common preference"

Right. Common preferences do not even apply to problem-2s, only problem-1s.

either within myself, or with other proponents of those theories, in the sense that interested Susan Ramirez. I have not found a preferred theory of physics, but I have found successively better states of mind in regard to that problem, each the result of successive failures to solve it.

However this view is only available to those of us who believe that for all moral problems there exists, in principle, a unique, objectively right solution. If you are any kind of moral relativist, or a moral pluralist (as many people seem to be) then you can have no grounds for arguing that all human disputes are in principle soluble.

It is only in spheres where the objective truth of the matter exists and is in principle discoverable, that the possibility of converging on the truth guarantees that all problems are, in principle, soluble.

I agree that for all moral problems

No clear statement of which meaning of problem this refers to.

there exists an objectively right solution, and that this is why consensual relationships -- and indeed all liberal institutions of human cooperation, including science -- can work. The mistake is to suppose that if one does not believe this, it will cease to be true. For people to be able to reach agreement, it suffices that, for whatever reason, they seek agreement in a way that conforms to the canons of rationality and are, as a matter of fact, converging on a truth. Admittedly it is a great impediment if they think that agreement is not possible, and very helpful if they think that it is, but that is certainly not essential: many a cease-fire has evolved into a peace without a further shot being fired. It is also helpful if they see themselves as cooperating in discovering an objective truth, and not merely an agreement amongst themselves, but that too is far from essential: plenty of moral relativists have done enormous good, and made enormous moral progress -- for instance towards creating institutions and traditions of tolerance -- without ever seeking an objective truth, or realising that they were finding one. In fact many did not realise that they were creating agreement at all, merely a tolerance of disagreement. And incidentally, they were increasing the number of unsolved problems in society by promoting dissent and diversity.

Increasing the number of unsolved problem-2s, but decreasing the number of unsolved problem-1s.

What we need to avoid, both in society and in our own minds, is not unsolved problems,

Ambiguous between problem-1s and problem-2s.

not even insoluble problems,

Ambiguous between problem-1s and problem-2s.

Also doesn't seem to be counting preference changing as a solution, contrary to the standard TCS attitude which regards preference changing as a normal part of common preference finding, and part of problem solving.

but a state in which our problems are not being solved

But this time it means problem-1s.

-- where thinking is occurring but none of our theories are changing.

I believe that the answer to question (2) is yes -- human problems that cannot be solved even in principle, given the prevailing time and resource constraint, are legion. Albeit, nowhere near as legion as non-TCS believers would have it. My main argument in support of this thesis is based on introspection: Let him or her who is without ongoing inner conflict proffer the first refutation.

This is a bit like saying, at the time of the Renaissance, that science is impossible because "let him who is without superstition proffer the first refutation". The whole point about reason is that it does not require everything to be right before it can work. That is just another version of the "who should rule?" error in politics. The important thing is not to start out right, but to try to set things up in such a way that what is wrong can be altered. The object of the exercise is not to create a chimerical (and highly undesirable!) problem-free state,

A problem-2-free state is bad. As in, not having any problems we might like to work on. This is bad because it creates a very hard problem-1: the problem of boredom (having no problem-2s to work on, while wanting some, will cause TCS-coercion).

A problem-1-free state is ... well there is another ambiguity. Problem-1s are fine if one is rationally coping with them. It's not bad to have human problems and deal with them. What's bad is failure to cope with them, i.e. TCS-coercion.

How can we tell which/when problem-1s get bad? When they do harm (TCS-coercion).

To put it another way: problem-1s are bad when one acts on an idea while having a criticism of it. But if it's just the potential for such a thing in the future, that's part of normal life and fine.

but simply to embark upon actually solving problems rather than being stuck not solving any (or not solving one's own, anyway). Happiness is solving one's problems, not 'being without problems'.

"one's problems" refers only to problem-1s, but "being without problems" and "actually solving problems" are ambiguous.

In other words, I suggest that there isn't a person alive whose creativity is not diminished in some significant way by the existence of inner conflict. Or rather dozens, if not hundreds or thousands, of inner conflicts.

Yes. But having diminished creativity (compared to what is maximally possible, presumably) is and always will be the human condition. Minds are fallible. Fortunately, it is not one's distance from the ideal state that makes one unhappy, but an inability to move towards it.

And if you cannot find a common preference for all the problems that arise within your own mind, it is a logical absurdity to expect to be able always to find a common preference with another, equally conflicted, mind.

Just as well, really. If you found a common preference for all the problems within your own mind, you'd be dead. If you found a common preference for all the problems you have with another person with whom you interact closely, you'd be the same person.

[SNIP]

However, and it is an important however, to approach this goal we must dare to face the inescapable facts that, in practice, it is by no means always possible to find a common preference; that therefore it is not always possible to avoid coercion;

This does not follow, or at least, not in any useful sense. Demosthenes could just as well have made the identical comments about science:

[Demosthenes could have written:]

In the sphere of science:

  1. Are there any problems that would remain unavoidably insoluble even if they could be worked on without any time and resource limits?

  2. Are there any problems that are unavoidably insoluble within the time and resource limits of the real life situations in which they arise?

I believe that the answer to question (1) is, no -- there are no scientific problems that are intrinsically insoluble, given unbounded resources.

Right. And why should it follow from this that a certain minimum of superstition is unavoidable in any scientific enterprise, and that people who try to reject superstition on principle will undergo "intellectual and moral corrosion" if, as is inevitable, they fail to achieve this perfectly -- or even if they fail completely?

As Bronowski stressed and illustrated in so many ways, doing science depends on adopting a certain morality: a desire for truth, a tolerance, an openness to change, an awareness of one's own fallibility and the fallibility of authority, yet also a respect and understanding for tradition ... (It's the same morality as TCS depends on.) And yes, no scientist has ever been entirely free from irrationality, superstition, dogma and all the things that the canons of rationality say are supposed to be absent from a true scientist's mind. Yet none of that provides the slightest argument that a person entering upon a life of science is likely to become unhappy

Tangent: this is a misuse of probability. Whether that happens depends on human choices not chance.

in their work, is likely to find their enterprise ruined either because they encounter a scientific problem that they never solve, or because they fail to rid their own minds of certain superstitions that prevent them from solving anything.

The thing is, all these sweeping statements about insoluble problems

Ambiguous.

and unlimited resources, though true (some of them trivially, some because of fallibilism) are irrelevant to the issue here, of whether a lifestyle that rejects coercion is possible and practical in the here and now. A TCS family can and should reject coercion in exactly the same sense, and by the same means, and for the same reason, as a scientist can and should reject superstition. And to the same extent: utterly. In neither case can the objective ever be achieved perfectly, with finite resources. In neither case can any guarantee be given about what the outcome will be. Will they be happier than if they become astrologers instead? Who knows? And certainly good intentions alone can guarantee nothing. In neither case can the enterprise be without setbacks and failures, perhaps disasters. And in neither case is any of this important, because ... well, whatever goes wrong, however badly, superstition is going to make it worse.

-- David Deutsch
http://www.qubit.org/people/david/David.html

And Josh Jordan wrote:

I think it makes sense to proceed according to the best plan you have, even if you know of flaws in it.

What if those flaws are superstition? Or TCS-coercion?

Whatever happens, acting against one's best judgment -- e.g. by disregarding criticisms of flaws one knows -- is only going to make things worse.


Elliot Temple | Permalink | Message (1)

Reasoning from Problems not Assumptions

Ron Garret (RG) wrote (CR means Critical Rationalism):

All reasoning has to start from assumptions. Assumptions by definition can't be proven or disproven. So how can we evaluate our core assumptions? If we try to use reason, that reasoning must itself be based on some assumptions like, "Reason is the best way to evaluate assumptions." But since that is an assumption, how can we evaluate it without getting into an infinite regression?

And near the end of the post:

The point is: the apparent infinite regress of rationality bottoms out in its effectiveness

And in comments:

BTW, I very much doubt that CR actually claims that reasoning is possible with no assumptions. If Popper (or Deutsch) ever actually said this, it's news to me. It seems self-evident to me that all reasoning has to start with assumptions. Whatever else a reasoning process consists of, there has to be some point in the process at which you assert for the first time the truth of some proposition. That assertion cannot be based on the truth of any previously asserted proposition because, if it were, it would not be the first time you asserted the truth of some proposition. A proposition that is asserted to be true without any prior assertions to support it is by definition an assumption.

(Note that even this argument makes assumptions, e.g. that reasoning has a beginning, that it involves the assertion of propositions, that words like "assert" and "proposition" have coherent meanings, etc. etc. etc.)

The view described by RG is the standard, non-CR view. It is regarded by CR as incorrectly relying on foundations and justification, and as not having the right paradigm. Example quotes about foundations (partly to explain, partly because of RG’s doubts that the CR thinkers Karl Popper (KP) or David Deutsch (DD) disagree with him):

KP in The Logic of Scientific Discovery:

The empirical basis of objective science has thus nothing 'absolute' about it.[4] Science does not rest upon solid bedrock. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or 'given' base; and if we stop driving the piles deeper, it is not because we have reached firm ground. We simply stop when we are satisfied that the piles are firm enough to carry the structure, at least for the time being.

DD in The Beginning of Infinity:

The whole motivation for seeking a perfectly secure foundation for mathematics was mistaken. It was a form of justificationism. Mathematics is characterized by its use of proofs in the same way that science is characterized by its use of experimental testing; in neither case is that the object of the exercise. The object of mathematics is to understand – to explain – abstract entities. Proof is primarily a means of ruling out false explanations; and sometimes it also provides mathematical truths that need to be explained. But, like all fields in which progress is possible, mathematics seeks not random truths but good explanations.

DD in The Beginning of Infinity:

there can be no such thing as an ultimate explanation: just as ‘the gods did it’ is always a bad explanation, so any other purported foundation of all explanations must be bad too. It must be easily variable because it cannot answer the question: why that foundation and not another? Nothing can be explained only in terms of itself.

KP in Conjectures and Refutations:

The question about the sources of our knowledge can be replaced in a similar way [to replacing the “Who should rule?” question in politics]. It has always been asked in the spirit of: ‘What are the best sources of our knowledge—the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist—no more than ideal rulers—and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?

The question of the sources of our knowledge, like so many authoritarian questions, is a genetic one. It asks for the origin of our knowledge, in the belief that knowledge may legitimize itself by its pedigree. The nobility of the racially pure knowledge, the untainted knowledge, the knowledge which derives from the highest authority, if possible from God: these are the (often unconscious) metaphysical ideas behind the question. My modified question, ‘How can we hope to detect error?’ may be said to derive from the view that such pure, untainted and certain sources do not exist, and that questions of origin or of purity should not be confounded with questions of validity, or of truth. This view may be said to be as old as Xenophanes. Xenophanes knew that our knowledge is guesswork, opinion—doxa rather than epistēmē—as shown by his verses (DK, B, 18 and 34):

The gods did not reveal, from the beginning,
All things to us; but in the course of time,
Through seeking we may learn, and know things better.

But as for certain truth, no man has known it,
Nor will he know it; neither of the gods,
Nor yet of all the things of which I speak.
And even if by chance he were to utter
The perfect truth, he would himself not know it;
For all is but a woven web of guesses.

Yet the traditional question of the authoritative sources of knowledge is repeated even today—and very often by positivists and by other philosophers who believe themselves to be in revolt against authority.

The proper answer to my question ‘How can we hope to detect and eliminate error?’ is, I believe, ‘By criticizing the theories or guesses of others and—if we can train ourselves to do so—by criticizing our own theories or guesses.’ (The latter point is highly desirable, but not indispensable; for if we fail to criticize our own theories, there may be others to do it for us.) This answer sums up a position which I propose to call ‘critical rationalism’.

[...]

So my answer to the questions ‘How do you know? What is the source or the basis of your assertion? What observations have led you to it?’ would be: ‘I do not know: my assertion was merely a guess. Never mind the source, or the sources, from which it may spring—there are many possible sources, and I may not be aware of half of them; and origins or pedigrees have in any case little bearing upon truth. But if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can; and if you can design some experimental test which you think might refute my assertion, I shall gladly, and to the best of my powers, help you to refute it.’

The standard, non-CR view involves problems like a regress because it tries to do things like argue for ideas "based on the truth of any previously asserted proposition” (RG’s words above). RG acknowledges some of the problems with arbitrary foundations or, in the alternative, an infinite regress. He tries to solve them by suggesting an effectiveness criterion for judging ideas. This doesn’t solve the problem: it is an arbitrary foundation or leads to a regressing debate about the effectiveness of the effectiveness criterion, and the effectiveness of whatever arguments are used in that debate, and so on.

The CR view is that we start our reasoning with problems, not assumptions. We proceed to brainstorm guesses about solutions. We do not assert that our guesses are true; we expect our guesses to one day be discarded as obsolete falsehoods because progress is infinite. And then we criticize the guesses. This leads to fixing the errors in some guesses and rejecting other guesses, and generally to progress.

The CR paradigm is not about establishing things on the basis of assumptions or on any other basis or foundation, nor is it about choosing a criterion for what types of theories are best (e.g. effective ones or simple ones). The CR paradigm is about error correction. CR says we learn not by making foundational assumptions and building from them to other ideas, but by making unjustified guesses to try to solve our problems, which we then expose to error correction.
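As a cartoon of that control-flow difference (a minimal sketch – not a claim that thinking is mechanical or that guessing and criticizing can be automated): reasoning starts from a problem, guesses need no basis, and criticism does the filtering.

```python
# Cartoon of the CR structure: start from a problem, brainstorm unjustified
# guesses, let criticism eliminate some. Survivors are tentative, not founded.
def solve(problem, brainstorm, criticisms):
    guesses = brainstorm(problem)  # guesses need no pedigree or justification
    return [
        g for g in guesses
        if not any(criticize(problem, g) for criticize in criticisms)
    ]

# hypothetical toy problem: picking a route to work
brainstorm = lambda p: ["highway", "back roads", "teleport"]
criticisms = [
    lambda p, g: g == "teleport",                      # no such technology
    lambda p, g: p == "rush hour" and g == "highway",  # known to jam
]
print(solve("rush hour", brainstorm, criticisms))  # ['back roads']
```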

Both CR and the standard view try to deal with the problem of differentiating good and bad ideas. The standard view seeks to find a good starting point, and good methods of thinking, so that bad ideas can never be introduced (or at least are hard to introduce). The CR view accepts there is no way to avoid error or even to make it uncommon, and instead focuses its primary effort on error correction. We can’t make error uncommon because we’re all alike in our infinite ignorance (as KP said) and we’re always at the beginning of infinity (as DD said) with infinite stuff left to learn. (There are also other arguments about fallibilism.)

The CR view on assumptions and foundations is that we can start anywhere. We can start with high level ideas or low level ideas. We can start in the middle. Anything goes because we aren’t trying to solve the problem of avoiding error by limiting where we begin our reasoning. What’s important is that all ideas be held open to error correction. Nothing is beyond question or criticism. There are no limits beyond which we can’t delve further and learn more. No matter where we start, we can always work in any direction. We can flesh out prior or lower level ideas more. We can flesh out later or higher level ideas more. We can go sideways. And things don’t organize neatly into levels anyway, for all is a woven, tangled, chaotic, web of guesses, not a pyramid hierarchy.

What stops the regress of asking “Why?” and “How do you know?” infinitely? Nothing formal. CR isn’t about proving we’re right. A CRist will say, “I’ve explained why I think this, and how I know, in what I think is an adequate level of detail to solve the problem I’m trying to solve. Do you see an error I’ve made?” CR is about searching for and fixing errors, not establishing that our answers are correct. We expect our answers will be improved in the future. We follow our interests in our attempts to live our lives, solve problems, and learn. There are infinite places we may direct our attention and we make judgments about which to prioritize. These interests and judgments, like everything else, are themselves open to criticism.

There is no way to provide infinite detail about one’s reasoning. This is not actually a problem unique to foundations. It applies just as well to the consequences of one’s reasoning (the further implications). But we don’t need infinite detail if we aren’t after a guarantee of correctness. If instead we know we may well be wrong, but we’re doing our best to find and correct errors, then the finite detail is adequate for that purpose. And there are no bounds on where we can go into more detail. Any part that people think could use more questioning can be critically considered more. We never have to stop, we just stop when we think our attention is better used elsewhere (and we don’t know of an error with that).

A criticism is an explanation of why an idea does not solve the problem it’s claiming to solve. The reason we shouldn’t accept (or act on) criticized ideas, even tentatively, is because we have an explanation of why they won’t work. And all criticisms are themselves open to criticism. (What do you do if people keep throwing infinitely many dumb criticisms at an idea? In short, criticize infinite categories of ideas all at once. Criticize patterns of error. Don’t criticize all the criticisms individually. In general, good will and good faith are helpful and make things better. But if someone wants to throw infinitely many criticisms at an idea, they may try it. It’s easy to do that if you generate the criticisms according to a pattern, but then they can also be criticized as a group because they fit that pattern. To defend against this, we’ll only need one counter-argument for each pattern the critic thinks of to form an infinite set of criticisms from. So we don’t have a greater burden than he does. And actually it’s better than that if we can identify a meta-pattern – a pattern to his patterns – and criticize that. If we use powerful criticisms with high “reach” (DD’s term meaning broad/wide applicability), which deal with the right issues, it becomes harder and harder for a critic to think of anything new to say which isn’t already addressed by our criticisms. And we can write them down and reuse them with all future critics. That is one of the main projects intellectuals should be engaged in.)
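A toy picture of that asymmetry (hypothetical strings, a sketch only): a critic can generate unlimited criticisms from one template, and a single pattern-level counter-argument answers the whole class.

```python
# One counter-argument per *pattern* of criticism, not per criticism.
import re

def critic(n):
    # infinitely many criticisms, all instances of one template
    return f"your idea fails because you haven't tested it {n} times"

pattern = re.compile(r"haven't tested it \d+ times")
counter = "more repetitions of the same test add no new error correction"

for n in (10, 100, 10**6):
    criticism = critic(n)
    if pattern.search(criticism):
        print(f"{criticism!r} -> answered by: {counter!r}")
```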

Our guesses can be arbitrary non sequiturs. They need not be based on anything – the source or basis is not the important thing. However, it’s hard to make them survive criticism if they don’t use any existing knowledge. It’s hard to start over, without the benefit of any existing knowledge (which has had a bunch of error-correction effort already put into it) and make something good. So we often build on, e.g., the English language. However, just because I use the English language to help me formulate my idea does not mean my idea depends on the English language in some kind of chain of logical implication. The English language is not necessarily assumed or an important basis. My idea may well be approximately autonomous. Maybe we’ll one day find huge flaws in English, and find that Japanese is much better, and then notice that my idea can be easily translated to Japanese because it was never actually tightly coupled to English in the first place. It’s like how the C programming language isn’t based on any particular CPU architecture and code can be recompiled for other architectures (so while my code needs a CPU to run, it’s not based on whatever CPU I’m currently using).

The CR paradigm lacks the solidity sought by the standard view. It doesn’t justify its ideas. It doesn’t provide justified, true belief. It doesn’t offer ways to demonstrate that an idea is true so that we need never worry about it having an error again. It doesn’t offer ways to positively establish ideas. It differentiates good and bad ideas by criticism of the bad ones, not by anything to bestow a good, positive status on the good ideas (which CR views as merely ideas which are not currently known to be wrong). CR is all we can have due to logical problems that the standard view has been unable to deal with century after century. And CR is enough for science to work, among other things.

I suggest rereading the DD and KP quotes (that I gave above) at this point. I think they’ll make more sense after reading the rest (both what they mean and how they are relevant), and they’ll also help clarify my text. See e.g. how KP talks about the sources of our ideas not mattering.

This is all a lot to understand. As far as I’ve been able to determine, DD and probably Feynman are the only people who ever understood CR by reading Popper’s books, without the help of a bunch of discussion with people who already knew CR (like Popper, Popper's students, or DD). We’ve never found a single person who has understood CR well from DD’s books without discussing with DD or DD’s students. I had many large confusions after reading FoR, which took years of discussion, study and DD help to resolve. CR is deeply counterintuitive because it goes against ~2300 years of philosophical tradition, and those ideas have major influence throughout our culture. Supporting people’s CR learning processes, if they’re interested, is one of the important purposes of this forum. Questions are welcome and you shouldn’t expect to fully understand this already or soon.

Note that CR theory explains this (the previous paragraph). Errors are inevitable and common, including when understanding even one sentence[1]. Trying your best to correct your own errors is a good start, but critical discussion has big advantages. People have different strengths and weaknesses, knowledge and ignorance, biases and irrationalities, etc. People differ. External criticism is valuable because other people will catch errors you miss (including errors they made in the past and already fixed). Because error correction is such a big deal, critical discussion is approximately necessary for ambitious people (the alternative plan is to be one of the best thinkers ever who is so much better than ~everyone at ~everything that external criticism doesn’t add much). Critical discussion also lets people share explanations, problems, and other knowledge which isn’t criticism, which is also helpful.

[1] DD in The Beginning of Infinity:

SOCRATES: But wait! What about when knowledge does not come from guesswork – as when a god sends me a dream? What about when I simply hear ideas from other people? They may have guessed them, but I then obtain them merely by listening.
HERMES: You do not. In all those cases, you still have to guess in order to acquire the knowledge.
SOCRATES: I do?
HERMES: Of course. Have you yourself not often been misunderstood, even by people trying hard to understand you?
SOCRATES: Yes.
HERMES: Have you, in turn, not often misunderstood what someone means, even when he is trying to tell you as clearly as he can?
SOCRATES: Indeed I have. Not least during this conversation!
HERMES: Well, this is not an attribute of philosophical ideas only, but of all ideas. Remember when you all got lost on your way here from the ship? And why?
SOCRATES: It was because – as we realized with hindsight – we completely misunderstood the directions given to us by the captain.
HERMES: So, when you got the wrong idea of what he meant, despite having listened attentively to every word he said, where did that wrong idea come from? Not from him, presumably . . .
SOCRATES: I see. It must come from within ourselves. It must be a guess. Though, until this moment, it had never even remotely occurred to me that I had been guessing.
HERMES: So why would you expect that anything different happens when you do understand someone correctly?
SOCRATES: I see. When we hear something being said, we guess what it means, without realizing what we are doing. That is beginning to make sense to me.

When you read books, you guess. Many guesses are wrong. You fix many of them yourself. Critical discussion helps fix more errors. People routinely overestimate how well they understood moderately difficult books that they read, and it becomes a huge problem with very hard material like CR books. Understanding of books should be tested, and one of the best methods of doing that is to write down your understanding and then share it with people who already understand the book and see if they agree that you have their position right. (You can do this test of understanding whether you agree or disagree with the material.)

Summary: According to CR, making assumptions is not the way one solves problems. One solves problems by brainstorming solutions and doing error correction on the solutions. And while doing that, CR holds that it’s important to recognize the fallibility of all of our ideas. We should hold our ideas open to critical questioning and improvement, and expect that they can be improved, not take them to be true. (Here I'm contradicting "All reasoning has to start from assumptions." An "assumption" means a proposition taken to be true.) CR holds things like: Don’t assume your ideas are true; keep looking for errors.


I originally wrote this in 2019 and I've made minor edits.


Elliot Temple | Permalink | Messages (0)

Learning and Unlearning Habits

When people learn a new computer game, what happens? Especially a pretty good gamer and a pretty fast paced game. He forms some habits. He learns to press certain combos of buttons. He learns to react in X way to Y situation. He learns some pattern recognition – for various patterns, start shooting. For various other patterns, start blocking. Stuff like that.

So he’s creating, in a matter of minutes, new habits, new automatic reactions, new intuitions, new things that are now second nature or intuitive and he can do them without much thought. You have to get the basics of the game to be like that so you can think about more advanced strategy. Just as we automate walking around in real life, we also need to automate walking around in video games so we can focus on other parts of the games. (btw sometimes ppl automate video game controls so much that they forget what the controls are. like you ask them how they did that, and they are like “uhhhh i hit the button, idk i didn’t think about it”. sometimes they have to like look at their hand to see what buttons they are pressing, or stop and remember the buttons, or something. it’s so automatic they aren’t thinking about it. it’s a little like asking a person which muscles he uses when walking, except less hard.)

ok so this video game player is creating habits/automatizations/etc. and what always happens is: some are mistakes. so he has to unlearn some. he has to change some. some of his first guesses about how to play the game turn out wrong.

and that isn’t that big a deal. that’s just part of learning. you gotta do some unlearning too. video game players do that all the time. it’s so common.

sometimes you have to relearn things even if you didn’t make a mistake, btw. like you learn to beat a boss, then later there is a similar boss with some changes. so you take your old habits for the first boss and you make adjustments so they can work on the second boss. so in some situation, with the new boss, you have to stop yourself from doing Y after X, as you were in the habit of doing. you dismantle the habit that was automating that.

when people can’t dismantle or change automated habits it’s commonly an indication of irrationality, dishonesty, etc. it can also be an indication that the habit is used by a hundred other habits which rely on it, so it’s hard to mess with because of its complex involvement in lots of other stuff you don’t want to break. and ppl forget how habits work that they made long ago, especially in early childhood, which is what’s going on with some sexual orientation stuff (that’s in addition to the other things from earlier in this paragraph, it doesn’t have to be just one).


Mastery typically comes from practicing to the point that encountering new errors is rare, and you figured out solutions to all the errors you’ve seen before (except maybe a few rare ones that you decided to ignore). When nothing is gonna go wrong then you can go faster and it starts getting boring consciously (cuz there’s nothing left for your conscious mind to do, no changes are needed, no additional creativity is needed) and you stop paying conscious attention to it. (people often stop paying conscious attention way too early, btw, which prevents them actually getting good at stuff.)


The above were two sections of a Fallible Ideas email I wrote in 2019. I edited the term "workstation" to "habit" in a few places. I talked about mental workstations in this post, but "habit" is clearer for people who haven't read that. I was answering a question about firing workers at one's mental workstations (aka automatized ideas, aka habits) or dismantling/retiring the workstations.

I like the metaphor of the mind as a factory with many workstations (with machines, robots or low-skill workers) and the conscious mind as a manager, inspector or leader who can go around and look at workstations, review what people are doing, make changes, build new workstations, etc., and when the manager isn't present the workstations keep running without him (the unconscious mind). You can only look at one part of your mind at a time (or maybe fewer than ten parts at once), and the only way to get much done is with automation so stuff works without your manager/conscious-attention being there. Your mind is like a powerful factory that's mostly automated and whenever you need to do manual labor (conscious/manager attention) that's really inefficient and slow.

Conscious/manager attention is best used for fixing workstations or creating new workstations, not for doing work that could be done by a workstation. (It's OK for the manager to do work a few times when you're new to it, to figure out how to do it, but then he needs to delegate. Practice should involve figuring out how to delegate and set up automated workstations to do something and get those working right, not your conscious mind doing everything itself. Practice should primarily be a process of automating, not a process of your consciousness/manager practicing stuff himself. Once you figure out how to do something initially, then further practice should be kinda like doing job-training for subordinates (the subordinates being cheap, plentiful mental resources that require little to no conscious attention once they're set up). The conscious mind tells them what to do then watches them try doing the work and gives corrections.)


Elliot Temple | Permalink | Messages (0)

Henry Hazlitt on Practice

In Thinking as a Science (1916), Henry Hazlitt wrote (my emphasis):

The secret of practice is to learn thoroughly one thing at a time.

As already stated, we act according to habit. The only way to break an old habit or to form a new one is to give our whole attention to the process. The new action will soon require less and less attention, until finally we shall do it automatically, without thought—in short, we shall have formed another habit. This accomplished we can turn to still others.

I agree and have been advocating this for years. People learn to do something correctly, once, and then think they've learned it and they're done. But that's just the first step. For skills you'll use often, you should practice until you can do it cheaply, easily and reliably. E.g. it's important to be able to type using almost zero conscious attention so that I can focus my attention on the ideas I'm writing. It's best to think in an objective – not biased – way pretty much automatically in general so that you can focus on considering a specific topic (like economics); people who need to use a bunch of mental focus to avoid bias are at a big disadvantage because they have less attention left for the actual topic (and what often happens is, at some point, they focus their attention on the topic and then their habitual bias starts happening).


Elliot Temple | Permalink | Messages (0)

Learning to Mastery and Repetition

I originally wrote this to the Fallible Ideas email list in 2019.


every adult learned some stuff to the point of MASTERY – very low attention needed, can do it great while tired/low-focus/low-effort, very low error rate, etc.

like walking. and talking. and reading. and, for many people, basic arithmetic. and, for many people, the year WWII ended and the number of states in the US and the number of original colonies (they don’t have to stop and think about those things, they just know, instantly).

doesn’t work in all contexts. giving a speech or walking on ice are different. but that’s ok. they know that. they pay more attention in those contexts. they understand pretty well what is mastered and what isn’t.

there are generic things that ~everyone gains mastery over, like walking. and there are generic things that lots of ppl gain mastery over, like some basic arithmetic.

and there are other things that only a few ppl gain mastery over. like i mastered tons of chess skills. lots of stuff is automated to the point where i can play good chess moves in under 1 second. and i could still mostly do that even though i quit chess many years ago – like i’d be worse now, and rusty, but still worlds better than a beginner. and giving me 10 minutes to think about a move right now, vs. 10 seconds, still wouldn’t make a ton of difference. the skill i still have is still mostly automatic. (when i was actively playing, 10 seconds vs. 10 minutes also wasn’t a huge difference. it matters, especially when playing someone who is very similar skill level to you, but over 90% of your skill works within 10 seconds, and the extra 10min of thought only adds a bit extra.) btw i haven’t mastered chess as a whole, i just have mastery over lots of pieces of chess to the point that i’m a good player as a whole but certainly not the best. mastery doesn’t mean perfection overall, it can just mean mastering a specific piece of something, or sub-skill, and then you have mastery over that piece. mastery is about getting something to the point of it being really automatic – very low error rate while using very little conscious attention.

some ppl get really good at an instrument or a sport or many other things.

but most stuff that ppl master, they master in childhood. and they don’t remember the learning process very well. and so, as adults, they don’t have a good example to refer to for how to learn. they haven’t mastered anything recently.

most adults either learned to touch type as a kid or they still aren’t great at it. actually mastering it as an adult is uncommon.

Dennis replied:

I agree wholeheartedly. It's a really rewarding experience to have learned something new and somewhat mastered it as an adult. It's a neat way to reward one's future self. I still thank myself for teaching myself to 10 finger touch type last year. Somehow I had gotten by using just three or four fingers over the years, and this is just so much better now.

My original email continued:

so one of the things i recommend ppl do is master something. learn something. see how learning works. doesn’t matter what it is. just gotta succeed. it shouldn’t be very hard. don’t make philosophy be the first thing you learn really well in the last 20 years. that’s ridiculous. learn something easier for practice. you can learn a bit of philosophy but don’t go for mastery until you master some easier stuff.

the best thing to master, in general, for practice, is a video game. there are lots of options but video games have a lot of very good characteristics. but if you don’t like them, or you have something else that you really wanna use, you can consider alternatives. i have explained in the past what’s good about video games, what kinda characteristics to look for in something to master, and written about many game examples.

what lots of ppl do is learn stuff a little bit, halfway, don’t master it, and move on. then repeat.

so, yet again, i advise ppl to learn a video game to get a feel for mastery and how learning works. or master something else. but no one listens to me. to the extent anyone else here plays video games, they don’t stream it on twitch, they don’t master it, they don’t talk about it much, and they aren’t very good.

Dennis replied:

In one of Popper's essays I read the other day he talks about the difference between creative learning (ie problem solving) and learning by repetition. [...]

Do you differentiate at all between the two modes of learning? I've been wondering about Popper's remark about learning by repetition. He seems to claim that it's akin to induction, but induction is impossible, so... how could anyone learn by repetition? Also, I doubt people actually have two different modes of learning. [...]

I replied:

You can’t learn merely by repetition, you have to think about what will and won’t work. Repeating can’t figure out solutions and can’t do anything to find or correct errors.

Some of my examples are simpler because people should master some easier things before aiming for some harder ones. There has to be a progression.

In order to effectively think creatively about chess strategies, you can’t be too distracted by remembering how the pieces move. Practice does help automate one’s understanding of the piece movement rules. But practice isn’t just about repeating things, you think through what the rule for moving a piece is and figure out where it can go – it gets actual conscious attention when you’re learning it. You couldn’t just repeat correct piece movements without conscious attention, as a practice method, because you don’t know them well enough yet. (You could repeatedly move a rook back and forth between two adjacent squares, or something else simple, and thus make correct moves without thinking about it even though you don’t know the piece moves well, but you wouldn’t learn much by doing that, that’d be bad practice.)

It’s the same with everything else. Interesting, creative conscious thought is always building on many layers of thinking that were conscious in the past but no longer require conscious attention – that attention is now freed up for more advanced things.

Learning touch typing requires directing conscious attention to doing it correctly, as well as some creative problem solving – identifying what you’re screwing up and figuring out how to fix it. Generally this means doing things slowly at first so you can get it correct even though you’re barely able to do it. Then you speed up a bit at a time and check for new errors happening due to going faster. Trying the same thing at successively faster speeds isn’t really repetition because the speed is changing. You do repeat a little because of variance – to find out if you are making mistakes at a new speed, you might need to do it 20 times, perhaps more, depending on what sort of error rate is acceptable. Doing it once at a new speed doesn’t mean you can do it reliably at that speed. The same method is common with instruments and many other things people learn.
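One way to make "20 times, perhaps more" precise (a minimal sketch, assuming independent attempts and a simple pass/fail model): the number of clean repetitions needed grows as the error rate you want to rule out shrinks.

```python
# If your true error rate at the new speed were p, the chance of n clean runs
# in a row is (1 - p)**n. To rule out error rates above p with confidence
# 1 - alpha, you need enough clean runs that (1 - p)**n <= alpha.
import math

def clean_runs_needed(p, alpha=0.05):
    return math.ceil(math.log(alpha) / math.log(1.0 - p))

for p in (0.20, 0.10, 0.05, 0.01):
    print(f"to rule out an error rate above {p:.0%}: "
          f"{clean_runs_needed(p)} consecutive clean runs")
# 20 clean runs only rules out roughly a 14% error rate; verifying a 1% error
# rate takes about 300 runs.
```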

Since there’s infinite potential progress, ideally ~all our current thinking would be so easy in the future that it takes almost no conscious attention, and we could consciously focus on more advanced things. I think this is an atypical goal, but important. I generally don’t regard things as finished if I can do them but it’s hard or slow or it only works 1 in 3 times (or even 99 out of 100 can be too low depending on what it is). As one example, I think it’s a travesty that most of the world’s so-called “intellectuals” can only read at 300 words per minute or fewer and aren’t trying to improve that, they think they’re done learning to read even though they do it slowly using lots of conscious attention.


Elliot Temple | Permalink | Messages (0)

Learning with Easy Steps in Vindictus

This is reposted from a blog comment I wrote in 2019. On FI, Kate asked about my Hardness, Emotions, Mental Automation post:

I think the meaning of hard/easy used in the statement is the second one, i.e. hard/easy for me (now). Whether or not something is also considered inherently hard doesn’t matter. The key is whether it’s currently hard for you — whether your manager is going to have to do it.

It’s still unclear to me whether “only do things which are easy" is suggesting that people not try to fix irrational thinking methods or figure out how to use FI if they consider those things to be hard.

there is a learning/doing distinction. first you learn how to do something, say dentistry, then you do it (fill cavities, etc). so one of the meanings is you should learn enough that dentistry is easy before it's your job. don't learn enough you can do it, learn enough that it is now easy. dentists should have mastery so they can do it with a low error rate (and VERY LOW rate of major errors) even when tired, distracted, unfocused, etc.

and also the learning process shouldn't really be hard. say you're trying to beat a level in a video game. if your goal is "beat the level" then that's hard. but that's about doing, not learning. if your goal is "try strategy X and see if it works or not", that could be an easy step towards learning to beat the level. if strategy X is too hard, then you could have easy immediate goals like "try action X1" and "try action X2" and so on – try out individual parts of it before trying to do the whole thing.

in Vindictus (example gameplay video) there are lots of boss fights you can do by yourself and get a gold medal for being hit 3 times or fewer. success is hard in some sense. but the learning process doesn't have to involve hard steps. first you can just stand there and let the boss hit you and watch what he does. that's easy! after you watch a bit, you can start to figure out what his attacks are. lots of the bosses only have like 5 different attacks. if remembering is hard you can write them down. you can even record video clips of each attack. that's more work but it isn't hard. so this step of seeing what the attacks are can be pretty easy, especially if you aren't rushing yourself. like if you are trying to remember every attack after you've seen it once, that's hard. but if you take your time and are OK with remembering an attack after seeing it 10 times, then it's not very hard.

the next step is blocking/dodging attacks (each character in the game has a few defensive options, mostly dodges and blocks). you can figure this out without doing anything hard, too. for each boss attack, try your first defensive move at various different timings. you can get a good idea of the right timing by letting the boss hit you and seeing when you take damage. your defensive move should generally be used around .5 seconds before the time you took damage, though it varies by move. if this isn't working well, try your character's second and third defensive moves and see if they work better for dealing with this attack.
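here's a tiny sketch (python, made-up numbers) of that timing step. you let the boss hit you a few times, note when the damage lands relative to the attack's wind-up cue, then plan your defensive move about .5 seconds earlier:

```python
# made-up samples: seconds between the attack's wind-up cue and the
# moment you took damage, from letting the boss hit you a few times
observed_hit_times = [1.3, 1.2, 1.4, 1.3]

LEAD_TIME = 0.5  # rough rule of thumb from above; varies by move

avg_hit = sum(observed_hit_times) / len(observed_hit_times)
dodge_at = avg_hit - LEAD_TIME
print(f"hit lands ~{avg_hit:.2f}s after the cue; try dodging at ~{dodge_at:.2f}s")
```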

many boss attacks have multiple parts. like they swing their sword 3 times in a row, and it's a set pattern of those 3 swings. so you can figure out a series of 2-3 defensive moves to defend against all 3 sword swings. (sometimes attacks come close together and you can stop multiple attacks with one defense.)

for each attack, there is some kind of clue that it's coming. the main clues are animations, like a boss moving his sword or shoulders back before swinging forward. you see them getting ready to attack in some way. so you also need to learn a cue that you will react to – the signal that it's time to do the defensive pattern for that move.

this can all be done pretty intuitively, but it can also be done by conscious design: you write a list of every attack, every signal that it's coming, and what defensive moves you plan to use for that attack.
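if you like the conscious-design version, the written list can literally be a little table. here's a sketch in python – the attack names, cues and defenses are all invented for illustration:

```python
# hypothetical notes for one boss; every attack name, cue and move
# here is invented for the example
boss_plan = {
    "overhead smash": {
        "cue": "raises sword above his head",
        "defense": ["dodge left"],
    },
    "triple swing": {
        "cue": "pulls sword back behind his shoulder",
        "defense": ["block", "block", "dodge back"],  # one per swing
    },
    "ground slam": {
        "cue": "jumps up",
        "defense": ["dodge back"],
    },
}

for attack, entry in boss_plan.items():
    moves = ", then ".join(entry["defense"])
    print(f"{attack}: when he {entry['cue']}, {moves}")
```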

which part of that is hard? no part. if you do it in this methodical way, every part is easy. it's not like you need super fast reaction times. the game isn't hard in that way. if you calmly watch for the signal that a specific attack is coming, and you aren't worrying about anything else, then you can block/dodge it with a bit of practice. it's not that hard (and if a different attack happens first, you just let it hit you and wait until the boss does the one you're trying to stop).

the individual parts of the game aren't that hard. but the complexity adds up when you're watching for 10 different possible attacks (on the harder, more complicated bosses) while also doing your own attacks. there are also other allies on your team who the boss might target (if the boss does a move aimed at you, or aimed at a guy off to your right, then the patterns of blocks and dodges that protect you, and the timing to do them, can be different. where the boss is aiming changes where his sword ends up at different times.) and you're also remembering to drink health potions every 4 seconds, use your cat statue every 70 seconds, and track how much SP you have (points for doing special moves), and then managing which special moves to use, and when, and so on. and then your ally dies and you want to go resurrect him, but that requires standing still for 3 seconds, so you have to find a safe time to do that between boss attacks. etc.
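as a sketch of just the timer-juggling part: the 4 second and 70 second numbers are from above; the simulated fight loop is an invented toy.

```python
# toy cooldown tracker; checks each item every 2 simulated seconds
cooldowns = {"health potion": 4.0, "cat statue": 70.0}
last_used = {name: None for name in cooldowns}

def ready(name, now):
    return last_used[name] is None or now - last_used[name] >= cooldowns[name]

for now in range(0, 80, 2):  # pretend fight, time in seconds
    for name in cooldowns:
        if ready(name, now):
            last_used[name] = now
            print(f"{now:3d}s: use {name}")
```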

but basically all of that can be learned as a sequence of easy steps, too.

once you learn to defend against attacks, you practice until it becomes more of an automatic habit. you get it to the point that it's easy to handle all the attacks for a boss – it's second nature, it's intuitive, your error rate is low. then you try attacking in between the boss's attacks. you'll already have a sense of how much downtime there is after which attacks, since you've seen them a bunch. so you can estimate how big of an attack you can fit in after each boss attack, and you try it out and see what works. that's assuming you can already do your attacks easily. if you can't, no problem, you just practice attacking without worrying about defense (initially just do this in an empty area with no enemies). and then practice on easy enemies where getting hit isn't a big deal, so even if your error rate for defense is high, cuz you're focused on attacking, it doesn't matter much.

before you actually use your attacking or defending as a skill – before you try to DO it for real instead of doing it in a learning/practicing context – you need to get it to be easy: you master it so an automatic mental workstation can do it. so by the time you're trying to kill the boss, you have all the skills needed to do it, and it isn't scary or hard like it would be if you just went up to him the first time and tried to win.

and after you practice, you still don't expect to win. if your goal was to go straight from practice to success, that'd be hard. instead, you practice and then you try fighting the boss for real as a test to see how well you do. you're checking how effective your practice was, what your error rate currently is. that's easy cuz the goal isn't "make no errors", the goal is "see how many errors i make". so you do the blocking and attacking in easy, automated ways, which is important cuz now your conscious attention is mostly used for just watching to see how often you screw up. that's not a hard thing to do! you just autopilot attacking and defending while consciously watching how well it works. that's it. ez.

then you can see if you need more practice, and if so for which parts. and you can also identify problems, like a particular strategy for blocking a particular move is unreliable, so maybe you need more practice or maybe you need to change the strategy – do a different defensive option, or do the first block after an earlier visual cue.

there are other errors you'll see happen, like a boss can have two different attacks that look similar at first, so you mix them up and sometimes do the defense for attack 1 when the boss is doing attack 2, so it doesn't work. so while you're autopiloting and seeing how it goes, you can watch for issues like that with your conscious attention, and then figure out a solution. like you can look at the attacks more closely until you find a difference which is pretty easy to recognize once you know what to look for, and then you can start looking for that and, with a little practice, autopilot doing that. or you can find a defensive option that works for the first part of both attacks, so it's ok if you don't know which is which until you're doing the second defensive move.
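the "see how many errors i make" step can also be written down concretely. a sketch in python – the attack names and tallies are invented:

```python
from collections import Counter

attempts = Counter()
errors = Counter()

def record(attack, got_hit):
    """log one defense attempt against a named boss attack"""
    attempts[attack] += 1
    if got_hit:
        errors[attack] += 1

# invented data from one pretend test fight
record("triple swing", got_hit=False)
record("triple swing", got_hit=True)
record("overhead smash", got_hit=False)
record("overhead smash", got_hit=False)

# per-attack error rates show which parts need more practice
for attack in attempts:
    rate = errors[attack] / attempts[attack]
    print(f"{attack}: {rate:.0%} errors over {attempts[attack]} tries")
```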

people find the game hard cuz they are trying to e.g. do lots of attacking right now instead of just focusing their attention on defense. or they never practice alone; they just fight in groups where other people are always moving the boss around and creating chaos, and everyone is rushing to keep up with everyone else on doing damage.

and if you are just less ambitious in the short term, you can make tons of stuff way easier. i was having trouble with some bosses in the last two days, and what i started doing is only using my simplest attack, which takes the shortest time. that immediately solved the problem of doing an attack that is too long and then not being ready to defend against the boss's next attack. and it meant attacking took even less attention and i could focus on defense more. the downside is that the simplest, fastest attack does the lowest damage. but so what? a bit of patience made it way easier and actually saved time overall (killing the boss takes longer, but fewer retries more than make up for it). that works great on bosses where my goal is to get the gold medal one time – if it's 5 minutes slower but saves some retries, that's fine. i don't need efficient offense for a boss where i just want one good kill. there are other bosses that you fight more often, so you want to learn to do your offense more efficiently, but it's not needed in all cases.

(also, part of the issue is some of the old bosses i was fighting, which i only needed one good kill on, actually have different designs than some of the modern bosses that people fight more. some of them have overly short windows for you to attack during if you are playing alone. it's fine if you play with an ally, cuz then half the time the boss attacks the ally and you can just go stand behind the boss and have time to attack. but for certain heroes, soloing some of the old bosses involves shorter attack windows than you're used to with the modern bosses, so partly you just need to be willing to use your small attacks and be content with that. and if you had to fight that boss every day it'd be annoying, but you don't, and the newer bosses you fight more often have some larger downtime parts built in, on purpose, to let you do your big attacks sometimes.)

the point of this example is that if you approach things step by step, every step can be easy, cuz each step has a specific goal which is not big-picture success, and you just do that.



David Deutsch's Books and Fans

I've become very suspicious of any fan of The Beginning of Infinity (BoI) who says it's great but hasn't read The Fabric of Reality (FoR). I've seen a repeated pattern where people who haven't read FoR are shallow fans of BoI who don't know much about Critical Rationalism (CR). Overall, BoI is more popular than FoR, but it's attracting worse fans than FoR did. Anyone who has read both and likes BoI way more is also highly suspicious and likely didn't understand much from either book. I don't think "BoI is way better than FoR" is a reasonable opinion. Anyone who goes around recommending BoI, but who recommends FoR much less or not at all, is probably a bad thinker.

Also, Deutsch's books should be read visually (which makes it a lot easier to catch more details, take your time, only advance to the next paragraph when you're ready, reread things, etc.). Be very suspicious of the understanding of anyone who's read them only as audio books. (Deutsch irresponsibly sells audio books with no warning that understanding his ideas from an audio book is unrealistic. It's an unsuitable format for a first reading of his difficult, wordy books, which contain many long, convoluted sentences. Audio books are fine for a casual second reading to review the book a bit while knowing you're missing a lot. They're also fine for a blind person who is very experienced with audio books, listens at a much slower speed than usual, and regularly rewinds to hear parts again. But a sighted person who starts with the audio book is almost certainly fooling themselves rather than actually understanding much.)

BoI is doing a much better job than FoR of attracting social climbers who talk about the book as a way of bragging. BoI is more popular, and it's a bit easier than FoR to read in a shallow way and think you liked it without learning much. BoI also has more things that can be used as slogans or sound bites.

If you haven't read either book, read FoR before BoI. I strongly recommend reading them in the order they were written. FoR does a better job of introducing and explaining CR ideas for new readers. FoR also does a much better job of introducing the many worlds interpretation of quantum physics. Deutsch put the most important things he had to say in his first book and didn't repeat them all in his second book (which would be fine, except that he doesn't tell anyone to read the books in order).

FoR is a deeper book with more technical details. It goes into more depth on some specific topics rather than focusing as much as BoI on being of general interest. It is a popular science book meant for the general reader, but some sections are less useful for most readers. In particular, the two chapters (11 and 12) about the physics of time can be skipped. The last chapter (14), about the end of the universe, is also skippable, especially the omega point discussion.

FoR talks about four strands: CR, evolution, quantum physics, and computation. The key chapters in FoR for learning about CR and evolution are 1, 3-4, and 7-8. The CR and evolution strands are the more useful ones, and the easier ones to understand, for almost everyone.

But if you're trying to learn about philosophy, I don't recommend starting with Deutsch's books. I used to recommend them more, but most people find them too difficult to learn anything substantial from, especially as a starting place. Most people who like them, and think they're learning something, didn't actually understand much.

Some good places to start learning are my Critical Fallibilism website and Eli Goldratt's books (especially The Goal, It's Not Luck, and The Choice). After that, you'd have a better chance to actually learn from FoR, though I'd recommend first reading chapters 1 and 2 of Philosophy: Who Needs It by Ayn Rand and reading Understanding Objectivism by Leonard Peikoff (which talks about how to learn a philosophy).

Also, if you don't already read books regularly, you may be more successful by first getting into reading and then trying to read FoR later when reading a book is already a common, easy and enjoyable activity for you. Many people should start by trying to form a habit of reading regularly, and enjoying it, using fun books like novels. I like Robert Heinlein best for sci-fi (start with his juveniles) and Brandon Sanderson for fantasy. My reading recommendations in the previous paragraph are much easier to read than Deutsch's books, and might actually work for newer readers, though they're significantly harder reading than Harry Potter. Also, you may want to start getting into reading more with audio books or text to speech, and that's fine and works well for many people, but at some point you should transition to also getting comfortable with visual reading, which you'll need for reading authors like Deutsch or Popper.

