New Induction Disproof

Deutsch, Popper, and Feynman aren't inductivists. I could add more people to this list, like me. So here we see a clear pattern of people not being inductivists. There's a bunch of data points with a certain thing in common (a person not being inductivist). Let's apply induction to this pattern. So we extrapolate the general trend: induction leads us to conclude against induction. Oh no, a contradiction! I guess we'll have to throw out induction.

Q&A:

Q: Your data set is incomplete.
A: All data sets are incomplete.

Q: Your data set isn't random.
A: No data sets are entirely random.

Q: I have an explanation of why your method of selecting data points leads to a misleading result.
A: That's nice. I like explanations.

Q: Don't you care that I have a criticism of your argument?
A: I said we should throw out induction. As you may know, I think we should use an explanation-focussed approach. I took your claim to have an explanation, and lack of claim to have induced anything, as agreement.

Q: But how am I supposed to object to your argument using only induction? Induction isn't a tool for criticizing invalid uses of induction.
A: So you're saying induction cannot tell us which inductions are true or false. We need explanation to do that. So induction is useless without explanation, but explanation is not useless without induction.

Q: That doesn't prove induction is useless.
A: Have you ever thought about how much of the work, in a supposed induction, is done by induction, and how much by explanation?

Q: No.
A: Try it sometime.

Elliot Temple | Permalink | Messages (20)

Rethinking Popper Papers Comments

http://www.flu.cas.cz/rethinkingpopper/description.html
Preserving the authority of reason, Popper can...
The authority of reason is a contradiction.


http://www.flu.cas.cz/rethinkingpopper/papers.html

Jarvie:
Thus an author [like Popper] may be a privileged interpreter [of his own writing] but he is not necessarily reliable, infallible, the last word, or anything like that.
Why is a Popper conference paper claiming there exist privileged sources of knowledge (sometimes)?


Boyer:
Popper was essentially right about verification and passive induction : the former is inaccessible, outside formal sciences, and the latter is a myth
Why is a Popperian conference paper saying verification is possible (sometimes), and claiming further that that was Popper's view?
And we should certainly prefer the hypothesis that resists to our best criticisms better than the others do.
Why does he think we have a way to judge how good a theory is? And on a continuum, it sounds like. Popper offered no such technique.

What Popper offered is the idea that we can reject theories with even one false consequence. There's no continuum of falsity, they are just false, end of story. If criticism leaves us with exactly one remaining theory, then we should tentatively adopt it.

BTW it may sound implausible, or unlikely to ever happen, that we'd have exactly one viable theory at some time. It isn't, because there are techniques for ending up with exactly one theory, though I don't recall Popper ever explaining them.


Udell:
Moreover, it is Popper, not Rawls, who identifies and emphasizes the connection between justice and full employment.
I don't remember reading that and it has no citation. Can anyone provide a cite?
Rawls’s method reflects his recognition that a strong moral conviction about a particular action or institution—e.g., slavery, sexism—may override the appeal of an otherwise appealing moral principle
What does it mean for an idea to override an idea? Either it refutes it, or it doesn't. I think this concept is incoherent.
Popper the first anti-foundationalist philosopher in the analytic tradition
But Popper was not an analytic philosopher. He criticized analytic philosophy.


Verhofstadt:

This paper begins by explaining liberalism. However, it never mentions tradition (except negatively in passing). I would summarize the liberal attitude as being about optimistic reform of existing knowledge (traditions), and contrast it with conservatism (keeping traditions unchanged) and radicalism or utopianism (which neither values tradition nor tries to reform it, but instead ignores it and is happy to start from scratch). When a supposed liberal has nothing good to say about tradition, I fear he is actually a radical. Popper himself certainly had good things to say about tradition, and a Popperian should know that.

The paper focusses more on liberalism as being about freedom, individualism, justice, and humanitarianism. But many conservatives and radicals are in favor of all of those things. So how can they be the defining characteristics of liberalism? Further, many liberals accept restrictions on freedom and the other things. Later, this paper approvingly points out that Popper accepted restrictions on freedom.
neoliberals and libertarians consider free market as a kind of scientific certainty
One can adopt fallibilism and be a free market libertarian, like me. The paper contains no argument that one cannot, just this alienating assertion.

The paper goes on to attack religion. I don't think it's a good idea for a Popper conference paper — and ironically one about liberalism, which is supposed to advocate tolerance — to be intolerant of views held by many Popperians. It'd be better to focus on things agreeable to Popperians.

A better, more Popperian way to criticize libertarianism or religion would be to consider what problem they are trying to solve, what they get right, what they get wrong, and how they can be improved, instead of being hostile to them. It is unliberal to be hostile to fellow liberals instead of trying to work together.


Swann:
Popperian Selectionism and Its Implications for Education, or ‘What to Do About the Myth of Learning by Instruction from Without?’
That's the title.
Although Popper was vehemently opposed to the discussion of words and their meaning (Popper, 1992[1974], § 7), my experience in talking about learning with educationists has led me to accompany any exposition of a Popperian view of learning with what I term an evolutionary definition. I propose that learning is best defined as
Swann acknowledges doing something Popper was "vehemently opposed to". She spends four consecutive paragraphs doing it. The only reason for opposing Popper that she provides is an appeal to experience which she claims "led" her. But experience does not lead people: that is the myth of instruction from without, the very myth her paper criticizes.
The process itself is not entirely conscious, so you will not be aware of more than a few aspects of it.
It does not follow from a claim that X is not entirely Y that little of X is Y.
A criticism, even if valid, may be inappropriate if ultimately it serves to stifle creativity and inhibit further trial and error-elimination
How does critically seeking the truth stifle truth seeking?
What is at issue here is a choice between two competing theories. One proposes that ‘No learning takes place by instruction from without’, the other that ‘Some learning takes place by instruction from without’. Although both theories are about events in the world, neither has the potential to be refuted by reference to empirical evidence.
Although we don't know how to test the theories today, surely we will in the future. They are theories about the mechanisms by which some physical objects function. Why would that be impossible to test by observing those physical objects?
The function of the brain is to select and create; it has no means of taking in information
The Popperian position is the brain cannot directly take in knowledge. Of course it does take in information through the senses. But that information is not useful until it is processed and interpreted. I am at a loss as to how someone can think the brain does not take in any information at all. The paper does not include any arguments for this proposition, though there is a cite.

Elliot Temple | Permalink | Messages (2)

Limits of Critical Discussion?

How effective can critical discussion be as the primary mode of learning once you reach the very top of a field? Once you know all the common ideas and arguments, people who only know those common ideas and arguments can be of little use.

Or perhaps not. If you have a new idea, then the reactions of those same people to the new idea will be new to you.

But what if you are so far ahead of others in the field that their reactions to new ideas consistently contain nothing you didn't already consider? Then critical discussion wouldn't be especially useful. Is such a scenario realistic? If you were in it, should you make progress by critical discussion within your own mind? Or should you find another field to work on? Or should you teach others and help them catch up? Or should you make progress in this field by some other method?

Elliot Temple | Permalink | Messages (9)

Fallibilism

The word 'fallibility' has two different meanings. One is that we can't be absolutely sure of anything. The other is that mistakes are common. These meanings are both the same kind of thing, but the first is much narrower than the second. I embrace the truth of both meanings.

Sometimes fallibilists argue that math cannot have certainty because performing a proof is a physical process, and during physical processes things can go wrong (e.g. I could be drugged to unconsciousness and then wake up with tampered memories such that I thought I'd completed the proof correctly when I hadn't). This argument is correct, but it is only an argument for the first, lesser meaning of fallibility. Although it gives an example demonstrating the possibility of a mistake, it does not show that mistakes are common.

A similar kind of argument is made by fallibilists against inductivists. We may point out that, as a matter of logic, inductive conclusions do not deductively follow from their premises, and therefore they are fallible. Again, this is an argument for fallibility in the first sense -- error is possible -- but it does not say whether error is common or not.

One result of this situation is that some people are converted to fallibilism but only in the first sense. When they encounter people who embrace fallibilism in the deeper sense, they become confused because these people discuss fallibilism but in a different way than they understand it. There can be further confusion because both groups identify themselves by the same label, "fallibilists", and may then wonder why they are disagreeing so much.

The more thorough meaning of fallibilism is required for most important fallibilist arguments. This is known to many anti-fallibilists who claim fallibilism is stupid and useless because not a lot of interesting truths follow from it (they have in mind the more limited meaning of fallibilism). And emphasizing that error is possible could be deemed misleading if it is in fact very very rare and perhaps even negligible.

Here are some examples of how the stronger meaning of fallibilism leads to important conclusions the weaker meaning does not:

Should parents take seriously the possibility that, in the face of a disagreement, their child might be in the right? If mistakes are common, including mistakes by parents, then yes they should. This is a clear implication from the strong meaning of fallibilism. But on the other hand, if the parent having made a mistake is only a very remote possibility, one in a million, then one could consider taking a different attitude.

Should lovers who think they won't end up with broken hearts take seriously the possibility that their knowledge of how to avoid being hurt may contain a mistake? That depends if mistakes are commonplace or extremely rare. If the rate of making mistakes like that is one per hundred million couples then it's not worth worrying about. If it's one per two couples then it'd be crazy not to think about it a lot.

When a person seems to misunderstand my argument, should I believe he is doing it deliberately (perhaps because he sees that it refutes his position)? If mistakes in understanding arguments are extremely rare, then it would follow that it's usually deliberate. But if mistakes are common, then I shouldn't take it to be deliberate.

In general, when I disagree with someone, is he mistaken, am I mistaken, or is he a bad person? If mistakes are common, either of us could be mistaken. If mistakes are extraordinarily rare, then I may have to conclude he is a bad person who wants to adopt mistaken ideas due to bias or some other factor. This is especially true if I have multiple disagreements with him. If mistakes are very rare, can he really be innocently mistaken on all those issues?

Elliot Temple | Permalink | Messages (4)

Popper on Burke and Tradition

_Conjectures and Refutations_ p 162
[Edmund Burke] fought, as you know, against the ideas of the French Revolution, and his most effective weapon was his analysis of that irrational power which we call 'tradition'. I mention Burke because I think he has never been properly answered by rationalists. Instead rationalists tended to ignore his criticism and to persevere in their anti-traditionalist attitude without taking up the challenge. Undoubtedly there is a traditional hostility between rationalism and traditionalism. Rationalists are inclined to adopt the attitude: 'I am not interested in tradition. I want to judge everything on its merits and demerits, and I want to do this quite independently of any tradition. I want to judge it with my own brain, and not with the brains of other people who lived long ago.'

That the matter is not quite so simple as this attitude assumes emerges from the fact that the rationalist who says such things is himself very much bound by a rationalist tradition which traditionally says them. This shows the weakness of certain traditional attitudes towards the problem of tradition.
I see confusion here. The right attitude is to judge ideas on their merits and demerits, but to do so with the aid of both reason and traditional knowledge. This is perhaps clearer to see if one renames "traditional knowledge" to "existing knowledge". Existing knowledge is good, and shouldn't be disregarded even by people with a very high opinion of reason and individual judgment.

Existing knowledge should be used whenever doing so seems unproblematic, and improved when it seems problematic. It should be respected as something valuable, but not something beyond criticism. I think this attitude harnesses the good points of both the rationalists and traditionalists and also demonstrates they are not fundamentally in conflict.

Elliot Temple | Permalink | Messages (6)

Critical Preferences and Strong Arguments

This post is a followup. For context, click here to read the first post.

The following is intended as a statement of my position but does not attempt to argue for it in detail.

The concept of a critical preference makes the common sense assumption that there are strong and weak arguments, or in other words that arguments or ideas can be evaluated on a continuum based on their merit.

The merit of an idea is often metaphorically stated in terms of its weight (e.g. Popper wrote "weighty though inconclusive arguments", Objective Knowledge p 41). It's also commonly stated in terms of probability or likeliness. And it's also stated in terms of ranking or scoring ideas to see which is best.

Ideas do have merit, and they can be closer or further from the truth (more or less truthlike, if you prefer). However, we never know how much merit an idea has. We can't evaluate ideas that way.

(BTW suppose we could evaluate how much merit ideas have. A second assumption is that doing so would be useful and that it would make sense to prefer the idea with more merit. That should not be assumed uncritically.)

Popper did not give detailed arguments for the idea that we can or should evaluate arguments by their strength or amount of merit. That's why I call it an assumption. I think he uncritically took it for granted without discussion, as have most (all?) other philosophers.

In the strength based approach, an idea could score a 1, or a 2, or a 20. In Popper's view, the numbers don't have an absolute meaning; they can only be compared with the scores of other ideas. Or in other words, we never know how close to the truth we've come on an absolute scale. In this approach, an idea can have infinitely many different evaluations.

In my approach, an idea can only have three possible evaluations. An idea can be unproblematic (non-refuted), problematic (refuted), or we're unsure. Ignoring the possibility of not taking a stance, which isn't very important, an idea gets a boolean evaluation: it's either OK or not OK.
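The three-valued scheme above can be sketched in code. This is purely my own illustration, not something from the post (all names are mine): an idea's evaluation depends only on whether any criticism of it currently stands, with no score or weight anywhere.

```python
from enum import Enum

class Evaluation(Enum):
    NON_REFUTED = "non-refuted"  # unproblematic: no known criticism stands
    REFUTED = "refuted"          # problematic: some criticism of it stands
    UNSURE = "unsure"            # we haven't taken a stance yet

def evaluate(standing_criticisms, decided=True):
    """Return the three-valued evaluation of an idea.

    `standing_criticisms` is the list of criticisms of the idea that are
    themselves non-refuted. There are no degrees: one standing criticism
    refutes the idea just as thoroughly as ten would.
    """
    if not decided:
        return Evaluation.UNSURE
    if standing_criticisms:
        return Evaluation.REFUTED
    return Evaluation.NON_REFUTED
```

Note that the evaluation is a category, not a position on a continuum: there is no way, in this sketch, to express that one non-refuted idea outranks another.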

If we see a problem with an idea, then it's no good, it's refuted. We should never accept, or act on, ideas we know are flawed. Or in other words, if we know about an error it's irrational to continue with the error anyway.

On the other hand, if we have two ideas and we can see no problem with either, then we can have no reason to prefer one over the other. This way of assessing ideas does not allow for the middle ground of "weighty though inconclusive arguments".

If an idea is flawed, it may have a close variant which is unproblematic. Whenever we refute an idea, we should look for variants of the idea which have not been refuted. There may be good parts which can be rescued.

My approach is in significant agreement with Popper's epistemology because it does not allow for the possibility of ideas having support. Some people would say we can differentiate non-refuted ideas by how much support each has, but I follow Popper in denying that.

Popper's alternative to support is criticism. I accept the critical approach. Where I differ is in not allowing an idea to be both criticized and non-refuted. I don't think it makes sense to simultaneously accept a criticism of an idea, and accept the idea. We should make up our mind (keeping open the possibility of changing our mind at any time), or say we aren't sure.

As I see it, a criticism either points out a flaw in an idea or it doesn't. And we either have a criticism of the criticism, or we don't. A criticism can't contradict a theory and be itself non-refuted, but also fail to be decisive. On what grounds would it fail to be decisive, given we see no flaw in it?

Let's now consider the situation where we have conflicting non-refuted ideas, which is the problem that critical preferences try to solve. How should we approach such a conflict? We can make progress by criticizing ideas. But it may take us a while to think of a criticism, and we may need to carry on with life in the meantime. In that case, the critical preferences approach attempts to compare the non-refuted ideas, evaluate their merit, and act on the best one.

My approach to solving this problem is to declare the conflict (temporarily) undecided (pending a new idea or criticism) and then to ask the question, "Given the situation, including that conflict being undecided, what should be done?" Answering this new question does not depend on resolving the conflict, so it gets us unstuck.

When approaching this new question we may get stuck again on some other conflict of ideas. Being stuck is always temporary, but temporary can be a long time, so again we'll need to do something about it. What we can do is repeat the same method as before: declare that conflict undecided and consider what to do given that the undecided conflicts are undecided.
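The repeated method described above has a simple recursive shape. Here is a rough sketch of it (my own hypothetical rendering, with invented names, not the post's): when a question is stuck on an undecided conflict, replace it with the question of what to do given that the conflict is undecided.

```python
def decide(question, try_to_resolve):
    """Sketch of the 'declare it undecided' method.

    `try_to_resolve(question)` returns an answer, or None when the
    question is stuck on an undecided conflict of ideas.
    """
    answer = try_to_resolve(question)
    if answer is not None:
        return answer
    # Declare the conflict undecided and ask the easier follow-up question.
    follow_up = "given that %r is undecided, what should be done?" % question
    return decide(follow_up, try_to_resolve)
```

The claim that it is always possible to find such an explanation corresponds, in this sketch, to the assumption that the recursion eventually reaches a question `try_to_resolve` can answer.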

A special case of this method is discussed here. It discusses avoiding coercion. Coercion is an active conflict between ideas within one mind with relevance to a choice being made now. But the method can be applied in the general case of any conflict between ideas.

My approach accepts what we do not know, and seeks a good explanation of how to proceed given our situation. It is always possible to find such an explanation. It may sound difficult, but actually you already do it dozens of times per day without realizing it. It's just like how people must use conjectures and refutations to understand each other in English conversations (and in all their thinking): when they first hear that idea it sounds bizarre, yet they already do it quickly, reliably, and without realizing what they are doing.

Elliot Temple | Permalink | Messages (16)

Using False Theories

C&R by Popper p 306
we are, in many cases, quite well served by theories which are known to be false.
This is a mistake! Consider a theory of motion, say, which we'll call T. We know T is false, but it's also a good approximation to the truth in common and well defined circumstances.

We do not use theory T. We use theory U, which consists of what I said in the first paragraph: that theory T is an approximation, useful in certain circumstances. Theory U contains theory T, but also some other ideas, including the refutation of T. Theory U is a way of approximating motion in certain circumstances; it's useful, and it's not known to be false. Theory U is just plain better.

If we can't create a true variant of T or any other false theory, like we did with U, then T is not actually useful at all. Refuted theories can only be useful via non-refuted theories that make reference to them, not on their own.
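One way to picture the relationship between T and U is the following sketch (mine, with invented names, not anything from the post): U wraps the refuted theory T together with an explicit statement of the circumstances in which T approximates the truth, and asserts nothing outside them.

```python
def make_theory_U(t_predict, t_applies):
    """Build theory U from refuted theory T.

    `t_predict(situation)` is T's prediction; `t_applies(situation)` says
    whether the situation is within the circumstances where T is known to
    be a good approximation. U only asserts T's predictions inside that
    domain, so U isn't refuted by T's failures outside it.
    """
    def u_predict(situation):
        if t_applies(situation):
            return t_predict(situation)
        return None  # U makes no claim here
    return u_predict
```

Outside T's stated domain, U declines to predict rather than predicting falsely, which is why U, unlike T, is not known to be false.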

Elliot Temple | Permalink | Messages (19)

Weak Theory Example

T1 is a testable, scientific theory to solve problem P. T2 is a significantly less testable theory to solve P. In Popper's view, barring some important other consideration, if both T1 and T2 are non-refuted then we must prefer T1 and say it's better.

But T1 might not be better. You could easily choose T1 so it's false and T2 so it's true as best we know today, without contradicting the situation description.

You can assert that T1 is better, as far as we know, given the current state of knowledge. But is it? Where is the argument that it is? This looks to me like both explanationless philosophy and positive philosophy (T1 is supported by its testability, and T2 isn't). T2 is losing out without any criticism of it.

What we should do is not say T1 is better, but say: T2 needs to be testable to be a viable theory because X. X can be a generic reason such as scientific theories should be testable and P is a scientific problem. Once we say this, we are now making a critical argument: we're criticizing T2. This offers T2 the chance to defend itself, which never came up in the original analysis.

It's now up to T2 to offer a reason that it doesn't need to be more testable, or actually is more testable. T2 can criticize the criticism of it, or be refuted. (BTW if T2 didn't already contain this reason, and it has to be invented, then T2 is refuted and T2b is now standing, where T2b consists of the content of T2 plus the new content that criticizes this criticism of T2.)

Then if the testability criticism is criticized, it can either be refuted or be amended to include a criticism of that criticism. And so on. This approach takes seriously the idea that we only learn from criticism. That makes sense because criticisms are error-correcting statements: they explain a flaw in something, which helps us avoid a mistake.

Elliot Temple | Permalink | Message (1)

Examples of Accepting Contradicting Ideas

People commonly say things like, "That's a good point, but alone it's insufficient for me to change my position."

In a debate club meeting, or a Presidential debate, most of the non-partisan audience usually comes away thinking both sides made some good points.

Debaters think an idea can suffer a few setbacks, but still be a good idea. They aren't after perfection but just trying to get the better of their debating opponent.

These are examples of the same mistake underlying critical preferences: simultaneously accepting two conflicting ideas (such as a position, and a criticism of that position).

PS Notice that "simultaneously accepting two conflicting ideas (and making a decision about the issue)" would be a passable definition of coercion for TCS to use. This highlights the connection between coercion and epistemology. The concept of coercion in TCS is about when rational processes in a mind break down. The TCS theory of coercion tries to answer questions like: What happens then? (Suffering; a big mess.) What causes the breakdown to happen? (Different parts of the mind in conflict and the failure to resolve this by creating one single idea of how to proceed.) What's a description of what the mind looks like when it happens? (It contains conflicting, active theories.)

Elliot Temple | Permalink | Messages (0)

Another Problem Related To Critical Preferences

X is a good trait. A has more of X than B does. Therefore A is better than B.

That is a non sequitur.

You can add, "All other things being equal" and it's still a non sequitur.

X being a good or desirable trait does not mean all things with more X are better. There being all sorts of reasons X is amazing does not mean X is amazing in all contexts and in relation to all problems.

You'd need to say X is universally good, and all other things are equal. In other words, you're saying the only difference between A and B is the amount of something that is always good. With premises that strong, the claim works. However, it's now highly unrealistic.

It's hard to find things that are universally good to have more of. Any medicine or food will kill you if you overdose enough. Too much money would crush us all, or can get you mugged. An iPhone is amazing, but an iPhone that's found by a hostage taker who previously asked for everyone's phones can get you killed.

You can try claims like "more virtue is universally good". That is true enough, but that's because the word "virtue" is itself already context sensitive. It's also basically a tautology and immune to criticism, because whatever is good to do is what's virtuous to do. And it's controversial how to act virtuously or judge virtue. If you try to get specific like, "helping the needy is universally good," then you run into the problem that it's false. For example, if Obama spent too much time working in soup kitchens, that wouldn't leave him enough time to run the country well, so it'd turn out badly.

You could try "more error correction is universally good," but that's false too. Some things are good enough, and more error correction would be an inefficient use of effort.

You might try to rescue things by saying, "X is good in some contexts, and this is one of those contexts." Then you'll need to give a fallible argument for that. That is an improvement on the original approach.

Now for the other premise, "all other things being equal." They never are. Life is complicated and there are almost always dozens of relevant factors. Even if they were equal, we wouldn't know it, because we can never observe all other things to check for their equality. We could guess they are equal, which would hold if we didn't miss anything. But the premise "all other things being equal, unless I think of some possible relevant factor" isn't so impressive. You might as well just say directly, "A is better than B, unless I'm mistaken."

Elliot Temple | Permalink | Message (1)