Rationally Resolving Conflicts of Ideas

I was planning to write an essay explaining the method of rationally resolving conflicts and always acting on a single idea with no outstanding criticisms. It would follow up on my essay Epistemology Without Weights and the Mistake Objectivism and Critical Rationalism Both Made where I mentioned the method but didn't explain it.

I knew I'd already written a number of explanations on the topic, so I decided to reread them in preparation. While reading them I decided that the topic is hard, and that a single essay would be very difficult to make good enough for someone to understand it. Maybe if they already had a lot of relevant background knowledge, like knowing Popper, Deutsch or TCS, one essay could work OK. But for an Objectivist audience, or most audiences, I think it'd be really hard.

So I had a different idea I think will work better: gather together multiple essays. This lets people learn about the subject from a bunch of different angles. I think this way will be the most helpful to someone who is interested in understanding this philosophy.

Each link below was chosen selectively. I reread all of them as well as other things that I decided not to include. It may look like a lot, but I don't think you should expect an important new idea in epistemology to be really easy and short to learn. I've put the links in the order I recommend reading them, and included some explanations below.

Instead of one perfect essay – which is impossible – I present some variations on a theme.

Update 2017: Buy my Yes or No Philosophy to learn a ton more about this stuff. It has over 6 hours of video and 75 pages of writing. See also this free essay giving a short argument for it.

Update Oct 2016: Read my new Rejecting Gradations of Certainty.

Popper's critical preferences idea is incorrect. It's similar to standard epistemology, but better; it still shares some incorrectness with rival epistemologies. My criticisms of it apply to any other standard epistemology (including Objectivism) with minor modifications. I explained a related criticism of Objectivism in my prior essay.

Critical Preferences
Critical Preferences and Strong Arguments

The next one helps clarify a relevant epistemology point:

Corroboration

Regress problems are a major issue in epistemology. Understanding the method of rationally resolving conflicts between ideas to get a single idea with no outstanding criticism helps deal with regresses.

Regress Problems

Confused about anything? Maybe these summary pieces will help:

Conflict, Criticism, Learning, Reason
All Problems are Soluble
We Can Always Act on Non-Criticized Ideas

This next piece clarifies an important point:

Criticism is Contextual

Coercion is an important idea to understand. It comes from Taking Children Seriously (TCS), the Popperian educational and parenting philosophy by David Deutsch. TCS's concept of "coercion" is somewhat different from the dictionary definition; keep in mind that it's our own terminology. TCS also has a concept of a "common preference" (CP). A CP is any way of resolving a problem between people which they all prefer. It is not a compromise; it's only a CP if everyone fully prefers it. The idea of a CP is that it's a preference which everyone shares in common, rather than one they disagree about.

CPs are the only way to solve problems. And any non-coercive solution is a CP. CPs turn out to be equivalent to non-coercion. One of my innovations is to understand that these concepts can be extended. It's not just about conflicts between people. It's really about conflicts between ideas, including ideas within the same mind. Thus coercion and CPs are both major ideas in epistemology.

TCS's "most distinctive feature is the idea that it is both possible and desirable to bring up children entirely without doing things to them against their will, or making them do things against their will, and that they are entitled to the same rights, respect and control over their lives as adults." In other words, achieving common preferences, rather than coercion, is possible and desirable.

Don't understand what I'm talking about? Don't worry. Explanations follow:

Taking Children Seriously
Coercion

The next essay explains the method of creating a single idea with no outstanding criticisms to solve problems, why that is always possible, and how it avoids coercion.

Avoiding Coercion
Avoiding Coercion Clarification

This email clarifies some important points about two different types of problems (I call them "human" and "abstract"). It also provides some historical context by commenting on a 2001 David Deutsch email.

Human Problems and Abstract Problems

The next two help clarify a couple things:

Multiple Incompatible Unrefuted Conjectures
Handling Information Overload

Now that you know what coercion is, here's an early explanation of the topic:

Coercion and Critical Preferences

This is an earlier piece covering some of the same ideas in a different way:

Resolving Conflicts of Interest

These pieces give a general introductory overview of how I approach philosophy. They will help put things in context:

Think
Philosophy: What For?

Update: This new piece (July 2017) talks about equivocations and criticizes the evidential continuum: Don't Equivocate

Want to understand more?

Read these essays and dialogs. Read Fallible Ideas. Join my discussion group and actually ask questions.

Elliot Temple | Permalink | Messages (241)

Accepting vs. Preferring Theories – Reply to David Deutsch

David Deutsch has some misconceptions about epistemology. I explained the issue on Twitter.

I've reproduced the important part below. Quotes are DD, regular text is me.

There's no such thing as 'acceptance' of a theory into the realm of science. Theories are conjectures and remain so. (Popper, Miller.)

We don't accept theories "into the realm of science", we tentatively accept them as fallible, conjectural, non-refuted solutions to problems (in contexts).

But there's no such thing as rejection either. Critical preference (Popper) refers to the state of a debate—often complex, inconsistent, and transient.

Some of them [theories] are preferred (for some purposes) because they seem to have survived criticism that their rivals haven't. That's not the same as having been accepted—even tentatively. I use quantum theory to understand the world, yet am sure it's false.

Tentatively accepting an idea (for a problem context) doesn't mean accepting it as true, so "sure it's false" doesn't contradict acceptance. Acceptance means deciding/evaluating it's non-refuted, rivals are refuted, and you will now act/believe/etc (pending reason to reconsider).

Acceptance deals with the decision point where you move past evaluating the theory, you reach a conclusion (for now, tentatively). you don't consider things forever, sometimes you make judgements and move on to thinking about other things. ofc it's fluid and we often revisit.

Acceptance is a clearer word than preference for up-or-down, yes-or-no decisions. Preference often means believing X is better than Y, rather than judging X to have zero flaws (that you know of) & judging Y to be decisively flawed, no good at all (variant of Y could ofc still work)

Acceptance makes sense as a contrast against (tentative) rejection. Preference makes more sense if u think u have a bunch of ideas which u evaluate as having different degrees of goodness, & u prefer the one that currently has the highest score/support/justification/authority.


Update: DD responded, sorta:

You are blocked from following @DavidDeutschOxf and viewing @DavidDeutschOxf's Tweets.


Update: April 2019:

DD twitter blocked Alan, maybe for this blog post critical of LT:

https://conjecturesandrefutations.com/2019/03/16/lulie-tanett-vs-critical-rationalism/

DD twitter blocked Justin, maybe for this tweet critical of LT:

https://twitter.com/j_mallone/status/1107349577538158592


Elliot Temple | Permalink | Messages (8)

Critical Rationalism Epistemology Explanations

I discussed epistemology in a recent email:

I really enjoyed David Deutsch's explanation of Popper's epistemology and since reading Fabric of Reality I've read quite a bit of Popper. I've become convinced that Deutsch's explanation of Popper is correct, but I can also see why few people come away from Popper understanding him correctly. I believe Deutsch interprets Popper in a way that is much easier to understand.

Yes, I agree. DD refined and streamlined Critical Rationalism, and he's a better writer than Popper was. Popper made the huge breakthrough in the field and wrote a lot of good material about it, but there's still more work to do before most people get it.

Plus, I think he actually adds some ideas to Popper that matter that make it less misleading. Popper was struggling himself to understand his own theories, so it's understandable that he struggled to explain some parts of it.

I agree. I don't blame Popper for this, since he had very original and important ideas. He did more than enough!

(For example, it was problematic to refer to good theories as 'improbable' rather than 'hard to vary.' In context, I feel Popper meant the same thing, but the words he chose were problematic for conveying the meaning to others.)

So I've been wondering if it's possible to boil Popper's epistemology (with additions and interpretations from Deutsch) down to a few basic principles that seem 'self evident' and then to draw necessary corollaries. If this could be done, it would make Popper's epistemology much easier to understand.

Here is what I've come up with so far. (I'm looking for feedback from others familiar with Popper's epistemology as interpreted and adjusted by Deutsch to point out where I got it wrong or am missing things.)

Criteria for a Good Explanation:

1. We should prefer theories that are explanations over those that are not.

This is an approximation.

The point of an idea is to solve a problem (or multiple problems). We should prefer ideas which solve problems.

Many interesting problems require explanations to solve them, but not all. Whether we want an explanation depends on the problem being addressed.

In general, we want to understand things, not just be told answers to trust on authority. So we need explanations of how and why the answers will work, that way we can think for ourselves, recognize what sort of situations would be an exception, and potentially fix errors or make improvements.

But some problems don't need explanations. I might ask my friend, who is good at cooking, "How long should I boil an egg?" and just want to hear a number of minutes without any explanation. Finding out the number of minutes solves my cooking problem. I didn't want to try to understand how cooking eggs works, and I didn't want to debate the matter or check my friend's ideas for errors, I just wanted it to come out decently. It can be reasonable to prioritize what issues I investigate more and which I don't.

2. We should prefer explanations that are hard to vary over ones that can easily be adjusted to fit the facts because a theory that can be easily adjusted to fit any facts explains every possible world and thus explains nothing in the actual world.

Hard to vary given what constraints?

Any idea is easy to vary if there are no constraints. You can vary it to literally any other idea, arbitrarily, in one step.

The standard constraint on varying an idea is that it still solves (most of) the same problems as before. To improve an idea, we want to make it solve more and better problems than before with little or no downside to the changes.

The problems ideas solve aren't just things like "explain the motion of balls" or "help me organize my family so we don't fight". Another important type of problem is understanding how ideas fit together with other ideas. Our knowledge has tons of connections where we understand ideas (often from different fields) to be compatible, and we understand how and why they are compatible. Fitting our knowledge together into a unified picture is an important problem.

The more our knowledge is constrained by connections to problems and other ideas, the more highly adapted it is to that problem situation, and therefore the harder it is to vary while keeping the same or greater level of adaptation. The more ideas are connected to other problems and ideas, the less wiggle room there is to make arbitrary changes without breaking anything.

Fundamentally, "hard to vary" just means "is knowledge". Knowledge in the CR view is adapted information. The more adapted information is, the more chance a random change will make it worse instead of better (worse and better here are relative to the problem situation).

There are many ways to look at knowledge that are pretty equivalent. Some ways are: ideas adapted to a problem situation, ideas that are hard to vary, non-arbitrary ideas, ideas that break symmetries (that give you a way to differentiate things, prefer some over others, evaluate some as better than others, etc.). You can imagine that, by default, there's tons of ideas and they all look kinda equally good. And when two ideas disagree with each other, by default that is a symmetric situation: either one could be mistaken and we can't take sides. Knowledge lets us take sides; it helps us break the symmetry of "X contradicts Y, therefore also Y contradicts X" and helps us differentiate ideas so they don't all look the same to us.

3. A theory (or explanation) can only be rejected by the existence of a better explanatory theory.

Ideas should be rejected when they are refuted. A refutation is an explanation of how/why the idea will not solve the problem it was trying to solve. (Sometimes an idea is proposed as a solution to multiple different problems. In that case, it may be refuted as a solution to some problems while not being refuted as a solution for others. In this way, criticism and refutation are contextual rather than universal.)
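
To make the contextual point concrete, here's a minimal sketch in Python (my own illustration; the idea and problem names are made up) where a criticism knocks out an (idea, problem) pairing rather than the idea itself:

    # Hypothetical illustration (all names made up): a criticism refutes an
    # idea as a solution to a particular problem, not the idea as such.
    ideas = {
        "idea A": {"problem 1", "problem 2"},  # proposed for two problems
        "idea B": {"problem 1"},
    }

    def criticize(idea, problem):
        """Record that `idea` fails to solve `problem` in this context."""
        ideas[idea].discard(problem)

    criticize("idea A", "problem 1")   # A is refuted for problem 1...
    print(ideas["idea A"])             # {'problem 2'} -- still unrefuted for problem 2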

You don't need a better idea in order to decide that an idea won't work – that it fails to solve the problem you thought it solved. If it simply won't work, it's no good, whether you have a better idea or not.

These are fairly basic and really do seem 'self evident.' But are they complete? What did I miss?

I then added a number of corollaries that come out of the principles to explain the implications.

1. We should prefer theories that are explanations over those that are not.
a. Corollary 1-1: We should prefer theories that explain more over those that explain less. In other words, we should prefer theories that have fewer problems (things it can’t explain) over ones that have more problems.

Don't judge ideas on quantity of explanation. Quality is more important. Does it solve problems we care about? Which problems are important to solve? Which issues are important to explain and which aren't?

Also, we never need to prefer one idea over another when they are compatible. We can have both.

When two ideas contradict each other, then at least one is false. We can't determine that one is false by looking at their positive virtues (how wonderful are they, how useful are they, how much do they explain). Instead, we have to deal with contradictions by figuring out that an idea is actually wrong; we have to look at things critically.

b. Corollary 1-2: We should prefer actual explanations over pseudo-explanations (particularly explanation spoilers) disguised as explanations.
c. Corollary 1-3: If the explanatory power of a theory comes by referencing another theory, then we prefer the other theory because it’s the one that actually explains things.
2. We should prefer explanations that are hard to vary over ones that can easily be adjusted to fit the facts because a theory that can be easily adjusted to fit any facts explains every possible world and thus explains nothing in the actual world.
a. Corollary 2-1: We should prefer explanations that have survived the strongest criticisms or tests we have currently been able to devise.

Criticisms don't have strengths. A criticism either explains why an idea fails to solve a problem, or it doesn't.

See: https://yesornophilosophy.com and http://curi.us/1595-rationally-resolving-conflicts-of-ideas and especially http://curi.us/1917-rejecting-gradations-of-certainty

Popper and DD both got this wrong, despite DD's brilliant criticism of weighing ideas in BoI. The idea of arguments having strengths is really ingrained in common sense in our culture.

b. Corollary 2-2: We should prefer explanations that are consistent with other good explanations (that makes it harder to vary), unless it violates the first principle.
3. A theory (or explanation) can only be rejected by the existence of a better explanatory theory.
a. Corollary 3-1: We should prefer theories (or explanations) that suggest tests that the previously best explanation can’t pass but the new one can. (This is called a Critical Test.)
b. Corollary 3-2: It is difficult to devise a Critical Test of a theory without first conjecturing a better theory.
c. Corollary 3-3: A theory that fails a test due to a problem in a theory and a theory that fails a test due to some other factor (say experimental error) are often indistinguishable unless you have a better theory to explain which is which.

Yes, after a major existing idea fails an experimental test we generally need some explanatory knowledge to understand what's going on, and what the consequences are, and what we should do next.


Elliot Temple | Permalink | Messages (41)

Errors Merit Post-Mortems

After people make errors, they should do post-mortems. How did that error happen? What caused it? What thinking processes were used and how did they fail? Try to ask “Why?” several times to get to deeper issues than your initial answers.

And then, especially, what other errors would that same cause also produce? This gives info about the need to make changes going forward, or not. Is it a one-time error or part of a pattern?

Effective post-mortems are something people generally don’t want to do. What causes errors? Frequently it’s irrationality, including dishonesty.

Lots of things merit post-mortems other than losing a debate. If you have an inconclusive debate, why didn’t you do better? No doubt there were errors in your communication and ideas. If you ask a question, why were you ignorant of the answer? What happened there? Maybe you made a mistake. That should be considered. After you ask a question and get an answer, you should post-mortem whether your understanding is now adequate. People usually don’t discuss thoroughly enough to effectively learn the answers to their questions.

Regarding questions: If you were ignorant of something because you hadn’t yet gotten around to learning about it, and you knew the limits of your knowledge, that can be a quick and easy post-mortem. That’s fine, but you should check if that’s what happened or it’s something else that merits more attention. Another common, quick post-mortem for a question is, “I asked because the other person was unclear, not because of my own ignorance.” But many questions relate to your own confusions and what went wrong should be post-mortemed. And if you hadn’t learned something yet, you should consider if you are organizing your learning priorities in a reasonable way. Why learn this now? Why not earlier or later? Do you have considered reasoning about that?

What if you try to post-mortem something and you don’t know what went wrong? If your post-mortem fails, that is itself something to post-mortem! Consider what you’ve done to learn how to post-mortem effectively in general. Have you studied techniques and practiced them? Did you start with easier cases and succeed many times? Do you have a history of successes and failures which you can compare this current failure to? Do you know what your success rate at post-mortems is in general, on average? And you should consider if you put enough effort into this particular post-mortem or just gave up fast.

You may wonder: We make errors all the time. Should we post-mortem all of them? That sounds like it’d take too much time and effort.

First, you can only post-mortem known errors. You have to find out something is an error. You can’t post-mortem it as an error just because people 500 years from now will know better. This limits the issues to be addressed.

Second, an irrelevant “error” is not an error. Suppose I’m moving to a new home. I’m measuring to see where things will fit. I measure my couch and the measurement is accurate to within a half inch. I measure where I want to put it and find there are 5 inches to spare (if it was really close, I’d re-measure). The fact that my measurement is an eighth of an inch off is not an error. The general principle is that errors are reasons a solution to a problem won’t work. The small measurement “error” doesn’t prevent me from succeeding at the problem I’m working on, so it’s not an error. It would be an error in a different context, like doing a science experiment that relies on much more accurate measurements, but I’m not doing that.

Third, yes you should try to post-mortem all your errors that get past the previous two points. If you find this overwhelming, there are two things to do:

  1. Do easier stuff so you make fewer errors. Get your error rate under control. There’s no benefit to doing stuff that’s full of errors – it won’t work. Correctness works better both for immediate practical benefits (you get more stuff done that is actually good or effective instead of broken) and for learning better so you can do better in the future.
  2. Learn and write down recurring patterns/themes/concepts and reuse them instead of trying to work out every post-mortem from scratch. If you develop good ideas that can help with multiple post-mortems, that’ll speed it up a ton. Reusing ideas is a major part of Paths Forward and is crucial to all of life.

Elliot Temple | Permalink | Messages (9)

Fallible Justificationism

This is adapted from a Feb 2013 email. I explain why I don't think all justificationism is infallibilist. Although I'm discussing directly with Alan, this issue came up because I'm disagreeing with David Deutsch (DD). DD claims in The Beginning of Infinity that the problem with justificationism is infallibilism:

To this day, most courses in the philosophy of knowledge teach that knowledge is some form of justified, true belief, where ‘justified’ means designated as true (or at least ‘probable’) by reference to some authoritative source or touchstone of knowledge. Thus ‘how do we know . . . ?’ is transformed into ‘by what authority do we claim . . . ?’ The latter question is a chimera that may well have wasted more philosophers’ time and effort than any other idea. It converts the quest for truth into a quest for certainty (a feeling) or for endorsement (a social status). This misconception is called justificationism.

The opposing position – namely the recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying ideas as being true or probable – is called fallibilism.

DD says fallibilism is the opposing position to justificationism and that justificationists are seeking a feeling of certainty. And when I criticized this, DD defended this view in discussion emails (rather than saying that's not what he meant or revising his view). DD thinks justificationism necessarily implies infallibilism. I disagree. I believe that some justificationism isn't infallibilist. (Note that DD has a very strong "all" type claim and I have a weak "not all" type claim. If only 99% of justificationism is infallibilist, then I'm right and DD is wrong. The debate isn't about what's common or typical.)

Alan Forrester wrote:

[Justification is] impossible. Knowledge can't be proven to be true since any argument that allegedly proves this has to start with premises and rules of inference that might be wrong. In addition, any alleged foundation for knowledge would be unexplained and arbitrary, so saying that an idea is a foundation is grossly irrational.

I replied:

But "justified" does not mean "proven true".

I agree that knowledge cannot be proven true, but how is that a complete argument that justification is impossible?

And Alan replied:

You're right, it's not a complete explanation.

Justified means shown to be true or probably true. I didn't cover the "probably true" part. The case in which something is claimed to be true is explicitly covered here. Showing that a statement X is probably true either means (1) showing that "statement X is probably true" is true, or it means that (2) X is conjectured to be probably true. (1) has exactly the same problem as the original theory.

In (2) X is admitted to be a conjecture and then the issue is that this conjecture is false, as argued by David in the chapter of BoI on choices. I don't label that as a justificationist position. It is mistaken but it is not exactly the same mistake as thinking that stuff can be proved true or probably true.

In parallel, Alan had also written:

If you kid yourself that your ideas can be guaranteed true or probably true, rather than admitting that any idea you hold could be wrong, then you are fooling yourself and will spend at least some of your time engaged in an empty ritual of "justification" rather than looking for better ideas.

I replied:

The basic theme here is a criticism of infallibilism. It criticizes guarantees and failure to admit one's ideas could be wrong.

I agree with this. But I do not agree that criticizing infallibilism is a good reply to someone advocating justificationism, not infallibilism. Because they are not the same thing. And he didn't say anything glaringly and specifically infallibilist (e.g. he never denied that any idea he has could turn out to be a mistake), but he did advocate justificationism, and the argument is about justification.

And Alan replied:

Justificationism is inherently infallibilist. If you can show that some idea is true or probably true, then when you do that you can't be mistaken about it being true or probably true, and so there's no point in looking for criticism of that idea.

My reply below responds to both of these issues.


Justificationism is not necessarily infallibilist. Justification does not mean guaranteeing ideas are true or probably true. The meaning is closer to: supporting some ideas as better than others with positive arguments.

This thing -- increasing the status of ideas in a positive way -- is what Popper calls justificationism and criticizes in Realism and the Aim of Science.

I'll give a quote from my own email from Jan 2013, which begins with a Popper quote, and then I'll continue my explanation below:

Realism and the Aim of Science, by Karl Popper, page 19:

The central problem of the philosophy of knowledge, at least since the Reformation, has been this. How can we adjudicate or evaluate the far-reaching claims of competing theories and beliefs? I shall call this our first problem. This problem has led, historically, to a second problem: How can we justify our theories or beliefs? And this second problem is, in turn, bound up with a number of other questions: What does a justification consist of? and, more especially: Is it possible to justify our theories or beliefs rationally: that is to say, by giving reasons -- 'positive reasons' (as I shall call them), such as an appeal to observation; reasons, that is, for holding them to be true, or at least 'probable' (in the sense of the probability calculus)? Clearly there is an unstated, and apparently innocuous, assumption which sponsors the transition from the first to the second question: namely, that one adjudicates among competing claims by determining which of them can be justified by positive reasons, and which cannot.

Now Bartley suggests that my approach solves the first problem, yet in doing so changes its structure completely. For I reject the second problem as irrelevant, and the usual answers to it as incorrect. And I also reject as incorrect the assumption that leads from the first to the second problem. I assert (differing, Bartley contends, from all previous rationalists except perhaps those who were driven into scepticism) that we cannot give any positive justification or any positive reason for our theories and our beliefs. That is to say, we cannot give any positive reasons for holding our theories to be true. Moreover, I assert that the belief we can give such reasons, and should seek for them is itself neither a rational nor a true belief, but one that can be shown to be without merit.

(I was just about to write the word 'baseless' where I have written 'without merit'. This provides a good example of just how much our language is influenced by the unconscious assumptions that are attacked within my own approach. It is assumed, without criticism, that only a view that lacks merit must be baseless -- without basis, in the sense of being unfounded, or unjustified, or unsupported. Whereas, on my view, all views -- good and bad -- are in this important sense baseless, unfounded, unjustified, unsupported.)

In so far as my approach involves all this, my solution of the central problem of justification -- as it has always been understood -- is as unambiguously negative as that of any irrationalist or sceptic.

If you want to understand this well, I suggest reading the whole chapter in the book. Please don't think this quote tells all.

Some takeaways:

  • Justificationism has to do with positive reasons.

  • Positive reasons and justification are a mistake. Popper rejects them.

  • The right approach to epistemology is negative, critical. With no compromises.

  • Lots of language is justificationist. It's easy to make such mistakes. What's important is to look out for mistakes and try to correct them. ("Solid", as DD recently used, was a similar mistake.)

  • Popper writes with too much fancy punctuation which makes it harder to read.

A key part of the issue is the problem situation:

How can we adjudicate or evaluate the far-reaching claims of competing theories and beliefs?

Justificationism is an answer to this problem. It answers: the theories and beliefs with more justification are better. Adjudicate in their favor.

This is not an inherently infallibilist answer. One could believe that his conception of which theories have how much justification is fallible, and still give this answer. One could believe that his adjudications are final, or one could believe that his adjudications could be overturned when new justifications are discovered. Infallibilism is not excluded nor required.


Looking at the big picture, there is the critical approach to evaluating ideas and the justificationist or "positive" approach.

In the Popperian critical approach, we use criticism to reject ideas. Criticism is the method of sorting out good and bad ideas. (Note that because this is the only approach that actually works, everyone does it whenever they think successfully, whether they realize it or not. It isn't optional.) The ideas which survive criticism are the winners.

In the justificationist approach, rather than refuting ideas with negative criticism, we build them up with positive arguments. Ideas are supported with supporting evidence and arguments. The ones we're able to support the most are the winners. (Note: this doesn't work, no successful thinking works this way.)

These two rival approaches are very different and very important. It's important to differentiate between them and to have words for them. This is why Popper named the justificationist approach, which had gone without a name because everyone took it for granted and didn't realize it had any rival or alternative approaches.

Both approaches are compatible with both infallibilism and fallibilism. They are metaphorically orthogonal to the issue of fallibility. In other words, fallibilism and justificationism are separate issues.

Fallibilism is about whether or not our evaluations of ideas should be subjected to revision and re-checking, or whether anything can be established with finality so that we no longer have to consider arguments on the topic, whether they be critical or justifying arguments.

All four combinations are possible:

Infallible critical approach: you believe that once socialist criticisms convince you capitalism is false, no new arguments could ever overturn that.

Infallible justificationist approach: you believe that once socialist arguments establish the greatness of socialism, then no new arguments could ever overturn that.

Fallible critical approach: you believe that although you currently consider socialist criticisms of capitalism compelling, new arguments could change your mind.

Fallible justificationist approach: you believe that although you currently consider socialist justifying arguments compelling (at establishing the greatness and high status of the socialism, and therefore its superiority to less justified rivals), you are open to the possibility that there is a better system which could be argued for even more strongly and justified even more and better than socialism.


BTW, there are some complicating factors.

Although there is an inherent asymmetry between positive and negative arguments (justifying and critical arguments), many arguments can be converted from one type to the other while retaining some of the knowledge.

For example, someone might argue that the single particle two slit experiment supports (justifies) the many-worlds interpretation of quantum physics. This can be converted into criticisms of rivals which are incompatible with the experiment. (You can convert the other way too, but the critical version is better.)

Another complicating factor is that justificationists typically do allow negative arguments. But they use them differently. They think negative arguments lower status. So you might have two strong positive arguments for an idea, but also one mild negative argument against it. This idea would then be evaluated as a little worse than a rival idea with two strong positive arguments but no negative arguments against it. But the idea with two strong positive arguments and one weak criticism would be evaluated above an idea with one weak positive argument and no criticism.

This is easier to express in numbers, but usually isn't. E.g. one argument might add 100 justification and another adds 50, and then a minor criticism subtracts 10 and a more serious criticism subtracts 50, for a final score of 90. Instead, people say things like "strong argument" and "weak argument" and it's ambiguous how many weak arguments add up to the same positive value as a strong argument.

In justification, arguments need strengths. Why? Because simply counting up how many arguments each idea has for it (and possibly subtracting the number of criticisms) is too open to abuse by using lots of unimportant arguments to get a high count. So arguments must be weighted by their importance.

If you try to avoid this entirely, then justificationism stops functioning as a solution to the problem of evaluating competing ideas. You would have many competing ideas, each with one or more arguments on their side, and no way to adjudicate. To use justificationism, you have to have a way of deciding which ideas have more justification.
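
To make that concrete, here's a toy Python version of the justificationist scoring scheme (my own sketch, reusing the numbers from the example above; real justificationists usually leave the numbers implicit):

    # Toy justificationist scoring (illustrative only): each argument carries
    # a signed "strength"; an idea's evaluation is the sum.
    def justification_score(argument_strengths):
        return sum(argument_strengths)

    # The example from the text: two positive arguments (100, 50), a minor
    # criticism (-10), and a more serious criticism (-50), for a score of 90.
    print(justification_score([100, 50, -10, -50]))  # 90

    # Adjudication: prefer the rival with the higher score.
    scores = {"idea X": 90, "idea Y": 75}
    print(max(scores, key=scores.get))  # idea X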

The critical approach, properly conceived, works differently than that. Arguments do not have strengths or weights, nor do we count them up. How can that be? How can we adjudicate between competing ideas without that? Because one criticism is decisive. What we seek are ideas we don't have any criticisms of. Those receive a good evaluation. Ideas we do have criticisms of receive a bad evaluation. (These evaluations are open to revision as we learn new things.) (Also, there are only two possible evaluations in this system: one for the ideas we do have criticisms of, and one for the ideas we don't. If you don't do it that way, and you follow the logic of your approach consistently, you end up with all the problems of justificationism. Unless perhaps you have a new third approach.)
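
For contrast, here's a matching sketch of the critical approach (again my own illustration): evaluations are binary and one outstanding criticism is decisive.

    # Toy critical-approach evaluation (illustrative only): arguments have no
    # strengths and aren't counted. One outstanding criticism is decisive, so
    # there are only two possible evaluations.
    def evaluate(outstanding_criticisms):
        return "refuted" if outstanding_criticisms else "non-refuted"

    print(evaluate(["explains why it won't solve the problem"]))  # refuted
    print(evaluate([]))                                           # non-refuted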


Elliot Temple | Permalink | Messages (0)

Popperian Alternative to Induction

I wrote this on an Objectivist discussion forum in 2013.


http://rebirthofreason.com/Forum/Dissent/0265.shtml

I wrote:

Observe what? There are always many many things you could observe. Real scientific observation is selective.

Perform which action? There are many many actions one could perform. Real scientific action is selective.

Which patterns? There's always many many patterns.

In each case, being selective requires complex (critical) thinking. Ideas come first. Induction is supposed to explain how thinking works, but actually presupposes it.

Merlin Jetton replied:

Okay. Give us your answer to these questions. Please give us simple methods that cover all possible cases. How do we delimit those infinitely many possible conjectures?

(Following Popper.) We don't run into all the same problems because we use different methods in the first place.

We don't start with observation, scientific experiment, or finding patterns. All of those come later, after you already have various ideas. Then you do them according to your ideas. This is not problematic in general. It is a problem when you say stuff is "step 1" that actually presupposes ideas, and then claim your set of steps is a solution in epistemology and is how we get ideas.

We have a different approach that is not like induction and avoids many of induction's problems. By using different methods some problems never come up. We never have the problem of figuring out what to observe before having ideas, for example, because we say ideas come first before observations.

How are ideas learned then? Not from observations. Ideas come first. That's not to say observations are excluded. Observations are very useful. But first you need some ideas. Then you can observe (selectively, according to your ideas about what is important, what is interesting, what is notable, what is relevant to problems of interest, what clashes with your expectations, etc, etc ... and if your way of observing doesn't work out you can improve it with criticism, you can change and adjust it) and use the observations to help with further ideas (in a critical role – they rule things out).

Now this is a hard issue, and you haven't read the literature, so don't be too ambitious about how much you expect to learn from a summary. But anyway, because it's hard, I'm going to split it up. First we'll consider an adult who wants to learn something. Then we can talk about how a child gets started. I'll save that for later if the adult explanation goes over OK. The child is the harder case. I think it's too much to do the child first, all at once.

So, one of Popper's insights is that starting places aren't so important. I'm guessing this sounds dumb to you, because you're a foundationalist and think you have to start with the right foundations/premises/basis and then build up from there, step by step, making sure not to introduce errors or contradictions as you go. And Popper criticized and rejected that approach and offered a significantly different approach.

So let me try to explain what Popper's approach is like. People make mistakes. People are fallible. Errors are common. People mess up all the time. This isn't skepticism. People also get things right, learn, acquire knowledge, make scientific progress, etc, etc... But it's important to understand how easy it is to make mistakes. Knowledge is possible but hard to come by. To get knowledge you have to put a ton of effort into dealing with the problem of mistakes. I think if you read this the right way, you could agree with it. Objectivism recognizes that lots of philosophies go wrong and using the right methods is important and makes a big difference and some stuff like that.

So, OK, error is common, and a big part of epistemology and philosophy is how you deal with error. What are you going to do about it? One school of thought tries to avoid errors: you use the right methods and then you get the right answers. That sounds very plausible but I don't think it's the right approach. I'll try to talk about Popper's approach instead. Popper's approach is that you do try to avoid errors, but you're never going to avoid all of them in the first place. That's not the most important thing. Whatever you do, some errors are going to get through. What you really have to do is set up mechanisms to identify and correct errors.

Popper applied this approach widely. Take politics and political systems. One of Popper's big ideas about politics is that trying to elect the right ruler is the wrong thing to focus on. Electing the right guy is trying to avoid errors. Yes you should put some effort into that but you can't do it perfectly and it's not the most important issue. What is the most important issue? That errors can be identified and corrected. In politics that means if you elect the wrong guy you find out fast, and you can get rid of him fast and you can get rid of him without violence. Popper called the wrong approach the "Who should rule?" problem and said most political philosophy argues about who should rule, when it should be focussing a lot more on how to set up political systems capable of correcting mistakes about who gets to rule.

What about epistemology? "Which ideas should we start with?" is a bit like "Who should rule?" You're never going to get it perfect and it shouldn't be the primary focus of your attention. Instead you want to set things up so if you start with the wrong ideas you can find out about the mistake and fix it quickly, easily, cheaply.

Error correction is (a lot) more important than starting in a good place. Look at it another way: if you start in a bad place but keep making progress, after a while you'll get to a good place and keep going. But if you start in a good place but aren't correcting errors, there is no progress, things never get better, and long term you're doomed. So error correction is the more crucial thing that you really need.

So how can adults be selective? How can they decide what scientific experiments to do or which actions and results to investigate? How can they decide what patterns to look for? Answer: they already have ideas about that. They can use the ideas they already have. That's OK! They don't need me to tell them some perfect answer. I could give them some advice and there could be some value in it, but it doesn't matter so much. They should start with the ideas they already have, use those, and then if something goes wrong they can make adjustments to try to do something about it. (And they can also philosophically examine their ideas and try to criticize them instead of waiting for something noticeable to go wrong.)

In one sense, we're both advocating the same thing. People can and do use the ideas they already have about how to be selective, what issues to focus on, which patterns are notable, and more. But we Popperians know that is what's going on, and know how to keep making progress from there even if people aren't great at it. Inductivists, on the other hand, think they have a method from first principles that explains how people think, but actually it smuggles in all sorts of common sense and pre-existing ideas as unexamined, uncriticized premises. And that's a really bad idea. Those premises being smuggled in are good enough to start with, but what you really need to do is examine and criticize them!

I have not addressed how children/infants get started. I also haven't explained how thinking works at a lower level. (Being able to criticize and correct errors requires thinking. How is that done?) We can get to those next if what I'm saying so far goes over OK. Also, the very short answer for how thinking works is that evolution is the only known theory for how knowledge can be created from non-knowledge. Human thinking, at a low level, uses an evolutionary process to create knowledge. (I mean thinking literally uses evolution, not metaphorically. And no, I'm not saying you consciously do that.)
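
As a loose illustration of the variation-and-selection structure (my own toy sketch, not a model of actual cognition; the fixed target string is a big oversimplification, since real problem situations aren't known in advance):

    import random

    # Toy variation-and-selection loop (illustrative only). The fixed TARGET
    # stands in for a problem situation, and counting mismatches stands in
    # for criticism finding errors.
    TARGET = "knowledge evolves"
    LETTERS = "abcdefghijklmnopqrstuvwxyz "

    def errors(conjecture):
        return sum(a != b for a, b in zip(conjecture, TARGET))

    conjecture = "".join(random.choice(LETTERS) for _ in TARGET)
    while errors(conjecture) > 0:
        i = random.randrange(len(TARGET))
        variant = conjecture[:i] + random.choice(LETTERS) + conjecture[i + 1:]
        if errors(variant) <= errors(conjecture):  # selection: keep variants that survive criticism
            conjecture = variant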


Elliot Temple | Permalink | Messages (0)

Rand, Popper and Fallibility

I wrote this at an Objectivist forum in 2013.


http://rebirthofreason.com/Forum/Dissent/0261.shtml

Popper is by no means perfect. The important thing is the best interpretations (that we can think of) of his best ideas. The comment below about "animals" is a good example. I do not agree with his attitude to animals in general, and I'm uncomfortable with this statement. However, everything he said about animals (not much) can be removed from his epistemology without damaging the important parts.

Popper made some bad statements about epistemology, and some worse ones about politics. I don't think this should get in the way of learning from him. That said, I agree with Popper's main points below.

1) Can you show if Popper ever fully realized that the falsification of a universal positive proposition is a necessary truth? In other words, if a black swan is found, then the proposition "All swans are white" is falsified, but more than that, it is absolutely falsified (which is a form of absolute knowledge/absolute certainty)? Even if you can't, please discuss.

No, Popper denied this. The claim that we have found a black swan is fallible, as is our understanding of its implications.

Fallibility is not a problem in general. We can act on, live with, and use fallible knowledge. However, it does start to contradict you a lot when you say things like "absolute certainty".

Rand isn't fully clear about this. Atlas Shrugged:

"Do not say that you're afraid to trust your mind because you know so little. Are you safer in surrendering to mystics and discarding the little that you know? Live and act within the limit of your knowledge and keep expanding it to the limit of your life. Redeem your mind from the hockshops of authority. Accept the fact that you are not omniscient, but playing a zombie will not give you omniscience—that your mind is fallible, but becoming mindless will not make you infallible—that an error made on your own is safer than ten truths accepted on faith, because the first leaves you the means to correct it, but the second destroys your capacity to distinguish truth from error. In place of your dream of an omniscient automaton, accept the fact that any knowledge man acquires is acquired by his own will and effort, and that that is his distinction in the universe, that is his nature, his morality, his glory.

"Discard that unlimited license to evil which consists of claiming that man is imperfect. By what standard do you damn him when you claim it? Accept the fact that in the realm of morality nothing less than perfection will do. But perfection is not to be gauged by mystic commandments to practice the impossible [...]

Here Rand accepts fallibility and only rejects misuses like claiming man is "imperfect" to license evil. Man's imperfection is not an excuse for any evil -- agreed.

Rand has just acknowledged that man and his ideas and achievements are fallible. But then she decides to demand moral "perfection". Which must mean some sort of contextual, achievable perfection -- not the sort of infallible, omniscient perfection Popper rejects and Rand acknowledges as impossible.

It's the same when Rand talks about "certainty", which is really "contextual certainty", which is open to criticism, arguments, improvement, changing our mind, etc... (Revision is only allowed in new contexts, but every time anyone thinks of anything, or any time passes, the context has changed at least a little. So the new-context requirement doesn't cause trouble.)

2) Can you offer something to redeem Popper of seemingly damning quotes such as:

In so far as a scientific statement speaks about reality, it must be falsifiable: and in so far as it is not falsifiable, it does not speak about reality.

... which preemptively denies the possibility of axiomatic concepts (i.e., the possibility of statements that speak about reality, but are not, themselves, falsifiable).

Any statement which speaks about reality is potentially falsifiable (open to the possibility of criticism using empirical evidence) because, if it speaks about reality, then it runs the risk of being contradicted by reality.

Popper does deny axiomatic concepts, meaning infallible statements. Statements that you couldn't even try to argue with, potentially criticize, question, or improve on. All ideas should be open to the possibility of critical questioning and progress.

There is a big difference between open to refutation and refuted. What's wrong with keeping things open to the potential that, if someone has a new idea, we could learn better in the future?

"If realism is true, if we are animals trying to adjust ourselves to our environment, then our knowledge can be only the trial-and-error affair which I have depicted. If realism is true, our belief in the reality of the world, and in physical laws, cannot be demonstrable, or shown to be certain or 'reasonable' by any valid reasoning. In other words, if realism is right, we cannot expect or hope to have more than conjectural knowledge."

... which preemptively denies the possibility of arriving at a necessary truth about the world.

Conjectural knowledge (or trial-and-error knowledge) is Popper's term for fallible knowledge. It's objective, effective, connected to reality, etc, but not infallible. We improve it by identifying and correcting errors, so our knowledge makes progress.

We cannot establish our ideas are infallibly correct, or even that they are good or reasonable. Such claims (that some idea is good) never have authority. Rather, we accept them as long as we don't find any errors with them.

I think this is different than Objectivism, but correct. Well, sort of different. The following passage in ITOE could be read as something kind of like a defense of this Popperian position (and I think that is the correct reading).

One of Rand's themes here, in my words, is that fallibility doesn't invalidate knowledge.

The extent of today’s confusion about the nature of man’s conceptual faculty, is eloquently demonstrated by the following: it is precisely the “open-end” character of concepts, the essence of their cognitive function, that modern philosophers cite in their attempts to demonstrate that concepts have no cognitive validity. “When can we claim that we know what a concept stands for?” they clamor—and offer, as an example of man’s predicament, the fact that one may believe all swans to be white, then discover the existence of a black swan and thus find one’s concept invalidated.

This view implies the unadmitted presupposition that concepts are not a cognitive device of man’s type of consciousness, but a repository of closed, out-of-context omniscience—and that concepts refer, not to the existents of the external world, but to the frozen, arrested state of knowledge inside any given consciousness at any given moment. On such a premise, every advance of knowledge is a setback, a demonstration of man’s ignorance. For example, the savages knew that man possesses a head, a torso, two legs and two arms; when the scientists of the Renaissance began to dissect corpses and discovered the nature of man’s internal organs, they invalidated the savages’ concept “man”; when modern scientists discovered that man possesses internal glands, they invalidated the Renaissance concept “man,” etc.

Like a spoiled, disillusioned child, who had expected predigested capsules of automatic knowledge, a logical positivist stamps his foot at reality and cries that context, integration, mental effort and first-hand inquiry are too much to expect of him, that he rejects so demanding a method of cognition, and that he will manufacture his own “constructs” from now on. (This amounts, in effect, to the declaration: “Since the intrinsic has failed us, the subjective is our only alternative.”) The joke is on his listeners: it is this exponent of a primordial mystic’s craving for an effortless, rigid, automatic omniscience that modern men take for an advocate of a free-flowing, dynamic, progressive science.

One of the things that stands out to me in discussions like this is that all today's Objectivists seem (to me) more at odds with Popper than Rand's own writing is.

I'll close with one more relevant ITOE quote:

Man is neither infallible nor omniscient; if he were, a discipline such as epistemology—the theory of knowledge—would not be necessary nor possible: his knowledge would be automatic, unquestionable and total. But such is not man’s nature. Man is a being of volitional consciousness: beyond the level of percepts—a level inadequate to the cognitive requirements of his survival—man has to acquire knowledge by his own effort, which he may exercise or not, and by a process of reason, which he may apply correctly or not. Nature gives him no automatic guarantee of his mental efficacy; he is capable of error, of evasion, of psychological distortion. He needs a method of cognition, which he himself has to discover: he must discover how to use his rational faculty, how to validate his conclusions, how to distinguish truth from falsehood, how to set the criteria of what he may accept as knowledge. Two questions are involved in his every conclusion, conviction, decision, choice or claim: What do I know?—and: How do I know it?


Elliot Temple | Permalink | Messages (0)