How should we think about gradations of certainty in Critical Rationalist terms?
there are the following 3 situations regarding one single unambiguous problem. this list is complete.
1) you have zero candidate solutions that aren't refuted by criticism.
gradations of certainty won't help. you need to brainstorm!
2) you have exactly one candidate solution which is not refuted by criticism.
tentatively accept it. gradations of certainty won't help anything.
(if you don't want to tentatively accept it – e.g. b/c you think it'd be better to brainstorm and criticize more – then that is a criticism of accepting it at this time.)
3) you have more than one candidate solution which is not refuted by criticism.
this is where gradations of certainty are mainly meant to help. but they don't for several reasons. here are 6 points, 3A-3F:
3A) you can convert this situation (3) into situation (1) via a criticism like one of these 2:
3A1) none of the ideas under consideration are good enough to address their rivals.
3A2) none of these ideas under consideration tell me what to do right now given the unsettled dispute between them.
(if no criticisms along those lines apply, then that would mean some of the ideas you have solve your problem. they tell you what to do or think given the various ideas and criticism. in which case, do/think that. it's situation (2).)
3B) when it comes to taking action in life, you can and should come up with a single idea about what to do, which you have no criticism of, given the various unresolved issues.
3C) if you aren't going to take any actions related to the issue, then there's no harm in leaving it unresolved for now and not knowing the answer. you don't have to rate gradations of certainty, you can just say there are several candidates and you haven't sorted it out yet. you would only need to rank them, or otherwise decide which to pursue, if you were going to take some action in relation to the truth of this matter (in which case see 3B).
3D) anything you could use to rank one idea ahead of another (in terms of more gradations of certainty, more justification, more whatever kind of score) either does or doesn't involve a criticism.
if it doesn't involve a criticism of any kind, then why/how does it provide a reason to rank one uncriticized idea above another one (or add to the score of one over another)?
if it does involve a criticism, then the criticism should be addressed. criticisms are explanations of problems. addressing it requires conceptual thinking such as counter-arguments, explanations of why it's not a problem after all in this context, explanations of how to improve the idea to also address this criticism, etc. either you can address the criticism or you can't. if you can't that's a big deal! criticisms you see no way to address are show stoppers.
one doesn't ever have to act on or believe an idea one knows an unanswered criticism of. and one shouldn't.
also to make criticism more precise, you want to look at it like first you have:
- context (background knowledge, etc)
- idea proposed to solve that problem
then you criticize whether the idea solves the problem in the context. (i consider context implied as part of a problem, so i won't always mention it.)
if you have a reason the idea does not solve the problem, that's a show stopper. the idea doesn't work for what it's supposed to do. it doesn't solve the problem. if you don't have a criticism of the idea successfully solving the problem, then you don't have a criticism at all.
this differs from some loose ways to think about criticism which are often good enough. like you can point out a flaw, a thing you'd like to be better, without any particular problem in mind. then when you consider using the idea as a solution to some problem, in some context, you will find either the flaw does or doesn't prevent the idea from solving that problem.
in general, any flaw you point out ruins an idea as a solution to some problems and does not ruin it as a solution to some other problems.
3E) ranking or scoring anything using more than one variable is very problematic. it often means arbitrarily weighting the factors. this is a good article: http://www.newyorker.com/magazine/2011/02/14/the-order-of-things
3F) suppose you have a big pile of ideas. and then you get a list of criticisms. (it could be pointing out some ideas contradict some evidence. or whatever else). then you go through and check which ideas are refuted by at least one criticism, and which aren't. this does nothing to rank ideas or give gradations. it only divides ideas into two categories – refuted and not refuted. all the ideas in the non-refuted category were refuted by NONE of the criticism, so they all have equal status.
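a sketch of that two-category division in code (the ideas and criticisms here are made-up placeholders):

```python
# criticism divides a pile of ideas into exactly two categories:
# refuted and non-refuted. no ranks, no scores, no gradations.
ideas = ["idea A", "idea B", "idea C", "idea D"]

# each criticism refutes some ideas and not others
criticisms = [
    {"idea A", "idea C"},  # e.g. "contradicts some evidence"
    {"idea C"},            # e.g. "internally inconsistent"
]

refuted = [i for i in ideas if any(i in c for c in criticisms)]
non_refuted = [i for i in ideas if not any(i in c for c in criticisms)]

print(refuted)      # ['idea A', 'idea C']
print(non_refuted)  # ['idea B', 'idea D'] -- all have equal status
```

note that an idea refuted by 2 criticisms and an idea refuted by 1 land in the same category: refuted.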
i think what some people do is basically believe all their ideas are wrong, bad, refuted. and then they try to approach gradations of certainty by which ones are less wrong. e.g. one idea is refuted by 20 criticisms, and another idea is only refuted by 5 criticisms. so the one that's only refuted 5 times has a higher degree of certainty. this is a big mistake. we can do better. and also the way they count criticisms (with or without weighting how much each criticism counts) is arbitrary and fruitless.
something they should consider instead is forming a meta idea: "Idea A is refuted in like a TON of ways and seems really bad and show-stopping to me b/c... Idea B has some known flaws but i think there's a good shot they won't ruin everything, in regards to this specific use case, b/c... And all the other ideas I know of are even worse than A b/c... So i will use idea B for this specific task."
then consider this meta idea: do you have a criticism of it, yes or no? if no, great, you've got a non-refuted idea to proceed with. if you do have a criticism of this meta idea, you better look at what it is and think about what to do about it.
for a lot more info, see this post: http://curi.us/1595-rationally-resolving-conflicts-of-ideas
in reply to this comment from another thread: http://curi.us/comments/show/6836
(the above blog post also replies to it)
> what it means to experience certainty
bad anti-fallibilist concept, forget it
> but I doubt it's possible (or worthwhile) to discard the concept of certainty altogether.
DD, Popper and I do discard it.
> I haven’t found any reason to think Deutsch would disagree with Bayesian inference (which, as far as I’m aware, is neutral toward justificationism vs. fallibilism)
Bayesians are Justificationists.
Fallibilism and Justificationism are compatible. They aren't on the same spectrum. They're different kinds of things.
DD, Popper and I reject Bayesian epistemology. (This is separate from the mathematical formula, Bayes' Theorem, which is correct and is useful for e.g. thinking well about bags of colored marbles or populations with diseases and tests for whether you have the diseases.)
And not just have a few minor objections to Bayesian Epistemology (BE). Disagree on tons of stuff and consider BE very bad and unproductive.
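to illustrate the legitimate, non-epistemological use of Bayes' Theorem mentioned above, here's a disease-testing calculation. all the numbers are made up for illustration:

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
# applied to a population with a disease and a test for it.
# all numbers here are hypothetical.
p_disease = 0.01          # 1% of the population has the disease
p_pos_if_sick = 0.99      # test sensitivity
p_pos_if_healthy = 0.05   # false positive rate

# total probability of testing positive
p_pos = p_pos_if_sick * p_disease + p_pos_if_healthy * (1 - p_disease)

# chance you actually have the disease, given a positive test
p_sick_given_pos = p_pos_if_sick * p_disease / p_pos
print(round(p_sick_given_pos, 3))  # 0.167
```

this kind of calculation about frequencies in populations is fine. what's rejected is using that math as a theory of how knowledge grows.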
We also reject learning by *inference* in general.
there's lots but a few quick sources:
> rather than speaking about how the accumulation of evidence can contribute to the “certainty” that an explanation is true
that's because he entirely rejects the accumulation of evidence.
there is no positive evidence for ideas. there is no supporting evidence. none, period, ever.
> More specifically, how should we articulate the fact that we - I think, quite rationally - feel greater confidence in the former than the latter, even though we may accept both explanations simultaneously?
in general i suggest doing it vaguely and informally.
but why do you want to do that? if you specified a problem you want to solve (like you're trying to reach a verdict in order to accomplish something) then we could talk about how to solve it.
fundamentally if two ideas both solve problems and there are no criticisms, they have the same status. and fundamentally there's no way to know where our mistakes are without finding them – that'd be predicting the future growth of knowledge.
you can do things like say, "i just came up with this idea and didn't think about it much yet. it hasn't yet been exposed to all the standard criticism i expose my ideas to. so i have low confidence." here it's clear we aren't speaking about whether the idea is true or false, and instead are making meaningful statements about actual events, procedures, etc, and these statements can be useful for something (e.g. don't tell your boss, who is very busy and wants to minimize how much he has to read, until after you think it over more).
the stuff about betting is hard to discuss. i think most scientists don't understand evolution – and therefore don't actually accept it. and i think it's very hard to adjudicate bets about what lots of people believe. (people are very complex and difficult to survey accurately.)
but broadly i agree that you can place bets kinda like those by looking at things like what criticisms an idea has survived (and e.g. whether any important already-known strategies of criticism haven't been tried yet) and how much thought (from how good of people) has gone into refuting it. you'd also want to consider how much additional thought you estimate will go into trying to refute them during the bet period (the more-criticized idea may also be the one getting a lot more ongoing attempts at refutation). this will not be totally reliable – and you can't actually quantify how reliable it is – but you may choose to bet money on it, if you like.
> **2.) There is clearly such a thing as overselling an explanation.**
i agree. and the guy at the bar doesn't seem to have done much brainstorming of alternative explanations. if he did, he'd realize there are multiple common possibilities which are compatible with the evidence (the wink), plus some rarer possibilities. he ought to come up with a more nuanced idea about how life is complicated, he has incomplete evidence, various things are possible, some are more typical in our culture, and he should try to find a way forward that won't be a disaster (e.g. sexual harassment) in any common scenario and also will allow the hookup to happen if she's into it. (i'm assuming he wants the hookup). even better if he can handle it in such a way that even if she wasn't into him, she may become into him. so he can do things like speak charismatically and check for several more indicators of interest and then escalate in a non-threatening way.
> **3.) We are in a better position now to say that our best available explanations are indeed the best available explanations than when we likewise accepted these explanations in the past.**
why? because when we criticize our ideas and they survive, we add to our stock of known criticisms. we keep track of those criticisms and try them out again on other ideas. this makes it harder to brainstorm new rival ideas which aren't already refuted by some existing already-known criticism.
the harder it is to brainstorm new ideas that aren't refuted by existing criticism, the more confident people start getting, and the higher quality they think the ideas are.
this is fine when being imprecise and if you know what it means. but you will run into problems if you try to replace the methods of epistemology (e.g. the stuff in the blog post above) with this. also it's not formally measurable, and the way people estimate it involves a lot of vague biases, preconceptions, unstated assumptions, background knowledge, etc
(btw looking at an increasing stock of criticism making it more and more difficult to brainstorm new ideas that aren't refuted by pre-existing criticism is my original idea. some of this other stuff is my refined versions too.)
Mistake in the above comment. It says fallibilism and justificationism are compatible. I think you meant to say they are NOT compatible.
Not a typo. Justifications don't have to be infallible to be justifications.
What curi means is that fallibilist philosophy is just the acceptance that there is no certainty. The difference between anti-justificationism and fallibilist justificationism is that fallibilist justificationism believes that we can confirm our hypotheses by gathering evidence. Anti-justificationism (i.e. justification scepticism) accepts that evidence does not confirm theories. The reason is that evidence is interpreted in the light of theories; it would be circular, then, to say that the evidence supports the theory that is being used to interpret it.
More generally, an argument itself is not used to establish its conclusion, but to show what follows from what, so it is easier to criticise. No premises can establish any conclusion, unless the premises themselves are established, but the premises, in order to be established, need to be the conclusions of other arguments with established premises.
"btw looking at an increasing stock of criticism making it more and more difficult to brainstorm new ideas that aren't refuted by pre-existing criticism is my original idea. some of this other stuff is my refined versions too."
Really, I thought it was pretty obvious that if a claim is true, then any idea in conflict with it is not true. Einstein's theory, in your eyes, is part of our stock of criticisms, and can be used to criticise technological ideas. This is called applied science.
that's not what i mean. a point i differ on: how can evidence+theories support any theory when it either does or doesn't refute every theory?
@#7444 i sympathize. once you understand philosophy well, lots of stuff that most people are massively confused about becomes kinda easy. philosophy is awesome like that.
I want to add to the above comment. Ideas that are in conflict with other ideas mean that a problem can be identified, namely that we have to devise a way to choose between the ideas. A criticism in the end is just an idea, or an explanation, that other ideas or explanations are in conflict with. So the force of pre-existing criticism making it harder to come up with ideas seems to me diminished. Each idea, unless it is exactly the same as an old idea, always brings something new to the table, which we then have to devise new criticisms of, from our existing stock of knowledge.
I was not claiming that it does. Your criticism of justification and mine are not in conflict. I was just highlighting it with my own criticism. You accept the difference I pointed out, between justificationist fallibilism and anti-justificationism. Why challenge a criticism you can't offer a criticism against, and only put forward an alternative criticism against the same thing?
you often don't need a new criticism.
for example, consider addition of two even numbers. a criticism of some solutions is the answer can't be odd. there are infinitely many new ideas to be found claiming stuff like 234230+78293434=5 and 23434+823940=172738941. but you don't need a new criticism to address them. the old one already applies.
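that one old criticism covering infinitely many new claims can be sketched as a single check (the helper function name is made up for illustration):

```python
# one pre-existing criticism -- "two even numbers can't sum to an odd
# number" -- refutes endless new claims without any new criticism.
def refuted_by_parity(a, b, claimed_sum):
    # the criticism only applies to claims about adding two even numbers
    return a % 2 == 0 and b % 2 == 0 and claimed_sum % 2 == 1

print(refuted_by_parity(23434, 823940, 172738941))  # True -- old criticism applies
print(refuted_by_parity(2, 2, 4))                   # False -- no parity objection
```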
it's the same with ideas. consider Rand's refutation of altruism. it also refutes, with no changes, many slight variants of altruism that don't change anything that Rand's criticism depends on.
and it's the same with Mises's economic calculation argument. it doesn't merely refute the one concept of socialism from the real world. it applies to many, many possible socialist systems that are each a little different but have the common characteristic of not being able to do economic planning b/c they don't have prices. we don't have to devise a new criticism based on the old argument when someone says "i've got an idea, it's like socialism, and there are no prices, BUT we shoot a few more of the rich people at the start than normal and also we steal ALL their candle holders to fund the government for a while". the econ calc arg still works.
Yes, I agree with what you are saying. But what I am trying to highlight is that each new invariant of socialism has some new content. That new content is not affected by the criticism of the previous one, so the new content itself could still be good, when you separate what is novel and what is not.
The novelty is not refuted just because some other bit is refuted.
Haha, invariant of socialism seems apt. But I mean to say variant.
> The novelty is not refuted just because some other bit is refuted.
the idea as a whole is refuted already.
if there's any value in the novel part, then make a new idea which includes it but isn't already refuted. **then** we'd need a new criticism (or maybe we'd like it).
and not every idea has meaningful new content. like consider variations on socialism which involve a transition period with a tax rate. and one says it's 30%, another says 30.00001%, another says 30.00002%, another says 30.00003%, etc. there's no real novelty there and no need for new criticism.
you don't mean the idea as a whole. You mean a theory is refuted, but even refuted theories have content that is true. Ideas are not fully formed theories; they are sometimes just statements like, "I want to make tea with cinnamon," etc.
Also, if the original criticism is in conflict with the new idea that is novel, we have to devise a way to choose between them; that does mean seeing whether the old idea stands up to the new criticism. A new socialist theory might have a novel aspect that actually shows the criticism to be flawed. It is not a given that the earlier criticism is definitely not mistaken.
the idea as a whole is refuted. take the whole thing. evaluate it. false!
you can get new ideas by breaking out some sub-parts of prior ideas as separate ideas, which may or may not be refuted. the sub-part has to stand on its own or it's refuted too. if it doesn't stand on its own, it could still be combined with some other stuff to potentially get a non-refuted idea.
> we have to devise a way to choose between them
we have a way. the criticism explains a flaw in the criticized idea.
for the new idea to win it'd have to be different so that criticism no longer applies. the flaw being explained is gone. it can do this in one of two ways. it can get rid of the criticized thing. or it can add a new thing to handle the criticism. an explanation of why X is wrong does not explain why X+Y is wrong if Y is a counter-criticism refuting the criticism or an explanation of why the criticism doesn't actually apply or something like that.
Let me pull back though. I agree that theories we accept can be used as critical ammunition against new ideas, and that we do build a stock of criticism. Criticisms are explanations, and as we develop more explanations of the world we can unify them. Our stock of criticism is not necessarily uniformly consistent. Our criticisms of theories are usually just evidence explained by other theories that we take to be true, and so a theory contains a lot of critical depth against other theories.
the sub-part has to stand on its own or it's refuted too
This is false. Singular statements that are the content of theories cannot be refuted in this way. If they are true, they are true. What you mean to say is that they are not independently testable. They are currently part of a false theory.
But consider:
all animals have four legs
Dogs are animals
Dogs have four legs.
The theory is refuted, but one of the premises and the conclusion remain true. They are not refuted.
any idea that doesn't stand on its own is refuted by the "that's incomplete" criticism. it doesn't work. it needs something else which is missing.
"Dogs are animals." does stand on its own, of course...
t can get rid of the criticized thing. or it can add a new thing to handle the criticism. an explanation of why X is wrong does not explain why X+Y is wrong if Y is a counter-criticism refuting the criticism or an explanation of why the criticism doesn't actually apply or something like that.
I said that in my response to you to highlight that we cannot take previous criticism as definitely true. We have to deal with each novel thing because it is unexpected. We have to think about it and evaluate it; we can't just dismiss it.
I agree it makes it harder for new ideas to survive.
I think we are arguing about something different now.
stop posting unmarked quotes.
quotes go like this:
> quoted text
any idea that doesn't stand on its own is refuted by the "that's incomplete" criticism. it doesn't work. it needs something else which is missing.
"Dogs are animals." does stand on its own, of course...
This is a very strange way of saying something is refuted. To say it's refuted usually means that the truth value is false. The truth value is not false, so it's not refuted. It stands in need of elaboration, but that is not a criticism of the statement.
Oops, sorry, I was not aware of the convention.
refuted includes false as well as ambiguous and incoherent stuff. e.g. "dogs are" without any question it's supposed to be an answer to isn't exactly false but it's refuted by the criticism that it doesn't make clear what problem it's supposed to address and how it addresses it. if you want to call it false i wouldn't mind.
I think this is irrelevant to what we are discussing. Which is: are statements that are part of refuted theories necessarily refuted? The answer to this question must be no.
I think I agree with the post.
There is something I'm unsure about though.
Suppose I want something (for the sake of argument, a meal).
I have options. I could get something cheap which isn't unpleasant, it will satisfy hunger and I wont dislike it. Say that costs $5.
Or I could get something nicer. I enjoy it more (like I give it 8/10 instead of 7/10). It costs $20.
Or I could get something super nice. It's the nicest food I could eat. It costs lots more.
Typically I'll choose the cheap option. I don't care about food enjoyment that much, as long as it's not unpleasant I'd prefer to save the money.
Sometimes I fancy something that I can only get with the nicer option. I pay more on that occasion. Then I go back to ordinary stuff.
I don't get the super costly stuff ever. I would prefer to save the money for other things.
I don't think this is weighting (I'm not thinking something like (niceness * X)-(cost * Y) = value and looking for the highest value result)
But I'm not sure what clearly sets this apart from weighting. It seems very similar.
Is it that, when weighting, I'm not forming preferences well?
Like, if weighting I'm doing some sum based on some parameters I make up, I set my preference as "get the best result from this made-up equation" rather than "this food will satisfy me, and the cost is not something I consider a problem".
you can weigh lots of things, e.g. people, literally. and you can make decisions that take into account a weight in some way, e.g. dating skinny girls over fat girls.
the decisions themselves, e.g. to date a particular girl or not, are yes/no affairs. they never involve weighting the truth of an idea or your certainty it's true. there's no weights in epistemology. there are weights of people, in pounds. you can score/weigh colleges by distance from your parents in miles, too, but that isn't a weight or score for an idea or for your certainty of an idea.
Ah, I think I get it better.
It's not about how to make choices, but whether an idea is true or not.
So you could choose to make a decision with some weighting calculation. I doubt it would be a great method, but it could be done. This wouldn't be rating the highest-scoring girl as eg 80% the best girl, she either would be or wouldn't be. Her score either is or isn't higher than the others.
You couldn't even say "this girl is 80% of the best possible girl ever" because you don't know all possible girls yet. You don't know where 100% is.
However the idea "I'm going to date that girl" is an absolute, as is "this is my weighting system" and "I'm going to date the girl who gets the highest score in this system".
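a sketch of that separation: the weighting is just one possible decision procedure, and what comes out of it is a yes/no choice, not a degree of certainty (all the names and numbers below are made up):

```python
# a weighted score can be part of a decision procedure, but the
# conclusion -- pick this option or don't -- is a yes/no matter.
# the score is never a gradation of certainty about an idea's truth.
options = {
    "cheap meal": {"enjoyment": 7, "cost": 5},
    "nicer meal": {"enjoyment": 8, "cost": 20},
}

def score(meal):
    # arbitrary made-up weights, as discussed above
    return meal["enjoyment"] * 10 - meal["cost"]

best = max(options, key=lambda name: score(options[name]))
print(best)  # cheap meal -- a binary verdict, not "80% certain"
```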
> e.g. one idea is refuted by 20 criticisms, and another idea is only refuted by 5 criticisms
And thinking about this part in this example. If an idea of dating one girl has 20 crits, and an idea of another girl has 5 crits, they're both wrong.
Find a girl where the idea of dating her has 0 crits, or find out why the crits are wrong, or don't date, or you'll end up suffering from doing something you think is wrong.