Aubrey de Grey Discussion, 10

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
I wouldn't draw a distinction there. If you don't know more criticisms, and resolved all the conflicts of ideas you know about, you're done, you resolved things. Whether you could potentially create more criticisms doesn't change that.
OK, of everything you’ve said so far that is the one that I find least able to accept. Thinking of things takes time - you aren’t disputing that. So, if at a given instant I have resolved all the conflicts I know about, but some of what I now think is really really new and I know I haven’t tried to refute it, how on earth can I be “done”?
As you say, you already know that you should make some effort to think critically about new ideas. So, you already have an idea that conflicts with the idea to declare yourself done immediately.

If you know a reason not to do something, that's an idea that conflicts with it.
That’s precisely what I previously called switching one’s brain off. Until one has given one’s brain a reasonable amount of time to come up with a refutation of a new concept, the debate is abundantly ongoing.

You make a good point about the cryonics example being sub-optimal because I’m the defender and you’re the critic. So, OK, let’s do as you suggest and switch (for now) to a topic where you’re the defender and I’m the critic. There is a readily available one: your approach to the formation of conclusions.
I see some problems with this choice:

Using an epistemology discussion as an example for itself adds complexity.

Using a topic where we disagree mixes demonstrating answering criticism with trying to persuade you.

Using a complex and large topic is harder.

I still will criticize justificationism because you still think it can create knowledge.



If I were to pick, I'd look for a simpler topic where we agree. For example, we both believe that death from aging and illness is bad. If SENS or cryonics succeeded, that would be a good thing not a bad thing.

I wonder if you think there are criticisms of this position which you don't have a refutation of? Some things you had to gloss over as "weak" arguments, rather than answer?

The idea that grass cures the common cold – or that this is a promising lead which should be studied in the near term – would also work. You gave an initial argument on this topic, but I replied criticizing it. You didn't then demonstrate your claimed ability to keep up arguments for a bad position indefinitely.
(Does it have a name?
Popper named it Critical Rationalism (CR).
- presumably something better than non-justificationism? I’m going to call it Elliotism for now, and my contrary position Aubreyism, since I have a feeling we’re both adopting positions that are special cases of whatever isms might already have been coined.) Let’s evaluate the validity of Elliotism using Elliotism.
What do you mean by "validity"? I'm guessing you mean justification.

To evaluate CR with CR, you would have to look at it with its own concepts like non-refutedness.
The present state of affairs is that I view Elliotism as incorrect - I think justificationism is flawed in an ideal world with infinite resources (especially time) but is all we have in the real world, whereas (as I understand it) Elliotism says that justificationism can be avoided and a purely boolean approach to refutation adopted, even in a resource-constrained world.
Yes, but, I think you've rejected or not understood important criticism of justificationism. You've tried to concede some points while not accepting their conclusions. So to clarify:

Justificationism is not a flawed but somewhat useful approach. It literally doesn't and can't create knowledge. All progress in all fields has come from other things.

Justificationists always sneak some ad hoc, poorly specified, unstated-and-hidden-from-criticism version of CR into their thinking, which is why they are able to think at all.

This is what you were doing when you clarified that you meant Aubreyism step 1 to include creative and critical thinking.

So what you really do is some CR, then sometimes stop and ignore some criticisms. The justificationism in the remaining steps is an excuse that hides what's going on, but contributes no value.

Some more on this at the end.
I’ve articulated some rebuttals of Elliotism, and you’ve articulated a series of rebuttals of my rebuttals, but I’m finding them increasingly weak
"weak" is too vague to be answerable
- I’m no longer seeing them as reaching my threshold of “meaningful” (i.e. requiring a new rebuttal).
This is too vague to be answerable. What's the threshold, and which arguments don't meet it?
Rather, they seem only to reveal confusion on your part, such as eliding the difference between resolving a conflict of ideas and resolving a conflict of personalities, or ignoring what one knows
What who knows? I have not been ignoring things I know, so I'm unclear on what you're trying to get at.
about the time it typically takes to generate a rebuttal when there is one out there to be generated. I’ve mentioned these problems with Elliotism and I’m not satisfied with your replies. Does that mean I should consider the discussion to be over? Not according to Elliotism, because in your view you are still coming up with abundantly meaningful rebuttals of my rebuttals, i.e. we’re nowhere near a win/win. But according to Aubreyism, I probably should, soon anyway, because I’ve given you a fair chance to come up with rebuttals that I find to be meaningful and you’ve tried and failed.
I don't know, specifically, what you're unsatisfied with.

It could help to focus on one criticism you think you're right about, and clarify what the problem is and why you think my reply doesn't solve it. Then go back and forth about it.


You mention two issues but without stating the criticism you believe is unanswered. This doesn't allow me to answer the issues.

1) You mention time for rebuttal creation. We discussed this. But at this point, I don't know what you think the problem is, how it refutes CR, and what was unsatisfactory about my explanations on the topic.

2) You mention the difference between conflicts of ideas and personalities. But I don't know what the criticism is.

Personalities consist of ideas, so in that sense there is no difference. I don't know what you would say about this – agree or disagree, and then reach what conclusion about CR.

But that's a literal answer which may be irrelevant.

I'm guessing your intended point is about the difference between getting people not to fight vs. actually making progress in a field like science. These are indeed very different. I'm aware of that and I don't know why you think it poses a problem for CR. With CR as with anything else, large breakthroughs aren't made at all times in every discussion. So what? The claim I've made is the possibility of acting only on non-refuted ideas.
Oh dear - we seem to have a bistable situation. Elliotism is valid if evaluated according to Elliotism, but Aubreyism is valid if evaluated according to Aubreyism. How are we supposed to get out of that?
One approach is looking at real world results. What methods were behind things we all agree were substantial knowledge creation? Popper has done some analysis of examples from the history of science.


Another approach is to ask a hard epistemology question like, "How can knowledge be created?" Then see how well the different proposed epistemologies deal with it.

CR has an answer to this, but justificationism doesn't.

CR's answer is that guesses and criticism works because it's evolution, complete with replication, variation and selection. How and why evolution is able to create knowledge is well understood and has books like The Selfish Gene about it, as well as being covered well in DD's books.
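
(As an aside not in the original emails, the replication-variation-selection structure can be sketched in a few lines of code. Everything here is a hypothetical stand-in: the "ideas" are numbers, "variation" is random perturbation, and "criticism" refutes a candidate when a rival in view does clearly better.)

```python
import random

def evolve(seed, vary, refuted, rounds=200, offspring=10):
    """Toy guess-and-criticize loop with the structure of evolution:
    replication (copying ideas), variation (vary), selection (refuted)."""
    pool = [seed]
    for _ in range(rounds):
        # Replication with variation: altered copies of current ideas.
        candidates = pool + [vary(random.choice(pool)) for _ in range(offspring)]
        # Selection: criticism eliminates refuted candidates.
        survivors = [c for c in candidates if not refuted(c, candidates)]
        if survivors:
            pool = survivors
    return pool

# Hypothetical stand-ins: "ideas" are numbers approximating an unknown target,
# and a candidate is refuted if some rival in view is clearly closer.
target = 42.0
vary = lambda x: x + random.uniform(-3, 3)
refuted = lambda c, rivals: any(abs(r - target) + 1.0 < abs(c - target) for r in rivals)

print(round(evolve(0.0, vary, refuted)[0], 1))  # ends up near 42.0
```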

Justificationism claims to be an epistemology method capable of creating knowledge. It therefore ought to either explain

1) how it's evolution

or

2) what other way knowledge can be created, besides evolution, and how it uses that

If you can't do this, you should reject justificationism. Not as an imperfect but pragmatic approach, but as being completely ineffective and useless at creating any knowledge.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 11

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
I wouldn't draw a distinction there. If you don't know more criticisms, and resolved all the conflicts of ideas you know about, you're done, you resolved things. Whether you could potentially create more criticisms doesn't change that.
OK, of everything you’ve said so far that is the one that I find least able to accept. Thinking of things takes time - you aren’t disputing that. So, if at a given instant I have resolved all the conflicts I know about, but some of what I now think is really really new and I know I haven’t tried to refute it, how on earth can I be “done”?
As you say, you already know that you should make some effort to think critically about new ideas. So, you already have an idea that conflicts with the idea to declare yourself done immediately.

If you know a reason not to do something, that's an idea that conflicts with it.
Ah, but hang on: what do I actually know, there? You’re trying to make it sound boolean by referring to “some” effort, but actually the question is how much effort.
The question is, "Have I done enough effort? Should I do more effort or stop now?" That is a boolean question.

Just mentioning a quantity in some way doesn't contradict CR.
What I know is my past experience of how long it typically took to come up with a refutation of an idea that (before I tried refuting it) felt about as solid as the one I'm currently considering feels. That’s correlation, plain and simple. I’m solely going on my hunch of how solid what I already know feels, or conversely how likely it is that if I put in a certain amount of time trying to refute what I think I will succeed. So it’s quantitative. I can never claim I’m “done” until I’ve put in what I feel is enough effort that putting in a lot more would still not bring forth a rebuttal. And that estimated amount of effort again comes from extrapolation from my past experience of how fast I come up with rebuttals.

To me, the above is so obvious a rebuttal
I think your rebuttal relies on CR being incompatible with dealing with any sort of quantity – a misconception I wasn't able to predict. Otherwise why would a statement of your approach be a rebuttal to CR?

It's specifically quantities of justification – of goodness of ideas – that CR is incompatible with.
of what you said that it makes no sense that you would not have come up with it yourself in the time it took you to write the email. That’s what I meant about your answers getting increasingly weak.
We have different worldviews, and this makes it hard to predict what you'll say. It's especially hard to predict replies I consider false. I could try to preemptively answer more things, but some won't be what you would have said, and longer emails have disadvantages.
I mean that it’s becoming easier and easier to come up with refutations of what you’re saying, and it seems to me that it’s becoming harder and harder for you to refute what I say - not that you’re finding it harder, but that the refutations you're giving are increasingly fragile. To my ear, they’re rapidly approaching the “that’s dumb, I disagree” level. And I don’t know what situation there would be that would make them sound like that to you too. You said earlier on that "It's hard to keep up meaningful criticism for long” and I said "That’s absolutely not my experience” - this is what I meant.
Justificationists always sneak some ad hoc, poorly specified, unstated-and-hidden-from-criticism version of CR into their thinking, which is why they are able to think at all.
This is what you were doing when you clarified that you meant Aubreyism step 1 to include creative and critical thinking.
Yes, absolutely. I don’t think I know what pure justificationism is, but for sure I agree (as I have since the start of our exchange) that CR is a better way to proceed than just by hunches and correlations.

Proceed by which correlations? Why those instead of other ones? How do you get from "X correlates with Y [in Z context]" to "I will decide A over B or C [in context D]"? Are any explanations involved? I don't know the specifics of your approach to correlations.

We've discussed correlations some, but our perspectives on the matter are so different that it wasn't easy to create full mutual understanding. It'll take some more discussion. More on this below.
Thus, indeed Aubreyism is a hybrid between the two - it uses CR as a way to make decisions, but with a triage mechanism so that those decisions can be made in acceptable time. I’m fine with the idea that the triage part contributes no value in and of itself, because what it does do, instead, is allow the value from the CR part to manifest itself in real-world actions in a timely fashion.
Situation: you have 10 ideas, eliminate 5-8 with some CR tools, and run out of time to ponder.

You propose deciding between the remaining ideas with hunches. You say this is good because it's timely. You say the resulting value comes from CR + timeliness.

Why not roll dice to decide between those remaining ideas? That would be some CR, and timely. Do you think that's an equally good approach? Perhaps better because it eliminates bias.

I suspect you'll be unwilling to switch to dice. Meaning you believe the hunches have value other than timeliness. Contrary to your comments above.

What do you think?
More generally, going back to my assertion that you do in fact make decisions in just the same way I do, I claim that this subjective, quantitative, non-value-adding evaluation of how different two conflicting positions feel in their solidity, and thus of how much effort one should put into further rebutting each of them, is an absolutely unavoidable aspect of applying CR in a timely fashion.
In my view, I explained how CR can finish in time. At this point, I don't know clearly and specifically why you think that method doesn't work, and I'm not convinced you understand the method well enough to evaluate. Last email, I pointed out that some of your comments are too vague to be answerable. You didn't elaborate on those points.

Bigger picture, let's try to get some perspective.

Epistemology is COMPLEX. Communication between different perspectives is VERY HARD.

When people have very different ideas, misunderstandings happen constantly, and patient back-and-forth is needed to correct them. Things that are obvious in one perspective will need a lot of clarification to communicate to another perspective. An especially open minded and tolerant approach is needed.

We are doing well at this. We should be pleased. We've gotten somewhere. Most people attempting similar things fail spectacularly.

You understand where I'm coming from better now, and vice versa. We know outlines of each other's positions. And we have a much more specific idea of what we do and don't agree about. We've discovered timely CR is a key issue.

People get used to talking to similar people and expect conversations to proceed rapidly. Less has to be communicated, because only differences require much communication. People often omit some details, but the other guy with many shared premises fills in the blanks similarly. People also commonly gloss over disagreements to be polite.

So people often experience communication as easy. Then when it isn't, they can get frustrated and give up in the face of misunderstandings and disagreements.

And justificationism is super popular, so epistemology conversations often seem to go smoothly. Similar to how most regular people would smoothly agree with each other that death from aging is good. Then when confronted with SENS, problems start coming up in the discussion and they don't have the skills to deal with those problems.

Talking to people who think differently is valuable. Everyone has some blind spots and other mistakes, and similar people will share some of the same weaknesses. A different person, even if worse than you, could lack some of your weaknesses. Trading ideas between people with different perspectives is valuable. It's a little like comparative advantage from economics.

But the more different someone is, the more difficult communication is. Attitudes to discussion have to be adjusted.

We should be pleased to have a significant amount of successful communication already. But the initial differences were large. There's still a lot of room to understand each other better.

I think you haven't discussed some details so far (including literally not replying to some points) – and then are reaching tentative conclusions about them without full communication. That's fine for initial communication to get your viewpoint across. It works as a kind of feeling out stage. But you shouldn't expect too much from that method.

If you want to reach agreement, or understand CR more, we'll have to get into some of those details. We now have a better framework to do that.

So if you're interested, I think we may be able to focus the discussion much more, now that we have more of an outline established. To start with:

Do you think you have an argument that makes timely CR LITERALLY IMPOSSIBLE, in general, for some category of situations? Just a yes or no is fine.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 12

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
Just mentioning a quantity in some way doesn't contradict CR.
Fully agreed - but:
The question is, "Have I done enough effort? Should I do more effort or stop now?" That is a boolean question.
Not really, because the answer is a continuum. If X effort is not enough and X+Y effort is enough, then maybe X+Y/2 effort is enough and maybe it isn’t. And, oh dear, one can continue that binary chop forever, which takes infinite time because each step takes finite time. I claim there’s no way to short-circuit that that uses only yes/no questions.
"Is infinite precision useful here? yes/no."

"Is one decimal enough precision for solving the problem we're trying to solve? yes/no"

You don't have to use only yes/no questions, but they play a key role. After these two above, you might use some method to figure out the answer to adequate precision. Then there'd be some more yes/no questions:

"Was that method we used a correct method to use here?"

"Is this answer we got actually the answer that method should arrive at, or did we follow the method wrong?"

"Have we now gotten one answer we're happy with and have no criticism of? Can we, therefore, proceed with it?"
Plus, in the real world, at some point in that process one will in fact decide either that both the insufficiency of X and the sufficiency of X+Y are rebutted, or that neither of them is (which of the two depending on one’s standard for what constitutes a rebuttal) - which indeed terminates the binary chop, but not usefully for a pure-CR approach.
Rebuttals are useful because they have information about the topic of interest. What to do next would depend on what the rebuttals are. Typically they provide new leads. When they don't, that is itself notable and can even be thought of as a lead, e.g. one might learn, "This is much more mysterious than I previously thought, I'll have to look for a new way to approach it and use more precision" – which is a kind of lead.


The standard of a rebuttal, locally, is: does this flaw pointed out by criticism prevent the idea from solving the problem we're trying to solve? yes/no. If no, it's not a criticism IN CONTEXT of the problem being addressed.

But the full standard is much more complicated, because you may say, "Yes that idea will solve that problem. However it will cause these other problems, so don't do it." In other words, the context being considered may be expanded.
Why not roll dice to decide between those remaining ideas? That would be some CR, and timely. Do you think that's an equally good approach? Perhaps better because it eliminates bias.
Actually I’m fine with that (i.e., I recognise that the triage is functionally equivalent to that). In practice I only roll the dice when I think I’m sure enough that I know what the best answer is - so, roughly, I guess I would want to be rolling three dice and going one way if all of them come up six and the other way otherwise - but that’s still dice-rolling.
There's a big perspective gap here.

I had in mind rolling dice with equal probability for each result.

If all you do is partial CR and have two non-refuted options, then they have equal status and should be given equal probability.

When you talk about amounts of sureness, you are introducing something that is neither CR nor dice rolling.

Also, if you felt 95% sure that X was a better approach than Y – perhaps a lot better – would you really want to roll dice and risk having to do Y, against your better judgment? That doesn't make sense to me.
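
(A small arithmetic aside: the three-dice rule mentioned above gives the favored option about 99.5% of the weight, which is very different from an equal-probability roll between two non-refuted options. A quick sketch of the numbers, for illustration only:)

```python
from fractions import Fraction

# Equal-probability choice between two non-refuted options: each gets 1/2.
equal_split = Fraction(1, 2)

# The three-dice rule described above: take the disfavored option only on
# a triple six, otherwise take the favored one.
triple_six = Fraction(1, 6) ** 3       # 1/216, roughly 0.5%
favored = 1 - triple_six               # 215/216, roughly 99.5%

print(equal_split, triple_six, favored)  # 1/2 1/216 215/216
```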

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 13

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
So here’s an interesting example of what I mean. I woke up this morning and realised that there is indeed a rather strong refutation of my binary chop argument below, namely “don’t bother, just use X+Y - one doesn’t need to take exactly the minimum amount of time needed, only enough”.
I object to the concept of a "strong refutation". I don't think there are degrees or quantities of refutation.

A reason "strong refutation" seems to make sense is because of something else. Often what we care about is a set of similar ideas, not a single idea. A refutation can binary refute some ideas in a set, and not others. In other words: criticisms that refute many variants of an idea along with it seem "strong".

People have some ability to guess whether it will be easy or hard to proceed by finding a workable close variant of the criticized idea. And they may not understand in detail what's going on, so it can seem like a hunch, and be referred to in terms of strong or weak criticism.

But:

  • Refuting more or fewer variant ideas is different than degrees of strength. Sometimes the differences matter.
  • Hunches only have value when actually there's some reasonable underlying process being done that someone doesn't know how to put into words. Like this. And it's better to know what's going on so one can know when it will fail, and try to improve one's approach.
  • People can only kinda estimate the prospects for CLOSE variants handling the criticism and continuing on similar to before. This gives NO indication of what may happen with less close variants.
  • This stuff is pretty misleading because either you're aware of a variant idea that isn't refuted, or you aren't. And you can't actually know in advance how well variants you aren't aware of will work.
But consider: yesterday I came up with the binary chop argument and it intuitively felt solid enough that I thought I’d spent enough time looking for refutations of it by the time I sent the email. I was wrong - and for sure I’ve been wrong in the same way many times in the past. But was I wrong to be sure enough of my argument to send the email? I’d say no. That’s because, as I understand your definition of a refutation, I can’t actually fix on a finite Y, because however large I choose Y to be I can always refute it by a pretty meaningful argument, namely by reference to past times when I (or indeed whole communities) have been wrong for a long time.
There are never any guarantees of being correct. Feeling sure is worthless, and no amount of that can make you less fallible.

We should actually basically expect all our ideas to be incorrect and one day be superseded. We're only at the BEGINNING of infinity.

The ways to deal with fallibilism are doing your best with your current knowledge (nothing special), and also specifically having methods of thinking which are designed to be very good at finding and correcting mistakes.

You've acknowledged your approach having some flaws, but think it's good enough anyway. That seems contrary to the spirit of mistake correction, which works best when every mistake found is taken very seriously.

I realize you also think something like one can't do better (so they aren't really flaws since better isn't achievable). That's a dangerous kind of claim though, and also important enough that if it was true and well understood, then there ought to be books and papers explaining it to everyone's satisfaction and addressing all the counter-arguments. (But those books and papers do not exist.)
Since we agreed some time ago that mathematical proofs are a field in which pure CR has a particularly good chance of being useful,
I consider CR equally useful in all fields. Substitute "CR" for "reason" in these sentences – which is my perspective – and you may see why.
I direct you to the example of the “Lion and Man” problem, which was incorrectly “solved” for 25 years. It seems to me that the existence of cases where people can be wrong for a long time constitutes a very powerful refutation of the practicality of pure CR, since it means one cannot refute the argument that there is a refutation one hasn’t yet thought of. Thus, we can only answer “yes stop now” in finite time to “Have I done enough effort? Should I do more effort or stop now?” if we’ve already made a quantitative (non-boolean), and indeed subjective and arbitrary, decision as to how much risk we’re willing to take that there is such a refutation.
The possibility of being mistaken is not an argument to consider thinking about an issue indefinitely and never act. And the risk of being mistaken, and consequences, are basically always unknown.

What one needs to do is come up with a method of allocating time, with an explanation of how it works and WHY it's good, and some understanding of what it should accomplish. Then one can watch out for problems, keep an ear open for better approaches known to others, and in either case consider changes to one's method.

This is a general CR approach: do something with no proof it will work, no solidity, no feeling of confidence (or if you do feel confidence, it doesn't matter, ignore it). Instead, watch out for problems, and deal with them as they are found.


And here is a different answer: You cannot mitigate all the infinite risks that are logically possible. You can't do anything about the "anything is possible" risk, or the general risks inherent in fallibility. What you can do is think of specific categories of risks, and methods to mitigate those categories. Then because you're dealing with a known risk category, and known mitigation methods – not the infinite unknown – you can have some understanding of how big the downsides involved are and the effectiveness of time spent on mitigation. Then, considering other things you could work on, you can make resource allocation decisions.

It's only partially understood risks that can be mitigated against, and it's that partial understanding that allows judging what mitigation is worthwhile.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 14

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
If all you do is partial CR and have two non-refuted options, then they have equal status and should be given equal probability.

When you talk about amounts of sureness, you are introducing something that is neither CR nor dice rolling.
I think you answer this with this:
A reason "strong refutation" seems to make sense is because of something else. Often what we care about is a set of similar ideas, not a single idea. A refutation can binary refute some ideas in a set, and not others. In other words: criticisms that refute many variants of an idea along with it seem "strong”.
That’s basically what I do. I agree with all you go on to say about closeness of variants etc, but I see exploration of variants (and choice of how much to explore variants) as coming down to a sequence of dice-rolls (or, well, coin-flips, since we’re discussing binary choices).
I don't know what this means. I don't think you mean you judge which variants are true, individually, by coin flip.

Maybe the context is only variants you don't have a criticism of. But if several won their coin flips, but are incompatible, then what? So I'm not clear on what you're saying to do.


Also, are you saying that amount of sureness, or claims criticisms are strong or weak (you quote me explaining how what matters is which set of ideas a criticism does or doesn't refute), play no role in what you do? Only CR + randomness?
Also, if you felt 95% sure that X was a better approach than Y – perhaps a lot better – would you really want to roll dice and risk having to do Y, against your better judgment? That doesn't make sense to me.
It makes sense if we remember that the choice I’m actually talking about is not between X and Y, but between X, Y and continuing to ruminate. If I’ve decided to stop ruminating because X feels sufficiently far ahead of Y in the wiseness stakes, then I could just have a policy of always going with X, but I could equally step back and acknowledge that curtailing the rumination constitutes dice-rolling by proxy and just go ahead and do the actual dice-roll so as to feel more honest about my process. I think that makes fine sense.
I think you're talking about rolling dice meaning taking risks in life - which I have no objection to. Whereas I was talking about rolling dice specifically as a decision making procedure for making choices. And that was in context of making an argument which may not be worth looking up at this point, but there you have a clarification if you want.

To try to get at one of the important issues, when and why would you assign X a higher percent (aka strength, plausibility, justification, etc) than Y or than ruminating more? Why would the percents ever be unequal? I say either you have a criticism of an option (so don't do that option), or you don't (so don't raise or lower any percents from neutral). What specifically is it that you think lets you usefully and correctly raise and lower percents for ideas in your decision making process?

I think your answer is you judge positive arguments (and criticisms) in a non-binary way by how "solid" arguments are. These solidity judgments are made arbitrarily, and combined into an overall score arbitrarily. Your defense of arbitrariness, rather than clearly explained methods, is that better isn't possible. If that's right, can you indicate specifically what aspects of CR you consider sometimes impossible, in what kinds of situations, and why it's impossible?

(Most of the time you used the word "subjective" rather than "arbitrary". If you think there's some big difference, please explain. What I see is a clear departure from objectivity, rationality and CR.)
The ways to deal with fallibilism
Do you mean something different here than “fallibility”?
I meant fallibilism, but now that you point it out I agree "fallibility" is a clearer word choice.
are doing your best with your current knowledge (nothing special), and also specifically having methods of thinking which are designed to be very good at finding and correcting mistakes.
Sure - and that’s what I claim I do (and also what I claim you in fact do, even though you don’t think you do).
I do claim to do this. Do you think it's somehow incompatible with CR?

I do have some different ideas than you about what it entails. E.g. I think that it never entails acting on a refuted idea (refuted in the actor's current understanding). And never entails acting on one idea over another merely because of an arbitrary feeling that that idea is better.
You've acknowledged your approach having some flaws, but think it's good enough anyway. That seems contrary to the spirit of mistake correction, which works best when every mistake found is taken very seriously.
Oh no, not at all - my engagement in this discussion is precisely to test my belief that my approach is good enough.
Yes, but you're arguing for the acceptance of those flaws as good enough.
I realize you also think something like one can't do better (so they aren't really flaws since better isn't achievable). That's a dangerous kind of claim though, and also important enough that if it was true and well understood, then there ought to be books and papers explaining it to everyone's satisfaction and addressing all the counter-arguments. (But those books and papers do not exist.)
Not really, because hardly anyone thinks what you think. If CR were a widely-held position, there would indeed be such books and papers, but as far as I understand it CR is held only by you, Deutsch and Popper (I restrict myself, of course, to people who have written anything on the topic for public consumption), and Popper’s adherence to it is not widely recognised. Am I wrong about that?
I think wrong. Popper is widely recognized as advocating CR, a term he coined. And there are other Critical Rationalists, for example:

http://www.amazon.com/Critical-Rationalism-Metaphysics-Science-Philosophy/dp/0792329600

This two volume CR book has essays by maybe 40 people.

CR is fairly well known among scientists. Examples of familiar, CR-friendly figures include Feynman, Wheeler, Einstein, and Medawar.

And there's other people like Alan Forrester ( http://conjecturesandrefutations.com ).

I in no way think that ideas should get hearings according to how many famous or academic people think they deserve hearings. But CR would pass that test.


I wonder if you're being thrown off because what I'm discussing includes some refinements to CR? If the replies to CR addressed it as Popper originally wrote it, that would be understandable.

But there are no quality criticisms of unmodified-CR (except by its advocates who wish to refine it). There's a total lack of any reasonable literature addressing Popper's epistemology by his opponents, and meanwhile people carry on with ideas contradicting what Popper explained.

I wonder also if you're overestimating the differences between unmodified CR and what I've been explaining. They're tiny if you use the differences between CR and Justificationism as a baseline. Like how the difference between Mac and Windows is tiny compared to the difference between a computer and a lightbulb.


Even if Popper didn't exist, any known flaws to be accepted with Justificationism ought to be carefully documented by people in the field. They should write clear explanations about why they think better is impossible in those cases, and why not to do research trying for better since it's bound to fail in ways they already understand, and the precise limits for what we're stuck with, and how to mitigate the problems. I don't think anything good along these lines exists either.
Since we agreed some time ago that mathematical proofs are a field in which pure CR has a particularly good chance of being useful,
I consider CR equally useful in all fields. Substitute "CR" for "reason" in these sentences – which is my perspective – and you may see why.
Sorry, misunderstanding - what I meant was “Since mathematical proofs are a field in which I have less of a problem with a pure CR approach than with most fields, because expert consensus nearly always turns out to be rather rapidly achieved”
I don't think lack of expert consensus in a field is problematic for CR or somehow reduces the CR purity available to an individual.

There are lots of reasons expert consensus isn't reached. Because they don't use CR. Because they are more interested in promotions and reputation than truth. Because they're irrational. Because they are judging the situation with different evidence and ideas, and it's not worth the transaction costs to share everything so they can agree, since there's no pressing need for them to agree.

What's the problem for CR with consensus-low fields?
This is a general CR approach: do something with no proof it will work, no solidity, no feeling of confidence (or if you do feel confidence, it doesn't matter, ignore it). Instead, watch out for problems, and deal with them as they are found.
Again, I can’t discern any difference in practice between that and what I already do.
Can you discern a difference between it and what most people do or say they do?
I don’t think our disparate conclusions with regard to the merits of signing up with Alcor arise from you doing the above and me doing something different; I think they arise from our having different criteria for what constitutes a problem. And I don’t think this method allows a determination of which criterion for what constitutes a problem is correct, because each justifies itself: by your criteria, your criteria are correct, and by mine, mine are. (I mentioned this bistability before; I’ve gone back to your answer - Sept 27 - and I don’t understand why it’s an answer.)
Criteria for what is a problem are themselves ideas which can be critically discussed.

Self-justifying ideas which block criticism from all routes are a general category of idea which can be (easily) criticized. They're bad because they block critical discussion, progress, and the possibility of correction if they're mistaken.
And here is a different answer: You cannot mitigate all the infinite risks that are logically possible. You can't do anything about the "anything is possible" risk, or the general risks inherent in fallibility. What you can do is think of specific categories of risks, and methods to mitigate those categories. Then because you're dealing with a known risk category, and known mitigation methods – not the infinite unknown – you can have some understanding of how big the downsides involved are and the effectiveness of time spent on mitigation. Then, considering other things you could work on, you can make resource allocation decisions.
Same answer - I maintain that that’s what I already do.
Do you maintain that what I've described is somehow not pure CR? The context I was addressing included e.g.:
It seems to me that the existence of cases where people can be wrong for a long time constitutes a very powerful refutation of the practicality of pure CR, since it means one cannot refute the argument that there is a refutation one hasn’t yet thought of.
You were presenting a criticism of CR, and when I talked about how to handle the issues, you've now said stuff along the lines of that's what you already do, indicating some agreement. Are you then withdrawing that criticism of CR? If so, do you think it's just you specifically who does CR (for this particular issue), or most people?

Or more precisely, the issue isn't really whether people do CR - everyone does. It's whether they *say* they do CR, whether they understand what they are doing, and whether they do it badly due to epistemological confusion.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 15

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
A reason "strong refutation" seems to make sense is because of something else. Often what we care about is a set of similar ideas, not a single idea. A refutation can binary refute some ideas in a set, and not others. In other words: criticisms that refute many variants of an idea along with it seem "strong”.
That’s basically what I do. I agree with all you go on to say about closeness of variants etc, but I see exploration of variants (and choice of how much to explore variants) as coming down to a sequence of dice-rolls (or, well, coin-flips, since we’re discussing binary choices).
I don't know what this means. I don't think you mean you judge which variants are true, individually, by coin flip.

Maybe the context is only variants you don't have a criticism of. But if several won their coin flips, but are incompatible, then what? So I'm not clear on what you're saying to do.

Also, are you saying that amount of sureness, or claims criticisms are strong or weak (you quote me explaining how what matters is which set of ideas a criticism does or doesn't refute), play no role in what you do? Only CR + randomness?
The coin flips are not to decide whether a given individual idea is true or false, they are to decide between pairs of ideas. So let’s say (for simplicity) that there are 2^N ideas, of which 90% are in one group of close variants and the other 10% are in a separate group of close variants. “Close”, here, simply means differing only in ways I don’t care about. Then I can do a knockout tournament to end up choosing a winning variant, and 90% of the time it will be in the first group. Since I don’t actually care about the features that distinguish the variants within either group, only the features that distinguish the groups, I’m done. In other words, the solidity of an idea is measured by how many close variants it has - let’s call it the “variant density” in its neighbourhood. In practice, there will typically be numerical quantities involved in the ideas, so there will be an infinite number of close variants in each group - but if I have a sense of the variant densities in the two regions then that’s no problem, because I don’t need to do the actual tournament.
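
(An illustrative sketch, not from the emails: simulating the knockout tournament just described, with hypothetical numbers. It only checks the 90/10 arithmetic; the questions about what counts as a variant come next.)

```python
import random

def knockout_winner(ideas):
    """Single-elimination tournament: each pairing is decided by a fair
    coin flip, repeated until one idea remains."""
    pool = list(ideas)
    while len(pool) > 1:
        random.shuffle(pool)
        next_round = [random.choice(pair) for pair in zip(pool[0::2], pool[1::2])]
        if len(pool) % 2:               # odd pool: the unpaired idea gets a bye
            next_round.append(pool[-1])
        pool = next_round
    return pool[0]

# Hypothetical setup: 2**10 ideas, about 90% close variants in group A and
# the rest in group B (rounded, since 90% of 2**N is never a whole number).
N = 2 ** 10
a_count = round(0.9 * N)
ideas = ["A"] * a_count + ["B"] * (N - a_count)

trials = 2000
wins = sum(knockout_winner(ideas) == "A" for _ in range(trials))
print(wins / trials)  # about 0.9: a group's win frequency tracks its variant count
```
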
OK, I get the rough idea, though I disagree with a lot of things here.

You are proposing a complex procedure, involving some tricky math. It looks to me like the kind of thing requiring, minimum, tens of thousands of words to explain how it works. And a lot of exposure to public criticism to fix some problems and refine, even if the main points are correct.

Perhaps, with a fuller explanation, I could see why Aubreyism is correct about this and change my mind. I have some reasons not to think so, but I do try to keep an open mind about explanations I haven't read yet, and I'd be willing to look at a longer version. Does one exist?

Some sample issues where I'd want more detail include (no need to answer these now):

  • Is the score the total variants anywhere, ignoring density, regions and neighborhoods? If so, why are those other things mentioned? If not, how is the score calculated?
  • Why are ideas with more variants better, more likely to be true, or something like that? And what is the Aubreyism thing to say there, and how does that concept work in detail?
  • The "regions" discussed are not regions of space. What are they, how are they defined, what are they made out of, how is distance defined in them, how do different regions connect together?
  • The coin flipping procedure wouldn't halt. So what good is it?
  • I can imagine skipping the coin flipping procedure because the probabilities will be equally distributed among the infinite ideas. But then the probabilities will all be infinitesimal. Dealing with those infinitesimals requires explanation.
  • I'm guessing the approach involves grouping together infinitesimals by region. This maybe relies on there being a finite number of regions of ideas involved, which is a premise requiring discussion. It's not obvious because we're looking at all ideas in some kind of idea-space, rather than only looking at the finite set of ideas people actually propose (as Elliotism and CR normally do).
  • When an idea has infinite variants, what infinity are we talking about? Is it in one-to-one correspondence with the integers, the reals, or what? Do all ideas with infinite variants have the same sort of infinity variants? Infinity is really tricky, and gets a lot worse when you're doing math or measurement, or trying to be precise in a way that depends on the detailed properties of infinity.
  • There are other ways to get infinite variants other than by varying numerical quantities. One of these approaches uses conjunctions – modify an idea by adding "and X". Does it matter if there are non-numerical ways to get infinite variants? Do they make a difference? Perhaps they are important to understanding the number and density of variants in a region?
  • Are there any cases where there's only finite variants of an idea? Does that matter?
  • You can't actually have 90% or 10% of 2^N and get a whole number. This won't harm the main ideas, but I think it's important to fix detail errors in one's epistemology (which I think you agree with: it's why you specified 2^N ideas, instead of saying even or leaving it unspecified).
  • Do ideas actually have different numbers of variants? Both for total number, and density. How does one know? How does one figure out total variant count, and density, for a particular idea?
  • How is the distance between two ideas determined? Or whatever is used for judging density.
  • What counts as a variant? In common discussion, we can make do with a loose idea of this. If I start with an idea and then think about a way to change it, that's a variant. This is especially fine when nothing much depends on what is a variant of what. But for measuring solidity, using a method which depends on what is a variant of what, we'll need a more precise meaning. One reason is that some variant construction methods will eventually construct ALL ideas, so everything will be regarded as a variant of everything else. (Example method: take ideas in English, vary by adding, removing or modifying one letter.) Addressing issues like this requires discussion.
  • Where does criticism factor into things?
  • What happens with ideas which we don't know about? Do we just proceed as if none of those exist, or is anything done about them?
  • Does one check his work to make sure he calculated his solidity measurements right? If so, for how long?
  • Is this procedure truth-seeking? Why or why not? Does it create knowledge? If so, how? Is it somehow equivalent to evolution, or not?
  • Why do people have disagreements? Is it exclusively because some people don't know how to measure idea solidity like this, because of calculation errors, and because of different ideas about what they care about?
  • One problem about closeness in terms of what people care about is circularity. Because this method is itself supposed to help people decide things like what to care about.
  • How does this fit with DD's arguments for ideas that are harder to vary? Your approach seems to favor ideas that are easier to vary, resulting in more variants.
  • I suspect there may be lots of variants of "a wizard did it". Is that a good idea? Am I counting its variants wrong? I admit I'm not really counting but just sorta wildly guessing because I don't think you or I know how to count variants.
That is only an offhand sampling of questions and issues. I could add more. And then create new lists questioning some of the answers as they were provided. Regarding what it takes to persuade me, this gives some indication of what kind of level of detail and completeness it takes. (Actually a lot of precision is lost in communication.)

Does this assessment of the situation make sense to you? That you're proposing a complex answer to a major epistemology problem, and there's dozens of questions about it that I'd want answers to. Note: not necessarily freshly written answers from you personally, if there is anything written by you or others at any time.

Do you think you know answers to every issue I listed? And if so, what do you think is the best way for me to learn those full answers? (Note: If for some answers you know where to look them up as needed, instead of always saving them in memory, that's fine.)

Or perhaps you'll explain to me there's a way to live with a bunch of unanswered questions – and a reason to want to. Or maybe something else I haven't thought of.
To try to get at one of the important issues, when and why would you assign X a higher percent (aka strength, plausibility, justification, etc) than Y or than ruminating more? Why would the percents ever be unequal? I say either you have a criticism of an option (so don't do that option), or you don't (so don't raise or lower any percents from neutral). What specifically is it that you think lets you usefully and correctly raise and lower percents for ideas in your decision making process?

I think your answer is you judge positive arguments (and criticisms) in a non-binary way by how "solid" arguments are. These solidity judgments are made arbitrarily, and combined into an overall score arbitrarily.
I think my clarification above of the role of “variant density” as a measure of solidity answers this, but let me know if it doesn’t.
I agree with linking issues. Measuring solidity (aka support aka justification) is a key issue that other things depend on.

It's also a good example issue for the discussion below about how I might be persuaded. If I was persuaded of a working measure of solidity, I'd have a great deal to reconsider.
Sure - and that’s what I claim I do (and also what I claim you in fact do, even though you don’t think you do).
I do claim to do this [quoted below]. Do you think it's somehow incompatible with CR?
On reflection, and especially given your further points below, I’d prefer to stick with Aubreyism and Elliotism rather than justificationism and CR, because I’m new to this field and inadequately clear as to precisely how the latter terms are defined, and because I think the positions we’re debating between are our own rather than other people’s.
OK, switching terminology.

Do you think
doing your best with your current knowledge (nothing special), and also specifically having methods of thinking which are designed to be very good at finding and correcting mistakes.
is incompatible with Elliotism? How?
OK - as above, let’s forget unmodified CR and also unmodified justificationism. I think we’ve established that my approach is not unmodified justificationism, but instead it is (something like) CR triaged by justificationism. I’m still getting the impression that your stated approach, whether or not it’s reeeeally close to CR, is unable to make decisions adequately rapidly for real life, and thus is not what you actually do in real life.
I don't know what to do with that impression.

Do you believe you have a reason Elliotism could not be timely in theory no matter what? Or only a reason Elliotism is not timely today because it's not developed enough and the current approach is flawed, but one day there might be a breakthrough insight so that it can be timely?

I think the timeliness thing is a second key issue. If I was persuaded Elliotism isn't or can't be timely, I'd have a lot to reconsider. But I'm pretty unclear on the specifics of your counter-arguments regarding timeliness.
What's the problem for CR with consensus-low fields?
Speed of decision-making. The faster CR leads to consensus in a given field, the less it needs to be triaged.
OK, I have a rough idea of what you mean. I don't think this is important to our main disagreements.
This is a general CR approach: do something with no proof it will work, no solidity, no feeling of confidence (or if you do feel confidence, it doesn't matter, ignore it). Instead, watch out for problems, and deal with them as they are found.
Again, I can’t discern any difference in practice between that and what I already do.
Can you discern a difference between it and what most people do or say they do?
Oh, sure - I think most people are a good deal more content than me to hold pairs of views that they recognise to be mutually incompatible.
What I was talking about above was an innocent-until-proven-guilty approach to ideas, which is found in both CR and Elliotism (without requiring infallible proof). You indicated agreement, but now bring up the issue of holding contradictory ideas, which I consider a different issue. I am unclear on whether you misunderstood what I was saying, consider these part of the same issue, or what.


Regarding holding contradictory ideas, do you have a clear limit? If I were to adopt Aubreyism, how would I decide which mutually incompatible views to keep or change? If the answer involves degrees of contentness, how do I calculate them?


Part of the Elliotism answer to this issue involves context. Whether ideas relevantly contradict each other is context dependent. Out of context contradictions aren't important. The important thing is to deal with relevant contradictions in one's current context. Put another way: deal with contradictions relevant to choices one makes.

Consider the contradicting ideas of quantum mechanics and general relativity. In a typical dinner-choosing context, neither of those ideas offers a meal suggestion. They both say essentially "no comment" in this context, which doesn't contradict. They aren't taking different sides in the dinner arbitration. I can get pizza for dinner without coming into conflict with either of those ideas.

On the other hand if there was a contradiction in context – basically meaning they are on disagreeing sides in an arbitration – then I'd address that with a win/win solution. Without such a solution, I could only proceed in a win/lose way and the loser would be part of me. And the loser would be chosen arbitrarily or irrationally (because if it weren't, then what was done would be a rational solution and we're back to win/win).

Understanding of context is one of the things which allows Elliotism to be timely. (A refutation of my understanding of context is another thing which would lead to me reconsidering a ton.)

If I were to change my mind and live by Aubreyism, I would require a detailed understanding of how to handle context under Aubreyism (for meals, contradictions, and everything else).
I don’t think our disparate conclusions with regard to the merits of signing up with Alcor arise from you doing the above and me doing something different; I think they arise from our having different criteria for what constitutes a problem. And I don’t think this method allows a determination of which criterion for what constitutes a problem is correct, because each justifies itself: by your criteria, your criteria are correct, and by mine, mine are. (I mentioned this bistability before; I’ve gone back to your answer - Sept 27 - and I don’t understand why it’s an answer.)
Criteria for what is a problem are themselves ideas which can be critically discussed.

Self-justifying ideas which block criticism from all routes are a general category of idea which can be (easily) criticized. They're bad because they block critical discussion, progress, and the possibility of correction if they're mistaken.
OK then: what theoretical sequence of events would conclude with you changing your mind about how you think decisions should be made, in favour of my view?
Starting at the end, I'd have to understand Aubreyism to my satisfaction, think it was right, think Elliotism and (unmodified) CR were both wrong. The exact details are hard to specify in advance because in the sequence of events I would change my mind about what criteria to use when deciding what ideas to favor. So I would not think Aubreyism has no known criticism, rather I'd understand and use Aubreyism's own criteria. And similarly I wouldn't be rejecting Elliotism or CR for having one outstanding criticism (taking into account context), but rather because of some reasons I learned from Aubreyism.

For that matter, I might not have to understand Aubreyism to my satisfaction. Maybe it'd teach me how to adopt ideas without understanding them to my current criteria of satisfaction. It could offer different criteria of satisfaction, but it could also offer a different approach.

So, disclaimer: the below discussion of persuasion contains Elliotist ideas. But if Elliotism is false, then I guess persuasion works some other way, which I don't know and can't speak to.


Starting more at the beginning, my ideas about Elliotism are broadly integrated into my thinking (meaning connected to other ideas). An example area where they are particularly tightly integrated is parenting and education. For ease of reference, my views are called TCS (Taking Children Seriously).

So I'd have to find out things like: if I rejected Elliotism, what views am I to adopt about parenting and education? Is Aubreyism somehow fully compatible with TCS (I don't think so)? Even if it was, I'd have to find out things like how to argue TCS in new ways using Aubreyism instead of Elliotism; there'd be changes.

To give you a sense of the integration, TCS has many essays which explicitly discuss Popper, (unmodified) CR, and Elliotism. A large part of the way TCS was created was applying CR ideas to parenting and education. And also, some TCS concepts played a significant role in creating Elliotism. In addition to TCS learning things from CR, CR can learn from TCS, resulting in a lot of the unmodified-CR/Elliotism differences.

If I'm to change my views on Elliotism and also on TCS, I'll also have to find out why the new views are moral, not immoral (or learn a new approach to morality). I'll have to find out why thousands of written TCS arguments are mistaken, and how far the mistakes go. (Small change in perspective and way of arguing basically saves all the old conclusions? Old conclusions have to be thrown out and recreated with Aubreyism? Somewhere in between?)

And when I try to change my thinking about TCS, I'll run into the fact that it's integrated with many other ideas, so will they have to change too? And they connect to yet more ideas.

So there's this tangled web of ideas. And this is just one area of integration, Elliotism and TCS. Elliotism is also integrated with my politics. And with my opinions of philosophy books. And with my approach to social life. All this could require reevaluation in light of changes to my epistemology.

How can something like this be approached?

It takes a lot of work (which I'm willing to do). One of the general facts of persuasion is that the person being persuaded has to do the large majority of the work. I'd have to persuade myself, with hints and help from you. That is the only way. You cannot make me change my mind, or do most of the work for me.

Though, again, this is an Elliotist view which might not be applicable if you refuted Elliotism. Maybe you can tell me a different way.

(Tangentially, you may note here some incompatibilities between this perspective and how school teachers approach education.)

Another consequence of this integration is that if you persuaded me I was wrong about politics, that could pose a problem for Elliotism. I'd have to figure out where the mistakes were and their full consequences, and that process might involve rejecting Elliotism. If I decide a political idea is false, and there's a chain of ideas from it to an Elliotism idea (which there is), then I'll have to find a mistake in that chain or else rethink part of Elliotism (which is itself linked with the rest of Elliotism and more, posing similar problems). So it could be possible to change my mind about Elliotism without ever discussing it.

Integration of ideas is stabilizing in some ways. If you say I'm wrong about X, I may know a dozen implications of X which I want to figure out how to deal with. This can make it more challenging to provide a satisfactory new view. But integration is also destabilizing because if I do change my mind about X, the implications spread more easily. Persuasion about one point can cause a chain reaction. Especially if I don't block off that chain reaction with a bunch of rationalizations, irrational evasions, refusals to think about implications of ideas, willful disconnections of ideas into more isolated pieces to prevent chain reaction, and so on.

The consequences of a refutation aren't predictable in advance. Maybe it turns out that idea was more isolated than you thought – or less. Maybe you can find mistaken connections near it, maybe not. Until you work out new non-refuted positions, you don't know if it will be a tiny fix or require a whole new philosophy.

Getting back to your question: The sequence of events to change my mind would be large, and largely outside of your control. The majority of it would be outside your view, even if I tried hard to share the process. My integrity would be required.

Ayn Rand says you can't "force a mind". Persuasion has to be voluntary. It's why the person to be persuaded must actively want to learn, and take initiative in the process, not be passive.

However, you could play a critically important role. If you told me one idea (e.g. how to measure solidity), and I worked out the rest from there, you would have had a major role.

More normally, I'd work out a bit from that idea, then ask you a question or argue a point, get your answer, work out a bit more, and so on. And some of your answers would refer me to books and webpages, rather than be written fresh.

It hasn't gone like this so far because I'm experiencing the epistemology discussion as you saying things I've already considered. And frequently already had several debates about. Not exactly identical ideas, but similar in the relevant ways so my previous analysis still applies. Rather than needing to rethink something, I've been using ideas I already know and making minor adjustments to fit the details of our conversation.

I'm also using the discussion to work on ongoing projects like trying to understand Elliotism more clearly, invent better ways to explain it, and better understand where and why people misunderstand it or disagree. I also have more tangential projects like trying to write better.

It's also being used by others who want to understand Elliotism better. People write comments and use things you or I said as a jumping off point for discussions. If you wanted, you could read those discussions and comments.

Those people are also relevant to the issue of a sequence of events in which I'd be persuaded of Aubreyism. If you managed to inspire any doubts about Elliotism, or raise any problems I didn't think I had an answer to, I would raise those issues with others and see what they said. So, via me (both writing and forwarding things), you'd have to end up persuading those people of Aubreyism too. And on the other hand, they could play a big role in persuading me of Aubreyism if they understood one of your correct points before me, and then translated it to my current way of thinking well. (The Aubreyism issue could also create a split and failure to agree, but I wouldn't expect it and I see no signs of that so far.)


I also want to differentiate between full persuasion and superficial persuasion. Sometimes people are persuaded about X pretty easily. But they haven't changed their mind about anything else, so now X contradicts a bunch of their other ideas. A common result is that the persuasion doesn't last. Whereas if one is persuaded about X and then makes changes to other ideas until X is compatible with all their thinking, and there's various connections, that'd be a fuller kind of persuasion that does a better job of lasting.

One reason superficial persuasion sometimes seems to work and last is selective attention. People will use idea X only when dealing with one particular topic, and not think about other stuff. Then for other topics, they only think about other stuff and not X. So the contradictions between their other ideas and X don't get noticed, because they only think about one or the other at a time.

This further speaks to the complexity and difficulty of rational persuasion.


Getting back to a sequence of events, I don't know a specific one in detail or I'd be persuaded now. What I know is more like the categories of events that would matter and what sorts of things have to happen. (The sequencing, to a substantial extent, is flexible. Like I could learn an epistemology idea and adjust my politics, or vice versa, the sequence can go either way. At least that's the Elliotism view.)

Trying to be more specific, here's an example. You say something I don't have an answer to. It could be about measuring solidity, but it could be about pretty much any of my views I've been explaining because I take them all seriously and they're all integrated. I investigate. I find problems with several of my related ideas. I also consider some related ideas which I don't see any problem with, so I ask you about the issue. My first question is whether you think those ideas are false and I'm missing it, or you think I'm mistaken that they are related.

Trying to fix some of these problems, I run into more problems. Some of them I don't see, but you tell them to me. I start arguing some Aubreyism ideas to others who agree with Elliotism, and learn Aubreyism well enough to win those arguments (although I have to relay back to you a few of their anti-Aubreyism arguments which I'm unable to answer myself. But the more I do that, the more I pick up on how things work myself, eventually reaching full autonomy regarding Aubreyism). Others then help me with the task of reconciling various things with Aubreyism, such as the material in Popper's books. We do things like deciding some parts can be rescued and figuring out how. Other parts have to be rejected, and we work through the implications of that and figure out where and why those implications stop. To do this well involves things like rereading books while keeping in mind some Aubreyism arguments and watching out for contradictions, and thus seeing the book material in a new way compared to prior readings with a different perspective. And it involves going back through thousands of things I and others wrote and using new Aubreyism knowledge to find errors, retract things, write new things about new positions, etc. The more general principles Aubreyism has, the better this will work – so I can find patterns in what has to change instead of dealing with individual cases.

OK, there's a story. Want to tell me a story where you change your mind?
I don’t think anyone does CR, and I also don’t think anyone does the slightly modified CR that you think you do. I think people do a triaged version of CR, and some people do the triaging better than others.
I acknowledge that's your position.


Aubrey de Grey Discussion, 16

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.

The other parts so far have all been my emails, including quotes from Aubrey de Grey. For this part, I'm posting his email, because I didn't quote everything when replying. Outlined quotes are older.
A reason "strong refutation" seems to make sense is because of something else. Often what we care about is a set of similar ideas, not a single idea. A refutation can binary refute some ideas in a set, and not others. In other words: criticisms that refute many variants of an idea along with it seem "strong”.
That’s basically what I do. I agree with all you go on to say about closeness of variants etc, but I see exploration of variants (and choice of how much to explore variants) as coming down to a sequence of dice-rolls (or, well, coin-flips, since we’re discussing binary choices).
I don't know what this means. I don't think you mean you judge which variants are true, individually, by coin flip.

Maybe the context is only variants you don't have a criticism of. But if several won their coin flips, but are incompatible, then what? So I'm not clear on what you're saying to do.

Also, are you saying that amount of sureness, or claims criticisms are strong or weak (you quote me explaining how what matters is which set of ideas a criticism does or doesn't refute), play no role in what you do? Only CR + randomness?
The coin flips are not to decide whether a given individual idea is true or false; they are to decide between pairs of ideas. So let’s say (for simplicity) that there are 2^N ideas, of which 90% are in one group of close variants and the other 10% are in a separate group of close variants. “Close”, here, simply means differing only in ways I don’t care about. Then I can do a knockout tournament to end up choosing a winning variant, and 90% of the time it will be in the first group. Since I don’t actually care about the features that distinguish the variants within either group, only the features that distinguish the groups, I’m done. In other words, the solidity of an idea is measured by how many close variants it has - let’s call it the “variant density” in its neighbourhood. In practice, there will typically be numerical quantities involved in the ideas, so there will be an infinite number of close variants in each group - but if I have a sense of the variant densities in the two regions then that’s no problem, because I don’t need to do the actual tournament.
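To check I'm reading this right, here's a minimal sketch in Python of the tournament as you seem to describe it, with hypothetical group sizes (128 ideas, split roughly 90/10). Since every matchup is a fair coin flip, each idea is equally likely to win, so a group's chance of producing the winner is just its share of the field; that's also why the density shortcut gives the same answer without running the tournament.

```python
import random

def knockout_winner(ideas):
    # Single-elimination tournament: every matchup is decided by a fair coin flip.
    # `ideas` is a list of group labels; its length should be a power of two.
    while len(ideas) > 1:
        ideas = [a if random.random() < 0.5 else b
                 for a, b in zip(ideas[::2], ideas[1::2])]
    return ideas[0]

# Hypothetical numbers: 2^7 = 128 ideas, 115 close variants in group A (~90%),
# 13 in group B (~10%). These stand in for ideas differing only in ways one
# doesn't care about.
pool = ["A"] * 115 + ["B"] * 13
trials = 10000
a_wins = sum(knockout_winner(random.sample(pool, len(pool))) == "A"
             for _ in range(trials))
print(a_wins / trials)  # comes out near 115/128, i.e. roughly 0.9
```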
OK, I get the rough idea, though I disagree with a lot of things here.

You are proposing a complex procedure, involving some tricky math. It looks to me like the kind of thing requiring, at minimum, tens of thousands of words to explain how it works. And a lot of exposure to public criticism to fix some problems and refine, even if the main points are correct.
Not really, because the actual execution of the procedure is hugely condensed. It’s just the same as when mathematicians come up with a proof: they know that the only reason the proof is sound is because it can be reduced to set theory, but they also know that in Principia Mathematica it took a couple of hundred pages to prove that 1+1=2, so they are happy not to actually do the reduction.
Perhaps, with a fuller explanation, I could see why Aubreyism is correct about this and change my mind. I have some reasons not to think so, but I do try to keep an open mind about explanations I haven't read yet, and I'd be willing to look at a longer version. Does one exist?
No. Sorry :-)
Some sample issues where I'd want more detail include (no need to answer these now):
I will anyway, because all but the last two are easy (I think).
- Is the score the total variants anywhere, ignoring density, regions and neighborhoods? If so, why are those other things mentioned? If not, how is the score calculated?
No, it’s the total number of “close” variants, defined as I did before, i.e. variants that differ only in ways that one doesn’t care about.
- Why are ideas with more variants better, more likely to be true, or something like that? And what is the Aubreyism thing to say there, and how does that concept work in detail?
Because they have historically turned out to be. Occam’s Razor, basically.
- The "regions" discussed are not regions of space. What are they, how are they defined, what are they made out of, how is distance defined in them, how do different regions connect together?
See above - different ideas differ in multiple ways, some of which one cares about and some of which one doesn’t, so they fall into equivalence classes, and the larger classes win.
- The coin flipping procedure wouldn't halt. So what good is it?
I’m not with you. Why wouldn’t it halt? It’s just a knockout tournament starting with 2^n players. Ah, are you talking about the infinite case? There, as I say, one indeed doesn’t do the flipping, one uses the densities. A way to estimate the densities would be just to sample 100 ideas that are in one of the two competing groups and see how many are in which group.
- I can imagine skipping the coin flipping procedure because the probabilities will be equally distributed among the infinite ideas. But then the probabilities will all be infinitesimal. Dealing with those infinitesimals requires explanation.
I think I’ve covered that above. Yes?
- I'm guessing the approach involves grouping together infinitesimals by region. This maybe relies on there being a finite number of regions of ideas involved, which is a premise requiring discussion. It's not obvious because we're looking at all ideas in some kind of idea-space, rather than only looking at the finite set of ideas people actually propose (as Elliotism and CR do normally do).
I think this is all compatible with the above, since only the number of equivalence classes of ideas needs to be finite, not the number of ideas.
- When an idea has infinite variants, what infinity are we talking about? Is it in one-to-one correspondence with the integers, the reals, or what? Do all ideas with infinite variants have the same sort of infinity variants? Infinity is really tricky, and gets a lot worse when you're doing math or measurement, or trying to be precise in a way that depends on the detailed properties of infinity.
I don’t think this matters for the sampling procedure I described above.
- There are other ways to get infinite variants other than by varying numerical quantities. One of these approaches uses conjunctions – modify an idea by adding "and X". Does it matter if there are non-numerical ways to get infinite variants? Do they make a difference? Perhaps they are important to understanding the number and density of variants in a region?
I don’t think this breaks the sampling procedure either.
- Are there any cases where there's only finite variants of an idea? Does that matter?
Not sure, and not as far as I can see.
- You can't actually have 90% or 10% of 2^N and get a whole number. This won't harm the main ideas, but I think it's important to fix detail errors in one's epistemology (which I think you agree with: it's why you specified 2^N ideas, instead of just saying an even number or leaving the count unspecified).
Fair enough! - sample 128 ideas instead of 100.
- Do ideas actually have different numbers of variants? Both for total number, and density. How does one know? How does one figure out total variant count, and density, for a particular idea?
Let me know if you think the sampling procedure doesn’t do that.
- How is the distance between two ideas determined? Or whatever is used for judging density.
See above.
- What counts as a variant? In common discussion, we can make do with a loose idea of this. If I start with an idea and then think about a way to change it, that's a variant. This is especially fine when nothing much depends on what is a variant of what. But for measuring solidity, using a method which depends on what is a variant of what, we'll need a more precise meaning. One reason is that some variant construction methods will eventually construct ALL ideas, so everything will be regarded as a variant of everything else. (Example method: take ideas in English, vary by adding, removing or modifying one letter.) Addressing issues like this requires discussion.
Again, I think my definitions and procedure cover this.
- Where does criticism factor into things?
It elucidates whether two ideas differ in ways one cares about. Changing one’s mind about that results in changing which equivalence class the ideas fall into.
- What happens with ideas which we don't know about? Do we just proceed as if none of those exist, or is anything done about them?
I think that’s part of the CR part of Aubreyism, rather than the triage part, i.e. one does it in the same way whether one is using Aubreyism or Elliotism.
- Does one check his work to make sure he calculated his solidity measurements right? If so, for how long?
Ditto.
- Is this procedure truth-seeking? Why or why not? Does it create knowledge? If so, how? Is it somehow equivalent to evolution, or not?
No it isn’t/doesn’t/isn’t - it is the triage layer that terminates a CR effort. The CR part is what is truth-seeking and creates knowledge.
- Why do people have disagreements? Is it exclusively because some people don't know how to measure idea solidity like this, because of calculation errors, and because of different ideas about what they care about?
All those things, sure, but probably other things too - same as for CR.
- One problem about closeness in terms of what people care about is circularity. Because this method is itself supposed to help people decide things like what to care about.
I don’t see that that implies circularity. Recursiveness, sure, but that’s OK, isn’t it?
- How does this fit with DD's arguments for ideas that are harder to vary? Your approach seems to favor ideas that are easier to vary, resulting in more variants.
Ah, good point. I don’t adequately recall his argument, though. Can you summarise it?
- I suspect there may be lots of variants of "a wizard did it". Is that a good idea? Am I counting its variants wrong? I admit I'm not really counting but just sorta wildly guessing because I don't think you or I know how to count variants.
Is that, basically, DD’s "harder to vary” argument?
That is only an offhand sampling of questions and issues. I could add more. And then create new lists questioning some of the answers as they were provided. Regarding what it takes to persuade me, this gives some indication of the level of detail and completeness required. (Actually a lot of precision is lost in communication.)
Right.
Does this assessment of the situation make sense to you? That you're proposing a complex answer to a major epistemology problem, and there's dozens of questions about it that I'd want answers to. Note: not necessarily freshly written answers from you personally, if there is anything written by you or others at any time.
Understood; yes it does.
Do you think you know answers to every issue I listed? And if so, what do you think is the best way for me to learn those full answers? (Note: If for some answers you know where to look them up as needed, instead of always saving them in memory, that's fine.)

Or perhaps you'll explain to me there's a way to live with a bunch of unanswered questions – and a reason to want to.
I think that’s exactly what I’m doing - Aubreyism is precisely that.
Or maybe something else I haven't thought of.
To try to get at one of the important issues, when and why would you assign X a higher percent (aka strength, plausibility, justification, etc) than Y or than ruminating more? Why would the percents ever be unequal? I say either you have a criticism of an option (so don't do that option), or you don't (so don't raise or lower any percents from neutral). What specifically is it that you think lets you usefully and correctly raise and lower percents for ideas in your decision making process?

I think your answer is you judge positive arguments (and criticisms) in a non-binary way by how "solid" arguments are. These solidity judgments are made arbitrarily, and combined into an overall score arbitrarily.
I think my clarification above of the role of “variant density” as a measure of solidity answers this, but let me know if it doesn’t.
I agree with linking issues. Measuring solidity (aka support aka justification) is a key issue that other things depend on.

It's also a good example issue for the discussion below about how I might be persuaded. If I was persuaded of a working measure of solidity, I'd have a great deal to reconsider.
OK - but then the question is whether your current view permits you to change your mind about this (or indeed about anything big).
Sure - and that’s what I claim I do (and also what I claim you in fact do, even though you don’t think you do).
I do claim to do this [quoted below]. Do you think it's somehow incompatible with CR?
On reflection, and especially given your further points below, I’d prefer to stick with Aubreyism and Elliotism rather than justificationism and CR, because I’m new to this field and inadequately clear as to precisely how the latter terms are defined, and because I think the positions we’re debating between are our own rather than other people’s.
OK, switching terminology.

Do you think
doing your best with your current knowledge (nothing special), and also specifically having methods of thinking which are designed to be very good at finding and correcting mistakes.
is incompatible with Elliotism? How?
I think the first part is incompatible, yes; Elliotism does not deliver doing one’s best with current knowledge, because it overly favours excessive rumination.
OK - as above, let’s forget unmodified CR and also unmodified justificationism. I think we’ve established that my approach is not unmodified justificationism, but instead it is (something like) CR triaged by justificationism. I’m still getting the impression that your stated approach, whether or not it’s reeeeally close to CR, is unable to make decisions adequately rapidly for real life, and thus is not what you actually do in real life.
I don't know what to do with that impression.

Do you believe you have a reason Elliotism could not be timely in theory no matter what? Or only a reason Elliotism is not timely today because it's not developed enough and the current approach is flawed, but one day there might be a breakthrough insight so that it can be timely?
I can’t really answer the first question, because I can’t identify the set of all possible variants of current Elliotism that you would still recognise as Elliotism. For the second question, yes, that’s what I think, and moreover I think the breakthrough in question is simply to add a triage step, which would turn it into Aubreyism.
I think the timeliness thing is a second key issue. If I was persuaded Elliotism isn't or can't be timely, I'd have a lot to reconsider. But I'm pretty unclear on the specifics of your counter-arguments regarding timeliness.
What's the problem for CR with consensus-low fields?
Speed of decision-making. The faster CR leads to consensus in a given field, the less it needs to be triaged.
OK, I have a rough idea of what you mean. I don't think this is important to our main disagreements.
I agree.
This is a general CR approach: do something with no proof it will work, no solidity, no feeling of confidence (or if you do feel confidence, it doesn't matter, ignore it). Instead, watch out for problems, and deal with them as they are found.
Again, I can’t discern any difference in practice between that and what I already do.
Can you discern a difference between it and what most people do or say they do?
Oh, sure - I think most people are a good deal more content than me to hold pairs of views that they recognise to be mutually incompatible.
What I was talking about above was an innocent-until-proven-guilty approach to ideas, which is found in both CR and Elliotism (without requiring infallible proof). You indicated agreement, but now bring up the issue of holding contradictory ideas, which I consider a different issue. I am unclear on whether you misunderstood what I was saying, consider these part of the same issue, or what.
I think holding contradictory ideas is the same issue - it’s equivalent to not watching out for problems.
Regarding holding contradictory ideas, do you have a clear limit? If I were to adopt Aubreyism, how would I decide which mutually incompatible views to keep or change? If the answer involves degrees of contentness, how do I calculate them?
Sampling to estimate variant density, followed by deciding based on coin-flips. No, it doesn’t involve degrees of contentness.
Part of the Elliotism answer to this issue involves context. Whether ideas relevantly contradict each other is context dependent. Out of context contradictions aren't important. The important thing is to deal with relevant contradictions in one's current context. Put another way: deal with contradictions relevant to choices one makes.

Consider the contradicting ideas of quantum mechanics and general relativity. In a typical dinner-choosing context, neither of those ideas offers a meal suggestion. They both say essentially "no comment" in this context, which doesn't contradict. They aren't taking different sides in the dinner arbitration. I can get pizza for dinner without coming into conflict with either of those ideas.

On the other hand if there was a contradiction in context – basically meaning they are on disagreeing sides in an arbitration – then I'd address that with a win/win solution. Without such a solution, I could only proceed in a win/lose way and the loser would be part of me. And the loser would be chosen arbitrarily or irrationally (because if it weren't, then what was done would be a rational solution and we're back to win/win).

Understanding of context is one of the things which allows Elliotism to be timely. (A refutation of my understanding of context is another thing which would lead to me reconsidering a ton.)
I think we agree on context. In the language of variants and equivalence classes and sampling and coin flips, the introduction of an out-of-context issue simply doubles the number of variants in each equivalence class, so it doesn’t affect the decision-making outcome (nor the time it takes to make the decision).
If I were to change my mind and live by Aubreyism, I would require a detailed understanding of how to handle context under Aubreyism (for meals, contradictions, and everything else).
Let me know if the above suffices.
I don’t think our disparate conclusions with regard to the merits of signing up with Alcor arise from you doing the above and me doing something different; I think they arise from our having different criteria for what constitutes a problem. And I don’t think this method allows a determination of which criterion for what constitutes a problem is correct, because each justifies itself: by your criteria, your criteria are correct, and by mine, mine are. (I mentioned this bistability before; I’ve gone back to your answer - Sept 27 - and I don’t understand why it’s an answer.)
Criteria for what is a problem are themselves ideas which can be critically discussed.

Self-justifying ideas which block criticism from all routes are a general category of idea which can be (easily) criticized. They're bad because they block critical discussion, progress, and the possibility of correction if they're mistaken.
OK then: what theoretical sequence of events would conclude with you changing your mind about how you think decisions should be made, in favour of my view?
Starting at the end, I'd have to understand Aubreyism to my satisfaction, think it was right, think Elliotism and (unmodified) CR were both wrong. The exact details are hard to specify in advance because in the sequence of events I would change my mind about what criteria to use when deciding what ideas to favor. So I would not think Aubreyism has no known criticism, rather I'd understand and use Aubreyism's own criteria. And similarly I wouldn't be rejecting Elliotism or CR for having one outstanding criticism (taking into account context), but rather because of some reasons I learned from Aubreyism.

For that matter, I might not have to understand Aubreyism to my satisfaction. Maybe it'd teach me how to adopt ideas without understanding them to my current criteria of satisfaction. It could offer different criteria of satisfaction, but it could also offer a different approach.

So, disclaimer: the below discussion of persuasion contains Elliotist ideas. But if Elliotism is false, then I guess persuasion works some other way, which I don't know and can't speak to.
Right - we’re back to bistability.

I know, I have a better idea. I think you mentioned some time ago that before you encountered DD you thought differently about all this. Is that correct? If so, perhaps it will help if you relate the sequence of events that led you to change your mind. Since that will be a sequence of events that actually occurred, rather than a story about a hypothetical sequence, I think I’ll find it more useful.

Cheers, Aubrey

Aubrey de Grey Discussion, 17

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
- Why are ideas with more variants better, more likely to be true, or something like that? And what is the Aubreyism thing to say there, and how does that concept work in detail?
Because they have historically turned out to be. Occam’s Razor, basically.
How do you know what happened historically? How does that tell you what will work in a particular case now?

What you wrote is a typical inductivist statement. The idea is there are multiple observations of history supporting the conclusion (that ideas with more variants turn out to be better). Then add an inductive principle like "the future is likely to resemble the past". Meanwhile no explanation is given for why this conclusion makes sense. Is induction what you mean?


Also that isn't Occam's Razor, which is about favoring simpler ideas. More variants isn't simpler. At least I don't think so. Simpler is only defined vaguely, which does allow arbitrary conclusions. (There have been some attempts to make Occam's Razor precise, which most people aren't familiar with, and which don't work.)
- The coin flipping procedure wouldn't halt. So what good is it?
I’m not with you. Why wouldn’t it halt? It’s just a knockout tournament starting with 2^n players. Ah, are you talking about the infinite case? There, as I say, one indeed doesn’t do the flipping, one uses the densities. A way to estimate the densities would be just to sample 100 ideas that are in one of the two competing groups and see how many are in which group.
Yes I meant the infinite case. By sample do you mean a random sample? In the infinite case, how do you get a random sample or otherwise make the sample fair?


Also, could you provide an example of using your method?
Or perhaps you'll explain to me there's a way to live with a bunch of unanswered questions – and a reason to want to.
I think that’s exactly what I’m doing - Aubreyism is precisely that.
But you just attempted to give answers to many questions, rather than tell me why those questions didn't need answers.
Do you think
doing your best with your current knowledge (nothing special), and also specifically having methods of thinking which are designed to be very good at finding and correcting mistakes.
is incompatible with Elliotism? How?
I think the first part is incompatible, yes; Elliotism does not deliver doing one’s best with current knowledge, because it overly favours excessive rumination.
Excessive rumination is something you – but not me – think is a consequence of Elliotism. A consequence of what specific things, for what reason, I'm unclear on. Tell me.

I wrote about how the amount of time (and other resources) used on an arbitration is tailored to the amount of time one thinks should be used. I'm not clear on what you objected to. My guess is you didn't understand, which I would have expected to take more clarifying questions.
OK - as above, let’s forget unmodified CR and also unmodified justificationism. I think we’ve established that my approach is not unmodified justificationism, but instead it is (something like) CR triaged by justificationism. I’m still getting the impression that your stated approach, whether or not it’s reeeeally close to CR, is unable to make decisions adequately rapidly for real life, and thus is not what you actually do in real life.
I don't know what to do with that impression.

Do you believe you have a reason Elliotism could not be timely in theory no matter what? Or only a reason Elliotism is not timely today because it's not developed enough and the current approach is flawed, but one day there might be a breakthrough insight so that it can be timely?
I can’t really answer the first question, because I can’t identify the set of all possible variants of current Elliotism that you would still recognise as Elliotism. For the second question, yes, that’s what I think, and moreover I think the breakthrough in question is simply to add a triage step, which would turn it into Aubreyism.
Why do you think Elliotism itself is lacking, rather than the lacking being in your incomplete understanding of Elliotism?
Part of the Elliotism answer to this issue involves context. Whether ideas relevantly contradict each other is context dependent. Out of context contradictions aren't important. The important thing is to deal with relevant contradictions in one's current context. Put another way: deal with contradictions relevant to choices one makes.

Consider the contradicting ideas of quantum mechanics and general relativity. In a typical dinner-choosing context, neither of those ideas offers a meal suggestion. They both say essentially "no comment" in this context, which doesn't contradict. They aren't taking different sides in the dinner arbitration. I can get pizza for dinner without coming into conflict with either of those ideas.

On the other hand if there was a contradiction in context – basically meaning they are on disagreeing sides in an arbitration – then I'd address that with a win/win solution. Without such a solution, I could only proceed in a win/lose way and the loser would be part of me. And the loser would be chosen arbitrarily or irrationally (because if it weren't, then what was done would be a rational solution and we're back to win/win).

Understanding of context is one of the things which allows Elliotism to be timely. (A refutation of my understanding of context is another thing which would lead to me reconsidering a ton.)
I think we agree on context. In the language of variants and equivalence classes and sampling and coin flips, the introduction of an out-of-context issue simply doubles the number of variants in each equivalence class, so it doesn’t affect the decision-making outcome (nor the time it takes to make the decision).
What about the win/win vs win/lose issue?
I don’t think our disparate conclusions with regard to the merits of signing up with Alcor arise from you doing the above and me doing something different; I think they arise from our having different criteria for what constitutes a problem. And I don’t think this method allows a determination of which criterion for what constitutes a problem is correct, because each justifies itself: by your criteria, your criteria are correct, and by mine, mine are. (I mentioned this bistability before; I’ve gone back to your answer - Sept 27 - and I don’t understand why it’s an answer.)
Criteria for what is a problem are themselves ideas which can be critically discussed.

Self-justifying ideas which block criticism from all routes are a general category of idea which can be (easily) criticized. They're bad because they block critical discussion, progress, and the possibility of correction if they're mistaken.
OK then: what theoretical sequence of events would conclude with you changing your mind about how you think decisions should be made, in favour of my view?
Starting at the end, I'd have to understand Aubreyism to my satisfaction, think it was right, think Elliotism and (unmodified) CR were both wrong. The exact details are hard to specify in advance because in the sequence of events I would change my mind about what criteria to use when deciding what ideas to favor. So I would not think Aubreyism has no known criticism, rather I'd understand and use Aubreyism's own criteria. And similarly I wouldn't be rejecting Elliotism or CR for having one outstanding criticism (taking into account context), but rather because of some reasons I learned from Aubreyism.

For that matter, I might not have to understand Aubreyism to my satisfaction. Maybe it'd teach me how to adopt ideas without understanding them to my current criteria of satisfaction. It could offer different criteria of satisfaction, but it could also offer a different approach.

So, disclaimer: the below discussion of persuasion contains Elliotist ideas. But if Elliotism is false, then I guess persuasion works some other way, which I don't know and can't speak to.
Right - we’re back to bistability.
I don't think there's a big problem here. I already understand some things you say, and vice versa. This can be increased incrementally.

You might want to read Popper's essay "The Myth of the Framework".

You could tell me which things you considered false from what I said, and why. I don't know which are Aubreyism-compatible and which contradict Aubreyism. And you could tell me how you think persuasion should work. It takes more communication.
I know, I have a better idea. I think you mentioned some time ago that before you encountered DD you thought differently about all this. Is that correct? If so, perhaps it will help if you relate the sequence of events that led you to change your mind. Since that will be a sequence of events that actually occurred, rather than a story about a hypothetical sequence, I think I’ll find it more useful.
Correct, but there's not much to tell. DD (and others) were available for discussion. We discussed, people learned things. There was no master plan. I don't know what you're trying to find out.

The sequence of events is discussion #1, discussion #2, discussion #6,209, etc. Part of this can still be read as email archives.

Also I spent some time thinking and reading. Early on I read _The Fabric of Reality_ and http://web.archive.org/web/20030603214744/http://www.tcs.ac/Articles/index.html


Aubrey de Grey Discussion, 18

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
Why are ideas with more variants better, more likely to be true, or something like that? And what is the Aubreyism thing to say there, and how does that concept work in detail?
Because they have historically turned out to be. Occam’s Razor, basically.
How do you know what happened historically? How does that tell you what will work in a particular case now?

What you wrote is a typical inductivist statement. The idea is there are multiple observations of history supporting the conclusion (that ideas with more variants turn out to be better). Then add an inductive principle like "the future is likely to resemble the past". Meanwhile no explanation is given for why this conclusion makes sense. Is induction what you mean?
Yes it is what I mean. I agree, we have no explanation for why the future has always resembled the past, and thus no basis for the presumption that it will continue to do so. So what? - how does Elliotism depart from that? And more particularly, how do you depart from it in your everyday life?
Popper (and DD) refuted induction. How do you want to handle this? Do you want me to rewrite the content in their books? I don't think that's a good approach.

Do you think the major points you're contradicting of Popper's (and DD's) work have been refuted, by you or someone else? If not, why reject them?

My friend thinks I should copy/paste BoI passages criticizing induction and ask if you have criticism. But I think that will encourage ad hoc replies out of context. And it's hard to judge which text to include in a quote for someone else. And I don't think you want to read from books. And I haven't gotten a clear picture of what you want to know or what would convince you, or e.g. why you think induction works. What do you think?
Also that isn't Occam's Razor, which is about favoring simpler ideas. More variants isn't simpler. At least I don't think so. Simpler is only defined vaguely, which does allow arbitrary conclusions. (There have been some attempts to make Occam's Razor precise, which most people aren't familiar with, and which don't work.)
Ah, I see the answer now. More variants is simpler, yes, because there’s a fixed set of things that can vary, each of which is either relevant or irrelevant to the decision one is trying to make. So, having more variants is the consequence of having more things that can vary be irrelevant to the decision one is trying to make - which is the same as having fewer be relevant. Which is also the same as being harder to vary in the DD sense, if I recall it correctly.
- The coin flipping procedure wouldn't halt. So what good is it?
I’m not with you. Why wouldn’t it halt? It’s just a knockout tournament starting with 2^n players. Ah, are you talking about the infinite case? There, as I say, one indeed doesn’t do the flipping, one uses the densities. A way to estimate the densities would be just to sample 100 ideas that are in one of the two competing groups and see how many are in which group.
Yes I meant the infinite case. By sample do you mean a random sample? In the infinite case, how do you get a random sample or otherwise make the sample fair?
Yes I mean random. I don’t understand your other question - why does it matter what randomisation method I use?
The random sampling you propose is impossible to do. There is no physical process that samples uniformly at random from an infinite set (with countably many ideas, equal probabilities can't sum to 1).

Even setting infinity aside, I don't think your proposal was to enumerate every variant on a numbered list and then draw the random sample from the list (why sample to estimate when you already have the full list?). But without a list of the ideas, or something equivalent, I don't know how you'd do the sampling even in the finite case.

This would be easier to comment on if it was more clear what you were proposing. And I prefer not to assume people are proposing impossible nonsense, rather than asking what they mean (whereas you think Elliotism's timeliness is impossible, and prefer to claim that without specifics, over asking more about how Elliotism works). And I won't be surprised if you now say you actually meant something that's unlike what I think sampling is, or say you don't care if the sampling is unfair or arbitrary (which I tried to ask about but didn't get a direct reply to).

It seems like your position is ad hoc: you hadn't figured out in advance how it works (e.g. the issues with sampling), hadn't figured out which problems in the field need to be addressed, and hadn't researched previous attempts at similar positions or alternatives (and you don't want to research them, preferring to reinvent the wheel for some reason?).
Also, could you provide an example of using your method?
I think I’ve answered that above, by my explanation of why seeking the alternative with more close variants is the same as Occam’s razor.
I mean an example like:

We're trying to decide what to get for dinner. I propose salmon sushi or tuna sushi. You propose pizza. We get sushi with 67% odds. Is that how it's supposed to work? (Note I only know the odds here because I have a full list of the ideas.)

But wait. I don't care what God's favorite natural number is; that's irrelevant. So there's infinite sushi variants like, "Get salmon sushi, and God's favorite natural number is 5" (vary the number).

Now what? Each idea just turned into infinite variants. Do we now say there are 2*infinity variants for sushi, and 1*infinity for pizza? And get sushi with what odds?

Should we have a sort of competition to see who can think up the most variants for their dinner choice to increase its odds? Will people who are especially clever with powersets win arguments, since they can better manufacture variants?

Or given your comments above about hard to vary, should I perhaps claim that there are fewer types of sushi than of pizza, so sushi is the better meal?
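To spell out the arithmetic I have in mind, here's a rough sketch in Python using the made-up meal options above (my reading of the procedure, not something you've endorsed). The point is that the counting scheme does all the work, and once each option has infinitely many irrelevant variants the odds stop being defined:

```python
from fractions import Fraction

# Naive counting, one "ticket" per distinct idea (hypothetical options from above).
options = ["salmon sushi", "tuna sushi", "pizza"]
sushi_odds = Fraction(sum("sushi" in o for o in options), len(options))
print(sushi_odds)  # 2/3, the 67% figure

# Pad every option with irrelevant conjunctions ("...and God's favorite number is n").
# With any finite cap on n the ratio is unchanged, but only because the same cap
# gets applied to every option:
for cap in (10, 1000, 100000):
    sushi_variants = 2 * cap   # two sushi ideas, each with `cap` irrelevant variants
    pizza_variants = 1 * cap
    print(cap, Fraction(sushi_variants, sushi_variants + pizza_variants))  # always 2/3

# Let n range over all naturals and both counts become infinite, so "2*infinity vs
# 1*infinity" no longer picks out any particular odds.
```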


Could you adjust the example to illustrate how your approach works? I don't know how to use it.
Or perhaps you'll explain to me there's a way to live with a bunch of unanswered questions – and a reason to want to.
I think that’s exactly what I’m doing - Aubreyism is precisely that.
But you just attempted to give answers to many questions, rather than tell me why those questions didn't need answers.
Um, sure - my answers were an explanation for why a bunch of OTHER questions don’t need answers.
What are some example questions that don't need answers?
Excessive rumination is something you – but not me – think is a consequence of Elliotism. A consequence of what specific things, for what reason, I'm unclear on. Tell me.
Well, for example, I think caring about what randomisation method to use (above) is excessive rumination.
I think you're dramatically underestimating the complexity of epistemology and the importance of details, and treating epistemology unlike you treat biology. In science, I think you know that details matter, like what sampling method is used in an experiment. And in general know that seemingly minor details can change the results of experiments, and can't just be ignored.

I think you see epistemology as a field where smart amateurs can quickly make stuff up that sounds about right and reasonably expect to do as well as anyone, whereas you wouldn't treat biology that way. You don't treat epistemology like a rigorous science.

This is common. Many scientists make statements straying into epistemology and other areas of philosophy (and sometimes even politics), and claim their scientific expertise still applies (and many people in the audience seem to accept this). They don't recognize field boundaries accurately, or recognize that there is a lot to learn about philosophy (or politics) that wasn't in their science education. This happens routinely.

A good example: Estep and other scientists wrote a criticism of SENS which discussed a bunch of philosophy of science (which is a sub-field of epistemology). No one writing it even claims philosophy credentials. Yet they act like they're writing within their expertise, not outside it. This was then judged by expert judges, none of whom were selected for having philosophy expertise. This is then presented as expert discussion even though there's a bunch of philosophy discussion but no philosophy experts. Look at their own summary:

http://www2.technologyreview.com/sens/docs/estepetal.pdf
1) SENS is based on the scientifically unsupported speculations of Aubrey de Grey, which are camouflaged by the legitimate science of others; 2) SENS bears only a superficial resemblance to science or engineering; 3) SENS and de Grey’s writings in support of it are riddled with jargon-filled misunderstandings and misrepresentations; 4) SENS’ notoriety is due almost entirely to its emotional appeal; 5) SENS is pseudoscience. We base these conclusions on our extensive training and individual and collective hands-on experience in the areas covered by SENS, including the engineering of biological organisms for the purpose of extending life span.
2, 4, and 5 are primarily philosophy issues. 1 and 3 are more of a mix because they partly raise issues of whether some specific scientific SENS arguments are correct. Then after making mostly philosophy claims, they say they base their conclusions on their scientific expertise. (Note: how or whether to base conclusions is an epistemology issue too.)

Then you thought I'd have to rely on your answer to Estep to find fault with his paper, even though philosophy is my field.

Do you see what I'm talking about? My position is that philosophy is a real field, which has knowledge and literature that matter. And you won't understand it if you don't treat it that way. What do you think?

I think my interest in the sampling method is a consequence of my mathematical knowledge, not of Elliotism.

It won't have been excessive even if I'm mistaken, because if I'm mistaken (and you know better) then I'll learn something. Or do you think it would be somehow excessive to want to learn about my mistake, if I'm wrong?

I don't see how I could use Aubreyism (on purpose, consciously) without knowing how to do the sampling part. That strikes me as pretty important, and I don't understand how you expect to gloss it over. I also don't see why I should find Aubreyism appealing without having an answer to my arguments about sampling (and some other arguments too).

Regardless, if there was a reason not to question and ruminate about some category of things, I could learn that reason and then not do it. So excessive rumination would not be built into Elliotism. It wouldn't be a problem with Elliotism, only potentially a problem with my ignorance of how much to ruminate about what.

Elliotism says that "how much to ruminate about what" is a topic open to knowledge creation. How will making the topic open to critical thinking lead to the wrong answer? What should be done instead?

So I ask again: why is excessive rumination a consequence of Elliotism? Which part of Elliotism causes or requires it? (And why don't you focus more on finding out what Elliotism is, before focusing on saying it's bad?)
I wrote about how the amount of time (and other resources) used on an arbitration is tailored to the amount of time one thinks should be used. I'm not clear on what you objected to. My guess is you didn't understand, which I would have expected to take more clarifying questions.
Maybe I don’t understand, but what you’ve seemed to be saying about that is what I’m saying is identical to what I do - triaging what you elsewhere describe as Elliotism, by reaching a point where you’re satisfied not to have answers.
I think you don't understand, and have been trying to teach me induction (among other things), and arguing with me. Rather than focusing on the sort of question-asking, misunderstanding-and-miscommunication-clearing-up, and other activities necessary to learn a complex philosophy like CR or Elliotism.

This is something I don't know how to handle well.

One difficulty is I don't know which parts of my explanations you didn't understand, and why. I've tried to find out several times but without much success. Without detailed feedback on my initial explanations, I don't know what to change (e.g. different emphasis, different details included, different questions and criticisms answered) for a second iteration to explain it in a way more personalized to your worldview. Communicating about complex topics and substantial disagreements typically requires many iterations using feedback.

I did try explaining some things multiple ways. But there are many, many possible ways to explain something. Going through a bunch semi-randomly without feedback is a bad approach.

I think there's also confusion because you don't clearly and precisely know what your position is, and modify it ad hoc during the discussion – often trying to incorporate points you think are good without realizing how they contradict other aspects of your position (e.g. incorporating DD's epistemology for hard to vary, while using Occam's razor which is contradicted by DD's epistemology). Above you say, "Ah, I see the answer now," (regarding redefining Occam's Razor after introducing it) indicating that you're working out Aubreyism as you go along and it's a moving target. This nebulous and changing nature makes Aubreyism harder to differentiate from other positions, and also serves to partially immunize it from criticism by not presenting clear targets for criticism. (And it's further immunized because you accept things like losing, arbitrariness and subjectivity – so what's left to criticize? Even induction, which Popper says is an impossible myth, becomes possible again if you're willing to count reaching arbitrary conclusions as "induction".)

By contrast, my epistemology position hasn't changed at all during this discussion, and it offers clear targets for criticism, such as my public writing.

Also your figure-stuff-out-as-you-go approach makes the discussion much longer than if you knew the field and your position when we started. I don't mind, but it becomes unfair when you blame the discussion length on me and complain about it. You think I ask too many questions. But I don't know what you think I should do instead. Make more assumptions about what your positions are, and criticize those?

An example is you say you use some CR. But CR is a method of dealing with issues, of reaching conclusions. So what's left to do after that? Yet you, contrary to CR, want to have CR+triage. (And this while you don't really know what CR is.) And then you advocate justificationism and induction, both of which contradict the CR you claim to be (partly) using. I don't know what to make of this without asking questions. Lots of questions, like to find out how you deal with these issues. I could phrase it more as criticism instead of questions, but questions generally work better when a position is vague or incomplete.

(Why didn't I mention all of these things earlier? Because there's so many things I could mention, I haven't had the opportunity to discuss them all.)

Perhaps I should have written more meta discussion sooner, more like I've done in this email, rather than continuing to try in various ways to get somewhere with substantive points. DD for one would say I shouldn't be writing meta discussion even now. There are a bunch of ways meta discussion is problematic. Perhaps you'll like it, but I'm not confident.

One of DD's common strategies would be to delete most of what you write every email and ask a short question about a point of disagreement, and then repeat it (maybe with minor variations, or brief comments on why something isn't an answer) for the next three emails, without explaining why it matters. Usually ends badly. Here's an example of how I could have replied to you, in full:
On Nov 2, 2014, at 9:22 AM, Aubrey de Grey wrote:
On 28 Oct 2014, at 02:39, Elliot Temple wrote:
In the infinite case, how do you get a random sample or otherwise make the sample fair?
why does it matter what randomisation method I use?
Do you believe that all possible sampling methods would be acceptable?

If not, then in the infinite case, how do you get a random sample or otherwise make the sample fair?
This approach controls the discussion, avoids meta discussion, and is short. If you want me to write to you in this style, I can do that. But most people don't like it. It also needs a larger number of iterations than is necessary with longer emails.

I instead (in broad strokes) tried to explain where I was coming from earlier on, and now have been trying to explain why your position is problematic, and throughout I've tried to answer your questions and individual points you raise. Meanwhile you do things like ask what would persuade me, but don't answer what would persuade you. And you talk about how Aubreyism works while not asking many questions about how Elliotism works. And you make claims (e.g. about Elliotism having a timeliness flaw) and I respond by asking you questions to try to find out why you think that, so I can answer, so then you talk about your ideas more instead of finding out how Elliotism works.

I let this happen. I see it happening, see problems with it, but don't know how to fix it. I'm more willing than you to act like a child/learner/student, ask questions and not control discussion. And I have more patience. I don't think this discussion flow is optimal, but I don't know what to do about it. I don't know how to get someone to ask more questions and try to learn more. Nor do I know how to explain something to someone, so that they understand it, without adequate feedback and questions regarding my initial explanation, to give me some indication of where to go with iteration 2 (and 3 and 4). When the feedback is vague or non-specific, or sometimes there is none, then what is one to say next? Tough problem.

Big picture, one can't force a mind, and one can't provide the initiative or impetus for someone to learn something. People make their own choices. I think it's mostly out of my hands. Sometimes I try to explain to people what methods they'll have to use if they want to learn more (e.g. ask more questions), but it usually goes badly, e.g. b/c they say "Well maybe you should learn more" (I'm already trying to, very hard, and they aren't, and they're trying to lie about this reality) or they just don't do it and don't tell me what went wrong.
Why do you think Elliotism itself is lacking, rather than the lacking being in your incomplete understanding of Elliotism?
I could equally ask "Why do you think Elliotism itself is not lacking, rather than the lacking being in your incomplete understanding of Elliotism?”.
I'm open to public debate about this, with all comers. I've been taking every reasonable step I can figure out to find out about these things, while also being open to any suggestions from anyone about other steps to take.

Additionally, I have studied the field. In addition to reading things like Popper, I've also read about other approaches. And have sought out discussion with many people who disagree. I've made an extensive effort to find out what alternative views there are, and what's good about them, and what criticisms they have relevant to CR and Elliotism.

This includes asking people if they know anything to look into more, anyone worth talking to, etc. And looking at all those leads. It also includes work by others besides myself. There has been a collaborative effort to find any knowledge contrary to Popper.

E.g. an Australian Popperian looked over the philosophy books being taught in the Australian universities to check for anything good. He later checked over 200 university philosophy curriculums, primarily from the US, using their websites. Looking for new ideas, new leads, material not already refuted by Popper, material that may answer one of Popper's arguments, anything unexpected, and so on. (Nothing good was found.)


This is not to say Elliotism is perfect, but I've made an extensive effort to find and address flaws, and continue to make such an effort. If there are any flaws, no one knows them, or they're keeping the information to themselves. (Or in your case, we can consider the matter pending, but so far you haven't presented any new challenge to CR or Elliotism.)

What I've found is there are a lot of CR and Elliotism arguments which no one has refutations of. But e.g. there are no unanswered inductivist arguments.


A more parallel question to ask me is why I think induction is lacking, rather than the lacking being with my understanding of induction. The reason is that I've made every effort to find out about induction and how it works and what defenses of it exist against the criticisms I have.

Induction could be better than I know – but in that case it's also better than any inductivist knows, too. It's better in some unimagined way which no one knows about. (Or maybe some hermit knows and hasn't told anyone.)

The current state of the debate – which I've made every effort to advance, and which anyone may reply to whenever they want – is that induction faces many unanswered questions and criticisms, while CR/Elliotism don't. Despite serious and responsible effort, I have been unable to find any inductivist, or any writing, with information to the contrary.

Whereas with Elliotism, you're just initially encountering it and don't know much about it (or much about the rest of the field), so I think you should have a more neutral undecided view.


None of these things would be a major issue if you wanted to simply debate some points, in detail, to a conclusion. But they become major issues when you consider giving up on the discussion, try to form an opinion without answering some of my arguments, think questioning aspects of your position is excessive rumination, don't want to read some arguments relevant to your claims (which is like a form of judging ideas by source instead of content: you treat the sources "written in a book by Popper" or "written on a website by Elliot" differently than the source "written in an email by Elliot"), etc.
Recall: my claim is that you actually perform Aubreyism, you just don’t realise it. It could be that I understand Elliotism better than you, just as it could be that you understand it better than I. Right?
Elliotism is not defined by what I actually do.

For example, if what I actually do involves any induction ever, then Elliotism is false. In that case, you'd be right about that and I'd be wrong. But that wouldn't mean you understand what Elliotism is better than me.
How could we know? Using Aubreyism, we’d know by looking at how you and I have actually made decisions, changed our minds etc in the past, and comparing those actions with the descriptions of Aubreyism and Elliotism. Using Elliotism as you describe it, I’m not sure how we would decide.
If you could find any counter-example to Elliotism from real life, that would refute it.

By a counter-example I mean something that contradicts Elliotism, not merely something Elliotism says is unwise. If I or anyone else did something Elliotism says is impossible, Elliotism would be false.

If it turned out that I wasn't very good at doing Elliotism, but did nothing that contradicts what Elliotism claims about reality, then it could still be the case that people can and should do exclusively Elliotism.

What I (and you) personally do has little bearing on the issues of what epistemology is true.


A different way to approach these things is critical discussion focusing on what explanations and logic make sense. What should be done, and why? What's possible to do? What plans about what to do are actually ambiguous and ill-defined?

For example, induction is a lot like saying, "Take a bunch of data points. Plot them on a graph. Now draw a curve connecting them and continue it along the paper too. Now predict that additional data points will (likely) fall on that curve." But there are infinite such curves you could draw, and induction doesn't say which one to draw. That ambiguity is a big non-empirical problem. (Some people have tried to specify which curve, but there are problems with their answers.)
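
Here's a minimal sketch of that point (my illustration, not anything from the discussion): two curves that both fit some observed data exactly, yet predict different things about the next data point.

```python
# A minimal sketch of the "infinite curves" point: two different rules
# that agree perfectly on the observed data but disagree about the future.

data = [(1, 1), (2, 4), (3, 9)]  # points that happen to lie on y = x^2

def curve_a(x):
    # the "obvious" curve
    return x ** 2

def curve_b(x):
    # another curve that also passes through every observed point,
    # because the extra term vanishes at x = 1, 2, and 3
    return x ** 2 + (x - 1) * (x - 2) * (x - 3)

# Both curves fit the observed data exactly...
assert all(curve_a(x) == y and curve_b(x) == y for x, y in data)

# ...yet they predict different values for the next point.
print(curve_a(4), curve_b(4))  # 16 vs 22
```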

Note this initial argument about induction, like all initial arguments, doesn't cover everything in full. Because I don't know which additional details are important to your thinking, and there's far too many to include them all indiscriminately. The way to get from initial statements of issues to understanding generally involves multiple rounds of clarifying questions.
What about the win/win vs win/lose issue?
I go with arbitrary win/lose, i.e. coin flips.
Do you understand that that doesn't count as a "solution" for BoI's "problems are soluble"? By a solution DD means only a win/win solution. But you're trying to make losing and non-solutions a fundamental feature of epistemology, contrary to BoI. Do you have some criticisms of BoI? Do you think DD was mistaken not to include a chapter about how most problems will never be solved and you have to find a way to go through life that copes with losing in regard to most issues that come up?


Or instead of asking questions, should I simply state that you're contradicting BoI, have no idea what you're talking about, and ought to reread it more carefully? And add that I've seen the same misconceptions with many other beginners. And add that people who read books quietly on their own often come away with huge misunderstandings, so what you really need to do is join the Fallible Ideas discussion group and post public critical analysis as you go along (not non-specific doubts after finishing the book). It's important to discuss the parts of BoI you disagree with – using specific quotes while having the context fresh in memory – and it's important to do this with BoI's best advocates who are willing to have public discussions (they can be found on FI list, which was created by merging BoI list, TCS list, and a few others). If I was more pushy like this, would that help? I'm capable of a variety of styles and approaches, but have had difficulty soliciting information about what would actually be helpful to you, or what you want. This style involves less rumination, drawn-out discussion, etc. I'm guessing you won't appreciate it or want to refute its claims. What would you like? Tell me.
You might want to read Popper's essay "The Myth of the Framework”.
I might, but on the other hand I might consider the time taken to do so to be a case of excessive rumination.
What would it take to persuade you of Elliotism or interest you in reading about epistemology? What would convince you Aubreyism is mistaken?

For example, will the sampling issue get your attention? Or will you just say to sample arbitrarily using unstated (and thereby shielded from criticism) subjective intuition? You've already recommended doing things along those lines and don't seem to mind, so what would you mind?
You could tell me which things you considered false from what I said, and why. I don't know which are Aubreyism-compatible and which contradict Aubreyism. And you could tell me how you think persuasion should work. It takes more communication.
Quite - maybe, excessively more.
How am I supposed to answer your objections if you don't tell them to me? Or if I'm not to answer them, what do you expect or want to happen?
What I was asking was, can you concisely summarise a particular, concrete thing about which your mind was changed? - a specific question (ideally a yes/no question) that you answer differently now than you did before you encountered DD and his ideas. And then can you summarise (as concisely as possible) how you came to view his position as superior to yours. I’m presuming that the thing will be a thing about how to make decisions, so your answer to the second question needs to be couched in terms of the decision-making method that you favoured prior to changing your mind.
Yes/no question: Is recycling a good idea? The typical residential stuff where you sort your former-trash for pickup.

My old position: yes.

DD's position: no.

What happened? A few arguments, like pointing out the human cost of the sorting. Links to some articles discussing issues like how much energy recycling plants use and how some recycling processes are actually destroying wealth. Answers to all questions and criticisms I had about the new position (I had some at the time, but don't remember them now).

Another thing I would do is take an idea I learned and then argue it with others who don't know it. Then sometimes I'd find I could win the argument no problem. But other times I'd run into some further issue to ask DD about.

In other words: arguments and discussion. That's it. There's no magic formula. You seem to think there are lessons to be learned from my past experience and want to know what they are. But I already incorporated them into Elliotism (and into my explanation of how persuasion can happen) to the extent that I know what they are. To the extent I missed something, I will be unable to tell you that part of my experience, even if I remember it, because I don't know it's important and I can't write everything down including every event I regard as unimportant.

If you want raw data, so you can find the parts you think are important, there are archives available. But if you want a summary from me, then it's going to contain what I regard as the important parts: basically discussion, answering all criticisms and questions, reading supplementary material, etc – all the stuff I've been talking about.

The story regarding epistemology is similar to above, except spread out over many questions and over years. And it involves a lot of mixing of issues, rather than going one topic at a time. E.g. discussing parenting and education, or politics. Epistemology has implications for those fields, *and vice versa*.

One thing I can add, that I think was really helpful, is reading lots of stuff DD wrote (anywhere, to me or not). That provided good examples and showed what level of precise answering of all the issues is reasonably achievable. Though not fully at first – it takes a lot of skill not to miss 95% of what he's doing and getting right. And it takes skill to ask the right questions or otherwise find out more than his initial statement (there's always much more, though many people don't realize that). Early on, even if one isn't very good at this, one can read discussions he had with others and see what questions and counter-arguments they tried and see what happened, and see how DD always has further answers, and see what sorts of replies are productive, and so on. One can gradually get a better feel for these things and build up skill.


By an effort, people can understand each other and reality better. There's no shortcut. That's the principle, and it's my history. If you want to learn philosophy, you can do that. If you'd rather continue with ideas about how life is full of losing in arbitrary ways and induction, which are refuted in writing you'd rather skip reading, you can do that instead.


Aubrey de Grey Discussion, 19

Hi Elliot - I’m in a busy phase right now so apologies for brevity. To me the purpose of our debate is to answer the question “Is Aubrey coming to substantively incorrect conclusions about what to do or say (such as about cryonics) as a result of using epistemologically invalid methods of reasoning?”. I’m not interested in the question “Is Aubrey’s method of reasoning epistemologically invalid?” except insofar as it can be shown that I would come to different conclusions (but in the same amount of time) if I adopted a different strategy. Similarly, I’m not interested in the question "Is Aubrey coming to incorrect conclusions about what to do or say (such as about cryonics) as a result of having incomplete information/understanding about things OTHER than what method of reasoning is best?” (which seems to be what happened to you in relation to recycling,
Sort of. If I'd had a better approach to reasoning, I could have found out about recycling sooner. If I hadn't already been learning a better method of reasoning, I might have stayed in favor of recycling after seeing those articles, as many other people have done. I think you're trying to create a distinction I disagree with, where you don't give reasoning methods credit in most of life, even though they are involved with everything.
and was also what happened to me in relation to my career change from computer science), because such examples consist only in switching to triage at a point that turned out to be premature (I could have discovered in my teens that biologists were mostly not interested in aging, which is all I needed to know in order to decide that I should work on aging rather than AI, but I didn’t consider that possibility), not in having a triage step per se. I’m quite sure that epistemology is hard, but I’m not interested in what’s epistemologically valid unless there is some practical result for my choices.
OK I see where you're coming from better now.
It’s the same as my attitude to the existence of God: I am agnostic, not because I’ve cogitated a lot and decided that the theist and atheist positions are too close to call, but because I know I’m already doing God’s work for reasons unrelated to my beliefs, hence it makes no difference to my life choices what my beliefs are. I’m perfectly happy to believe that induction can be robustly demonstrated to be epistemologically invalid - in fact, as I said before, I already think it seems to be - but why should I care? - you haven’t told me.
Because misunderstanding how knowledge is created (in science and more generally) blocks off ways of making progress. It makes it harder to learn anything. It slows down biology and every other field. More below.
I’m surprised at your statement about random sampling - I mean, clearly the precision of the fairness will be finite, but equally clearly the precision can be arbitrarily good, so again I don’t see why it bothers you - but again, I also don't see why I should care that I don’t see, because you haven’t given me a practical reason to care, i.e. a reason to suspect that continuing the debate may lead to my coming to different conclusions about what to do or say in the future (about cryonics or anything else).
I don't know how you propose to do arbitrarily good sampling, or anything that isn't terrible. That isn't clear to me at all, nor to several people I asked. I think it's a show-stopper problem (one of many) demonstrating the way you actually think is nothing like your claims.

I don't know how many steps I can skip for this and still be understood. You seem bored with this issue, so let's try several. I think you're assuming you have a fair ordering, and that arbitrarily fair/accurate information occurs early in the ordering. And you decide what's a fair ordering by knowing in advance what answer you want, so the sampling is pointless.
I’ll just answer this specific point quickly:
We're trying to decide what to get for dinner. I propose salmon sushi or tuna sushi. You propose pizza. We get sushi with 67% odds. Is that how it's supposed to work? (Note I only know the odds here because I have a full list of the ideas.)

But wait. I don't care what God's favorite natural number is; that's irrelevant. So there's infinite sushi variants like, "Get salmon sushi, and God's favorite natural number is 5" (vary the number).

Now what? Each idea just turned into infinite variants. Do we now say there are 2*infinity variants for sushi, and 1*infinity for pizza? And get sushi with what odds?
Sorry for over-brevity there. What we do is we put the numbers in some order, and for each number N we double the number of variants for each of sushi and pizza by adding “God’s favourite number is N” and “God’s favourite number is not N” - so the ratio of numbers of variants always stays at 2. I can’t summon myself to care about the difference between countably and uncountably infinite classes, in case that was going to be your next question.
I think you missed some of the main issues here, e.g. that getting sushi with 67% odds is a stupid way to handle that situation. It doesn't deal with explanations or criticism (why should we get which food? does anyone mind or strongly object? stuff like that is important). And it's really really arbitrary, like I could mention two more types of sushi and now it's 80% odds? Why should the odds depend on how many I mention like that? That's a bad way of making decisions. I was trying to find out what you're actually proposing to do that'd be more reasonable.
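
To make the arithmetic concrete, here's a small sketch assuming the rule is simply "each mentioned variant gets equal odds" (my reading of the proposal, not a method anyone endorsed). The probability of sushi then depends only on how many sushi variants happen to get mentioned.

```python
def sushi_odds(sushi_variants_mentioned, pizza_variants_mentioned=1):
    # hypothetical rule: every mentioned variant gets an equal share of the odds
    total = sushi_variants_mentioned + pizza_variants_mentioned
    return sushi_variants_mentioned / total

print(sushi_odds(2))    # 0.667 -- salmon + tuna vs. pizza
print(sushi_odds(4))    # 0.8   -- mention two more sushi types
print(sushi_odds(100))  # 0.990 -- keep enumerating and sushi becomes near-certain
```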

Also sampling in the infinite case is irrelevant here because you knew you wanted a 67% result beforehand (and your way of dealing with infinity here consists of just doing something with it that gets your predetermined answer).

I do think the different classes of infinity matter, because your approach implies they matter. You're the one who wanted numbers of variants to be a major issue. That brings up issues like powersets, like it or not. I think the consequences of fixing your approach to fully resolve that issue are far reaching, e.g. no longer looking at numbers of ideas. And then trying to figure out what to do instead.
More generally, you’re absolutely right that I’m making this up as I go along - I’m figuring out why what I do works. What do I mean by “works”? - I simply mean, I’ve found over the years that I rarely (though certainly not never) make decisions or form opinions that I later revise, and that as far as I can see, that’s not because I’m not open to persuasion or because I move to triage too soon, but because I have a method for forming opinions that really truly is quite good at getting them right, and in particular that it’s a good balance (pretty much as good as it can be) between reliability of the decision and time to make it.
From my perspective, you're describing methods that couldn't work. So whether you were a good thinker or a bad one, you wouldn't be describing what you actually do. This matters to the high-value possibility of critical discussion and improvement of your actual methods.

BTW here is another argument that you don't think the way you claim: What you're claiming is standard stuff, not original. But we agree you think better than most people. So wouldn't you be doing something different than them? But your statements about how you think don't capture the differences.
Take this debate. I’ve given you ample opportunity to come up with reasons why my advocacy for signing up for cryopreservation is mistaken. Potential reasons fall into two classes: data that I didn’t have (or didn’t realise I had) that affects the case, and flaws in my reasoning methods that have resulted in my drawing incorrect conclusions from the data I did have. You’ve been focusing me on the latter, and I’ve given you extended opportunity to make your case, because you’re (a) very smart and articulate and fun to talk to and (b) aligned with someone else I greatly admire. But actually all you’ve ended up doing is being frustrated by the limited amount of time I’m willing to allocate to the debate (even though for someone as busy as me it wasn’t very limited at all). That’s not actually all you’ve done, of course - from my POV, the main thing you’ve done is reinforce my confidence that the way I make decisions works well, by failing to show me a practical case where it doesn’t.
I'm not frustrated. I like you. I'm trying to speak to important issues unemotionally.

If I were to be frustrated, it would not be by you. I talk to a lot of people. I bet you can imagine that most are much more frustrating than you are.

Suppose I were to complain that people don't want to learn to think better, don't want to contribute to philosophy, don't want to learn the philosophy that would let them go be effective in other fields, don't want to stop approximately destroying the minds of approximately all children, etc.

Would I be complaining about you? No, you'd be on the bottom of the list. You're already doing something very important, and doing it well enough to make substantial progress. For the various non-SENS issues, others ought to step up.

Further, I don't know that talking with me will help with SENS progress. On the one hand, bad philosophy has major practical consequences (more below). But on the other hand, if you see things more my way, it will give you less common ground with your donors and colleagues. One fights the war on aging with the army he has, now not later. If the general changes his worldview, but no one else does, that can cause serious problems.

Maybe you should stay away from me. Reason is destabilizing (and seductive), and maybe you – rightly – have higher priorities. While there are large practical benefits available (more below), maybe they shouldn't be your priority. People went to space and built computers while having all sorts of misconceptions. If you think current methods are enough to achieve some specific SENS goals, perhaps you're right, and perhaps it's good for someone to try it that way.

So no I'm not frustrated. I can't damn you, whatever you do. I don't know what you should do. All I can do is offer things on a voluntary basis.

The wrong way of thinking slows progress in fields. Some examples:

The social sciences keep doing inadequately controlled, explanationless, correlation studies because they don't understand the methods of making scientific progress. They're wasting their time and sharing false results.

Quantum physicists are currently strongly resisting the best explanation (many worlds). Then they either try to rationalize very bad explanations (like Copenhagen theory) or give up on explanations (i.e. shut up and calculate). This puts them in a very bad spot to improve physics explanations.

AI researchers don't understand what intelligence is or how knowledge can be created. They don't understand the jump to universality, conjectures and refutations, or the falseness of induction and justificationism. They're trying to solve the wrong problems and the field has been stuck for decades.

Philosophers mostly have terrible ideas and make no progress. And spread those bad ideas to other fields like the three examples above.

Feynman offers some examples:

http://neurotheory.columbia.edu/~ken/cargo_cult.html
I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person--to do it under condition X to see if she could also get result A, and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.

She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happened.
Repeating experiments is wasting time? What a stupid field that isn't going to figure anything out (and indeed it hasn't). And Feynman goes on to discuss how someone figured out how to properly control rat maze running by putting in sand so they can't hear their footsteps – and that got ignored and everyone just kept doing inadequately controlled rat studies.


What about medicine or biology? I don't know the field very well but I've seen articles saying things like:

http://articles.mercola.com/sites/articles/archive/2012/07/12/drug-companies-on-scientific-fraud.aspx
Former drug company researcher Glenn Begley looked at 53 papers in the world's top journals, and found that he and a team of scientists could NOT replicate 47 of the 53 published studies—all of which were considered important and valuable for the future of cancer treatments!
Stuff like this worries me that perhaps current methods are not good enough for SENS to work. But somehow despite problems like this, tons of medicine does work. Maybe it's OK, somehow. More on this below.

http://www.ahrp.org/cms/content/view/846/94/
Many journals don’t even have retraction policies, and the ones that do publish critical notices of retraction long after the original paper appeared—without providing explicit information as to why they are being retracted.
The article has various unpleasant stats about retractions.
It is worth noting that the results of *most negative clinical trials are never published*—neither are they disclosed anywhere, except in sponsors’ confidential files and FDA marketing submissions.
95% confidence is useless if there were 19 unpublished failures. Even one unpublished negative result matters a lot. Not publishing negative results is a huge problem.
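
To illustrate why, here's a small simulation (an illustrative sketch, not a claim about any particular study): if a treatment does nothing and trials are run at the conventional 5% significance level, about one in twenty will look "significant" by luck – so a single published success alongside 19 unpublished failures is roughly what chance alone predicts.

```python
# A minimal simulation of the publication-bias point: if a treatment does
# nothing, running enough independent trials at the 5% significance level
# still produces "significant" results -- and if only those get published,
# the literature looks positive.

import random

random.seed(0)

TRIALS = 20   # e.g. 19 unpublished failures plus 1 published "success"
ALPHA = 0.05  # the conventional 95% confidence threshold

def run_null_trial():
    # Under the null hypothesis the p-value is uniform on [0, 1],
    # so each trial has a 5% chance of looking "significant" by luck.
    return random.random() < ALPHA

significant = sum(run_null_trial() for _ in range(TRIALS))
print(f"{significant} of {TRIALS} null trials came out 'significant'")
# The expected count is 1 -- about one false positive per 20 trials,
# even though the treatment has no effect at all.
```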

http://www.retractionwatch.com/2014/11/03/shigeaki-kato-up-to-28-retractions-with-three-papers-cited-nearly-700-times/
Former University of Tokyo researcher Shigeaki Kato has notched his 26th, 27th, and 28th retractions, all in Nature Cell Biology. The three papers have been cited a total of 677 times.
Note how much work is built partly on top of falsehoods. Lots more retraction info on that blog; it's not pretty.

Note that all of these examples are relevant to fighting aging, not just the medical stuff.

You never know when a physics breakthrough will have an implication for chemistry which has an implication for biology.

You never know when progress in AI could lead to uploading people into computers and making backup copies.

Better social sciences or psychology work could have led to better ways to handle the pro-aging trance or better ways to deal with people to get large donations for SENS.

So many academic papers are so bad. I've checked many myself and found huge problems with a majority of them. And there's the other problems I talked about above. And the philosophy errors I claim matter a lot.

So, how does progress happen despite all this?

How come you're making progress while misunderstanding thinking methods? Does it matter?

Here's my perspective.

Humans are much better, more awesome, powerful and rational things than commonly thought. Fallible Gods. Really spectacular. And this is why humans can still be effective despite monumental folly. Humans are so effective that even e.g. losing 99% of their effectiveness to folly (on average, with many people being counterproductive) leaves them able to make progress and even create modern civilization.

And it's a testament to the human spirit. So many people suffer immensely, grit their teeth, and go on living – and even producing – anyway. Others twist themselves up to lie to themselves that they aren't suffering while somehow not knowing they're doing this, which is hugely destructive to their minds, and yet they go on with life too.

I think it's like Ayn Rand wrote:
"Don't be astonished, Miss Taggart," said Dr. Akston, smiling, "and don't make the mistake of thinking that these three pupils of mine are some sort of superhuman creatures. They're something much greater and more astounding than that: they're normal men—a thing the world has never seen—and their feat is that they managed to survive as such. It does take an exceptional mind and a still more exceptional integrity to remain untouched by the brain-destroying influences of the world's doctrines, the accumulated evil of centuries—to remain human, since the human is the rational."
John Galt is a normal man. That is what's possible. You fall way short of him. Philosophy misconceptions and related issues drop your effectiveness by a large factor, but you lack examples of people doing better so the problem is invisible to you. Most people are considerably worse off than you.

The world doesn't have to be the way it is. So much better is possible. BoI says the same thing in several ways, some subtle, I don't know if you would have noticed.

People do so much stuff wrong, drop their effectiveness massively, and then have low expectations about what humans can do.

It's important to understand that if you have problems, even huge ones, you won't automatically notice them, and you shouldn't presume you will. And actually you should expect to have all sorts of problems, some huge, some unnoticed – you're fallible and only at the beginning of infinity (of infinite progress). This makes it always important to work on philosophy topics like how problems are found and solved. It should be a routine part of every life to work on that kind of thing, because problems are part of life.

Here's a specific example. The Mitochondrial Free Radical Theory of Aging, by Aubrey de Grey, p 85:
In gerontology, as in any field of science, the development of a hypothesis involves a perpetual oscillation between creative and analytical thinking. Advances of understanding are rarely achieved by purely deductive analysis of existing data; instead, scientists formulate tentative and incomplete generalisations of that data, which allow them to identify which questions are useful to ask by further observation or experiment. ...

The above is, in fact, so universally accepted as a cornerstone of the scientific method that some may wonder why I have chosen to belabor it. I have three reasons.
This is all wrong. Tons of errors despite being short and – as you say – widely accepted.

Does it matter? Well, you wouldn't have written it if you didn't think it mattered.

Since your current concern is whether my claims matter, I'm going to focus on why they do, rather than arguing why they are true. So let's just assume I'm right about everything for a minute. What are the consequences of that?

One mistake in the passage is the deduction/data false dichotomy for approaches. This has big practical consequences because people look for progress in two places, both wrong. That they figure anything out anyway is a testament – as above – to how amazing humans are.

It also speaks to how much people's actual methods differ from their stated methods. People routinely do things like say they are doing induction, as you imply in the passage – even though induction is impossible and has never been used to figure anything out a single time in human history. So what you must actually do is think in a different way, get an answer, and then credit induction for the answer.

Is this harmless? No! Lots of times they try to do induction or some other wrong method and end up with no answer. There are so many times they didn't figure anything out, but could have. People get stuck on problems all the time. Not consciously or explicitly understanding how to think is a big aspect of these failures.

Knowing the right philosophy for how to think allows one to better compare what one is doing to the right way. Everyone deviates some and there's room for improvement. Most people deviate a lot, so there's tons of room for improvement.

And understanding what you're doing exposes it to criticism better. The more thinking gets done in a hidden and misunderstood way, the more it's shielded from criticism.

Understanding methods correctly also allows a much better opportunity to come up with potentially better methods and try different things out. You could improve the state of the art. Or if someone else makes a breakthrough, then if you understand what's going on then you would be in a much better position to use his innovation.

You have an idea about a pro-aging trance. It's a sort of philosophical perspective on society, far outside your scientific expertise. How are you to know if it's right? By doing all the philosophy yourself? That'd be time consuming, and you've acknowledged philosophy is a serious and substantive field and you don't have as much expertise to judge this kind of question as you could. Could you outsource the issue? Consult an expert? That's tough. How do you know who really is a philosophy expert, and who isn't, without learning the whole field yourself? Will you take Harvard or Cambridge's word for it? I really wouldn't recommend that. Many prestigious philosophers are terrible.

What if you asked me? I think whether I said you're right or wrong about the pro-aging trance, either way, you wouldn't take my word for it. That's fine. This kind of thing is really hard to outsource and trust an answer without understanding yourself. Whatever I said, I could give some abbreviated explanations and it's possible you'd understand, but also quite possible you wouldn't understand my abbreviated explanations and we'd have to discuss underlying issues like epistemology details.

And the issue isn't just whether your pro-aging trance idea is right or not. Maybe it's a pretty good start but could be improved using e.g. an understanding of anti-rational memes.

And if it's right, what should be done about it? Maybe if you read "How Does One Lead a Rational Life in an Irrational Society?" by Ayn Rand, you'd understand that better. (Though that particular essay is hard to understand for most people. It clashes with lots of their background knowledge. To understand it, they might need to study other Rand stuff, have discussions, etc. But then when one does understand all that stuff, it matters, including in many practical ways.)

I think millions of people won't shift mindsets as abruptly as you hope. One reason is anti-life philosophies, which you don't address – and which I don't think you know about, in the sense I mean them.

One aspect of this is that lots of people don't like their lives. They aren't happy, they aren't having a good time. Most of them won't admit this and lie about it. And it's not like they only dislike their lives, they like some parts too, it's mixed. Anyway they don't want to admit this to themselves (or others). Aging gives them an excuse, a way out, without having to face that they don't like their lives (and also without suicide, which is taboo, and it's hard for people to admit they'd rather be dead).

There's other stuff too, which could be explained much faster if you had certain philosophical background knowledge I could reference. The point for now is there's a bunch of philosophical issues here and getting them right matters to SENS. You basically say people are rationalizing not having effective anti-aging technology, and that does happen some, but there's other things going on too. Your plan as you present it is focused on addressing the doubts that anti-aging technology is ready, but not other obstacles.

Does it matter if you're right about the pro-aging trance? Well, you think so, or you wouldn't bring it up. One reason it matters is because if the pro-aging trance doesn't end, it could prevent large-scale funding and effort from materializing. And some other things besides doubts about SENS effectiveness may also need to be addressed.

For example, there's bad parenting. This does major harm to the minds of children, leaving them less able to want and enjoy life, less able to think rationally, and so on. Dealing with these problems – possibly by Taking Children Seriously, or something focused on helping adults, or a different way – may be important to SENS getting widespread acceptance and funding. It's also important to the quality of scientists available to keep working on SENS, beyond the initial stages, as each new problem at later ages is found.

Part of what the pro-aging trance idea is telling people is there's this one major issue which people are stuck on and have a coping strategy for. And you even present this coping as like a legitimate reasonable way to deal with a tough situation. This underplays how irrational people are, which is encouraging to donors by being optimistic. As mentioned earlier, sometimes people succeed at stuff, somehow, despite big problems, so SENS stuff could conceivably work anyway. But it may be that some of the general irrationality issues with society are going to really get in the way of SENS and need more addressing.

(And people learning epistemology is a big help in dealing with those. If people understand better how they are thinking, and how they should think, that's a big step towards improving their thinking.)

Ending Aging by Aubrey de Grey:
The most immediately obvious actions would be to lobby for more funding for rejuvenation research, and for the crucial lifting of restrictions on federal funding to embryonic stem cell research in the United States, by writing letters to your political representatives, demanding change.
The very questionable wisdom of government science is a philosophical issue with practical consequences like whether people should actually do this lobbying. Perhaps it'd help more to lobby for lower taxes and for government+science separation instead. Or maybe it'd be better to create a high quality Objectivist forum which can teach many people about the virtues of life, of science, of separating the government from science, and more.

This is an example of a philosophical issue important to SENS. Regardless of whether you're right in this case, getting philosophical issues like this correct at a higher rate is valuable to SENS.
I’ve had a fairly difficult time convincing my colleagues in biogerontology of the feasibility of the various SENS components, but in general I’ve been successful once I’ve been given enough time to go through the details. When it comes to LEV, on the other hand, the reception to my proposals can best be described as blank incomprehension. This is not too surprising, in hindsight, because the LEV concept is even further distant from the sort of scientific thinking that my colleagues normally do than my other ideas are: it’s not only an area of science that’s distant from mainstream gerontology, it’s not even science at all in the strict sense
Here you're trying to use philosophical skills to advance SENS. You're trying to do things like understand why people are being irrational and how to deal with it. Every bit of philosophical skill could help you do this better. Elliotism contains valuable ideas addressing this kind of problem.

OK, so, big picture. The basic thing is if you know the correct thinking methods, instead of having big misconceptions about how you think, you can think better. This has absolutely huge practical consequences, like getting more right answers to SENS issues. I've gone through some real life examples. Here are some simplified explanations to try to get across how crucially important epistemology is.

Say you're working on some SENS issue and the right thinking method in that situation involves trying five different things to get an answer. You try three of them. Since you don't know the list of things to do, you don't realize you missed two. So, if the answer is equally likely to come from any of the five, 40% of the time you get stuck on the issue instead of solving it.

Later you come up with a bad idea and think it over and look for flaws. You find two but don't recognize them as flaws due to philosophy misconceptions. You miss another flaw because you don't try a flaw-finding method you could have. Even if you knew that method, you still might skip it because you don't understand how thinking works, how you're thinking about an issue, and when to use that method.

Meanwhile, whenever you think about stuff, you spend 50% of your time on induction, justificationism, and other dead ends. Only half your thinking time is productive. That could easily be the case. The ratio could easily be worse than that.

And you have no experiences which contradict these possibilities. How would you know what it's like to think way more effectively, or that it's possible, from your past experiences? That you've figured out some stuff tells you nothing about what kind of efficiency rate you're thinking at. Doing better than some other people also does not tell you the efficiency rate.

These problems are the kinds of things which routinely happen to people. They can easily happen without being noticed. Or if some of the negative consequences are noticed, they can be attributed to the wrong thing. That's common. Like if a person believes he does thinking by some series of false and irrelevant steps, he'll try to figure out which of those steps has the problem and try some adjustments to those steps. Whereas if he knew how he actually thought, he'd have a much better opportunity to find and fix his actual problems.

You may find these things hard to accept. The point is, they are the situation if I'm right about philosophy. So it does matter.
