mac has global autocomplete!!! and more about cocoa hotkeys

omg, opt-esc, global hotkey for cocoa text fields to try to complete the current word, try it!

there's also tons of other stuff like ctrl-k and ctrl-y (like cut/paste, but with a separate buffer, and cuts to end of line instead of using a selection). or ctrl-a and ctrl-e (move cursor to start/end of paragraph). and you can also make your own hotkeys including ones to do multiple things, like i made one to duplicate the current line by piecing together several commands. they also allow hotkey sequences as triggers.
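Custom hotkeys like that duplicate-line one are configured through a key bindings dictionary file. As a sketch (the key combo and selector chain below are my own guess at one way to do it, not necessarily the exact binding described above), you can put something like this in ~/Library/KeyBindings/DefaultKeyBinding.dict and it applies to all Cocoa text fields after apps restart:

```
/* ~/Library/KeyBindings/DefaultKeyBinding.dict */
{
    /* ctrl-shift-d: duplicate the current line by chaining standard
       Cocoa text actions. ^ means control, $ means shift. */
    "^$d" = (selectLine:, copy:, moveToEndOfLine:, insertNewline:, paste:);
}
```

The parenthesized list is what makes multi-step hotkeys possible: each selector runs in order, just like the "piecing together several commands" described above.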

i also just found out about the esc based shortcuts in terminal (press esc, let go, then hit key). esc-d, esc-delete, esc-f, esc-b :-D (those are forward-delete-word, backward-delete-word, forward word, and backward word)

apple does a great job with details like this, when they try. i hope OS X gets more love soon (though i accept that iphone os is more important to their business atm)


# in terminal
bind -p    # list the current readline key bindings
open /System/Library/Frameworks/AppKit.framework/Resources/StandardKeyBinding.dict    # the system-wide Cocoa key bindings

Elliot Temple | Permalink | Comments (0)

Whirlwind Tour of Justificationism

From an email thread about free will:

Once upon a time (624 BC) Thales was born. Thus began philosophy.

Thales invented criticism. Instead of telling his followers what to believe, he made suggestions, and asked that they think for themselves and form their own ideas.

A little later, Xenophanes invented fallibilism and the idea of seeking the truth to improve our knowledge without finding the final truth. He also identified and criticized parochialism.

In the tradition of Thales and Xenophanes came Socrates, the man who was wise for admitting his vast ignorance (among other things).

But only two generations after Socrates, philosophy was changed dramatically by Aristotle. Aristotle invented justificationism which has been the dominant school of philosophy since, and which opposes the critical, fallibilist philosophies which preceded him (and which were revived by Popper and Deutsch).

Aristotle's way of thinking had some major strands such as:

1) he wanted episteme -- objectively true knowledge.
2) he wanted to guarantee that he really had episteme -- he wanted justified, true knowledge. he rejected doxa (conjecture).
3) he thought he had episteme -- he was "the man who knows"
4) he thought he had justification
5) in relation to this, he invented induction as a method of justifying knowledge

Thus Aristotle rejected the fallibilist, uncertain ethos of striving to improve that preceded him, and replaced it with an authoritarian approach seeking guarantees and to establish existing knowledge against doubt.

Induction, like all other attempts, was unable to justify knowledge. Nothing can guarantee that some idea is episteme, so all attempts to do it failed.

Much later, Bacon attached induction to science and empiricism. And some people like Hume noticed it didn't work. But they didn't know what to do without it because they were still focused on the same problem situation Aristotle had laid out: that we should justify our knowledge and find guarantees. So without induction they still had to figure out how to do that, and salvaging induction seemed easier than starting over. Hence the persistent interest in reviving induction.

What Popper did is go back to the old pre-Aristotle philosophical tradition which favors criticism and fallibilism, and which has no need for justification. Popper accepted that doxa (conjectures) have value, as Xenophanes had, and he explained how we can improve our knowledge without justification. He also refuted a bunch of justificationist ideas.

Then David Deutsch wrote "A Conversation About Justification" in _The Fabric of Reality_.

So how does that relate to free will? The basic argument against free will goes like this, "There is no way to justify free will, or guarantee it exists, therefore it's nonsense." The primary argument against free will is nothing but a demand for justification in the Aristotelian style.

As an example, one might say free will is nothing but a conjecture without any empirical evidence. To translate, that means free will is merely doxa, and hasn't got any empirical justification. This is essentially true, but not actually a problem.

Arguments against free will take many guises, but justificationist thinking is the basic theme giving them appeal.

Elliot Temple | Permalink | Comments (12)


I made a new philosophy website.

It won't look right in Internet Explorer.

Elliot Temple | Permalink | Comments (6)

Programming and Epistemology

Organizing code is an instance of organizing knowledge. Concepts like being clear, and putting things in sections, apply to programming and philosophy both.

DRY and YAGNI aren't just programming principles. They also apply to thinking and knowledge in general. It's better to recognize a general case than to think of several cases in separate, repetitive ways. And it's better to come up with solutions for actual problems and not create a bunch of over-engineered theory that may never have a purpose.
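As a tiny sketch of that point (my own example, not from the post), compare handling each case separately with recognizing the general case once:

```python
# Repetitive: a separate, near-identical function per case (violates DRY).
def double(x):
    return x * 2

def triple(x):
    return x * 3

# General: recognize the common pattern once and cover every case.
def times(n, x):
    return n * x

print(times(2, 10))  # 20, same as double(10)
print(times(3, 10))  # 30, same as triple(10)
```

The YAGNI half is the flip side: don't generalize to `times` until a second case actually shows up.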

The programming methodology of starting with the minimum thing that will work, and then making lots of little improvements to it until it's awesome -- based in part on actual feedback and experience with the early versions -- is also a good general method of thinking connected to gradualist, evolutionary epistemology. It's also how, say, political change should be done: don't design a utopia and then try to implement it (like the French Revolution); instead look for smaller steps so it's possible to change course midway once you learn more, so you get some immediate benefit, and to reduce risk.

Programmers sometimes write articles about how evil rewrites are, and how they lead to vaporware. Nothing is ever perfect, but existing products have a lot of useful work put into them, so don't start over (you'll inevitably run into new, unforeseen problems) but instead try to improve what you have. Similarly, philosophically, there are three broad schools of thought:

1) the conservative approach where you try to prevent any changes.

2) the liberal approach where you try to improve what you have.

3) the radical approach, where you say existing ideas/knowledge/traditions are broken and worthless, and should be tossed out and recreated from scratch.

The liberal, non-revolutionary approach is the right one not just for code rewrites but also in philosophy in general (and in politics).

Consider two black boxes which take input and give output according to some unknown code inside. You try them out, and both boxes give identical output for all possible inputs. You wonder: are the boxes identical? Are they the same, for all practical intents and purposes? Must they even be similar?

Programmers, although they don't usually think about it this way, already know the answer. Code can be messy, disorganized, and unreadable, or not. Code can have helpful comments, or not. One can spend a day refactoring or deleting code, and make sure all the tests pass, so it does exactly the same thing as before, but now it's better. Some code can be reused in other projects, and some isn't set up for that. Some code has tests, and some doesn't. One box could be written in C, and another in lisp.
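A minimal sketch of the two boxes (my own illustration, not from the post): both functions below give the same output for every input, but their internal structure differs sharply.

```python
def sum_to_clear(n):
    """Readable version: clear name, idiomatic, easy to change later."""
    return sum(range(1, n + 1))

def sum_to_messy(n):
    """Same denotation, worse structure: manual loop, cryptic style."""
    a = 0
    i = 1
    while i <= n:
        a = a + i
        i = i + 1
    return a

# As black boxes they are indistinguishable...
assert all(sum_to_clear(k) == sum_to_messy(k) for k in range(200))
# ...but only one is pleasant to extend, test, or fix.
```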

None of these things matter if you only treat code as a black box and just want to use it. But if you ever have to change the code, like adding new features, doing maintenance or doing bug fixes, then all these differences which don't affect the code's output are important.

I call what the code actually does its "denotation" and the other aspects its "structure", and I call this field structural epistemology. Programming is the best example of where it comes up, but it also has more philosophical relevance. One interesting question is if/how/why evolution creates good structure in genetic code (I think it does, but I'm not so clear on what selection pressure caused it). Another example is that factories have knowledge structure issues: you can have two factories both making toys, with the same daily output, but one is designed so it's easier to convert it to a car factory later.

Elliot Temple | Permalink | Comments (7)

Mises on Force and Persuasion

Liberalism in the Classical Tradition by Ludwig von Mises, p 51
Repression by brute force is always a confession of the inability to make use of the better weapons of the intellect
This is similar to Godwin:
If he who employs coercion against me could mould me to his purposes by argument, no doubt he would. He pretends to punish me because his argument is strong; but he really punishes me because his argument is weak.

Elliot Temple | Permalink | Comments (0)

Milton Friedman was a Statist

Now you know.


In the interview, he expresses disagreement with Ayn Rand and her view that the State is bad because it uses force against its citizens. He does not provide any argument that she's mistaken, or that his view is better.

Milton also, for example, advocated a negative income tax. That means if you contribute a sufficiently small amount to the economy then the State takes money by force from other citizens and gives it to you.

The purpose of this post is simply to inform people about how a libertarian icon is a blatant Statist. (And, by the way, he's not the only one.)

Elliot Temple | Permalink | Comments (3)

Beyond Criticism?

The Retreat To Commitment, by William Warren Bartley III, p 123:
There may, of course, be other nonlogical considerations which lead one to grant that it would be pointless to hold some particular view as being open to criticism. It would, for instance, be a bit silly for me to maintain that I held some statements that I might make—e.g., "I am over two years old"—open to criticism and revision.

Yet the fact that some statements are in some sense like this "beyond criticism" is irrelevant to our problems of relativism, fideism, and scepticism.
The claim that some statements are beyond criticism is anti-fallibilist and anti-Popperian.

It is not at all silly to maintain that the example statement is open to criticism. It's essential. Not doing so would be deeply irrational. We can make mistakes, and denying that has consequences, e.g. we'll wonder: how do we know which things we can't be mistaken about? And that question begs for an authoritarian, as well as false, answer.

You may be thinking, "Yes, Elliot, but you are over two years old, and we both know it, and you can't think of a single way that might be false." But I can.

For example, my understanding of time could contain a mistake. Is that a ridiculous possibility? It is not. Most people today have large mistakes in their understanding of time (and of space)! Einstein and other physicists discovered that time and space are connected and it's weird and doesn't follow common sense. For example, the common sense concept of two things happening simultaneously at different places is a mistake: what appears simultaneous actually depends on where you watch from. If some common sense notions of time can be mistaken, why laugh off the possibility that our way of keeping track of how much time has passed contains a mistake?

Another issue is when you start counting. At conception? Most people would say at birth. But why birth? Maybe we should start counting from the time Bartley was a person. That may have been before or after birth. According to many people, brain development doesn't finish until age 20 or so. In that case, a 21 year old might only have been a full person for one year.

Of course there are plenty of other ways the statement could be mistaken. We must keep an open mind to them so that when someone has a new, counter-intuitive idea we don't just laugh at him but listen. Sure the guy might be a crank, but if we ignore all such ideas that will include the good ones.

Elliot Temple | Permalink | Comments (38)

Another Problem Related To Critical Preferences

X is a good trait. A has more of X than B does. Therefore A is better than B.

That is a non sequitur.

You can add, "All other things being equal" and it's still a non sequitur.

X being a good or desirable trait does not mean all things with more X are better. There being all sorts of reasons X is amazing does not mean X is amazing in all contexts and in relation to all problems.

You'd need to say X is universally good, and all other things are equal. In other words, you're saying the only difference between A and B is amount of something that is always good. With the premises that strong, then the claim works. However, it's now highly unrealistic.

It's hard to find things that are universally good to have more of. Any medicine or food will kill you if you overdose enough. Too much money would crush us all, or could get you mugged. An iPhone is amazing, but an iPhone that's found by a hostage taker who previously asked for everyone's phones can get you killed.

You can try claims like "more virtue is universally good". That is true enough, but that's because the word "virtue" is itself already context sensitive. It's also basically a tautology and immune to criticism, because whatever is good to do is what's virtuous to do. And it's controversial how to act virtuously or judge virtue. If you try to get specific like, "helping the needy is universally good," then you run into the problem that it's false. For example, if Obama spent too much time working in soup kitchens, that wouldn't leave him enough time to run the country well, so it'd turn out badly.

You could try "more error correction is a universal good thing" but that's false too. Some things are good enough, and more error correction would be an inefficient use of effort.

You might try to rescue things by saying, "X is good in some contexts, and this is one of those contexts." Then you'll need to give a fallible argument for that. That is an improvement on the original approach.

Now for the other premise, "all other things being equal." They never are. Life is complicated and there are almost always dozens of relevant factors. Even if they were equal, we wouldn't know it, because we can never observe all other things to check for their equality. We could guess they are equal, which would hold if we didn't miss anything. But the premise "all other things being equal, unless I think of some possible relevant factor" isn't so impressive. You might as well just say directly, "A is better than B, unless I'm mistaken."

Elliot Temple | Permalink | Comment (1)

Examples of Accepting Contradicting Ideas

People commonly say things like, "That's a good point, but alone it's insufficient for me to change my position."

In a debate club meeting, or a Presidential debate, most of the non-partisan audience usually comes away thinking both sides made some good points.

Debaters think an idea can suffer a few setbacks, but still be a good idea. They aren't after perfection but just trying to get the better of their debating opponent.

These are examples of the same mistake underlying critical preferences: simultaneously accepting two conflicting ideas (such as a position, and a criticism of that position).

PS Notice that "simultaneously accepting two conflicting ideas (and making a decision about the issue)" would be a passable definition of coercion for TCS to use. This highlights the connection between coercion and epistemology. The concept of coercion in TCS is about when rational processes in a mind break down. The TCS theory of coercion tries to answer questions like: What happens then? (Suffering; a big mess.) What causes the breakdown to happen? (Different parts of the mind in conflict and the failure to resolve this by creating one single idea of how to proceed.) What's a description of what the mind looks like when it happens? (It contains conflicting, active theories.)

Elliot Temple | Permalink | Comments (0)

Weak Theory Example

T1 is a testable, scientific theory to solve problem P. T2 is a significantly less testable theory to solve P. In Popper's view, barring some important other consideration, if both T1 and T2 are non-refuted then we must prefer T1 and say it's better.

But T1 might not be better. You could easily choose T1 so it's false and T2 so it's true as best we know today, without contradicting the situation description.

You can assert that T1 is better, as far as we know, given the current state of knowledge. But is it? Where is the argument that it is? This looks to me like both explanationless philosophy and positive philosophy (T1 is supported by its testability, and T2 isn't). T2 is losing out without any criticism of it.

What we should do is not say T1 is better, but say: T2 needs to be testable to be a viable theory because X. X can be a generic reason such as scientific theories should be testable and P is a scientific problem. Once we say this, we are now making a critical argument: we're criticizing T2. This offers T2 the chance to defend itself, which never came up in the original analysis.

It's now up to T2 to offer a reason that it doesn't need to be more testable, or actually is more testable. T2 can criticize the criticism of it, or be refuted. (BTW if T2 didn't already contain this reason, and it has to be invented, then T2 is refuted and T2b is now standing, where T2b consists of the content of T2 plus the new content that criticizes this criticism of T2.)

Then if the testability criticism is criticized, it can either be refuted or be amended to include a criticism of that criticism. And so on. This approach takes seriously the idea that we only learn from criticism. That makes sense because criticisms are error-correcting statements: they explain a flaw in something, which helps us avoid a mistake.

Elliot Temple | Permalink | Comment (1)

Using False Theories

C&R by Popper p 306
we are, in many cases, quite well served by theories which are known to be false.
This is a mistake! Consider a theory of motion, say, which we'll call T. We know T is false, but it's also a good approximation to the truth in common and well defined circumstances.

We do not use theory T. We use theory U which consists of what I said in the first paragraph: that theory T is an approximation, and useful in certain circumstances. Theory U contains in it theory T, but also some other ideas including the refutation of T. Theory U is a way of approximating motion in certain circumstances, it's useful, and it's not known to be false. Theory U is just plain better.

If we can't create a true variant of T or any other false theory, like we did with U, then T is not actually useful at all. Refuted theories can only be useful via non-refuted theories that make reference to them, not on their own.

Elliot Temple | Permalink | Comments (2)

Critical Preferences and Strong Arguments

The following is intended as a statement of my position but does not attempt to argue for it in detail.

The concept of a critical preference makes the common sense assumption that there are strong and weak arguments, or in other words that arguments or ideas can be evaluated on a continuum based on their merit.

The merit of an idea is often metaphorically stated in terms of its weight (e.g. Popper wrote "weighty though inconclusive arguments", Objective Knowledge p 41). It's also commonly stated in terms of probability or likeliness. And it's also stated in terms of ranking or scoring ideas to see which is best.

Ideas do have merit, and they can be closer or further from the truth (more or less truthlike, if you prefer). However, we never know how much merit an idea has. We can't evaluate ideas that way.

(BTW suppose we could evaluate how much merit ideas have. A second assumption is that doing so would be useful and that it would make sense to prefer the idea with more merit. That should not be assumed uncritically.)

Popper did not give detailed arguments for the idea that we can or should evaluate arguments by their strength or amount of merit. That's why I call it an assumption. I think he uncritically took it for granted without discussion, as have most (all?) other philosophers.

In the strength based approach, an idea could score a 1, or a 2, or a 20. In Popper's view, the numbers don't have an absolute meaning; they can only be compared with the scores of other ideas. Or in other words, we never know how close to the truth we've come on an absolute scale. In this approach, an idea can have infinitely many different evaluations.

In my approach, an idea can only have three possible evaluations. An idea can be unproblematic (non-refuted), problematic (refuted), or we're unsure. Ignoring the possibility of not taking a stance, which isn't very important, an idea gets a boolean evaluation: it's either OK or not OK.

If we see a problem with an idea, then it's no good, it's refuted. We should never accept, or act on, ideas we know are flawed. Or in other words, if we know about an error it's irrational to continue with the error anyway.

On the other hand, if we have two ideas and we can see no problem with either, then we can have no reason to prefer one over the other. This way of assessing ideas does not allow for the middle ground of "weighty though inconclusive arguments".

If an idea is flawed, it may have a close variant which is unproblematic. Whenever we refute an idea, we should look for variants of the idea which have not been refuted. There may be good parts which can be rescued.

My approach is in significant agreement with Popper's epistemology because it does not allow for the possibility of ideas having support. Some people would say we can differentiate non-refuted ideas by how much support each has, but I follow Popper in denying that.

Popper's alternative to support is criticism. I accept the critical approach. Where I differ is in not allowing an idea to be both criticized and non-refuted. I don't think it makes sense to simultaneously accept a criticism of an idea, and accept the idea. We should make up our mind (keeping open the possibility of changing our mind at any time), or say we aren't sure.

As I see it, a criticism either points out a flaw in an idea or it doesn't. And we either have a criticism of the criticism, or we don't. A criticism can't contradict a theory and be itself non-refuted, but also fail to be decisive. On what grounds would it fail to be decisive, given we see no flaw in it?

Let's now consider the situation where we have conflicting non-refuted ideas, which is the problem that critical preferences try to solve. How should we approach such a conflict? We can make progress by criticizing ideas. But it may take us a while to think of a criticism, and we may need to carry on with life in the meantime. In that case, the critical preferences approach attempts to compare the non-refuted ideas, evaluate their merit, and act on the best one.

My approach to solving this problem is to declare the conflict (temporarily) undecided (pending a new idea or criticism) and then to ask the question, "Given the situation, including that conflict being undecided, what should be done?" Answering this new question does not depend on resolving the conflict, so it gets us unstuck.

When approaching this new question we may get stuck again on some other conflict of ideas. Being stuck is always temporary, but temporary can be a long time, so again we'll need to do something about it. What we can do is repeat the same method as before: declare that conflict undecided and consider what to do given that the undecided conflicts are undecided.

A special case of this method is discussed here. It discusses avoiding coercion. Coercion is an active conflict between ideas within one mind with relevance to a choice being made now. But the method can be applied in the general case of any conflict between ideas.

My approach accepts what we do not know, and seeks a good explanation of how to proceed given our situation. It is always possible to find such an explanation. It may sound difficult, but actually you already do it dozens of times per day without realizing it. It's just like how people must use conjectures and refutations to understand each other in English conversations (and in all their thinking): when they first hear that idea it sounds bizarre, yet they already do it quickly, reliably, and without realizing what they are doing.

Elliot Temple | Permalink | Comments (13)