Objectivists accuse Popperians of being skeptics. Popperians accuse Objectivists of being infallibilists. Actually, both philosophies are valuable and largely compatible. I present here some integrating ideas and then a mistake that both philosophies made.
Knowledge is certain, absolute, contextual, conclusive and progressive. The standard of knowledge is conclusiveness not infallibility, perfection or omniscience.
Certain means we should act on it instead of hesitating. We should follow its implications and use it, rather than sitting around doubting, wondering, scared it might be wrong. Certain also means that it is knowledge, as opposed to non-knowledge; it denies skepticism.
Absolute means no contradictions, compromises or exceptions are allowed.
Contextual means that knowledge must be considered in context. A good idea in one context may not be a good idea when transplanted into another context. No knowledge could hold up against arbitrary context switches and context dropping.
Further, knowledge is problem oriented. Knowledge needs some problem(s) or question(s) for context, which it addresses or solves. Knowledge has to be knowledge about something, with some purpose. This implies: if you have an answer to a question, and then in the future you learn more, the old answer still answers the old question. It's still knowledge in its original, intended context.
Consider blood types. People wanted to know which blood transfusions were safe (among other questions) and they created some knowledge of A, B, AB and O blood types. Later they found out more. Actually there is A+, A-, B+, B-, AB+, AB-, O+ and O-. It was proper to act on the earlier knowledge in its context. It would not be proper to act on it today; now we know that some B type blood is incompatible with some other B type blood. Today's superior knowledge of blood types is also contextual. Maybe there will be a new medical breakthrough next year. But it's still knowledge in today's context, and it's proper to act on it.
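To make the two contexts concrete, here is a small sketch (purely illustrative, not medical guidance) contrasting the earlier ABO-only compatibility rule with the refined rule that also checks the Rh factor. The function names and the exact rule encoding are my own for illustration:

```python
# Recipient ABO type -> acceptable donor ABO types (standard red-cell rules).
ABO_COMPATIBLE = {
    "O":  {"O"},
    "A":  {"A", "O"},
    "B":  {"B", "O"},
    "AB": {"A", "B", "AB", "O"},
}

def safe_old(donor_abo, recipient_abo):
    """The earlier model: ABO type alone decides compatibility."""
    return donor_abo in ABO_COMPATIBLE[recipient_abo]

def safe_new(donor_abo, donor_rh, recipient_abo, recipient_rh):
    """The refined model: ABO must be compatible as before, and an
    Rh-negative recipient should not receive Rh-positive blood."""
    if donor_abo not in ABO_COMPATIBLE[recipient_abo]:
        return False
    return not (recipient_rh == "-" and donor_rh == "+")

# The old model approves any B -> B transfusion; the refined model
# rejects B+ blood for a B- recipient.
print(safe_old("B", "B"))            # True
print(safe_new("B", "+", "B", "-"))  # False
```

The point of the sketch is that `safe_old` was the right rule to act on in its context; `safe_new` supersedes it in ours, and may itself be refined further.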
One thing to learn here is that a false idea can be knowledge. The idea that all B type blood is compatible is contextual knowledge. It was always false, as a matter of fact, and the mistake got some people killed. Yet it was still knowledge. How can that be?
Perfection is not the standard of knowledge. And not all false ideas are equally good. What matters is that the early idea about blood types had value: it contained useful information, it helped make many correct decisions, and no better idea was available at the time. That value never goes away, even when we learn about a mistake. That original value is still knowledge, considered contextually, even though the idea as a whole is now known to be false.
Conclusive means the current context only allows for one rational conclusion. This conclusion is not infallible, but it's the only reasonable option available. All the alternative ideas have known flaws; they are refuted. There's only one idea left which is not refuted, which could be true, is true as far as we know (no known flaws), and which we should therefore accept. And that is knowledge.
None of this contradicts the progressive character of knowledge. Our knowledge is not frozen and final. We can learn more and better – without limit. We can keep identifying and correcting errors in our ideas and thereby achieve better and better knowledge. (One way knowledge can be better is that it is correct in more contexts and successfully addresses more problems and questions.)
Peikoff says that certainty (meaning conclusive knowledge) is when you get to the point that nothing else is possible. He means that, in the current context, there are no other options. There's just one option, and we should accept it. All the other ideas have something wrong with them, they can't be accepted. This is fine.
Peikoff also says that before you have certainty you have a different situation where there are multiple competing ideas. Fine. And that's not certainty, that's not conclusive knowledge, it's a precursor stage where you're considering the ideas. Fine.
But then Peikoff makes what I think is an important mistake. He says that if you don't have knowledge or certainty, you can still judge by the weight of the evidence. This is a standard view held by many non-Objectivists too. I think this is too compromising. I think the choices are knowledge or irrationality. We need knowledge; nothing less will suffice.
The weight of the evidence is no good. Either you have knowledge or you don't. If it's not knowledge, it's not worth anything. You need to come up with a good idea – no compromises, no contradictions, no known problems – and use that. If you can't or won't do that, all you have left is the irrationality of acting on and believing arbitrary non-knowledge.
I think we can always act on knowledge without contradictions. Knowledge is always possible to man. Not all knowledge instantly, but enough knowledge to act, in time to act. We may not know everything – but we don't need to. We can always know enough to continue life rationally. Living and acting by reason and knowledge is always possible.
(How can we always do this? That will be the subject of another essay. I'm not including any summary or hints because I think it would be too confusing and misleading without a full explanation. Edit: here is the follow-up essay.)
Knowledge doesn't allow contradictions. Suppose you're considering two ideas that contradict each other. And you don't have a conclusive answer, you don't have knowledge of which is right. Then using or believing either one is irrational. No "weight of the evidence" or anything else can change this.
Don't pick a side when you know there is a contradiction but have not rationally resolved it. Resolve it; create knowledge; learn; think; figure it out. Neither idea being considered is good enough to address the contradiction or refute the other idea – so you know they are both flawed. Don't hope or pray that acting on a known-to-be-flawed idea will work out anyway. Irrationality doesn't work.
Picking a side and hoping is not good enough. If you discover a contradiction, you should resolve it rationally. If you fail at that – fail at the use of reason – then that's bad, that's a disaster, that's not OK.
Karl Popper made the same mistake in a different form. He said that we critically analyze competing ideas and the one that best survives criticism should be acted on. Again this is too compromising. Either exactly one idea survives criticism, or else there is still a contradiction. "Best survives criticism", and "weight of the evidence", are irrational ways of arbitrarily elevating one flawed idea over another, instead of using reason to come up with a correct idea.
(For some further discussion about weighing ideas, see also the choices chapter of The Beginning of Infinity by David Deutsch.)