This is a reply to an FI post.
What are some examples people might give [of judging inconclusive arguments and assigning them appropriate weights, choosing both whether it’s positive or negative as well as the size] and what’s wrong with those?
People usually don't give numeric ranges for argument weights, but they may talk about the amount of weight in words, e.g. using the kind of scale Peikoff came up with (I think it used words such as "likely", "probable", "unlikely", etc.). One problem with this is that there's no way to combine those fuzzy weights into a meaningful total.
People talk about the sign of an argument's weight in terms of whether the argument supports or undermines the idea in question. For example, the idea that the sun has risen every day for the last million years (or whatever) might be said to support the idea that the sun will rise tomorrow. One problem with this is that no one has ever explained what it means for one idea to support another idea.
Someone might try to define "support" more precisely by saying that idea Y supports idea X just when P(X|Y) > P(X) (that is, when knowing that Y is the case makes X more likely than X would be if you didn't know whether or not Y was the case). However, this kind of probabilistic justification suffers from a regress problem, as explained in http://curi.us/1594-regress-problems .
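To make that definition concrete, here's a minimal numeric illustration (the events and numbers are made up for the example): with X = "the grass is wet" and Y = "it rained", Y supports X just when knowing Y raises the probability of X.

```python
# made-up numbers for illustration: X = "the grass is wet", Y = "it rained"
p_y = 0.3              # P(Y): probability it rained
p_x_given_y = 0.9      # P(X | Y): wet grass is likely if it rained
p_x_given_not_y = 0.2  # P(X | not Y): wet grass is less likely otherwise

# law of total probability: P(X) = P(X|Y)P(Y) + P(X|not Y)P(not Y)
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)

# the proposed definition: Y supports X just when P(X|Y) > P(X)
supports = p_x_given_y > p_x
print(f"P(X) = {p_x:.2f}, Y supports X: {supports}")  # P(X) = 0.41, Y supports X: True
```

Note that this only tells you whether the inequality holds for some assumed probabilities; it doesn't tell you where those probabilities come from, which is where the regress starts.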
How does one compare:
1) the probability that socialism is a good idea
2) the probability that socialism is a good idea, given that Trump is a good president
btw i assume that both statements have an unstated “given the laws of logic, the laws of physics, and a bunch of standard background info like basic facts”.
let’s try a simpler example and see if it helps us figure this out:
1) the probability that Joe has cancer
2) the probability that Joe has cancer, given he took one test for cancer and it came out positive
so you consider all possible worlds that fit the conditions (which include basic background facts like Joe being alive, Joe being the same age he is now, Joe being roughly the same person, the world being roughly the same, same laws of physics, same laws of logic, etc) and then you count how many times Joe has cancer and doesn’t have cancer. which is infinitely many of each, but you figure out the proportions anyway, like how 10% of the positive integers are divisible by 10 even though infinitely many are and infinitely many aren’t.
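as a quick sketch of that "proportions among infinitely many" idea, here's the divisible-by-10 example computed over finite prefixes of the integers:

```python
# the fraction of the first n positive integers divisible by 10 settles at 10%,
# even though infinitely many integers are divisible by 10 and infinitely many aren't
def density(n):
    return sum(1 for k in range(1, n + 1) if k % 10 == 0) / n

print(density(100), density(10_000))  # 0.1 0.1
```

(this is the "natural density" notion from number theory: the proportion over the first n integers, in the limit as n grows.)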
so it’s kinda like: Joe is age 42 and American. 0.3% of Americans that age have cancer (which you estimate based on some published statistics). the cancer is randomly distributed among everyone in the set of all possible worlds, so Joe has it in 0.3% of those worlds. you can make it more accurate by considering more factors, like whether Joe smokes.
but if the test says he has cancer, well the false positive rate is only 10%, so the chance he has cancer is like 50% (that’s just a wild guess, i didn’t bother doing the math, and it depends on numbers i didn’t give like how many people without cancer get the test).
so 50% > 0.3%, so “he took one test for cancer and it came out positive” is evidence that Joe has cancer.
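for what it’s worth, the “like 50%” guess can be checked with Bayes’ theorem once you fill in the missing numbers. the base rate (0.3%) and false positive rate (10%) are from above; the test’s sensitivity wasn’t given, so 90% is assumed here just for the sketch:

```python
# Bayes' theorem with partly assumed numbers (sensitivity is not from the text)
prior = 0.003      # P(cancer): 0.3% base rate for Joe's demographic
sensitivity = 0.9  # P(positive | cancer) -- assumed for this sketch
false_pos = 0.1    # P(positive | no cancer): the 10% false positive rate

# total probability of a positive test, then the posterior via Bayes' theorem
p_positive = prior * sensitivity + (1 - prior) * false_pos
posterior = prior * sensitivity / p_positive
print(f"P(cancer | positive) = {posterior:.1%}")  # about 2.6%, not 50%
```

with these numbers the posterior is around 2.6%, far below 50% (the low base rate dominates), but still well above the 0.3% prior. so the direction of the conclusion – that the positive test is evidence Joe has cancer – holds either way.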
there are some things wrong with this, and i’m skipping some steps, and it can’t eliminate the need for explanation and criticism, but there is also some value in it. this kind of method isn’t worthless. (though we do need critical thinking to figure out when and how to use it – without critical thinking, ~everything is worthless).
but this is limited to certain types of scenarios. you look at all possible worlds (given the same physics and logic, and, if you want, a similar number of ppl on earth living in similar countries with similar technologies and so on) and in how many of them is socialism a good idea? i say zero cuz socialism conflicts with physics and logic. the point is, this isn’t a statistical issue. most things people want to know aren’t statistical issues.
one of the worst things the Bayesians do is they can’t seem to tell the difference between pulling colored marbles out of a bag (statistics) and whether Stoicism or Objectivism is a better philosophy (not statistics). they don’t do much to try to find the limits of statistics and avoid going outside their domain of expertise.
lots of stuff isn’t statistics. should i sign up for cryonics? quite possibly zero people who get cryonics with current technology will have a successful outcome. you can’t use statistics to figure out whether it can work at all or not. and saying “well let’s consider how often it works over a range of conceivable laws of physics and logic”, in order to try to more unambiguously get a probability above zero, isn’t going to fix this. how do you count how many different laws of physics current cryo tech works in? how do you put the different laws of physics into a well defined ordering and then iterate over a range of them? no one has any idea how to do any such thing, and i doubt it’s possible at all.
is Trump a good president? i think a lot of them would call that statistical. run a trillion simulations of Earth with Trump as president, see what the outcomes are on some metrics like global wealth, number of people alive, number and severity of wars, etc. and then run some control simulations, i guess just try a million other people as president and average their results..? then see how often Trump does better or worse than the control averages on those metrics. and lump together the scores on each metric into an overall score. does anyone’s thinking on the matter really resemble this monte carlo method, as an approximation? and how do you know which metrics matter how much, or how to measure them, or how to get them into the same units and weight their importance to combine into a single total? those things are not statistical issues (right?), so even if you could do the simulations, the answer you got would depend on a bunch of your non-statistical ideas.
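to illustrate that last point with made-up numbers: even if the simulations were all done and the per-metric results were in hand, the overall verdict can flip depending on how you weight the metrics, and picking the weights isn’t a statistical question:

```python
# made-up "simulation output": per-metric scores relative to a control average
# normalized to 1.0, where higher is better
results = {"wealth": 1.05, "peace": 0.97}  # 5% better on wealth, 3% worse on peace

def overall(weights):
    # combining the metrics into one score -- the weights are a value judgment,
    # not something the simulations themselves can tell you
    return sum(weights[m] * results[m] for m in results)

print(overall({"wealth": 0.8, "peace": 0.2}))  # above 1.0: "better than control"
print(overall({"wealth": 0.2, "peace": 0.8}))  # below 1.0: "worse than control"
```

same data, opposite verdicts, depending entirely on a non-statistical choice.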
some of the difficulties with combining multiple metrics into a single final score are explained here btw: https://www.newyorker.com/magazine/2011/02/14/the-order-of-things
This is nothing like a complete explanation, just some stuff. Feel free to take it further.