I quit the Effective Altruism forum due to a new rule requiring that posts and comments basically be put in the public domain, without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.
People bet their careers on various premises outside their own expertise. For example, AGI (alignment) researchers commonly bet on some epistemology without being experts on epistemology who actually read Popper and concluded, in their own judgment, that he’s wrong.
So you might expect them to be interested in criticism of those premises. Shouldn’t they want to investigate the risk?
But that depends on what you value about your career.
If you want money and status, and not to have to make changes, then maybe it’s safer to ignore critics who don’t seem likely to get much attention.
If you want to do productive work that’s actually useful, then your career is at risk.
People won’t admit it, but many of them don’t actually care that much about whether their career is productive. As long as they get status and money, they’re satisfied.
Also, a lot of people lack confidence that they could do very productive work even if their premises were right, so wrong premises don’t cost them much.
Actually, having wrong but normal/understandable/blameless premises has big advantages: you won’t come up with important research results, but it’s not your fault. If it comes out that your premises were wrong, you did the noble work of investigating a lead that many people believed was promising. Science and other types of research always involve investigating many leads that don’t turn out to be important.

By contrast, suppose you work on a lead people want investigated, accomplish nothing useful, and the lead turns out to be important: then other investigators outcompeted you, and people could wonder why you didn’t figure out anything about the lead you worked on. But if the lead you work on turns out to be a dead end, those awkward questions go away. So there’s an advantage to working on dead ends, as long as other people think they’re worth working on.