
Getting Stuck in Discussions; Meta Discussion

This is a follow-up to my post Rationally Ending Discussions. It also relates to gjm’s reply and Dagon’s reply. gjm said (my paraphrase) that he’s more concerned about people putting ongoing effort into conversations providing low value than about people missing out on important discussions; staying too long is a bigger concern than staying too briefly. Dagon said (my interpretation, which I think he’d deny): intellectual norms are not the norm when human beings interact; human interaction follows social norms and it’s weird of you to even bring up rationality outside of special cases.

This post is fairly long and meandering. It's not a direct response to gjm or Dagon, just some thoughts on these general topics. I think it's interesting and relevant, but it's not focused and won't provide talking-point responses.

First I'll give brief but direct replies:

For gjm: If people are systematically missing out on the best, most important discussions, so those never take place, that is a huge problem for humanity's progress and innovation worldwide. I don't think most people who avoid a lot of low quality discussions, but also avoid crucial error-correcting discussions, are doing error correction more efficiently (meaning with less time and energy) elsewhere. There aren't easy, better alternatives.

Also, there are lots of ways to deal with low value discussions better without being closed to discussion and missing out on great discussions. People can improve at discussion instead of giving up and focusing only on flawed, problematic alternatives that also need improvement. It's possible to identify and end low value discussions quickly and rationally, and to develop and use rational filters before discussion that improve the signal/noise ratio without blocking error correction the way typical filtering methods do today.

Also, most of the people avoiding low quality discussion and blaming others don't know how to participate in a high quality discussion themselves and should practice more and try to fix their own errors instead of complaining about others. If you haven't spent thousands of hours on discussion (and on post mortems, analysis of discussion, trying to improve), why think you're so great and blame everyone else when things don't work out? Even if you spent a ton of time discussing you might still be bad at it, but if you haven't put in the time why be confident that you're discussing well? Even many prestigious intellectuals with multiple books and tons of fans have not actually spent a lot of time having intellectual discussions and trying to get better at discussing rationally (and aren't very good at it). Even if they've done a lot of podcast discussions, that's going to be a lot fewer hours than people can get doing written discussion on forums; verbal discussion allows less precision and fewer follow-ups over time than written discussion; and it will also leave them inexperienced at discussing with the many types of people who don't get invited on podcasts.

For Dagon: I think it's reasonable for rationality-oriented forums to exist online, and he was posting on one of them. Like science labs, rationality-oriented forums are the special cases where rationality may be discussed today, or at least they are trying to be even if they sometimes fail. Let's try to make rational spaces work better instead of attacking that goal.


People sometimes stay in discussions they don’t want to have. This is often bad for all parties. They tend to write low quality messages when they don’t want to be there. And they write things which are misleading about their intentions, values, goals, preferences, etc. – e.g. which suggest they will want to continue discussing this a bunch in the future, as a major project, rather than suggesting they have one foot out the door already. When they do leave, it’s often abrupt and surprising for the other person.

Sometimes there are hints the person may be leaving soon but they aren’t explicit statements like “I am losing interest and may leave soon”. Often the guy’s explicit statements deny there’s a problem. Instead, the hints are things like being a bit insulting. If the guy is insulting you, you can infer that maybe he’s less interested in talking to you. Or not. There are alternatives. Maybe he’s just a jerk who isn’t very skilled at calm, truth-seeking discussion, and you can be tolerant and charitable by ignoring the insults. Maybe the rational thing to do is refuse to take a hint. That’s certainly encouraged sometimes in some ways.

Sometimes people think I’m insulting them when I didn’t intend to. What people consider insulting varies by subculture, temperament, etc. A thick skin is a virtue sometimes.

If someone insults you and you say “Is this a sign you want to end the discussion?” often they will deny it. They might even apologize and say they’ll stop insulting you. That kind of statement is often a more effective way to get an apology than directly asking for one.

Why do people deny wanting to end the discussion? One reason is they think they should discuss or debate their ideas. Another reason is they think you’re asking “Do you give up?” They want to defend certain ideas, not give up on defending those ideas. They want to be able to claim that their side can stand up to debate, not concede. So when they end the discussion, they want to blame you or external circumstances, so it has nothing to do with their arguments failing. If they stop discussing because they’re tilted and they insulted you, it’s like admitting they don’t have reasonable arguments. So they want to reply like “No, I really do have good arguments; let me do a rewrite.”

The same sort of thing happens with milder stuff than insults, in which case it’s even harder to deal with and it’s less clear what’s going on.

Insults are one of the ways people communicate that they think you’re being dumb. They see your position and reasoning as low quality. Why, then, are they engaging? It’s usually not some sort of rational, intellectual tolerance, nor a belief that the world is inadequate so high quality stuff more or less doesn’t exist (so there’s no better discussion available). It’s often that they want to change and control you – someone is wrong on the internet and they can’t just let him stay wrong! Or they want a victory for themselves. Or they want to embarrass and attack your tribe (the group of people with similar beliefs that they see you as representing). Or if you have obscure, non-standard or unique beliefs, often they want to suppress the heresy. How can you believe something so ridiculous or sinful? That seems maybe fixable in debate if anything is. If you can’t even correct a guy who believes something that’s flat-Earth level dumb (or pro-adultery level immoral), can you ever expect to correct anyone on anything? And one might think that correcting people with unpopular views should be especially easy (but actually people usually hold unpopular views for a reason, whereas if they hold a popular view they might never have thought about it much and not have a strong or informed opinion). If reason can work, surely this is one of the cases where it’ll work? (Nah, it’s actually easier to persuade people more similar to yourself who are already pretty good at reason. You could see them as more challenging opponents but persuasion isn’t just a battle; they have to think and learn something from what you said in order to substantively change their mind. Persuasion is partly about their skill and their willingness to be reasonable voluntarily.)

The easiest people to persuade in general have a bit less skill and knowledge than you, but not a lot less. So you can correct them on some stuff but they actually understand you pretty well and they know some stuff about learning and being reasonable. Amount of skill (or knowledge) isn’t a single dimension though so it’s more complicated (there exist many different skills).

Someone more skilled than you, even a lot more, can be easy to persuade if you’re right – if you actually know something they don’t and have some decent reasons (not just blind luck; you can explain some reasoning that makes sense). But that’s uncommon. Usually when you try to debate them you’ll be wrong. They already thought of more stuff than you did. But if you’re right, it can be easy, because they’ll help argue your side and fill in gaps for you. And they may quickly recognize you’re saying something they haven’t addressed before and take an interest in it, rather than seeing it as “the opposing tribe”.

There’s no great way to know who is more or less skilled than you. We can approximate that some. We can guess. A lot of guesses are really just using social status as a proxy. Some ways of guessing are better, e.g. if I read an intellectual book and it really impresses me, then I may guess the author is high skill: probably at least near my own skill, or else above me, not way below me. Part of what happens in debate is you can find out who is higher skill and more knowledgeable. It’s a bit of a side issue (the main issue is evaluating specific ideas) but people do gain some visibility into it (though they often find it rude for the more skilled person to say anything about it, and in acrimonious debates both people usually think they’re the more skilled one).

In some ways it’s not important to evaluate people’s skill levels, but it’s not useless and it can help with mentally modeling the world. Imagine if a young child didn’t realize he was lower skill than the adults he was debating, when he really was way lower skill. That arrogance could make learning harder. It can be hard to do everything in the maximally rational way of just evaluating every idea on its merits. Realizing you’re outmatched and should try listening, even when you don’t fully understand why you’re wrong about everything (and may not be wrong in every case), can help too.


People can exit discussions more easily when the stakes are lower, when they have a good reason/excuse, when the guy they are debating agrees with their reason/excuse for leaving, when their ego isn’t on the line, or when they won’t be assumed to have lost the debate because they left.

It’s hard to exit discussions when you’re trying to prove you’re open minded or open to debate. We all know (at least in communities with some interest in rationality), in some sense, that we’re supposed to be willing to argue our points instead of just assert claims and then ignore dissent.

Having low stakes, no debate methodology, no claim that debate matters to rationality, etc., clashes with the goal of transparency and with anti-bias procedures. Cheaply exiting discussions without explanation makes it easy for bias to ruin most of most discussions. Whenever something important comes up that relates to an anti-rational meme or bias, people can just take their free option to end the discussion with zero accountability. It’s unlimited, consequence-free evasion.

How do you control evasion, rationalization, bias, dodging questions, etc.?

Similarly, when people decide the other guy is an idiot, or the discussion is low quality or unproductive due to the other guy, then approximately half the time they are wrong. It’s important that there be some error correction mechanisms: when you blame the other guy, how would you find out if actually you’re wrong? If you tried to construct a length-5 impasse chain, it’d often reveal the truth: it’d become clear to reasonable people whether you were right or wrong (sometimes this will work even though you’re biased: what actually happened in the debate can be so clear it overcomes your rationalizations when you try to actually write it out).

Standard discussion procedure largely avoids meta discussion. If someone says something I regard as low quality, some of my main options are:

  1. Don’t reply. Don’t explain why to the other guy or to the audience.
  2. Ignore the low quality part and try to reply to whatever I think has value. Often this means ignoring most of what they said and focusing on the topic itself. This often results in complaints from people who don’t think you’re engaging with them … which makes sense because you’re intentionally not engaging with some of what they said.
  3. Steel man it by charitably interpreting them as meaning something different than what they said. Try to guess a good idea similar to their bad idea and engage with that. But sometimes you can't think of any good ideas similar to the (from your point of view) confused nonsense they just said… And sometimes they don't mean or understand a better idea than what they said, or they would have said something better.

What’s not socially permitted in general is:

  1. Explain why I think their message is low quality. This would invite a correction or an end to the discussion.

The first difficulty is that the other guy will get defensive and the audience will read it as socially aggressive. You’re not allowed to openly talk about most discussion problems and try to address them head on. You’re typically supposed to sort of pretend everyone is good at discussion and doing nothing wrong, and that discussions fade out blamelessly because people have differing interests and because productive discussion is hard and doesn’t always happen even when people make good attempts at it.

For certain types of problems, you’re not supposed to attempt cooperative problem-solving. You’re allowed to assume and guess about what’s going on and make adjustments unilaterally (some of which are wrong and make things worse, which often would have been easy to figure out if you’d communicated).

Continuing to speak generally not personally: If I try to talk about a discussion problem, and the guy responds defensively and fights back, what happens next? Will I learn from his negative comments? No. I will see it as flaming or shoddy argument. I won’t find out I was wrong. This happens often. Even if he gave a great rebuttal, the typical result is I’d still be biased and not get corrected. There’s nothing here that causes me to overcome my bias. I had a negative viewpoint, then I stated a problem, and he responded, and then why would I learn I’m wrong? What is going to make that amazing result happen? It’s a harder thing to be corrected about than a regular topic. Maybe. Is it? If I made a specific accusation and he gave a specific, short, clear response which engaged with those specific words, that’s fairly easy to be corrected by. Not easy but easy relative to most stuff. But more ego is involved. Maybe. People are really tribalist about most of the stuff they care enough about to talk about. If they don’t have some sort of emotional investment or bias or whatever – if they don’t care – then they tend not to talk about it on forums (they’ll talk about the weather in small talk without really caring though).

Do people care about things without being biased? Do they have tentative positions which they think are correct but they’d like to upgrade with better views? Not usually.


What do you do in a world where the dominant problem is people staying in discussions they don’t want to be in, sabotaging the hell out of them? Where the concept of actually adding anti-bias procedures is a pipe dream that’ll threaten and pressure people into even more bad behavior? What a nightmare if straightforward rationality stuff actually makes things worse. What can be done?

This is a “rationality is a harsh mistress” type problem. (In my opinion, The Moon Is a Harsh Mistress by Robert Heinlein is a good book, though it's not relevant to this essay other than me borrowing words from the title.) People find rationality itself pressuring. Rationality is demanding and people self-pressure to try to live up to it. They can experience discussion norms about rational methods as escalating that pressure.

And how will they respond to analysis like this? They’ll find it pressuring, condescending, wrong, hostile, etc. Or they might grant it applies to most (other) people. But they generally won’t face head on the situation they are actually in and talk about how to handle it and what could work for them. That’d admit too much.

So … don’t talk to almost everyone? But that lets fakers get to the top of the intellectual world since they aren’t really held to any standards and aren’t accountable in any way. But if you publicly broadcast standards, as I do, it pressures people; it’s seen as a challenge to them (it basically is a challenge to the public intellectuals).

Most people on Less Wrong (LW) don’t take themselves seriously or think they matter, but also won’t admit that. They don't expect to come up with any great intellectual innovations and don't think their forum discussions are important to humanity's intellectual progress. It’s hard to ask for the people who think they matter to come forward and chat. People who aren’t in that category will pretend they are.

One of the things you can do is speak differently at different places. I’ve tried posting meta discussion at my own forum but not on another forum like LW. This doesn’t communicate about discussion problems to the people you’re discussing with, so it mostly isn’t a solution. But at least some people – those who care more – can discuss the discussion. It also has the risk that someone from the forum where you’re more guarded finds your less guarded comments and gets mad. But most people don’t look around enough to find out. There’s a self-selection issue. People who find the less guarded comments aren’t a random sample. They’re people who are more willing to explore, and they’re more likely to take the comments well.


Eliezer Yudkowsky doesn’t like to have discussions with his own community. He doesn’t post on Less Wrong anymore. My experience is that his community isn’t much like him (where “him” = his writing, which has a variety of good stuff, but I think he’s actually worse than that in discussion; people rarely live up to their best work on an ongoing basis). His fans mostly don’t seem to understand or remember a bunch of his published ideas. Plus they’re generally pretty flawed. Not entirely bad though.

One of the hard parts with LW is people read random bits of my posts. I posted a bunch of related stuff, mostly in sequence, and then people come in the middle and don’t understand what’s going on.

I can’t explain everything at once in the first post, and also no one seems to be following along or interested in a sequence of posts that builds up to something bigger. And they are pretty openly willing to skim and complain about post length when something is 4000 words. Saying “this is part 3 of 6 in a series” and linking the other stuff doesn’t help much. Most people just ignore that and won’t go visit part 1. Even if they only wanted to read one, I’d rather they go read part 1, not the new part they just saw, but they usually won’t. Most people have strong recency bias and a strong bias against clicking links.


It’s hard to signal the right things to connect with the right people when people are putting up a bunch of fake signals.

The people staying in discussions they don’t want to be in are communicating false information and causing trouble. This is a major problem that makes it hard for the people who want rational discussion to find each other. Instead of viewing them as victims of social pressure (which they are), you can view them as trolls who are lying and sabotaging (which they also are).

When I signal (or very explicitly state) what I want, a bunch of people join the discussion who don’t want it. They don’t admit they don’t want it. They make it hard to figure out who actually wants it because they’re all pretending to want it.

What can be done about this false signaling problem? I’m pretty good at spotting false signalers. I can sometimes tell quickly now. That used to be way harder for me, but I know more signs now. And I can point them out explicitly and do analysis instead of just asserting it. But the analysis sometimes involves a bunch of prerequisites and advanced logic, so other people don't follow it. I could also explain the prerequisites, but then it’s a big, long learning process that could take years.

But what do I do when I spot fakers? Telling them to go away tends to offend people and prompt requests for my reasoning, which they will then find insulting and unwanted. They aren’t really open to hearing objective analysis of their flaws. And this can also get multiple offended people to gang up on me. Doing the critical analysis and reasoning without telling them to go away gets a similar result; they don’t want criticism.

I can ignore the fakers but then they’ll imply that I don’t want discussion since I’m ignoring lots of people without explaining. That’s one of the issues. There are social pressures to reply. People make assumptions if you don’t. Willingness to defend your ideas in debate is already judged conventionally; that’s not a new thing I made up.

I’m not socially allowed to just ignore over 90% of the people who reply to me on forums because I don't think their claims to be interested in discussion are genuine. And I’m not socially allowed to say I don’t believe them; that’s offensive. And I’m not socially allowed to explain why I don’t believe them and criticize their integrity. And I don’t know how to create productive discussion with them. And I don’t know how to explain that I’m looking for a small minority of the people on their forum in a way that gets the wrong people to actually stop pretending they qualify.

I do know some stuff about how to proceed properly in discussion. I could pretend the person wants a real discussion and do what I’d do in that case. The result is catching them out and showing some example of what they’re doing wrong, since they never discuss right. But then they just stop talking or muddy the waters. No one learns any lessons. I’ve done it many times. Maybe some of my fans learn something since they’ve seen it a bunch and now it informs their understanding of what the world is like and what forums are like (or maybe they just cargo cult some of my behaviors and treat people badly while thinking they’re being rational). But the people at the forum I’m visiting, or the people new to my forum, don’t learn about general trends from examples like that. Because they don’t want to actually discuss the trends.

People’s hostility to meta discussion makes rational discussion pretty impossible. That’s the key.

The general pattern of what ruins everything is:

  1. Problem. This is OK so far. Problems are inevitable.
  2. Something to suppress problem solving.

Part 2 is called "irrationality". Working against error correction or problem solving is what irrationality is.

And suppressing meta discussion means suppressing problem solving. Discussions run into problems but you aren’t allowed to talk about them and try to solve them, because then you’re discussing the discussion, discussing the behavior of the participants, changing the topic (people wanted to talk about the original, object-level topic, not this new topic), etc. People are interested in talking about minimum wage or global warming, not about whether a particular paragraph they posted fits some particular aspects of rationality or logic. People generally don't want to discuss whether their writing is unclear, or whether that unclarity is symptomatic of a pattern where their subconscious writing automatizations aren’t good enough to productively deal with the advanced topic they want to talk about.

If you try to do X (any activity or project including a discussion), and then you run into a problem, and then you talk about that problem, that is meta discussion. You’re talking about the activity and how to do the activity and that sort of thing, rather than doing the activity. How to do X is a meta level above X. People do put up with that sometimes. Mostly in learning contexts. If you go to a cooking class you’re allowed to talk about how to cook. But if you’re just cooking with your friend, commonly you’re supposed to assume you both already know how to cook and don’t talk about how to do it, just do it.

Some stuff has problem solving built in. If you’re playing a video game, talking about strategies (limited parts of how to play) may be considered part of the game activity. If you go to an escape room, talking about how to solve the puzzles is normal.

What people object to is one meta level above whatever they expect or want, which is often exactly what you need for problem solving. Whatever the highest level of abstraction or meta that they are OK with, if you have a problem at that level, then talking about that problem and trying to solve it is one meta level too far.

If the goal is to learn cooking, then a problem at that level is a learning problem (not a cooking problem, which is a lower level). And talking about learning problems would be viewed as off topic, out of bounds, etc. So you can’t solve the learning problem.

In general, people can only learn one meta level below what they are willing to deal with. If you’re willing to talk about learning to cook, then you can learn to cook (one meta level lower) but you can’t learn to learn to cook (same level you’re willing to deal with). Learning about X requires going one meta level above/past X so you can talk about X and talk about problems regarding X.

But it’s actually harder than that. That’s something of a best case scenario. Sometimes your meta discussion has problems and needs a higher level of meta discussion.

With cooking, some people are willing to talk about learning to cook while trying to cook. They’re open to two different levels at once. But typical philosophy discussion doesn’t offer two levels to enable learning about the lower one because people aren’t openly trying to learn. They’ll try to talk about AGI or free will or atheism and that’s the one single level the whole discussion is supposed to take place at. Just do it. You aren’t discussing how to do anything, or directly trying to learn, so you don’t learn. People will set out to learn to cook but it’s uncommon to see anyone on a philosophy forum trying to learn. You can find people on Reddit (mostly students who are taking philosophy classes at university, but some hobbyists too) asking for reading recommendations to learn from, but they don’t normally actually try to learn in online discussion. On some subreddits (like AskHistorians or AskPhilosophy) people ask questions and try to learn from the answers without having discussions. People tend to try to do their learning by themselves with some books and maybe lectures and then when they talk to other people (in a back-and-forth discussion not just asking a question or two) they’re trying to be philosophers who say wise things.

And people will claim that of course learning from debate or saying ideas is one of the goals; but it usually isn’t really, not in a serious way; their focus is on being clever and saying things they think are right, and they aren’t talking about actually learning to do a skill in the way people will try to learn to cook. They’re always saying things like “I think X is true because Y”, not “I need to figure out how to analyze this. What are some good techniques I could use? I better double check my books to make sure I do those techniques correctly.” With cooking, by contrast, people will ask how to cook something, and maybe double check some information source to remind themselves how to use a tool, which is more learning oriented than the philosophy questions people ask like “What is a good answer to [complex, hard issue]?” When they ask for ready-made answers to tough topics, they aren’t learning to create those answers themselves; they aren’t learning all the steps to invent such an answer, compare it to other answers, and reach a good conclusion. With cooking, people often ask for enough information that they can do it themselves, so it’s more connected to actually learning something.


Elliot Temple on July 4, 2025


Want to discuss this? Join my forum.

(Due to multi-year, sustained harassment from David Deutsch and his fans, commenting here requires an account. Accounts are not publicly available. Discussion info.)