Philosophy Side Quests

People get stuck for years on the philosophy main quest while refusing to do side quests. That is not how you play RPGs. Side quests let you get extra levels, gear and practice, which make progress on the main quest easier.

An example of a side quest would be speedrunning a Mario or Zelda game. That would involve some goal-directed activity and problem solving. It’d be practice for becoming skilled at something, optimizing details, and correcting mistakes one is making.


Elliot Temple | Permalink | Messages (10)

Do Primarily Easy Things – Increasing The Productivity Of Your Intellectual Labor Vs. Consumption

When you do productive labor (like at a job), you are able to use what you produce (or some negotiated amount of payment related to what you produce). How you use your income can be broadly viewed in two ways: investment and consumption.

Investment fundamentally means capital accumulation – putting your income towards the accumulation of capital goods which raise the productivity of labor and thereby create a progressing economy which offers exponentially (like compound interest) more and more production each year. The alternative is to consume your income – spend it on consumers' goods like food, video games, lipstick, cars, etc.

People do a mix of savings/investment and consumption. The proportion of the mix determines how good the future is. A high rate of capital accumulation quickly leads to a much richer world which is able to support far more consumption than before while still maintaining a high rate of investment. (The pie gets larger. Instead of consuming 80% of the original pie, one could soon be consuming 20% of a much larger pie which is also growing much faster, and that 20% of the larger pie will be more than 80% of the smaller pie.)
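To make the compound-growth point concrete, here's a tiny sketch with made-up numbers (the growth rates are illustrative assumptions, not Reisman's figures):

```python
# Toy model: yearly consumption is a slice of a pie that grows faster
# when more is invested. The rates below are illustrative guesses.
def pie(years, growth):
    return 100.0 * (1 + growth) ** years

for years in (0, 10, 25, 40):
    heavy_consumer = 0.80 * pie(years, 0.01)  # consume 80%, slow growth
    heavy_investor = 0.20 * pie(years, 0.07)  # consume 20%, fast growth
    print(years, round(heavy_consumer, 1), round(heavy_investor, 1))
```

With these assumed rates, the heavy investor's 20% slice overtakes the heavy consumer's 80% slice around year 25, and is more than double it by year 40.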

For more info on the economics of this, see the diagrams on pages 624 and 625 of George Reisman's book Capitalism: A Treatise on Economics and read some of the surrounding text.

The situation with your intellectual labor parallels the situation with laboring at a job for an income. Your intellectual labor is productive and this production can be directed in different ways – towards consumption, towards increasing the productivity of intellectual labor, or a mix. The more the mix favors increasing the productivity of your intellectual labor, the brighter your future.

Consumption in this case refers to things where you aren't investing in yourself and your education – where you aren't learning more and otherwise becoming more able to produce more in the future. For example, you might put a great deal of effort into writing a book which you hope will impress people, which you are just barely capable of writing. It takes a ton of intellectual labor while being only a little bit educational for you. Most of your intellectual labor is consumed and the result is the book. If you had foregone the book in the short term and invested more in increasing your productivity of intellectual labor, you could have written it at a later date while consuming a much smaller proportion of your intellectual output. This is because you'd be outputting more and even more so because your output would be more efficient – you'd be able to get more done per hour of intellectual labor (one of the biggest factors here would be making fewer mistakes, so you'd spend less labor redoing things). A good question to ask is whether you produced an intellectual work in order to practice or if instead you put a lot of work into polishing it so other people would like it more (that polishing is an inefficient way to learn). It's sad when people who don't know much put tons of effort into polishing what little they do know instead of learning more – and this is my description of pretty much everyone. (If you think you already know so much that you're largely done with further educating yourself, or at least ready to make education secondary, please contact me. I expect to be able to point out that you're mistaken, especially if you're under 50 years old.)

Consumption (rather than investment), in the realm of intellectual labor, primarily relates to going out of your way to try to accomplish things, to do things – like persuading people or creating finished works. It is possible to learn by doing, but it's also possible not to learn much by doing. If you're doing for the sake of learning, great. If you're doing for the sake of an accomplishment, that is expensive, especially if you're young, and you may be dramatically underestimating the expense while also fooling yourself about how educational it is (because you do learn something, but much less than you could have learned if you instead studied e.g. George Reisman's Program of Self-Education in the Economic Theory and Political Philosophy of Capitalism or my Fallible Ideas recommended reading list.)

Broadly, I see many people try to produce important intellectual works when they don't know much. They spend a lot of intellectual labor and produce material which is bad. They would have been far better served by learning more now, and producing more output (like essays) later on when they are able to make valuable intellectual products with a considerably lesser effort. This explains the theme I've stated elsewhere and put in the title of this piece: you should intellectually do (consume) when it's relatively easy and cheap, but be very wary of expensive intellectual projects which take tons of resources away from making intellectual progress.

Some people doubt the possibility of an accumulation of intellectual capital or its equivalent. They don't think they can increase the productivity of their intellectual labor substantially. These same people, by and large, haven't learned speed reading (or speed watching or speed listening). Nor have they fully learned the ideas of great intellectuals like Ayn Rand and Ludwig von Mises. Equipped with these great ideas, they'd avoid going down intellectual dead ends, and otherwise create high quality outputs from their intellectual labor. Even if the process of increasing the productivity of one's intellectual labor runs into limits which result in diminishing returns at some point, that is no excuse for stopping such educational self-investment long before reaching any such limits.

In the long run, the ongoing increase in the productivity of one's intellectual labor requires the ongoing creation of new and improved intellectual tools and methods, and supporting technologies. It requires ongoing philosophical progress. I believe philosophical progress can be unbounded if we work at it (without diminishing returns), but regardless of the far future there is massive scope for productive educational self-investment today. Unless you've exhausted what's already known about philosophy – that is, you are at the forefront of the field – and also spent some time unsuccessfully attempting to pioneer new philosophy ... then you have no excuse to stop investing in increasing the productivity of your intellectual labor (primarily with better and better methods of thinking – philosophy – but also with other things like learning to read faster). Further, until you know what is already known about philosophy, you are in no position to judge the far future of philosophical progress and its potential or lack of potential.

Note: the biggest determinants of the productivity of your intellectual labor are your rate of errors and your ability to find and correct errors. Doing activities where your error rate is below your error correction capacity is much more efficient and successful. You can increase your error correction effectiveness by devoting an unusually large amount of resources to it, but there are diminishing returns on that, so it's typically an inefficient (resource expensive) shortcut to doing a slightly more difficult project slightly sooner.


This article is itself an example of what I can write in a few minutes without editing or difficulty. It's the fruits of my previous investment in better writing ability in order to increase the productivity of my intellectual labor. I aim primarily to get better at writing this way (cheaply and efficiently), rather than wishing to put massive polishing effort into a few works.


Update (2018-05-18):

What I say in this post is, to some extent, well known common sense. People get an education first and do stuff like a career second. Maybe they aren't life-long learners, but they have the general idea right (learn how to think/do/problem-solve/etc first, do stuff second after you're able to do it well and efficiently).

What goes wrong then? Parenting and schooling offer a bad, ineffective education. This discourages further education (the painfulness and uselessness of their education is the biggest thing preventing life-long learners). And it routinely puts people in a bad situation: trying to do things which they have been educated to be able to do well, but in fact cannot do well. The solution is not to give up on education, but to figure out how to pursue education effectively. A reasonable place to start would be the books of humanity's best thinkers since the start of western civilization. Some people have been intellectually successful and effective (as you can see from the existence of iPhones); you could look into what they did, how they thought, etc.

FI involves ideas that are actually good and effective, as against rivals offering similar overall stuff (rational ideas) but which are incorrect. FI faces the following major challenges: 1) people are so badly educated they screw up when trying to learn FI ideas 2) people are so badly educated they don't know how to evaluate if FI is working, how it compares to rivals, the state of debate between FI ideas and alternative ideas, etc.


Elliot Temple | Permalink | Messages (13)

Changing Minds About Inequality

people have lots of bad ideas they don’t understand much about, like that “inequality” is a major social problem.

what would it take to change their mind? not books with arguments refuting the books they believe. they didn’t get their ideas from structured arguments in serious books. they don’t have a clear idea in their mind for a refutation to point out the errors in. non-interactive refutation (like a book, essay, article) is very, very hard when you have to first tell people what they think (in a one-size-fits-all way, despite major variance between people) before trying to refute it. Books and essays work better to address clearly defined views, but not so well when you’re trying to tell the other side what they think b/c they don’t even know (btw that problem comes up all the time with induction).

to get someone to change their mind about “inequality”, what’d really help is if they thoughtfully considered things like:

what is “inequality”? why is it bad? are we talking about all cases of inequality being equally bad, or does the degree of badness vary? are we talking about all cases of inequality being bad at all, or are some neutral or even good? if the case against inequality isn’t a single uniform thing, applying equally to all cases, then what is the principle determining which cases are worse and why? what’s the reasoning for some inequality being evaluated differently than other inequality?

whatever one’s answers, what happens if we consider tons of examples? are the evaluations of all the examples satisfactory, do they all make sense and fit your intuitions, and reach the conclusions you intended? (cuz usually when people try to define any kind of general formula that says what they think, it gives answers they don’t actually believe in lots of example cases. this shows the formula is ad hoc crap, and doesn’t match their actual reasoning, and therefore they don’t even know what their reasoning is. so they are arguing for reasoning they don’t understand or misunderstand, which must be due to bias and irrationality, since you can’t reach a conscious, rational, positive evaluation of your ideas when you don’t even know what they are. you can sometimes reach a positive meta-evaluation where you acknowledge your confusion about the specifics of the ideas, but that’s different.)

anyway, the point is if people would actually think through the issue of inequality it’d change some of their minds. that’d be pretty effective at improving the situation. what stops this? the minor issue is: there’s a lack of discussion partners to ask them good questions, guide them, push them for higher standards of clarity, etc. the major issue is: they don’t want to.

why don’t people want to think about “inequality”? broadly, they don’t want to think. also, more specifically, they accepted anti-inequality ideas for the purpose of fitting in. thinking about it may result in them changing their mind in some ways, big or small, which risks them fitting in less well. thinking threatens their social conformity which is what their “beliefs” about “inequality” are for in the first place.

this relates to overreaching. people’s views on inequality are too advanced for their ability to think through viewpoints. the views have a high error rate relative to their holder’s ability to correct error.


Elliot Temple | Permalink | Message (1)

Passivity as a Strategic Excuse

How much of the "passivity" problems people have – about learning FI and all throughout life elsewhere as well – are that they don't want to do something and don't want to admit that they don't want to? How much is passivity a disguise used to hide disliking things they won't openly challenge?

Using passivity instead of openly challenging stuff is beaten into children. They learn not to say "no" or "I don't want to" to their parents. They learn they are punished less if they "forget" than if they refuse on purpose. They are left alone more if they are passive than if they talk about their reasoning for not doing what the parent wants them to do.

Typical excuses for passivity are being lazy or forgetful. Those are traits which parents and teachers commonly attribute to children who don't do what the parent or teacher wants. Blaming things on a supposed character flaw obscures the intellectual or moral disagreement. (Also, character flaws are a misconception – people don't have an innate character; they have ideas!)

The most standard adult excuse for passivity is being busy. "I'm not passive, I'm actively doing something else!" This doesn't work as well for children because their parents know their whole schedule.

Claiming to be busy is commonly combined with the excuse of privacy to shield what one is busy with from criticism. Privacy is a powerful shield because it's a legitimate, valuable concept – but it can also be used as an anti-criticism tool. It's hard to figure out when privacy is being abused, or expose the abuses, because the person choosing privacy hides the information that would allow evaluating the matter.

Note: Despite people's efforts to prevent judgment, there are often many little hints of irrationality. These are enough for me to notice and judge, but not enough to explain to the person – they don't want to understand, so they won't, plus it takes lots of skill to evaluate the small amount of evidence (because they hid the rest of the evidence). Rather than admit I'm right (they have all the evidence themselves, so they could easily see it if they wanted to), they commonly claim I'm being unreasonable since I didn't have enough information to reach my conclusions (because a person with typical skill at analysis wouldn't be able to do it, not because they actually refute my line of reasoning).

Generic Example

Joe (an adult) doesn't like something about Fallible Ideas knowledge and activities (FI), and doesn't want to say what it is. And/or he likes some other things in life better than FI and wants to hide what they are. Instead of saying why he doesn't pursue FI more (what's bad about it, what else is better), Joe uses the passivity strategy. Joe claims to want to do FI more, get more involved, think, learn, etc, and then just doesn't.

Joe doesn't claim to be lazy or forgetful – some of the standard excuses for passivity which he knows would get criticized. Instead, Joe doesn't offer any explanation for the passivity strategy. Joe says he doesn't know what's going on.

Or, alternatively, Joe says he's busy and that the details are private, and he'd like to discuss it, he just doesn't know how to solve the privacy problem. To especially block progress, Joe might say he doesn't mind having less privacy for himself, but there are other people involved and he couldn't possibly say anything that would reduce their privacy. Never mind that they share far more information with their neighbors, co-workers, second cousins, and Facebook...


Elliot Temple | Permalink | Messages (5)

Backbone, Pushback, Standing Up For Your Ideas

You need to be sturdy to do well in FI philosophy discussions or anywhere. Don’t be pushed around or controlled even by people who weren’t trying to push you around – don’t be so weak and fragile that almost anything can boss you around without even trying or intending to.

Broadly, people give advice, ideas, criticism, etc.

Some advice can help you right now. Some of it, you don’t understand, you don’t get it, it doesn’t work for you right now. You could ask a question or follow up and then maybe get more advice so it does work, but you still might not get it. It’s good to follow up sometimes, but that’s another topic.

The point is: you must use your own judgment about which ideas work for you. What do you understand? What makes sense to you?

Filter all the ideas/advice/criticism in this way. Sort it into two categories:

Category 1 (self-ownership and integration of the idea): Do you get it, yourself, in your own understanding, well enough to use it? Are you ready to use it as your own idea, that is yours, that you feel ownership of, and you take full responsibility for the outcome? Would you still use it even if the guy who said it changed his mind (but didn’t tell you why), because it’s now the best idea in your own mind? Would you still use it if all the people advocating it got hit by cars and died, so you couldn't get additional advice?

Category 2 (foreign, non-integrated, confused idea): You don’t get it. Maybe you partly get it, but not fully. Not enough to live it without ever reading FI again, with no followup help. You don’t understand it enough to adapt it when problems come up or the situation changes. You have ideas in your mind which conflict with it. It isn’t natural/intuitive/automated for you. It feels like someone else’s idea, not yours. Maybe you could try doing their advice, but it wouldn’t be your own action.

NEVER EVER EVER ACCEPT OR ACT ON CATEGORY 2 IDEAS.

If you only use category 1, you’re easy to help and safe to talk to. People can give you advice, and there's no danger – if it helps, great, and if it doesn't help, nothing happens. But if you use category 2, you are sabotaging progress and you're hard to deal with.

Note: the standard for understanding ideas needs to be your own standard, not my standard. If you're somewhat confused about all your ideas (by my standards), that doesn't mean everything is category 2 for you. If you learn an idea as well as the rest of your ideas, and you can own it as much as the rest, that's category 1.

Note: Trying out an idea, in a limited way, which you do know how to do (you understand enough to do the trial you have in mind) is a different idea than the original idea. The trial could be category 1 if you know how to do it, know what you're trying to learn, know how to evaluate the results. Be careful though. It's easy to "try" an idea while doing it totally wrong!


But there's a problem here I haven't solved. Most people can't use the two categories because the idea of the two categories itself is in category 2 for them, so it'd be self-contradictory to use it.

To do this categorizing, they'd need to have developed the skill of figuring out what they understand or not. They'd need to be able to tell the difference effectively. But most people don't know how.

They could try rejecting stuff which is category 2 and unconventional, because that's an especially risky pairing. Except they can't effectively judge what's unconventional, and also they don't understand why that pairing matters well enough (so the idea of checking for category-2-and-unconventional is itself a category 2 idea for them; it's also an unconventional suggestion...).


Note: these ideas have been discussed at the FI discussion group. Here’s a good post by Alisa and you can find the rest of the discussion at that link.


Elliot Temple | Permalink | Messages (3)

Discussion Structure

Dagny wrote (edited slightly with permission):

I think I made a mistake in the discussion by talking about more than one thing at once. The problem with saying multiple things is he kept picking some to ignore, even when I asked him repeatedly to address them. See this comment and several comments near it, prior, where I keep asking him to address the same issue. but he wouldn't without the ultimatum that i stop replying. maybe he still won't.

if i never said more than one thing at once, it wouldn't get out of hand like this in the first place. i think.

I replied: I think the structure of conversations is a bigger contributor to the outcome than the content quality is. Maybe a lot bigger.

I followed up with many thoughts about discussion structure, spread over several posts. Here they are:


In other words, improving the conversation structure would have helped with the outcome more than improving the quality of the points you made, explanations you gave, questions you asked, etc. Improving your writing quality or having better arguments doesn't matter all that much compared to structural issues like what your goals are, what his goals are, whether you mutually try to engage in cooperative problem solving as issues come up, who follows whose lead or whether there is a struggle for control, what methodological rules determine which things are ignorable and which are replied to, and what the rules are for introducing new topics, dropping topics, and modifying topics.


it's really hard to control discussion structure. people don't wanna talk about it and don't want you to be in control. they don't wanna just answer your questions, follow your lead, let you control discussion flow. they fight over that. they connect control over the discussion structure with being the authority – like teachers control discussions and students don't.

people often get really hostile, really fast, when it comes to structure stuff. they say you're dodging the issue. and they never have a thought-out discussion methodology to talk about, they have nothing to say. when it comes to the primary topic, they at least have fake or dumb stuff to say, they have some sorta plan or strategy or ideas (or they wouldn't be talking about it). but with stuff about how to discuss, they can't discuss it, and don't want to – it leads so much more quickly and effectively to outing them as intellectual frauds. (doesn't matter if that's your intent. they are outed because you're discussing rationality more directly and they have nothing to say and won't do any of the good ideas and don't know how to do the good ideas and can't oppose them either).

sometimes people are OK with discussion methodology stuff like Paths Forward when it's just sounds-good vague general stuff, but the moment you apply it to them they feel controlled. they feel like you are telling them what to do. they feel pressured, like they have to discuss the rational way. so they rebel. even just direct questions are too controlling and higher social status, and people rebel.


some types of discussion structure. these aren’t about controlling the discussion, they are just different ways it can be organized. some are compatible with each other and some aren’t (you can have multiple from the list, but some exclude each other):

  • asking and answering direct questions
  • addressing unstated, generic questions like “thoughts on what i just said?”
  • one person questioning the other who answers vs. both people asking and answering questions vs. some ppl ignoring questions
  • arguing points back and forth
  • saying further thoughts related to what last person said (relevance levels vary, can be like really talking past each other and staying positive, or can be actual discussion)
  • pursuing a goal stated by one person
  • pursuing a goal stated by two people and mutually agreed on
  • pursuing different and unstated goals
  • 3+ person discussion
  • using quotes of the other discussion participants or not
  • using cites/links to stuff outside the discussion or not
  • long messages, short messages, or major variance in message length
  • talking about one thing at a time
  • trying to resolve issues before moving on vs. just rushing ahead into new territory while there are lots of outstanding unresolved points
  • step by step vs. chaotic
  • people keeping track of the outline or just running down rabbit holes

i’ve been noticing structure problems in discussions more in the last maybe 5 years. Paths Forward and Overreaching address them. lots of my discussions are very short b/c we get an impasse immediately b/c i try to structure the discussion and they resist.

like i ask them how they will be corrected if they’re wrong (what structural mechanisms of discussion do they use to allow error correction) and that ends the discussion.

or i ask like “if i persuade you of X, will you appreciate it and thank me?” before i argue X. i try to establish the meaning X will have in advance. why bother winning point X if they will just deny it means anything once you get there? a better way to structure discussion is to establish some stakes around X in advance, before it’s determined who is right about X.

i ask things like if they want to discuss to a conclusion, or what their goal is, and they won’t answer and it ends things fast.

i ask why they’re here. or i ask if they think they know a lot or if they are trying to learn.

ppl hate all those questions so much. it really triggers the fuck out of them

they just wanna argue the topic – abortion or induction or whatever

asking if they are willing to answer questions or go step by step also pisses ppl off

asking if they will use quotes or bottom post. asking if they will switch forums. ppl very rarely venue switch. it’s really rare they will move from twitter to email, or from email to blog comments, or from blog comments to FI, etc

even asking if they want to lead the discussion and have a plan doesn’t work. it’s not just about me controlling the discussion. if i offer them control – with the caveat that they answer some basic questions about how they will use it and present some kinda halfway reasonable plan – they hate that too. cuz they don’t know how to manage the discussion and don’t want the responsibility or to be questioned about their skill or knowledge of how to do it.

structure/rules/organization for discussion suppresses ppl’s bullshit. it gives them less leeway to evade or rationalize. it makes discussion outcomes clearer. that’s why it’s so important, and so resisted.


the structure or organization of a discussion includes the rules of the game, like whether people should reply more tomorrow or whether it's just a single day affair. the rules for what people consider reasonable ways of ending a discussion are a big deal. is "i went to sleep and then chose not to think about it the next day, or the next, or the next..." a reasonable ending? should people actually make an effort to avoid that ending, e.g. by using software reminders?

should people take notes on the discussion so they remember earlier parts better? should they quote from old parts? should they review/reread old parts?

a common view of discussion is: we debate issue X. i'm on side Y, you're on side Z. and ppl only say stuff for their side. they only try to think about things in a one-sided, biased way. they fudge and round everything in their favor. e.g. if the number is 15, they will say "like 10ish" or "barely over a dozen" if a smaller number helps their side. and the other guy will call it "around 20" or "nearly 18".

a big part of structure is: do sub-plots resolve? say there's 3 things. and you are trying to do one at a time, so you pick one of the 3 and talk about that. can you expect to finish it and get back to the other 2 things, or not? is the discussion branching to new topics faster than topics are being resolved? are topics being resolved at a rate that's significantly different from zero, or is approximately nothing being resolved?

another part of structure is how references/cites/links are used. are ideas repeated or are pointers to ideas used? and do people try to make stuff that is suitable for reuse later (good enough quality, general purpose enough) or not? (a term similar to suitable for reuse is "canonical").


I already knew that structural knowledge is the majority of knowledge. Like a large software project typically has much more knowledge in the organization than the “payload” (aka denotation aka direct purpose). “refactoring” refers to changing only the structure while keeping the function/content/payload/purpose/denotation the same. refactoring is common and widely known to be important. it’s an easy way for people familiar with the field to see that significant effort goes into software knowledge structure cuz that is effort that’s pretty much only going toward structure. software design ideas like DRY and YAGNI are more about structure than content. how changeable software is is a matter of structure ... and most big software projects have a lot more effort put into changes (like bug fixes, maintenance and new features) than into initial development. so initial development should focus more effort on a good structure (to make changes easier) than on the direct content.
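to illustrate what refactoring means here, a tiny made-up Python sketch (not from any real project) where the structure changes but the function/payload stays identical:

```python
# Before: the shipping rule (the "payload" logic) is copy-pasted.
def invoice_total(prices):
    total = sum(prices)
    if total < 50:
        total += 10  # flat shipping fee
    return total

def quote_total(prices):
    total = sum(prices)
    if total < 50:
        total += 10  # same rule, duplicated
    return total

# After refactoring (DRY): identical outputs for identical inputs,
# but the rule now lives in one place, so changing it is one edit.
def add_shipping(subtotal, threshold=50, fee=10):
    return subtotal + fee if subtotal < threshold else subtotal

def invoice_total_v2(prices):
    return add_shipping(sum(prices))

def quote_total_v2(prices):
    return add_shipping(sum(prices))
```

the before and after versions compute the same totals; the effort went purely into structure, which pays off when the shipping rule changes.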

it does vary by software type. games are a big exception. most games have most of their sales near release. most games aren’t updated or changed much after release. games still need pretty good structure though or it’d be too hard to fix enough of the bugs during initial development to get it shippable. and they never plan the whole game from the start, they make lots of changes during development (like they try playing it and think it’s not fun enough, or find a particular part works badly, and change stuff to make it better), so structure matters. wherever you have change (including error correction), structure is a big deal. (and there’s plenty of error correction needed in all types of software dev that make substantial stuff. you can get away with very little when you write one line of low-risk code directly into a test-environment console and aren’t even going to reuse it.)

it makes sense that structure related knowledge is the majority of the issue for discussion. i figured that was true in general but hadn’t applied it enough. knowledge structure is hard to talk about b/c i don’t really have people who are competent to discuss it with me. it’s less developed and talked through than some other stuff like Paths Forward or Overreaching. and it’s less clear in my mind than YESNO.

so to make this clearer:

structure is what determines changeability. various types of change are high value in general, including especially error correction. wherever you see change, especially error correction, it will fail without structural knowledge. if it’s working ok, there’s lots of structural knowledge.

it’s like how the capacity to make progress – like being good at learning – is more important than how much you know now or how good something is now. like how a government that can correct mistakes without violence is better than one with fewer mistakes today. (in other words, the structure mistake of needing violence to correct some categories of mistake is a worse mistake than the non-structure mistake of taxing cigarettes and gas. the gas tax doesn’t make it harder to make changes and correct errors, so it’s less bad of a mistake in the long run.)


Intro to knowledge structure (2010):

http://fallibleideas.com/knowledge-structure

Original posts after DD told me about it (2003):

http://curi.us/988-structural-epistemology-introduction-part-1
http://curi.us/991-structural-epistemology-introduction-part-2

The core idea of knowledge structure is that you can do the same task/function/content in different ways. You may think it doesn’t matter as long as the result is (approximately) the same, but the structure matters hugely if you try to change it so it can do something else.

“It” can be software, an object like a hammer, ideas, or processes (like the processes factory workers use). Some software designs are easier to add features to than others. You can imagine some hammer designs being easier to convert into a shovel than others. Some ideas are easier to change than others. Or imagine two essays arguing equally effectively for the same claim, and your task is to edit them to argue for a different conclusion – the ease of that depends on the internal design of the essays. And for processes, for example the more the factory workers have each memorized a single task, and don’t understand anything, the more difficult a lot of changes will be (but not all – you could convert the factory to build something else if you came up with a way to build it with simple, memorizable steps). Also note the ease of change often depends on what you want to change to. Each design makes some sets of potential changes harder or easier.
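A minimal made-up code version of the same point: two programs with the same output today, but different internal designs, so different changes are easy or hard:

```python
# Design A: the steps are hardcoded into the control flow.
def recipe_a():
    print("1. chop onions")
    print("2. fry onions")
    print("3. add sauce")

# Design B: the steps are data; the control flow is generic.
STEPS = ["chop onions", "fry onions", "add sauce"]

def recipe_b():
    for i, step in enumerate(STEPS, start=1):
        print(f"{i}. {step}")

# Today both print the same thing. But reordering the steps, inserting
# one, or translating them all is a one-line data edit for B, while A
# needs hand-editing and renumbering. B isn't simply "better": it's
# more machinery up front, and some changes (special formatting for
# just one step, say) fit A more directly.
```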

Back to the ongoing discussion (which FYI is exploratory rather than having a clear conclusion):

“structure” is the word DD used. Is it the right word to use all the time?

Candidate words:

  • structure (DD’s word)
  • design
  • organization
  • internal design
  • internal organization
  • form
  • layout
  • style
  • plan
  • outline

I think “design” and “organization” are good words. “Form” can be good contextually.

What about words for the non-structure part?

  • denotation (DD’s word)
  • content
  • function
  • payload
  • direct purpose
  • level one purpose
  • task
  • main point
  • subject matter

The lists help clarify the meaning – all the words together are clearer than any particular one.


What does a good design offer besides being easier to change?

  • Flexibility: solves a wider range of relevant problems (without needing to change it, or with a smaller/easier change). E.g. a car that can drive in the snow or on dry roads, rather than just one or the other.

  • Easier to understand. Like computer code that’s easier to read due to being organized well.

  • Made up of somewhat independent parts (components) which you can separate and use individually (or in smaller groups than the original total thing). The parts being smaller and more independent has advantages but also often involves some downsides (like you need more connecting “glue” parts and the attachment of components is less solid).

  • Easier to reuse for another purpose. (This is related to changeability and to components. Some components can be reused without reusing others.)

  • Internal reuse (references, pointers, links) rather than new copies. (This is usually but not always better. In general, it means the knowledge is present that two instances are actually the same thing instead of separate. It means there’s knowledge of internal groupings.)
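Regarding that last bullet, here's a tiny made-up illustration of internal reuse vs. copies:

```python
# Copies: nothing records that these two configs are meant to match.
server_a = {"timeout": 30, "retries": 3}
server_b = {"timeout": 30, "retries": 3}  # same by coincidence or by intent?

# Internal reuse: one definition, referenced twice. The grouping is
# explicit knowledge, so a policy change is one edit and can't drift.
DEFAULT_POLICY = {"timeout": 30, "retries": 3}
server_c = DEFAULT_POLICY
server_d = DEFAULT_POLICY

# The "not always better" part: since c and d share one object,
# mutating it changes both, which may or may not be what you want.
```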

Good structures are set up to do work (in a certain somewhat generic way), and can be told what type of work and what details. Bad structures fail to differentiate which details are parochial and which are general purpose.

The more you treat something as a black box (never take it apart, never worry about the details of how it works, never repair it, just use it for its intended purpose), the less structure matters.

In general, the line between function and design is approximate. What about the time it takes to work, or the energy use, or the amount of waste heat? What are those? You can do the same task (same function) in different ways, which is the core idea of different structures, and get different results for time, energy and heat use. They could be considered to be related to design efficiency. But they could also be seen as part of the task: having to wait too long, or use too much energy, could defeat the purpose of the task. There are functionality requirements in these areas or else it would be considered not to work. People don’t want a car that overheats – that would fail to address the primary problem of getting them from place to place. It affects whether they arrive at their destination at all, not just how the car is organized.

(This reminds me of computer security. Sometimes you can beat security mechanisms by looking at timing. Like imagine a password checking function that checks each letter of the password one by one and stops and rejects the password if a letter is wrong. That will run more slowly based on getting more letters correct at the start. So you can guess the password one letter at a time and find out when you have it right, rather than needing to guess the whole thing at once. This makes it much easier to figure out the password. Measuring power usage or waste heat could work too if you measured precisely enough or the difference in what the computer does varied a large enough amount internally. And note it’s actually really hard to make the computer take exactly the same amount of time, and use exactly the same amount of power, in different cases that have the same output like “bad password”.)
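A minimal sketch of the early-exit comparison described above, next to the standard fix (Python's hmac.compare_digest):

```python
import hmac

def check_leaky(guess, real):
    # Early exit: runtime grows with the number of correct leading
    # characters, so timing leaks how much of the guess is right.
    if len(guess) != len(real):
        return False
    for g, r in zip(guess, real):
        if g != r:
            return False
    return True

def check_constant_time(guess, real):
    # Designed to take the same time wherever the mismatch is.
    return hmac.compare_digest(guess.encode(), real.encode())
```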

Form and function are related. Sometimes it’s useful to mentally separate them but sometimes it’s not helpful. When you refactor computer code, that’s about as close to purely changing the form as it gets. The point of refactoring is to reorganize things while making sure it still does the same thing as before. But refactoring sometimes makes code run faster, and sometimes that’s a big deal to functionality – e.g. it could increase the frame rate of a game from non-playable to playable.

Some designs actively resist change. E.g. imagine something with an internal robot that goes around repairing any damage (and it's programmed to see any deviation or difference as damage – it tries to reverse all change). The human body is kind of like this. It has white blood cells and many other internal repair/defense mechanisms that (imperfectly) prevent various kinds of changes and repair various damage. And a metal hammer resists being changed into a screwdriver; you’d need some powerful tools to reshape it.


The core idea of knowledge structure is that you can do the same task/function/content in different ways. You may think it doesn’t matter as long as the result is (approximately) the same, but the structure matters hugely if you try to change it so it can do something else.

Sometimes programmers make a complicated design in anticipation of possible future changes that never happen (instead it's either no changes, other changes, or just replaced entirely without any reuse).

It's hard to predict in advance which changes will be useful to make. And designs aren't just "better at any and all changes" vs. "worse at any and all changes". Different designs make different categories of changes harder or easier.

So how do you know which structure is good? Rules of thumb from past work, by many people, doing similar kinds of things? Is the software problem – which is well known – just some bad rules of thumb (that have already been identified as bad by the better programmers)?

  • Made up of somewhat independent parts (components) which you can separate and use individually (or in smaller groups than the original total thing). The parts being smaller and more independent has advantages but also often involves some downsides (like you need more connecting “glue” parts and the attachment of components is less solid).

this is related to the desire for FI emails to be self-contained (have some independence/autonomy). this isn't threatened by links/cites cuz those are a loose coupling, a loose way to connect to something else.

  • Easier to reuse for another purpose. (This is related to changeability and to components. Some components can be reused without reusing others.)

but, as above, there are different ways to reuse something and you don't just optimize all of them at once. you need some way to judge what types of reuse are valuable, which partly seems to depend on having partial foresight about the future.

The more you treat something as a black box (never take it apart, never worry about the details of how it works, never repair it, just use it for its intended purpose), the less structure matters.

sometimes the customer treats something as a black box, but the design still matters a lot for:

  • warranty repairs (made by the company, not by the customer)
  • creating the next generation of the product
  • fixing problems during development of the thing
  • the ability to pivot into other product lines (additionally, or instead of the current one) and reuse some stuff (be it manufacturing processes, components from this product, whatever)
  • if it's made out of components which can be produced independently and are useful in many products, then you have the option to buy these "commodity parts" instead of making your own, or you can sell your surplus parts (e.g. if your factory manager finds a way to be more efficient at making a particular part, then you can either just not produce your new max capacity, or you could sell them if they are useful components to others. or you could use the extra parts in a new product. the point was you can end up with extra capacity to make a part even if you didn't initially design your factory that way.)

In general, the line between function and design is approximate.

like the line between object-discussion and meta-discussion is approximate.

as discussion structure is crucial (whether you talk about it or not), most stuff has more meta-knowledge than object-knowledge. here's an example:

you want to run a small script on your web server. do you just write it and upload? or do you hook it into existing reusable infrastructure to get automatic error emails, process monitoring that'll restart the script if it's not running, automatic deploys of updates, etc?

you hook it into the infrastructure. and that infrastructure has more knowledge in it than the script.

when proceeding wisely, it's rare to create a ton of topic-specific knowledge without the project also using general purpose infrastructure stuff.
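a minimal sketch of that contrast (the alert hook is a hypothetical stand-in for real infrastructure like error emails or pager systems):

```python
import logging
import traceback

logging.basicConfig(level=logging.INFO)

def run_monitored(job, alert):
    # Generic wrapper: any small script hooked in gets logging and
    # failure alerts for free. This reusable infrastructure embodies
    # more knowledge than most of the one-off scripts it runs.
    try:
        logging.info("starting %s", job.__name__)
        job()
        logging.info("finished %s", job.__name__)
    except Exception:
        alert(f"{job.__name__} failed:\n{traceback.format_exc()}")
        raise

def my_small_script():
    pass  # the topic-specific part is often the smallest part

run_monitored(my_small_script, alert=print)  # real use: email/pager hook
```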

Form and function are related.

A lot of the difference between a smartphone and a computer is the shape/size/weight. That makes them fit different use cases. An iPhone and iPad are even more similar, besides size, and it affects what they're used for significantly. And you couldn't just put them in an arbitrary form factor and get the same practical functionality from them.

Discussion and meta-discussion are related too. No one ever entirely skips/omits meta discussion issues. People consider things like: what statements would the other guy consent to hear and what would be unwanted? People have an understanding of that and then don't send porn pics in the middle of a discussion about astronomy. You might complain "but that would be off-topic". But understanding what the topic is, and what would be on-topic or off-topic is knowledge about the discussion, rather than directly being part of the topical discussion. "porn is off topic" is not a statement about astronomy – it is itself meta discussion which is arguably off topic. you need some knowledge about the discussion in order to deal with the discussion reasonably well.

Some designs actively resist change.

memes resist change too. rational and static memes both resist change, but in different ways. one resists change without reasons/arguments, the other resists almost all change.


Discussion and meta-discussion are related too.

Example:

House of Sunny podcast. This episode was recommended for Trump and Putin info at http://curi.us/2041-discussion#c10336

https://youtu.be/Id2ZH_DstyY

  • starts with music
  • then radio announcer voice
  • voice says various introductory stuff. it’s not just “This is the house of Sunny podcast.” It says some fluff with social connotations about the show style, and gives a quick bio of the host (“comedian and YouTuber”)
  • frames the purpose of the upcoming discussion: “Wanna know what Sunny and her friends are thinking about this week?”
  • tries to establish Sunny as a high status person who is worthy of an introduction that repeats her name like 4 times (as if her name matters)
  • applause track
  • Sunny introduces herself, repeating lots of what the intro just said
  • Sunny uses a socially popular speaking voice with connotations of: young, pretty, white, adult, female. Hearing how she speaks, for a few seconds, is part of the introduction. It’s information, and that information is not about Trump and Putin.
  • actual content starts 37 seconds in

This is all meta so far. It’s not the information the show is about (Trump and Putin politics discussion). It’s about the show. It’s telling you what kind of show it’s going to be, and who the host is. That’s just like discussing what kind of discussion you will have and the background of a participant.

The intro also links the show to a reusable show structure that most listeners are familiar with. People now know what type of show it is, and what to expect. I didn’t listen to much of the episode, but for the next few minutes the show does live up to genre expectations.

I consider the intro long, heavy-handed and blatant. But most people are slower and blinder, so maybe it’s OK. I dislike most show intros. Offhand I only remember liking one on YouTube – and he stopped because more fans disliked it than liked it. It’s 15 seconds and I didn’t think it had good info.

KINGmykl intro: https://www.youtube.com/watch?v=TrN5Spr1Q4A

One thing I notice, compared to the Sunny intro, is it doesn’t pretend to have good info. It doesn’t introduce mykl, the show, or the video. (He introduces his videos non-generically after the intro. He routinely asks how your day is going, says his is going great, and quickly outlines the main things that will be in the video cuz there’s frequently multiple separate topics in one video. Telling you the outline of the upcoming discussion is an example of useful meta discussion.)

The Sunny intro is so utterly generic I found it boring the first time I heard it. I’ve heard approximately the same thing before from other shows! I saw the mykl intro dozens of times, and sure I skipped it sometimes but not every time, and I remember it positively. It’s more unique, and I don’t understand it as well (it has some meaning, but the meaning is less clear than in the Sunny intro.) I also found the Sunny intro to scream “me too, I’m trying hard to fit in and do this how you’re supposed to” and the mykl intro doesn’t have that vibe to me. (I could pretty easily be wrong though, maybe they both have a fake, tryhard social climber vibe in different ways. Maybe I’m just not familiar enough with other videos similar to mykl’s and that’s why I don’t notice. I’ve watched lots of gaming video content, but a lot of that was on Twitch so it didn’t have a YouTube intro. I have seen plenty of super bland gamer intros. mykl used to script his videos and he recently did a review of an old video. He pointed out ways he was trying to present himself as knowing what he’s talking about, and found it cringey now. He mentioned he stopped scripting videos a while ago.)

Example 2: Chef Heidi Teaches Hoonmaru to Cook Korean Short Rib

https://www.youtube.com/watch?v=EwosbeZSSvY

  • music
  • philly fusion overwatch league team intro (FYI hoonmaru is a fusion twitch streamer, not a pro player)
  • slow mo arrival
  • hoonmaru introducing what’s going on (i think he lied when he said that he thought of this activity)
  • hoonmaru talking about his lack of cooking experience
  • hoonmaru says he’ll answer fan questions while cooking
  • says “let’s get started”
  • music and scene change
  • starts introducing the new scene by showing you visuals of hoonmaru in an apron
  • now we see Chef Heidi and she does intro stuff, asks if he’s ready to cook, then says what they’ll be doing.

The last three are things after “let’s get started” that still aren’t cooking. Cooking finally starts at 48s in. But after a couple seconds of cooking visuals, hoonmaru answers an offtopic fan question before finally getting some cooking instruction. Then a few seconds later hoonmaru is neglecting his cooking, and Heidi fixes it while he answers more questions. Then hoonmaru says he thinks the food looks great so far but that he didn’t do much. This is not a real cooking lesson, it’s just showing off Heidi’s cooking for the team and entertaining hoonmaru fans with his answers to questions that aren’t really related to overwatch skill.

Tons of effort goes into setting up the video. It’s under 6 minutes and spent 13.5% on the intro. I skipped ahead and they also spend 16 seconds (4.5%) on the ending, for a total of 18% on intro and ending. And there’s also structural stuff in the middle, like saying now they will go cook the veggies while the meat is cooking – that isn’t cooking itself, it’s structuring the video and activities into defined parts to help people understand the content. And they asked hoonmaru what he thought of the meat on the grill (looks good... what a generic question and answer) which was ending content for that section of the video.

off topic, Heidi blatantly treats hoonmaru like a kid. at 4:45 she’s making a dinner plate combining the foods. then she asks if he will make it, and he takes that as an order (but he hadn’t realized in advance he’d be doing it, he just does whatever he’s told without thinking ahead). and then the part that especially treats him like a kid is she says she’s taking away the plate she made so he can’t copy it, he has to try to get the right answer (her answer) on his own, she’s treating it like a school test. then a little later he’s saying his plating sucks and she says “you did a great job, it’s not quite restaurant”. there’s so much disgusting social from both of them.


Elliot Temple | Permalink | Message (1)

Project Planning Discussion

This is a discussion about rational project planning. The major theme is that people should consider what their project premises are. What claims are they betting their project success on the correctness of? And why? This matter requires investigation and consideration, not just ignoring it.

By project I mean merely a goal-directed activity. It can be, but doesn't have to be, a business project or multi-person project. My primary focus is on larger projects, e.g. projects that take more than one day to finish.

The first part is discussion context. You may want to skip to the second part where I write an article/monologue with no one else talking. It explains a lot of important stuff IMO.


Gavin Palmer:

The most important problem is The Human Resource Problem. All other problems depend on the human resource problem. The Human Resource Problem consists of a set of smaller problems that are related. An important problem within that set is the communication problem: an inability to communicate. I classify that problem as a problem related to information technology and/or process. If people can obtain and maintain a state of mind which allows communication, then there are other problems within that set related to problems faced by any organization. Every organization is faced with problems related to hiring, firing, promotion, and demotion.

So every person encounters this problem. It is a universal problem. It will exist so long as there are humans. We each have the opportunity to recognize and remember this important problem in order to discover and implement processes and tools which can facilitate our ability to solve every problem which is solvable.

curi:

you haven't explained what the human resource problem is, like what things go in that category

Gavin Palmer:

The thought I originally had long ago - was that there are people willing and able to solve our big problems. We just don't have a sufficient mechanism for finding and organizing those people. But I have discovered that this general problem is related to ideas within any organization. The general problem is related to ideas within a company, a government, and even those encountered by each individual mind. The task of recruiting, hiring, firing, promoting, and demoting ideas can occur on multiple levels.

curi:

so you mean it like HR in companies? that strikes me as a much more minor problem than how rationality works.

Gavin Palmer:

If you want to end world hunger it's an HR problem.

curi:

it's many things including a rationality problem

curi:

and a free trade problem and a governance problem and a peace problem

curi:

all of which require rationality, which is why rationality is central

Gavin Palmer:

How much time have you actually put into trying to understand world hunger and the ways it could end?

Gavin Palmer:

How much time have you actually put into building anything? What's your best accomplishment as a human being?

curi:

are you mad?

GISTE:

so to summarize the discussion that Gavin started. Gavin described what he sees as the most important problem (the HR problem), where all other problems depend on it. curi disagreed by saying that how rationality works is a more important problem than the HR problem, and he gave reasons for it. Gavin disagreed by saying that for the goal of ending world hunger, the most important problem is the HR problem -- and he did not address curi's reasons. curi disagreed by saying that the goal of ending world hunger is many problems, all of which require rationality, making rationality the most important problem. Then Gavin asked curi about how much time he has spent on the world hunger problem and asked if he built anything and what his best accomplishments are. Gavin's response does not seem to connect to any of the previous discussion, as far as I can tell. So it's offtopic to the topic of what is the most important problem for the goal of ending world hunger. Maybe Gavin thinks it is on topic, but he didn't say why he thinks so. I guess that curi also noticed the offtopic thing, and that he guessed that Gavin is mad. then curi asked Gavin "are you mad?" as a way to try to address a bottleneck to this discussion. @Gavin Palmer is this how you view how the discussion went or do you have some differences from my view? if there are differences, then we could talk about those, which would serve to help us all get on the same page. And then that would help serve the purpose of reaching mutual understanding and agreement regarding whether or not the HR problem is the most important problem on which all other problems depend.

GISTE:

btw i think Gavin's topic is important. as i see it, it's goal is to figure out the relationships between various problems, to figure out which is the most important. i think that's important because it would serve the purpose of helping one figure out which problems to prioritize.

Gavin Palmer:

Here is a google doc linked to a 1-on-1 I had with GISTE (he gave me permission to share). I did get a little angry and was anxious about returning here today. I'm glad to see @curi did not get offended by my questions and asked a question. I am seeing the response after I had the conversation with GISTE. Thank you for your time.

https://docs.google.com/document/d/1XEztqEHLBAJ39HQlueKX3L4rVEGiZ4GEfBJUyXEgVNA/edit?usp=sharing

GISTE:

to be clear, regarding the 1 on 1 discussion linked above, whatever i said about curi are my interpretations. don't treat me as an authority on what curi thinks.

GISTE:

also, don't judge curi by my ideas/actions. that would be unfair to him. (also unfair to me)

JustinCEO:

Curi's response tells me he does not know how to solve world hunger.

JustinCEO:

Unclear to me how that judgment was arrived at

JustinCEO:

I'm reading

JustinCEO:

Lowercase c for curi btw

JustinCEO:

But I have thought about government, free trade, and peace very much. These aren't a root problem related to world hunger.

JustinCEO:

curi actually brought those up as examples of things that require rationality

JustinCEO:

And said that rationality was central

JustinCEO:

But you don't mention rationality in your statement of disagreement

JustinCEO:

You mention the examples but not the unifying theme

GISTE:

curi did not say those are root problems.

JustinCEO:

Ya 🙂

JustinCEO:

Ya GISTE got this point

JustinCEO:

I'm on phone so I'm pasting less than I might otherwise

JustinCEO:

another way to think about the world hunger problem is this: what are the bottlenecks to solving it? first name them, before trying to figure out which one is like the most systemic one.

JustinCEO:

I think the problem itself could benefit from a clear statement

GISTE:

That clear statement would include causes of (world) hunger. Right ? @JustinCEO

JustinCEO:

I mean a detailed statement would get into that issue some GISTE cuz like

JustinCEO:

You'd need to figure out what counts and what doesn't as an example of world hunger

JustinCEO:

What is in the class of world hunger and what is outside of it

JustinCEO:

And that involves getting into specific causes

JustinCEO:

Like presumably "I live in a first world country and have 20k in the bank but forgot to buy groceries this week and am hungry now" is excluded from most people's definitions of world hunger

JustinCEO:

I think hunger is basically a solved problem in western liberal capitalist democracies

JustinCEO:

People fake the truth of this by making up concepts called "food insecurity" that involve criteria like "occasionally worries about paying for groceries" and calling that part of a hunger issue

JustinCEO:

Thinking about it quickly, I kinda doubt there is a "world hunger" problem per se

GISTE:

yeah before you replied to my last comment, i immediately thought of people who choose to be hungry, like anorexic people. and i think people who talk about world hunger are not including those situations.

JustinCEO:

There's totally a Venezuela hunger problem or a Zimbabwe hunger problem tho

JustinCEO:

But not really an Ohio or Kansas hunger problem

JustinCEO:

Gavin

I try to be pragmatic. If your solution depends on people being rational, then the solution probably will not work. My solution does depend on rational people, but the number of rational people needed is very small

GISTE:

There was one last comment by me that did not get included in the one on one discussion. Here it is. “so, you only want people on your team that already did a bunch of work to solve world hunger? i thought you wanted rational people, not necessarily people that already did a bunch of work to solve world hunger.”

JustinCEO:

What you think being rational is and what it involves could probably benefit from some clarification.

Anyways I think society mostly works to the extent people are somewhat rational in a given context.

JustinCEO:

I regard violent crime for the purpose of stealing property as irrational

JustinCEO:

For example

JustinCEO:

Most people agree

JustinCEO:

So I can form a plan to walk down my block with my iPhone and not get robbed, and this plan largely depends on the rationality of other people

JustinCEO:

Not everyone agrees with my perspective

JustinCEO:

The cop car from the local precinct that is generally parked at the corner is also part of my plan

JustinCEO:

But my plan largely depends on the rationality of other people

JustinCEO:

If 10% or even 5% of people had a pro property crime perspective, the police could not really handle that and I would have to change my plans

Gavin Palmer:

World hunger is just an example of a big problem which depends on information technology related to the human resource problem. My hope is that people interested in any big problem could come to realize that information technology related to the human resource problem is part of the solution to the big problem they are interested in as well as other big problems.

Gavin Palmer:

So maybe "rationality" is related to what I call "information technology".

JustinCEO:

the rationality requirements of my walking outside with phone plan are modest. i can't plan to e.g. live in a society i would consider more moral and just (where e.g. a big chunk of my earnings aren't confiscated and wasted) cuz there's not enough people in the world who agree with me on the relevant issues to facilitate such a plan.

JustinCEO:

anyways regarding specifically this statement

JustinCEO:

If your solution depends on people being rational, then the solution probably will not work.

JustinCEO:

i wonder if the meaning is If your solution depends on [everyone] being [completely] rational, then the solution probably will not work.

Gavin Palmer:

There is definitely some number/percentage I have thought about... like I only need 10% of the population to be "rational".

GISTE:

@Gavin Palmer can you explain your point more? what i have in mind doesn't seem to match your statement. so like if 90% of the people around me weren't rational (like to what degree exactly?), then they'd be stealing and murdering so much that the police couldn't stop them.

JustinCEO:

@Gavin Palmer based on the stuff you said so far and in the google doc regarding wanting to work on important problems, you may appreciate this post

JustinCEO:

https://curi.us/2029-the-worlds-biggest-problems

JustinCEO:

Gavin says

A thing that is sacred is deemed worthy of worship. And worship is based in the words worth and ship. And so a sacred word is believed to carry great worth in the mind of the believer. So I can solve world hunger with the help of people who are able and willing. Solving world hunger is not an act done by people who uphold the word rationality above all other words.

JustinCEO:

the word doesn't matter but the concept surely does for problem-solving effectiveness

JustinCEO:

people who don't value rationality can't solve much of anything

nikluk:

Re rationality. Have you read this article and do you agree with what it says, @Gavin Palmer ?
https://fallibleideas.com/reason

GISTE:

So maybe "rationality" is related to what I call "information technology".
can you say more about that relationship? i'm not sure what you have in mind. i could guess but i think it'd be a wild guess that i'm not confident would be right. (so like i could steelman your position but i could easily be adding in my own ideas and ruin it. so i'd rather avoid that.) @Gavin Palmer

Gavin Palmer:

so like if 90% of the people around me weren't rational (like to what degree exactly?), then they'd be stealing and murdering so much that the police couldn't stop them.
I think the image of the elephant rider portrayed by Jonathan Haidt is closer to the truth when it comes to some word like rationality and reason. I actually value something like compassion above a person's intellect: and I really like people who have both. There are plenty of idiots in the world who are not going to try and steal from you or murder you. I'm just going to go through these one by one when able.

Gavin Palmer:

https://curi.us/2029-the-worlds-biggest-problems
Learning to think is very important. There were a few mistakes in that article. The big one in my opinion is the idea that 2/3 of the people can change things. On the contrary our government systems do not have any mechanism in place to learn what 2/3 of the people actually want nor any ability to allow the greatest problem solvers to influence those 2/3 of the people. We aren't even able to recognize the greatest problem solvers. Another important problem is technology which allows for this kind of information sharing so that we can actually know what the people think and we can allow the greatest problem solvers to be heard. We want that signal to rise above the noise.

The ability to solve problems is like a muscle. For me - reading books does not help me build that muscle - they only help me find better words for describing the strategies and processes which I have developed through trial and error. I am not the smartest person - I learn from trial and error.

curi:

To answer the questions: I have thought about many big problems, such as aging death, AGI, and coercive parenting/education. Yes I've considered world hunger too, though not as a major focus. I'm an (experienced) intellectual. My accomplishments are primarily in philosophy research re issues like how learning and rational discussion work. I do a lot of educational writing and discussion. https://elliottemple.com

curi:

You're underestimating the level of outlier you're dealing with here, and jumping to conclusions too much.

Gavin Palmer:

https://fallibleideas.com/reason
It's pretty good. But science without engineering is dead. That previous sentence reminds me of "faith without works is dead". I'm not a huge fan of science for the sake of science. I'm a fan of engineering and the science that helps us do engineering.

curi:

i don't think i have anything against engineering.

Gavin Palmer:

I'm just really interested in finding people who want to help do the engineering. It's my bias. Even more - it's my passion and my obsession.

Gavin Palmer:

Thinking and having conversations is fun though.

Gavin Palmer:

But sometimes it can feel aimless if I'm not building something useful.

curi:

My understanding of the world, in big picture, is that a large portion of all efforts at engineering and other getting-stuff-done type work are misdirected and useless or destructive.

curi:

This is for big hard problems. The productiveness of practical effort is higher for little things like making dinner today.

curi:

The problem is largely not the engineering itself but the ideas guiding it – the goals and plan.

Gavin Palmer:

I worked for the Army's missile defense program for 6 years when I graduated from college. I left because of the reason you point out. My hope was that I would be able to change things from within.

curi:

So for example in the US you may agree with me that at least around half of political activism is misdirected to goals with low or negative value. (either the red tribe or blue tribe work is wrong, plus some of the other work too)

Gavin Palmer:

Even the ones I agree with and have volunteered for are doing a shit job.

curi:

yeah

curi:

i have found a decent number of people want to "change the world" or make some big improvement, but they can't agree amongst themselves about what changes to make, and some of them are working against others. i think sorting that mess out, and being really confident the projects one works on are actually good, needs to come before implementation.

curi:

i find most people are way too eager to jump into their favored cause without adequately considering why people disagree with it and sorting out all the arguments for all sides.

Gavin Palmer:

There are many tools that don't exist which could exist. And those tools could empower any organization and their goal(s).

curi:

no doubt.

curi:

software is pretty new and undeveloped. adequate tools are much harder to name than inadequate ones.

Gavin Palmer:

adequate tools are much harder to name than inadequate ones.
I don't know what that means.

curi:

we could have much better software tools for ~everything

curi:

"~" means "approximately"

JustinCEO:

Twitter can't handle displaying tweets well. MailMate performance gets sluggish with too many emails. Most PDF software can't handle super huge PDFs well. Workout apps can't use LIDAR to tell ppl if their form is on point

curi:

Discord is clearly a regression from IRC in major ways.

Gavin Palmer:

🤦‍♂️

JustinCEO:

?

JustinCEO:

i find your face palm very unclear @Gavin Palmer; hope you elaborate!

Gavin Palmer:

I find sarcasm very unclear. That's the only way I know how to interpret the comments about Twitter, MailMate, PDF, LIDAR, Discord, IRC, etc.

curi:

I wasn't being sarcastic and I'm confident Justin also meant what he said literally and seriously.

Gavin Palmer:

Ok - thanks for the clarification.

JustinCEO:

ya my statements were made earnestly

JustinCEO:

re: twitter example

JustinCEO:

twitter makes it harder to have a decent conversation cuz it's not good at doing conversation threading

JustinCEO:

if it was better at this, maybe people could keep track of discussions better and reach agreement more easily

Gavin Palmer:

Well - I have opinions about Twitter. But to be honest - I am also trying to look at what this guy is doing:
https://github.com/erezsh/portal-radar

It isn't a good name in my opinion - but the idea is related to having some bot collect discord data so that there can be tools which help people find the signal in the noise.

curi:

are you aware of http://bash.org ? i'm serious about major regressions.

JustinCEO:

i made an autologging system to make discord chat logs on this server so people could pull information (discussions) out of them more easily

JustinCEO:

but alas it's a rube goldberg machine of different tools running together in a VM, not something i can distribute

Gavin Palmer:

Well - it's a good goal. I'm looking to add some new endpoints in a pull request to the github repo I linked above. Then I could add some visualizations.

Another person has built a graphql backend (which he isn't sharing open source) and I have created some of my first react/d3 components to visualize his data.
https://portal-projects.github.io/users/

Gavin Palmer:

I think you definitely want to write the code in a way that it can facilitate collaboration.

curi:

i don't think this stuff will make much difference when people don't know what a rational discussion is and don't want one.

curi:

and don't want to use tools that already exist like google groups.

curi:

which is dramatically better than twitter for discussion

Gavin Palmer:

I'm personally interested in something which I have titled "Personality Targeting with Machine Learning".

Gavin Palmer:

My goal isn't to teach people to be rational - it is to try and find people who are trying to be rational.

curi:

have you identified which philosophical schools of thought it's compatible and incompatible with? and therefore which you're betting on being wrong?

curi:

it = "Personality Targeting with Machine Learning".

Gavin Palmer:

Ideally it isn't hard coded or anything. I could create multiple personality profiles. Three of the markets I have thought about using the technology in would be online dating, recruiting, and security/defense.

curi:

so no?

Gavin Palmer:

If I'm understanding you - a person using the software could create a personality that mimics a historical person for example - and then parse social media in search of people who are saying similar things.

Gavin Palmer:

But I'm not exactly sure what point you are trying to make.

curi:

You are making major bets while being unaware of what they are. You may be wrong and wasting your time and effort, or even be doing something counterproductive. And you aren't very interested in this.

Gavin Palmer:

Well - from my perspective - I am not making any major bets. What is the worst case scenario?

curi:

An example worst case scenario would be that you develop an AGI by accident and it turns us all into paperclips.

Gavin Palmer:

I work with a very intelligent person that would laugh at that idea.

curi:

That sounds like an admission you're betting against it.

curi:

You asked for an example seemingly because you were unaware of any. You should be documenting what bets you're making and why.

Gavin Palmer:

I won't be making software that turns us all into paperclips.

curi:

Have you studied AI alignment?

Gavin Palmer:

I have been writing software for over a decade. I have been using machine learning for many months now. And I have a pretty good idea of how the technology I am using actually works.

curi:

So no?

Gavin Palmer:

No. But if it is crap - do you want to learn why it is crap?

curi:

I would if I agreed with it, though I don't. But a lot of smart people believe it.

curi:

They have some fairly sophisticated reasons, which I don't think it's reasonable to bet against from a position of ignorance.

Gavin Palmer:

Our ability to gauge if someone has understanding on a given subject is relative to how much understanding we have on that subject.

curi:

Roughly, sure. What's your point?

Gavin Palmer:

First off - I'm not sure AGI is even possible. I love to play with the idea. And I would love to get to a point where I get to help build a god. But I am not even close to doing that at this point in my career.

curi:

So what?

Gavin Palmer:

You think there is a risk I would build something that turns humans into paperclips.

curi:

I didn't say that.

Gavin Palmer:

You said that is the worst case scenario.

curi:

Yes. It's something you're betting against, apparently without much familiarity with the matter.

curi:

Given that you don't know much about it, you aren't in a reasonable position to judge how big a risk it is.

curi:

So I think you're making a mistake.

curi:

The bigger picture mistake is not trying to figure out what bets you're making and why.

curi:

Most projects have this flaw.

Gavin Palmer:

My software uses algorithms to classify input data.

curi:

So then, usually, somewhere on the list of thousands of bets being made, are a few bad ones.

curi:

Does this concept make sense to you?

Gavin Palmer:

Love is most important in my hierarchy of values.

Gavin Palmer:

If I used the word in a sentence I would still want to capitalize it.

curi:

is that intended to be an answer?

Gavin Palmer:

Yes - I treat Love in a magical way. And you don't like magical thinking. And so we have very different world views. They might even be incompatible. The difference between us is that I won't be paralyzed by my fears. And I will definitely make mistakes. But I will make more mistakes than you. The quality and quantity of my learning will be very different than yours. But I will also be reaping the benefits of developing new relationships with engineers, learning new technology/process, and building up my portfolio of open source software.

curi:

You accuse me of being paralyzed by fears. You have no evidence and don't understand me.

curi:

Your message is not loving or charitable.

curi:

You're heavily personalizing while knowing almost nothing about me.

JustinCEO:

i agree

JustinCEO:

also, magical thinking can't achieve anything

curi:

But I will also be reaping the benefits of developing new relationships with engineers

curi:

right now you seem to be trying to burn a bridge with an engineer.

curi:

you feel attacked in some way. you're experiencing some sort of conflict. do you want to use a rational problem solving method to try to address this?

curi:

J, taking my side here will result in him feeling ganged up on. I think it will be counterproductive psychologically.

doubtingthomas:

J, taking my side here will result in him feeling ganged up on. I think it will be counterproductive psychologically.
Good observation. Are you going to start taking these considerations into account in future conversations?

curi:

I knew that years ago. I already did take it into account.

curi:

please take this tangent to #fi

GISTE:

also, magical thinking can't achieve anything
@JustinCEO besides temporary nice feelings. Long term it's bad though.

doubtingthomas:

yeah sure

JustinCEO:

ya sure GISTE, i meant achieve something in reality

curi:

please stop talking here. everyone but gavin

Gavin Palmer:

You talked about schools of philosophy, AI alignment, and identifying the hidden bets. That's a lot to request of someone.

curi:

Thinking about your controversial premises and civilizational risks, in some way instead of ignoring the matter, is too big an ask to expect of people before they go ahead with projects?

curi:

Is that what you mean?

Gavin Palmer:

I don't see how my premises are controversial or risky.

curi:

Slow down. Is that what you meant? Did I understand you?

Gavin Palmer:

I am OK with people thinking about premises and risks of an idea and discussing those. But in order to have that kind of discussion you would need to understand the idea. And in order to understand the idea - you have to ask questions.

curi:

it's hard to talk with you because of your repeated unwillingness to give direct answers or responses.

curi:

i don't know how to have a productive discussion under these conditions.

Gavin Palmer:

I will try to do better.

curi:

ok. can we back up?

Thinking about your controversial premises and civilizational risks, in some way instead of ignoring the matter, is too big an ask to expect of people before they go ahead with projects?

did i understand you, yes or no?

Gavin Palmer:

no

curi:

ok. which part(s) is incorrect?

Gavin Palmer:

The words controversial and civilizational are not conducive to communication.

curi:

why?

Gavin Palmer:

They indicate that you think you understand the premises and the risks and I don't know that you understand the idea I am trying to communicate.

curi:

They are just adjectives. They don't say what I understand about your project.

Gavin Palmer:

Why did you use them?

curi:

Because you should especially think about controversial premises rather than all premises, and civilizational risks more than all risks.

curi:

And those are the types of things that were under discussion.

curi:

A generic, unqualified term like "premises" or "risks" would not accurately represent the list of 3 examples "schools of philosophy, AI alignment, and identifying the hidden bets"

Gavin Palmer:

I don't see how schools of philosophy, AI alignment, and hidden bets are relevant. Those are just meaningless words in my mind. The meaning of those words in your mind may contain relevant points. And I would be willing to discuss those points as they relate to the project. But (I think) that would also require that you have some idea of what the software does and how it is done. To bring up these things before you understand the software seems very premature.

curi:

the details of your project are not relevant when i'm bringing up extremely generic issues.

curi:

e.g. there is realism vs idealism. your project takes one side, the other, or is compatible with both. i don't need to know more about your project to say this.

curi:

(or disagrees with both, though that'd be unusual)

curi:

it's similar with skepticism or not.

curi:

and moral relativism.

curi:

and strong empiricism.

curi:

one could go on. at length. and add a lot more using details of your project, too.

curi:

so, there exists some big list. it has stuff on it.

curi:

so, my point is that you ought to have some way of considering and dealing with this list.

curi:

some way of considering what's on it, figuring out which merit attention and how to prioritize that attention, etc.

curi:

you need some sort of policy, some way to think about it that you regard as adequate.

curi:

this is true of all projects.

curi:

this is one of the issues which has logical priority over the specifics of your project.

curi:

there are generic concepts about how to approach a project which take precedence over jumping into the details.

curi:

do you think you understand what i'm saying?

Gavin Palmer:

I think I understand this statement:

there are generic concepts about how to approach a project which take precedence over jumping into the details.

curi:

ok. do you agree with that?

Gavin Palmer:

I usually jump into the details. I'm not saying you are wrong though.

curi:

ok. i think looking at least a little at the big picture is really important, and that most projects lose a lot of effectiveness (or worse) due to failing to do this plus some common errors.

curi:

and not having any conscious policy at all regarding this issue (how to think about the many premises you are building on which may be wrong) is one of the common errors.

curi:

i think being willing to think about things like this is one of the requirements for someone who wants to be effective at saving/changing/helping the world (or themselves individually)

Gavin Palmer:

But I have looked at a lot of big picture things in my life.

curi:

cool. doesn't mean you covered all the key ones. but maybe it'll give you a head start on the project planning stuff.

Gavin Palmer:

So do you have an example of a project where it was done in a way that is satisfactory in your mind?

curi:

hmm. project planning steps are broadly unpublished and unavailable for the vast majority of projects. i think the short answer is no one is doing this right. this aspect of rationality is ~novel.

curi:

some ppl do a more reasonable job but it's really hard to tell what most ppl did.

curi:

u can look at project success as a proxy but i don't think that'll be informative in the way you want.

Gavin Palmer:

I'm going to break soon, but I would encourage you to think about some action items for you and I based around this ideal form of project planning. I have real-world experience with various forms of project planning to some degree or another.

curi's Monologue

curi:

the standard way to start is to brainstorm things on the list

curi:

after you get a bunch, you try to organize them into categories

curi:

you also consider what is a reasonable level of overhead for this, e.g. 10% of total project resource budget.

curi:

but a flat percentage is problematic b/c a lot of the work is general education stuff that is reusable for most projects. if you count your whole education, overhead will generally be larger than the project. if you only count stuff specific to this project, you can have a really small overhead and do well.

curi:

stuff like reading and understanding/remembering/taking-notes-on/etc one overview book of philosophy ideas is something that IMO should be part of being an educated person who has appropriate background knowledge. but many ppl haven't done it. if you assign the whole cost of that to one project it can make the overhead ratio look bad.
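
to illustrate the accounting point with made-up numbers (a toy sketch, not real figures):

    # toy numbers (made up) showing why what you count as "overhead" matters
    project_hours = 200              # labor spent on the project itself
    project_specific_overhead = 15   # planning/risk work done only for this project
    general_education = 400          # e.g. reading one overview book of philosophy

    print(project_specific_overhead / project_hours)
    # 0.075 -> ~8% overhead, reasonable
    print((project_specific_overhead + general_education) / project_hours)
    # ~2.08 -> "overhead" dwarfs the project if you bill your whole education to it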

curi:

unfortunately i think a lot of what's in that book would be wrong and ignore some more important but less famous ideas. but at least that'd be a reasonable try. most ppl don't even get that far.

curi:

certainly a decent number of ppl have done that. but i think few have ever consciously considered "which philosophy schools of thought does my project contradict? which am i assuming as premises and betting my project success on? and is that a good idea? do any merit more investigation before i make such a bet?" ppl have certainly considered such things in a disorganized, haphazard way, which sometimes manages to work out ok. idk that ppl have done this by design in the way i'm recommending.

curi:

this kind of analysis has large practical consequences, e.g. > 50% of "scientific research" is in contradiction to Critical Rationalist epistemology, which is one of the more famous philosophies of science.

curi:

IMO, consequently it doesn't work and the majority of scientists basically waste their careers.

curi:

most do it without consciously realizing they are betting their careers on Karl Popper being wrong.

curi:

many of them do it without reading any Popper book or being able to name any article criticizing Popper that they think is correct.

curi:

that's a poor bet to make.

curi:

even if Popper is wrong, one should have more information before betting against him like that.

curi:

another thing with scientists is the majority bet their careers on a claim along the lines of "college educations and academia are good"

curi:

this is a belief that some of the best scientists have disagreed with

curi:

a lot of them also have government funding underlying their projects and careers without doing a rational investigation of whether that may be a really bad, risky thing.

curi:

separate issue: broadly, most large projects try to use reason. part of the project is that problems come up and people try to do rational problem solving – use reason to solve the problems as they come up. they don't expect to predict and plan for every issue they're gonna face. there are open controversies about what reason is, how to use it, what problem solving methods are effective or ineffective, etc.

curi:

what the typical project does is go by common sense and intuition. they are basically betting the project on whatever concept of reason they picked up here and there from their culture being adequate. i regard this as a very risky bet.

curi:

and different project members have different conceptions of reason, and they are also betting on those being similar enough things don't fall apart.

curi:

commonly without even attempting to talk about the matter or put their ideas into words.

curi:

what happens a lot when people have unverbalized philosophy they picked up from their culture at some unknown time in the past is ... BIAS. they don't actually stick to any consistent set of ideas about reason. they change it around situationally according to their biases. that's a problem on top of some of the ideas floating around our culture being wrong (which is well known – everyone knows that lots of ppl's attempts at rational problem solving don't work well)

curi:

one of the problems in the field of reason is: when and how do you rationally end (or refuse to start) conversations without agreement. sometimes you and the other guy agree. but sometimes you don't, and the guy is saying "you're wrong and it's a big deal, so you shouldn't just shut your mind and refuse to consider more" and you don't want to deal with that endlessly but you also don't want to just be biased and stay wrong, so how do you make an objective decision? preferably is there something you could say that the other guy could accept as reasonable? (not with 100% success rate, some people gonna yell at you no matter what, but something that would convince 99% of people who our society considers pretty smart or reasonable?)

curi:

this has received very little consideration from anyone and has resulted in countless disputes when people disagree about whether it's appropriate to stop a discussion without giving further answers or arguments.

curi:

lots of projects have lots of strife over this specific thing.

curi:

i also was serious about AI risk being worth considering (for basically anything in the ballpark of machine learning, like classifying big data sets) even though i actually disagree with that one. i did consider it and think it merits consideration.

curi:

i think it's very similar to how physicists in 1940 were irresponsible if they were doing work anywhere in the ballpark of nuclear stuff and didn't think about potential weapons.

curi:

another example of a project management issue is how does one manage a schedule? how full should a schedule be packed with activities? i think the standard common sense ways ppl deal with this are wrong and do a lot of harm (the basic error is overfilling schedules in a way which fails to account for variance in task completion times, as explained by Eliyahu Goldratt)

curi:

i meant there an individual person's schedule
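
a toy simulation of the overfilled-schedule error (the task-time numbers are made-up assumptions, not Goldratt's model):

    import random

    random.seed(0)

    # each task is estimated at 1 hour but actually takes 0.5 to 1.5 hours
    def overran(tasks_scheduled, hours_available=8):
        actual = sum(random.uniform(0.5, 1.5) for _ in range(tasks_scheduled))
        return actual > hours_available

    trials = 10000
    print(sum(overran(8) for _ in range(trials)) / trials)
    # ~0.5: a fully packed schedule overruns about half the time
    print(sum(overran(6) for _ in range(trials)) / trials)
    # ~0.0: leaving ~1/3 unscheduled absorbs the variance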

curi:

similarly there is the problem of organizing the entire project schedule and coordinating people and things. this has received a ton of attention from specialists, but i think most ppl have an attitude like "trust a standard view i learned in my MBA course. don't investigate rival viewpoints". risky.

curi:

a lot of other ppl have no formal education about the matter and mostly ... don't look it up and wing it.

curi:

even riskier!

curi:

i think most project managers couldn't speak very intelligently about early start vs. late start for dependencies off the critical path.

curi:

and don't know that Goldratt answered it. and it does matter. bad decisions re this one issue result in failed and cancelled projects, late projects, budget overruns, etc.

curi:

lots of ppl's knowledge of decision making processes extends about as far as pro/con lists and ad hoc arguing.

curi:

so they are implicitly betting a significant amount of project effectiveness on something like "my foundation of pro/con lists and ad hoc arguing is adequate knowledge of decision making processes".

curi:

this is ... unwise.

curi:

another generic issue is lying. what is a lie? how do you know when you're lying to yourself? a lot of ppl make a bet roughly like "either my standard cultural knowledge + random variance about lying is good or lying won't come up in the project".

curi:

similar with bias instead of lying.

curi:

another common, generic way projects go wrong is ppl never state the project goal. they don't have clear criteria for project success or failure.

curi:

related, it's common to make basically no attempt to estimate the resources needed to complete the project successfully, estimate the resources available, and compare those two things.

curi:

goals and resource budgeting are things some ppl actually do. they aren't rare. but they're often omitted, especially for more informal and non-business projects.

curi:

including some very ambitious change-the-world type projects, where considering a plan and what resources it'll use is actually important. a lot of times ppl do stuff they think is moving in the direction of their goal without seriously considering what it will take to actually reach their goal.

curi:

e.g. "i will do X to help the environment" without caring to consider what breakpoints exist for helping the environment that make an important difference and how much action is required to reach one.

curi:

there are some projects like "buy taco bell for dinner" that use low resources compared to what you have available (for ppl with a good income who don't live paycheck to paycheck), so you don't even need to consciously think through resource use. but for a lot of bigger ones one ought to estimate e.g. how much time it'll take for success and how much time one is actually allocating to the project.

curi:

often an exploratory project is appropriate first. try something a little and see how you like it. investigate and learn more before deciding on a bigger project or not. ppl often don't consciously separate this investigation from the big project or know which they are doing.

curi:

and so they'll do things like switch to a big project without consciously realizing they need to clear up more time on their schedule to make that work.

curi:

often they just don't think clearly about what their goals actually are and then use bias and hindsight to adjust their goals to whatever they actually got done.

curi:

there are lots of downsides to that in general, and it's especially bad with big ambitious change/improve the world goals.

curi:

one of the most egregious examples of the broad issues i'm talking about is political activism. so many people are working for the red or blue team while having done way too little to find out which team is right and why.

curi:

so they are betting their work on their political team being right. if their political team is wrong, their work is not just wasted but actually harmful. and lots of ppl are really lazy and careless about this bet. how many democrats have read one Mises book or could name a book or article that they think refutes a major Mises claim?

curi:

how many republicans have read any Marx or could explain and cite why the labor theory of value is wrong or how the economic calculation argument refutes socialism?

curi:

how many haters of socialism could state the relationship of socialism to price controls?

curi:

how many of them could even give basic economic arguments about why price controls are harmful in a simple theoretical market model and state the premises/preconditions for that to apply to a real situation?

curi:

i think not many even when you just look at people who work in the field professionally. let alone if you look at people who put time or money into political causes.

curi:

and how many of them base their dismissal of solipsism and idealism on basically "it seems counterintuitive to me" and reject various scientific discoveries about quantum mechanics for the same reason? (or would reject those discoveries if they knew what they were)

curi:

if solipsism or idealism were true it'd have consequences for what they should do, and people's rejections of those ideas (which i too reject) are generally quite thoughtless.

curi:

so it's again something ppl are betting projects on in an unreasonable way.

curi:

to some extent ppl are like "eh i don't have time to look into everything. the experts looked into it and said solipsism is wrong". most such ppl have not read a single article on the topic and could not name an expert on the topic.

curi:

so their bet is not really on experts being right – which if you take that bet thousands of times, you're going to be wrong sometimes, and it may be a disaster – but their bet is actually more about mainstream opinion being right. whatever some ignorant reporters and magazine writers claimed the experts said.

curi:

they are getting a lot of their "expert" info fourth hand. it's filtered by mainstream media, talking heads on TV, popular magazines, a summary from a friend who listened to a podcast, and so on.

curi:

ppl will watch and accept info from a documentary made by ppl who consulted with a handful of ppl who some university gave expert credentials. and the film makers didn't look into what experts or books, if any, disagree with the ones they hired.

curi:

sometimes the info presented disagrees with a majority of experts, or some of the most famous experts.

curi:

sometimes the film makers have a bias or agenda. sometimes not.

curi:

there are lots of issues where lots of experts disagree. these are, to some rough approximation, the areas that should be considered controversial. these merit some extra attention.

curi:

b/c whatever you do, you're going to be taking actions which some experts – some ppl who have actually put a lot of work into studying the matter – think is a bad idea.

curi:

you should be careful before doing that. ppl often aren't.

curi:

politics is a good example of this. whatever side you take on any current political issue, there are experts who think you're making a big mistake.

curi:

but it comes up in lots of fields. e.g. psychiatry is much less of an even split but there are a meaningful number of experts who think anti-psychotic drugs are harmful not beneficial.

curi:

one of the broad criteria for areas you should look into some before betting your project on are controversial areas. another is big risk areas (it's worse if you're wrong, like AI risk or e.g. there's huge downside risk to deciding that curing aging is a bad cause).

curi:

these are imperfect criteria. some very unpopular causes are true. some things literally no one currently believes are true. and you can't deal with every risk that doesn't violate the laws of physics. you have to estimate plausibility some.

curi:

one of the important things to consider is how long does it take to do a good job? could you actually learn about all the controversial areas? how thoroughly is enough? how do you know when you can move on?

curi:

are there too many issues where 100+ smart ppl or experts think ur initial plan is wrong/bad/dangerous, or could you investigate every area like that?

curi:

relying on the opinions of other ppl like that should not be your whole strategy! that gives you basically no chance against something your culture gets systematically wrong. but it's a reasonable thing to try as a major strategy. it's non-obvious to come up with way better approaches.

curi:

you should also try to use your own mind and judgment some, and look into areas you think merit it.

curi:

another strategy is to consider things that people say to you personally. fans, friends, anonymous ppl willing to write comments on your blog... this has some merits like you get more customized advice and you can have back and forth discussion. it's different to be told "X is dangerous b/c Y" from a book vs. a person where you can ask some clarifying questions.

curi:

ppl sometimes claim this strategy is too time consuming and basically you have to ignore ~80% of all criticism you're aware of according to your judgment, with no clear policies or principles to prevent biased judgments. i don't agree and have written a lot about this matter.

curi:

i think this kind of thing can be managed with reasonable, rational policies instead of basically giving up.

curi:

some of my writing about it: https://elliottemple.com/essays/using-intellectual-processes-to-combat-bias

curi:

most ppl have very few persons who want to share criticism with them anyway, so this article and some others have talked more about ppl with a substantial fan base who actually want to say stuff to them.

curi:

i think ppl should write down what their strategy is and do some transparency so they can be held accountable for actually doing it in addition to the strategy itself being something available for ppl to criticize.

curi:

a lot of times ppl's strategy is roughly "do whatever they feel like" which is such a bias enabler. and they don't even write down anything better and claim to do it. they will vaguely, non-specifically say they are doing something better. but no actionable or transparent details.

curi:

if they write something down they will want it to actually be reasonable. a lot of times they don't even put their policies into words in their own head. when they try to use words, they will see some stuff is unreasonable on their own.

curi:

if you can get ppl to write anything down what happens next is a lot of times they don't do what they said they would. sometimes they are lying pretty intentionally and other times they're just bad at it. either way, if they recognize their written policies are important and good, and then do something else ... big problem, even in their own view.

curi:

so what they really need are policies with some clear steps and criteria where it's really easy to tell if they are being done or not. not just vague stuff about using good judgment or doing lots of investigation of alternative views that represent material risks to the project. actual specifics like a list of topic areas to survey the current state of expert knowledge in with a blog post summarizing the research for each area.

curi:

as in they will write a blog post that gives info about things like what they read and what they think of it, rather than them just saying they did research and their final conclusion.

curi:

and they should have written policies about ways critics can get their attention, and for in what circumstances they will end or not start a conversation to preserve time.

curi:

if you don't do these things and you have some major irrationalities, then you're at high risk of a largely unproductive life. which is IMO what happens to most ppl.

curi:

most ppl are way more interested in social status hierarchy climbing than taking seriously that they're probably wrong about some highly consequential issues.

curi:

and that for some major errors they are making, better ideas are actually available and accessible right now. it's not just an error where no one knows better or only one hermit knows better.

curi:

there are a lot of factors that make this kind of analysis much harder for ppl to accept. one is they are used to viewing many issues as inconclusive. they deal with controversies by judging one side seems somewhat more right (or sometimes: somewhat higher social status) instead of actually figuring out decisive, clear cut answers.

curi:

and they think that's just kinda how reason works. i think that's a big error and it's possible to actually reach conclusions. and ppl actually do reach conclusions. they decide one side is better and act on it. they are just doing that without having any reason they regard as adequate to reach that conclusion...

curi:

some of my writing about how to actually reach conclusions re issues http://curi.us/1595-rationally-resolving-conflicts-of-ideas

curi:

this (possibility of reaching actual conclusions instead of just saying one side seems 60% right) is a theme which is found, to a significant extent, in some of the other thinkers i most admire like Eliyahu Goldratt, Ayn Rand and David Deutsch.

curi:

Rand wrote this:

curi:

Now some of you might say, as many people do: “Aw, I never think in such abstract terms—I want to deal with concrete, particular, real-life problems—what do I need philosophy for?” My answer is: In order to be able to deal with concrete, particular, real-life problems—i.e., in order to be able to live on earth.
You might claim—as most people do—that you have never been influenced by philosophy. I will ask you to check that claim. Have you ever thought or said the following? “Don’t be so sure—nobody can be certain of anything.” You got that notion from David Hume (and many, many others), even though you might never have heard of him. Or: “This may be good in theory, but it doesn’t work in practice.” You got that from Plato. Or: “That was a rotten thing to do, but it’s only human, nobody is perfect in this world.” You got it from Augustine. Or: “It may be true for you, but it’s not true for me.” You got it from William James. Or: “I couldn’t help it! Nobody can help anything he does.” You got it from Hegel. Or: “I can’t prove it, but I feel that it’s true.” You got it from Kant. Or: “It’s logical, but logic has nothing to do with reality.” You got it from Kant. Or: “It’s evil, because it’s selfish.” You got it from Kant. Have you heard the modern activists say: “Act first, think afterward”? They got it from John Dewey.
Some people might answer: “Sure, I’ve said those things at different times, but I don’t have to believe that stuff all of the time. It may have been true yesterday, but it’s not true today.” They got it from Hegel. They might say: “Consistency is the hobgoblin of little minds.” They got it from a very little mind, Emerson. They might say: “But can’t one compromise and borrow different ideas from different philosophies according to the expediency of the moment?” They got it from Richard Nixon—who got it from William James.

curi:

which is about how ppl are picking up a bunch of ideas, some quite bad, from their culture, and they don't really know what's going on, and then those ideas affect their lives.

curi:

and so ppl ought to actually do some thinking and learning for themselves to try to address this.

curi:

broadly, a liberal arts education should have provided this to ppl. maybe they should have had it by the end of high school even. but our schools are failing badly at this.

curi:

so ppl need to fill in the huge gaps that school left in their education.

curi:

if they don't, to some extent what they are at the mercy of is the biases of their teachers. not even their own biases or the mistakes of their culture in general.

curi:

schools are shitty at teaching ppl abstract ideas like an overview of the major philosophers and shitty at teaching practical guidelines like "leave 1/3 of your time slots unscheduled" and "leave at least 1/3 of your income for optional, flexible stuff. don't take on major commitments for it"

curi:

(this is contextual. like with scheduling, if you're doing shift work and you aren't really expected to think, then ok the full shift can be for doing the work, minus some small breaks. it's advice more for ppl who actually make decisions or do knowledge work. still applies to your social calendar tho.)

curi:

(and actually most ppl doing shift work should be idle some of the time, as Goldratt taught us.)

curi:

re actionable steps, above i started with addressing the risky bets / risky project premises. with first brainstorming things on the list and organizing into categories. but that isn't where project planning starts.

curi:

it starts with more like

curi:

goal (1 sentence). how the goal will be accomplished (outline. around 1 paragraph worth of text. bullet points are fine)

curi:

resource usage for major, relevant resource categories (very rough ballpark estimates, e.g. 1 person or 10 or 100 ppl work on it. it takes 1 day, 10 days, 100 days. it costs $0, $1000, $1000000.)

curi:

you can go into more detail, those are just minimums. often fine to begin with.
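
here's a minimal sketch of that starting template as Python data (the field names and the sample project are illustrative, not a fixed format):

    # minimal project plan: goal, outline, rough resource estimates
    project_plan = {
        "goal": "publish searchable logs of this Discord server",  # 1 sentence
        "how": [  # outline, ~1 paragraph worth of bullet points
            "run a bot that logs messages",
            "convert the logs to web pages with search",
            "get feedback from a few users and revise",
        ],
        "resources": {  # very rough ballpark estimates
            "people": 1,
            "days": 10,
            "dollars": 0,
        },
    }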

curi:

for big, complicated projects you may need a longer outline to say the steps involved.

curi:

then once u have roughly a goal and a plan (and the resource estimates help give concrete meaning to the plan), then you can look at risks, ways it may fail.

curi:

the goal should be clearly stated so that someone could clearly evaluate potential outcomes as "yes that succeeded" or "no, that's a failure"

curi:

if this is complicated, you should have another section giving more detail on this.

curi:

and do that before addressing risks.

curi:

another key area is prerequisites. can do before or after risks. skills and knowledge you'll need for the project. e.g. "i need to know how to wash a test tube". especially notable are things that aren't common knowledge and you don't already know or know how to do.

curi:

failure to succeed at all the prerequisites is one of the risks of a project. the prerequisites can give you some ideas about more risks in terms of intellectual bets being made.

curi:

some prerequisites are quite generic but merit more attention than they get. e.g. reading skill is something ppl take for granted that they have, but it's actually an area where most ppl could get value from improving. and it's pretty common ppl's reading skills are low enough that it causes practical problems if they try to engage with something. this is a common problem with intellectual writing but it comes up plenty with mundane things like cookbooks or text in video games that provides information about what to do or how an ability works. ppl screw such things up all the time b/c they find reading burdensome and skip reading some stuff. or they read it fast, don't understand it, and don't have the skill to realize they missed stuff.

curi:

quite a few writers are not actually as good at typing as they really ought to be, and it makes their life significantly worse and less efficient.

curi:

and non-writers. cuz a lot of ppl type stuff pretty often.

curi:

and roughly what happens is they add up all these inefficiencies and problems, like being bad at typing and not knowing good methods for resolving family conflicts, and many others, and the result is they are overwhelmed and think it'd be very hard to find time to practice typing.

curi:

their inefficiencies take up so much time they have trouble finding time to learn and improve.

curi:

a lot of ppl's lives look a lot like that.


Elliot Temple | Permalink | Messages (53)

What Is an Impasse?

An impasse is a reason (from the speaker’s pov (point of view)) that the discussion isn’t working.

Impasses take logical priority over continuing the discussion. It doesn’t make sense to keep talking about the original topic when someone thinks that isn’t working.

An impasse chain is an impasse about a discussion of an impasse. The first impasse, about the original topic, is impasse 1. If discussion of impasse 1 reaches an impasse, that’s impasse 2. If discussion of impasse 2 reaches an impasse, that’s impasse 3. And so on.

A chain of impasses is different than multiple separate impasses. In a chain, each link is attached to the previous link. By contrast, multiple separate impasses would be if someone gives several reasons that the original discussion isn’t working. Each of those impasses is about the original discussion, rather than being linked to each other.

When there is a chain of impasses, the most recent (highest number) impasse takes priority over the previous impasses. Impasse 2 is a reason, from the speaker’s pov, that discussion of impasse 1 isn’t working. Responding about impasse 1 at that point doesn’t make sense from his pov. It comes off as trying to ignore him and his pov.

Sometimes people try to solve a problem without saying what they’re doing. Instead of discussing an impasse, they try to continue the prior discussion but make changes to fix the problem. But they don’t acknowledge the problem existed, say what they’re doing to fix it, ask if that is acceptable from the other person’s pov, etc. From the pov of the person who brought up the impasse, this looks like being ignored because the person doesn’t communicate about the impasse and tries to continue the original topic. The behavior looks very similar to a person who thinks the impasse is stupid and wants to ignore it for that reason. And usually when people try to silently solve the problem, they don’t actually know enough about it (since they asked no clarifying questions) to get it right on the first try (even if they weren’t confusing the other person by not explaining what they were doing, usually their first guess at a solution to the impasse won’t work).

This non-communicated-problem-solving-attempt problem is visible when people respond at the wrong level of discussion. Call the original topic level 0, the first impasse level 1, the second impasse level 2, the third impasse level 3, and so on. If level 3 has been reached and then someone responds to level 2, 1 or 0, then they’re not addressing the current impasse. They either are ignoring the problem or trying to solve it without explaining what they’re doing. Similarly, if the current level is 1, and someone responds at level 0, they’re making this error.
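
Here’s a minimal sketch of the level structure in code (my own modeling, for illustration only):

    # level 0 is the original topic; each impasse is about the level before it
    class Discussion:
        def __init__(self, topic):
            self.levels = [topic]

        def raise_impasse(self, reason):
            self.levels.append(reason)  # chained: about the previous level

        def current_level(self):
            return len(self.levels) - 1

        def reply_addresses_current_level(self, reply_level):
            # responding below the current level ignores the newest impasse
            return reply_level == self.current_level()

    d = Discussion("original topic")
    d.raise_impasse("impasse 1: the discussion isn't working")
    d.raise_impasse("impasse 2: the impasse 1 discussion isn't working")
    print(d.reply_addresses_current_level(2))  # True: addresses the current impasse
    print(d.reply_addresses_current_level(0))  # False: wrong level; looks like ignoring the speaker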

The above is already explained, in different words with more explanation, in my article Debates and Impasse Chains.


Elliot Temple | Permalink | Messages (7)

IGCs

IGCs are a way of introducing Yes or No Philosophy and Critical Fallibilism. I'm posting this seeking feedback. Does this make sense to you so far? Any objections? Questions? Doubts? Ideas that are confusing?


Ideas cannot be judged in isolation. We must know an idea’s goal or purpose. What problem is it trying to solve? What is it for? And what is the context?

So we should judge IGCs: {idea, goal, context} triples.

The same idea, “run fast”, can succeed in one context (a foot race) and fail in another context (a pie eating contest). And it can succeed at one goal (win the race) but fail at another goal (lose the race to avoid attention).

Think in terms of evaluating IGCs not ideas. A core question in thinking is: Does this idea succeed at this goal in this context? If you change any one of those parts (idea, goal or context) then it’s a different question and you may get a different result.
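
Here’s a minimal sketch of IGC evaluation as code (the lookup table is a toy stand-in for real judgment, using the running example above):

    from typing import NamedTuple

    class IGC(NamedTuple):
        idea: str
        goal: str
        context: str

    # toy evaluation table: the same idea passes or fails depending on
    # which goal and context it's paired with
    evaluations = {
        IGC("run fast", "win the race", "foot race"): True,
        IGC("run fast", "win the contest", "pie eating contest"): False,
        IGC("run fast", "lose the race to avoid attention", "foot race"): False,
    }

    # binary judgment: change any one part and the result may change
    print(evaluations[IGC("run fast", "win the race", "foot race")])  # True
    print(evaluations[IGC("run fast", "lose the race to avoid attention", "foot race")])  # False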

There are patterns in IGC evaluations. Some ideas succeed at many similar goals in a wide variety of contexts. Good ideas usually succeed at broad categories of goals and are robust enough to work in a fairly big category of contexts. However, a narrow, fragile idea can be valuable sometimes. (Narrow means that the idea applies to a small range of goals, and fragile means that many small adjustments to the context would cause the idea to fail.)

There are infinitely many logically possible goals and contexts. Every idea is in infinitely many IGCs that don’t work. Every idea, no matter how good, can be misused – trying to use it for a goal it can’t accomplish or in a context where it will fail.

Whether there are some universal ideas (like arithmetic) that can work in all contexts is an open question. Regardless, all ideas fail at many goals. And there are many more ways to be wrong than right. Out of all possible IGCs, most won’t work. Totally random or arbitrary IGCs are very unlikely to work (approximately a zero percent chance of working).

Truth is IGC success – the idea works at its purpose. Falsehood or error is an IGC that won’t work. Knowledge means learning about which IGCs work, and why, and the patterns of IGC success and failure.

So far, this is not really controversial. IGCs are not a standard way of explaining these issues, but they’re reasonably compatible with many common views. Many people would be able to present their beliefs using IGC terminology without changing their beliefs. I’ve talked about IGCs because they’re more precise than most alternatives and make it easier to understand my main point.

People believe that we can evaluate both whether an idea succeeds at a goal (in a context) and how well it does. There’s binary success or failure and also degree of success. Therefore, it’s believed, we should reject ideas that will fail and then, among the many that can succeed, choose an idea that will bring a high degree of success and/or a high probability of success.

I claim that this approach is fundamentally wrong. We can and should use only decisive, binary judgments of success or failure.

The main cause of degree evaluations of ideas is vagueness, especially vague goals.


I'll stop there for now. Please post feedback on what it says so far (rather than on e.g. me not yet explaining vague goals).


Elliot Temple | Permalink | Messages (9)

Some Thoughts on Learning Philosophy

You need to know why you’re learning something in order to know when you’re done. What level of perfection does it need to be learned to? Which details should be learned and which skipped? That depends on its purpose.

At first, you can go by intuition or conventional defaults for how well to learn something, but it’s important at some point to start getting some control over this and making it more intentional and chosen.

To get a grasp on the purpose of learning, you need a tree (or graph). Writing it down helps clarify it in your mind. If you think about it without writing it down, there’s still information in your head that is logically equivalent to a tree (or graph) structure. If you have a goal and something you plan to do that’s related to the goal, then that is a tree: the goal is the root node and the relevant action is a descendant.

A tree can indicate some things you’re hoping to build up to. E.g. the root node is “write well” and then “learn grammar” is one of its descendants. But those aren’t specific. How will you know when you succeeded?

It’s OK to sketch out trees with blank parts. You have the root node, then don’t specify everything, and then you get to the grammar node. You don’t have to know exactly what’s in between to know there’s a connection there. Figuring it out is useful though. It’s better to have something pretty generic like “learn mechanics of writing” in between instead of leaving it blank.

If you want to be able to write an article sharing your ideas about dinosaurs so that three of your friends can understand it, that’s more specific. That clearer root node gives more meaning to the “learn grammar” node below it. You can learn just the grammar that’s relevant to the goal. It helps you know when to move on. For example, you can write understandably to your three friends without using any colons or semi-colons. But you will need to understand periods, and you’ll probably want to use a few commas and question marks. And you’ll need to understand what a sentence is – not in full detail but at least the basics.

Another descendant node is “learn vocabulary”. Since the goal relates to dinosaurs, you’ll need some uncommon words like “Cretaceous”, but you won’t need to know “sporadically” or “perplexity” (which are sometimes called “SAT words” due to showing up on the SAT college-entrance test – if your goal were to get into more prestigious colleges, then you’d need to learn different vocabulary).
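
As a sketch, the tree built up so far could be written down like this (Python, nested dicts – the node names are my paraphrases of the example, and the encoding is just one illustrative choice):

```python
# The dinosaur-article example as a goal tree, encoded with nested dicts.
# Node names are paraphrases of the nodes discussed above.

goal_tree = {
    "write a dinosaur article my three friends can understand": {
        "learn mechanics of writing": {
            "learn grammar": {
                "periods": {},
                "commas and question marks": {},
                "what a sentence is (basics)": {},
                # colons and semicolons omitted: not needed for this goal
            },
        },
        "learn vocabulary": {
            "dinosaur words (e.g. Cretaceous)": {},
            # "SAT words" omitted: irrelevant to this root goal
        },
    },
}

def print_tree(tree, depth=0):
    """Print each node indented under its parent."""
    for node, children in tree.items():
        print("  " * depth + node)
        print_tree(children, depth + 1)

print_tree(goal_tree)
```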

Bottlenecks and breakpoints are important too. Which areas actually deserve much of your attention? Which are important to your goal and should be focused on? Which aren’t? Why? Usually you can get most stuff to a “good enough” level with little attention and then focus most of your attention on a few areas that will make a big difference to the outcome. If you can’t do that – if there are a lot of hard parts – then the project as a whole is too advanced for you and therefore needs to be divided into more manageable sub-projects. The number of sub-projects you end up with gives you a decent indication of project difficulty. If you have to divide it up into 500 parts to get them into manageable chunks, then it’s a big, hard project overall! If it’s 3 chunks then it’s harder than the average project but not too bad.

A bottleneck is a limiting factor, aka a constraint. If you do better in that area, it translates to a better outcome on the final goal. Most things aren’t bottlenecks. E.g. consider a chain. Reinforcing a typical link won’t make the overall chain stronger, because it wasn’t the weakest link anyway. Doing better in that area (that link is stronger) doesn’t translate to more success at the goal (the chain holds more weight). But if you find the weakest link – the bottleneck – and reinforce that link, then you’ll actually have a positive impact on the goal.
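
A few lines of Python make the chain logic concrete (the link strengths are made-up numbers for illustration):

```python
# Chain strength is the strength of its weakest link, so only improving
# the bottleneck improves the outcome.

links = [50, 80, 70, 90]   # weight each link can hold
print(min(links))          # 50: what the whole chain can hold

links[3] += 30             # reinforce an already-strong link
print(min(links))          # still 50: no effect on the goal

links[0] += 30             # reinforce the weakest link (the bottleneck)
print(min(links))          # 70: the chain actually holds more now
```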

A breakpoint is a significant, distinguishable improvement. It makes some kinda meaningful difference instead of just being 0.003% better (who cares?). For example, I want to buy something that costs $20. Then there’s a breakpoint at $20. If I have $19 or less, I can’t buy it. If I have $20 or more, I can buy it. The incremental change of gaining $1 from $19 to $20 crosses the breakpoint and makes a big difference (buy instead of can’t buy). But any other $1 doesn’t matter so much. If I go from $15 to $16, or from $33 to $34, it doesn’t change the outcome. More resources are generally a good thing, and money is generic enough to use on some other project later, but it’s important to figure out what will make important differences and pursue that. If we optimize things that don’t matter much, we can spend our whole lives without achieving much. There are so many details that we could pay attention to that they could consume all our time if we let them.
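
As a tiny sketch, a breakpoint is just a threshold test (using the $20 price from the example; the function name is my own):

```python
# The $20 breakpoint from the example above: crossing the threshold
# changes the outcome; any other dollar changes nothing.

def can_buy(dollars, price=20):
    return dollars >= price

print(can_buy(19), can_buy(20))  # False True: this $1 crosses the breakpoint
print(can_buy(15), can_buy(16))  # False False: this $1 doesn't matter
print(can_buy(33), can_buy(34))  # True True: neither does this one
```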

More specific goals are easier to achieve. More organized approaches are easier to succeed with. Some amount of organized planning – like connecting something to a clearer goal or sub-goal – helps you figure out what’s important and what’s “good enough”.

If you want to learn much philosophy or be much of a general intellectual, you need to be a decent reader and a decent writer so that communication to and from you can happen in writing. And you need some ability to organize ideas and organize your life/time. It doesn’t have to be perfect but it has to work OK. And you need some general competence with most of the common knowledge that most people in our society have. And you need some interest in understanding things and some curiosity. And you need some ability to judge stuff for yourself: Does this make sense to you? Are you satisfied? And you need some ability to change and to consider negative things without getting too emotional. Those things are general purpose enough that it doesn’t really matter which specific types of ideas interest you the most – e.g. epistemology, science or economics – they’re going to be useful regardless.


Elliot Temple | Permalink | Messages (0)