Brains Are Computers

Adding 5+5 is an example of a computation. Why? By definition. "Computation" refers to calculating things like sums, matrix multiplications, binary logic operations, derivatives, etc.

Are all computations math? Sorta. Consider this computation:

(concatenate "cat" "dog")

Which outputs: "catdog"

The format I used is: (function data-1 data-2). That's the best format programmers have invented so far. There can be any number of pieces of data, including zero. Quotes indicate a string. And in place of data you can also put a function which returns data. That's nesting, e.g.:

(concatenate "meow" (concatenate "cat" "dog"))

Which outputs: "meowcatdog"

Is that math? It can be done by math. E.g. you can assign each letter a number and manipulate lists of numbers, which is what a Mac or PC would do to deal with this. If you're interested in this topic, you might like reading Gödel, Escher, Bach, which discusses numeric representations.
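A rough sketch of that letters-as-numbers idea, in Python (the choice of Unicode code points as the numbers is arbitrary; any consistent encoding would do):

```python
# Represent strings as lists of numbers, then do "concatenation" purely
# by joining the number lists and decoding at the end.

def encode(s):
    """Map each character to a number (its Unicode code point)."""
    return [ord(c) for c in s]

def decode(nums):
    """Turn a list of numbers back into a string."""
    return "".join(chr(n) for n in nums)

def concatenate(a, b):
    """String concatenation, done entirely on numeric representations."""
    return decode(encode(a) + encode(b))

print(concatenate("cat", "dog"))                       # catdog
print(concatenate("meow", concatenate("cat", "dog")))  # meowcatdog
```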

But a human might calculate string concatenation in a different way, e.g. by writing each string on a piece of paper and then computing concatenations by taping pieces of paper together.

Humans have a lot of ways to do sums too. E.g. you can compute 5+5 using groups of marbles. If you want to know more about this, you should read David Deutsch's discussion of Roman numerals in The Beginning of Infinity, as well as the rest of his books.

Moving on, computation is sorta like math but not exactly. You can think of computation as math or stuff that could be done with math.

A computer is a physical object which can do computations.

We can see that human intelligence involves computation because I can ask you "what is 5+5?" and you can tell me without even using a tool like a calculator. You can do it mentally. So either brains are computers or brains contain computers plus something else. There has to be a computer there somewhere because anything that can add 5+5 is a computer.

But we don't really care about an object which can add 5+5 but which can't compute anything else.

We're interested in computers which can do many different computations. Add lots of different numbers, multiply any matrices, find primes, and even do whatever math or math-equivalent it takes to write and send emails!

We want a general purpose computer. And human intelligence has that too. Humans can mentally compute all sorts of stuff like integrals, factoring, finding the area of shapes, or logic operations like AND, NOT, OR, XOR.

When we say "computer" we normally refer to general purpose computers. Specifically, universal classical computers.

A universal computer is a computer that can compute anything that can be computed. "Classical" refers to computers which don't use quantum physics. Quantum physics allows some additional computations if you build a special quantum computer.

A universal computer sounds really amazing and difficult to create. It sounds really special. But there's something really interesting. All general purpose computers are universal. It only takes a tiny bit of basic functionality to reach universality.

Every iPhone, Android phone, Mac, or PC is a universal computer. Even microwaves and dishwashers use universal computers to control them. The computer in a microwave can do any computation that a $100,000 supercomputer can do. (The microwave computer would take longer and you'd have to plug in extra disks periodically for computations that deal with a lot of data.)

All it takes to be a universal computer is being able to compute one single function: NAND. NAND takes two inputs, each of which is a 1 or 0, and it computes one output, a 1 or 0. NAND stands for "not and" and the rule is: output 0 if both inputs are 1, otherwise output 1.

That's it. You can use NAND to do addition, matrix multiplication, and send emails. You just have to build up the complexity step by step.
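Here's one way that step-by-step buildup can start, sketched in Python: NOT, AND, OR, and XOR defined purely in terms of NAND, then a one-bit adder built from those. (This is a minimal illustration of the principle, not how real chips are laid out.)

```python
def NAND(a, b):
    """The only primitive: output 0 when both inputs are 1, else 1."""
    return 0 if (a == 1 and b == 1) else 1

# Everything below is built from NAND alone.
def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def XOR(a, b):
    return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Add two bits: the first small step toward sums like 5+5."""
    return XOR(a, b), AND(a, b)  # (sum bit, carry bit)

print(half_adder(1, 1))  # (0, 1): in binary, 1 + 1 = 10
```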

There are many other ways to achieve universality. For example, a computer which can compute AND and NOT, individually, is also universal. Being able to do NOT and OR also works. (Again, these are simple functions which only have 1's and 0's as inputs and outputs.) If you want to see how they work, Wikipedia has "truth tables" which list the outputs for all possible inputs.

We can see that the computer aspect of humans is universal because humans can mentally compute NAND, AND and NOT. That's more than enough to indicate universal computation.

To make this more concrete, you can ask me what (AND 1 1) is and I can tell you 1. You can ask me (NOT 0) and I can tell you 1. You can ask me (NAND 1 1) and I can tell you 0. I can do that in my head, no problem. You could too (at least if you learned how). You're capable.

So human thinking works by either:

  1. Universal classical computation; or

  2. Universal classical computation as well as something else.

I don't think there's a something else because there's nothing humans do, think, say, etc., which requires something else to explain how it's possible. And because no one has proposed any something else that makes sense. I don't believe in magical souls, and I'm certainly not going to start believing in them in order to say, "Humans have a universal classical computer plus a soul, which lets them do exactly the same things a universal classical computer with no soul can do." That'd be silly. And I don't think an iPhone has a soul in the silicon.

The brains of dogs, cats, parrots and monkeys are also universal classical computers. Remember, that's a low bar. It's actually really hard to make a computer do much of anything without making it universal. You can read about Universal Cellular Automata and how little it takes to get universality if you're interested. How easy universality is to achieve, and how there's an abrupt jump to it (rather than there being half-universal computers) is also explained in The Beginning of Infinity.

I won't go into arguing that cat brains are universal computers here. What I will say, briefly, is in what way humans are different than cats. It's kinda like how a PC is different than an iPhone. It has a different operating system and different apps. That's the basic difference between a person and a cat: different software. The hardware is different too, but the hardware fundamentally has the same capabilities, just like iPhones and PCs have different hardware with the same fundamental capabilities: they can do exactly the same computations. Humans have intelligence software of some sort – software which does intelligent thinking. Cats don't.

Elliot Temple | Permalink | Messages (5)


IQ

This is a reply to Ed Powell writing about IQ.

I believe IQ tests measure a mix of intelligence, culture and background knowledge.

That's useful! Suppose I'm screening employees to hire. Is a smart employee the only thing I care about? No. I also want him to fit in culturally and be knowledgeable. Same thing with immigrants.

The culture and background knowledge measured by IQ tests isn't superficial. It's largely learned in early childhood and is hard to change. It is possible to change. I would expect assimilating to raise IQ scores on many IQ tests, just as learning arithmetic raises scores on many IQ tests for people who didn't know it before.

Many IQ test questions are flawed. They have ambiguities. But this doesn't make IQ tests useless. It just makes them less accurate, especially for people who are smarter than the test creators. Besides, task assignments from your teacher or boss contain ambiguities too, and you're routinely expected to know what they mean anyway. So it matters whether you can understand communications in a culturally normal way.

Here's a typical example of a flawed IQ test question. We could discuss the flaws if people are interested in talking about it. And I'm curious what people think the answer is supposed to be.

IQ tests don't give perfect foresight about an individual's future. So what? You don't need perfectly accurate screening for hiring, college admissions or immigration. Generally you want pretty good screening which is cheap. If someone comes up with a better approach, more power to them.

Would it be "unfair" to some individual that they aren't hired for a job they'd be great at because IQ tests aren't perfect? Sure, sorta. That sucks. The world is full of things going wrong. Pick yourself up and keep trying – you can still have a great life. You have no right to be treated "fairly". The business does have a right to decide who to hire or not. There's no way to make hiring perfect. If you know how to do hiring better, sell them the method. But don't get mad at hiring managers for lacking omniscience. (BTW hiring is already unfair and stupid in lots of ways. They should use more work sample tests and less social metaphysics. But the problems are largely due to ignorance and error, not conscious malice.)

Ed Powell writes:

Since between 60% and 80% of IQ is heritable, it means that their kids won't be able to read either. Jordan Peterson in one of his videos claims that studies show there are no jobs at all in the US/Canadian economies for anyone with an IQ below about 83. That means 85% of the Somalian immigrants (and their children!) are essentially unemployable. No immigration policy of the US should ignore this fact.

I've watched most of Jordan Peterson's videos. And I know, e.g., that the first video YouTube sandboxed in their new censorship campaign was about race and IQ.

I agree that it's unrealistic for a bunch of low IQ Somalians to come here and be productive in U.S. jobs. I think we agree on lots of conclusions.

But I don't think IQ is heritable in the normal sense of the word "heritable", meaning that it's controlled by genes passed on by parents. (There's also a technical definition of "heritable", which basically means correlation.) For arguments, see: Yet More on the Heritability and Malleability of IQ.

I don't think intelligence is genetic. The studies claiming it's (partly) genetic basically leave open the possibility that it's a gene-environment interaction of some kind, which leaves open the possibility that intelligence is basically due to memes. Suppose parents in our culture give worse treatment to babies with black skin, and this causes lower intelligence. That's a gene-environment interaction. In this scenario, would you say that the gene for black skin is a gene for low intelligence? Even partly? I wouldn't. I'd say genes aren't controlling intelligence in this scenario, culture is (and, yes, our culture has some opinions about some genetic traits like skin color).
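This scenario can be made concrete with a toy simulation. Everything here is invented for illustration (the numbers, the penalty mechanism); the point is just that a gene with zero direct cognitive effect still ends up correlated with the trait:

```python
import random

random.seed(0)

def simulate_person():
    gene = random.randint(0, 1)        # a gene with no direct cognitive effect
    penalty = 20 if gene == 1 else 0   # the culture treats carriers worse
    trait = 100 - penalty + random.gauss(0, 5)
    return gene, trait

people = [simulate_person() for _ in range(10000)]
group0 = [t for g, t in people if g == 0]
group1 = [t for g, t in people if g == 1]
gap = sum(group0) / len(group0) - sum(group1) / len(group1)
print(gap > 15)  # True: the gene "predicts" the trait
# Set penalty to 0 for everyone, i.e. change the culture, and the gap
# vanishes: the gene never controlled the trait. The culture did.
```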

When people claim intelligence (or other traits) is due to ideas, they usually mean it's easy to change. Just use some willpower and change your mind! But memetic traits can actually be harder to change than genetic traits. Memes evolve faster than genes, and some old memes are very highly adapted to prevent themselves from being changed. Meanwhile, it's pretty easy to intervene to change your genetic hair color with dye.

I think intelligence is a primarily memetic issue, and the memes are normally entrenched in early childhood, and people largely don't know how to change them later. So while the mechanism is different, the conclusions are still similar to if it were genetic. One difference is that I'm hopeful that dramatically improved parenting practices will make a large difference in the world, including by raising people's intelligence.

Also, if memes are crucial, then current IQ score correlations may fall apart if there's a big cultural shift of the right kind. IQ test research only holds within some range of cultures, not in all imaginable cultures. But so what? It's not as if we're going to wake up in a dramatically different culture tomorrow...

I don't believe that IQ tests measure general intelligence – which I don't think exists as a single, well-defined thing. I have epistemological reasons for this which are complicated and differ from Objectivism on some points. I do think that some people are smarter than others. I do think there are mental skills, which fall under the imprecise term "intelligence", and have significant amounts of generality.

Because of arguments about universality (which we can discuss if there's interest), I think all healthy people are theoretically capable of learning anything that can be learned. But that doesn't mean they will! What stops them isn't their genes, it's their ideas. They have anti-rational memes from early childhood which are very strongly entrenched. (I also think people have free will, but often choose to evade, rationalize, breach their integrity, etc.)

Some people have better ideas and memes than others. So I share a conclusion with you: some people are dumber than others in important very-hard-to-change ways (even if it's not genetic), and IQ test scores do represent some of this (imperfectly, but meaningfully).

For info about memes and universality, see The Beginning of Infinity.

And, btw, of course there are cultural and memetic differences correlated with e.g. race, religion and nationality. For example, on average, if you teach your kids not to "act white" then they're going to turn out dumber.

So, while I disagree about many of the details regarding IQ, I'm fine with a statement like "criminality is mainly concentrated in the 80-90 IQ range". And I think IQ tests could improve immigration screening.

Read my followup post: IQ 2


IQ 2

These are replies to Ed Powell discussing IQ. This follows up on my previous post.

I believe I understand that you’re fed up with various bad counter-arguments about IQ, and why, and I sympathize with that. I think we can have a friendly and productive discussion, if you’re interested, and if you either already have sophisticated knowledge of the field or you’re willing to learn some of it (and if, perhaps as an additional qualification, you have an IQ over 130). As I emphasized, I think we have some major points of agreement on these issues, including rejecting some PC beliefs. I’m not going to smear you as a racist!

Each of these assertions is contrary to the data.

My claims are contrary to certain interpretations of the data, which is different than contradicting the data itself. I’m contradicting some people regarding some of their arguments, but that’s different than contradicting facts.

Just look around at the people you know: some are a lot smarter than others, some are average smart, and some are utter morons.

I agree. I disagree about the details of the underlying mechanism. I don’t think smart vs. moron is due to a single underlying thing. I think it’s due to multiple underlying things.

This also explains reversion to the mean

Reversion to the mean can also be explained by smarter parents not being much better parents in some crucial ways. (And dumber parents not being much worse parents in some crucial ways.)

Every piece of "circumstantial evidence" points to genes

No piece of evidence that fails to contradict my position can point to genes over my position.

assertion that there exists a thing called g

A quote about g:

To summarize ... the case for g rests on a statistical technique, factor analysis, which works solely on correlations between tests. Factor analysis is handy for summarizing data, but can't tell us where the correlations came from; it always says that there is a general factor whenever there are only positive correlations. The appearance of g is a trivial reflection of that correlation structure. A clear example, known since 1916, shows that factor analysis can give the appearance of a general factor when there are actually many thousands of completely independent and equally strong causes at work. Heritability doesn't distinguish these alternatives either. Exploratory factor analysis being no good at discovering causal structure, it provides no support for the reality of g.
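The 1916 example mentioned in the quote (Godfrey Thomson's sampling model) is easy to reproduce. In this sketch (all numbers arbitrary), each test draws on its own random subset of many completely independent abilities, yet every pair of tests correlates positively, which is exactly the pattern factor analysis reads as a general factor:

```python
import random

random.seed(1)
N_ABILITIES, N_TESTS, N_PEOPLE = 1000, 6, 500

# Each test samples 300 of the 1000 independent abilities.
subsets = [random.sample(range(N_ABILITIES), 300) for _ in range(N_TESTS)]

def person_scores():
    abilities = [random.gauss(0, 1) for _ in range(N_ABILITIES)]
    return [sum(abilities[i] for i in subset) for subset in subsets]

scores = [person_scores() for _ in range(N_PEOPLE)]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

pairwise = [corr([s[i] for s in scores], [s[j] for s in scores])
            for i in range(N_TESTS) for j in range(i + 1, N_TESTS)]
print(min(pairwise) > 0)  # True: all positive, despite 1000 independent causes
```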

Back to quoting Ed:

I just read an article the other day where researchers have identified a large number of genes thought to influence intelligence.

I’ve read many primary source articles. That kind of correlation research doesn’t refute what I’m saying.

What do you think psychometricians have been doing for the last 100 years?

Remaining ignorant of philosophy, particularly epistemology, as well as the theory of computation.

It is certainly true that one can create culturally biased IQ test questions. This issue has been studied to death, and such questions have been ruthlessly removed from IQ tests.

They haven’t been removed from the version of the Wonderlic IQ test you chose to link, which I took my example from.

I think there’s an important issue here. I think you believe there are other IQ tests which are better. But you also believe the Wonderlic is pretty good and gets roughly the same results as the better tests for lots of people. Why, given the flawed question I pointed out (which had a lot more wrong with it than cultural bias), would the Wonderlic results be similar to the results of some better IQ test? If one is flawed and one isn’t, why would they get similar results?

My opinion is as before: IQ tests don’t have to avoid cultural bias (and some other things) to be useful, because culture matters to things like job performance, university success, and how much crime an immigrant commits.

I don't use the term "genetic" because I don't mean "genetic", I mean "heritable," because the evidence supports the term "heritable."

The word "heritable" is a huge source of confusion. A technical meaning of "heritable" has been defined which is dramatically different than the standard English meaning. E.g. accent is highly "heritable" in the terminology of heritability research.

The technical meaning of “heritable” is basically: “Variance in this trait is correlated with changes in genes, in the environment we did the study in, via some mechanism of some sort. We have no idea how much of the trait is controlled by what, and we have no idea what environmental changes or other interventions would affect the trait in what ways.” When researchers know more than that, it’s knowledge of something other than “heritability”. More on this below.

I have not read the articles you reference on epistemology, but intelligence has nothing to do with epistemology, just as a computer's hardware has nothing to do with what operating system or applications you run on it.

Surely you accept that ideas (software) have some role in who is smart and who is a moron? And so epistemology is relevant. If one uses bad methods of thinking, one will make mistakes and look dumb.

Epistemology also tells us how knowledge can and can’t be created, and knowledge creation is a part of intelligent thinking.

OF COURSE INTELLIGENCE IS BASED ON GENES, because humans are smarter than chimpanzees.

I have a position on this matter which is complicated. I will briefly give you some of the outline. If you are interested, we can discuss more details.

First, one has to know about universality, which is best approached via the theory of computation. Universal classical computers are well understood. The repertoire of a classical computer is the set of all computations it can compute. A universal classical computer can do any computation which any other classical computer can do. For evaluating a computer’s repertoire, it’s allowed unlimited time and data storage.

Examples of universal classical computers are Macs, PCs, iPhones and Android phones (any of them, not just specific models). Human brains are also universal classical computers, and so are the brains of animals like dogs, cows, cats and horses. “Classical” is specified to omit quantum computers, which use aspects of quantum physics to do computations that classical computers can’t do.

Computational universality sounds very fancy and advanced, but it’s actually cheap and easy. It turns out it’s difficult to avoid computational universality while designing a useful classical computer. For example, the binary logic operations NOT and AND (plus some control flow and input/output details) are enough for computational universality. That means they can be used to calculate division, Fibonacci numbers, optimal chess moves, etc.

There’s a jump to universality. Take a very limited thing, and add one new feature, and all of a sudden it gains universality! E.g. our previous computer was trivial with only NOT, and universal when we added AND. The same new feature which allowed it to perform addition also allowed it to perform trigonometry, calculus, and matrix math.
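As a small Python illustration of the NOT-and-AND claim: OR can be derived from those two via De Morgan's law, and XOR from there, so nothing beyond the two primitives is needed to start the buildup:

```python
def NOT(a):
    return 1 - a

def AND(a, b):
    return a * b

# De Morgan's law: a OR b  ==  NOT(NOT(a) AND NOT(b))
def OR(a, b):
    return NOT(AND(NOT(a), NOT(b)))

def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

# Truth tables, computed from NOT and AND alone:
print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 1]
print([XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```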

There are different types of universality, e.g. universal number systems (systems capable of representing any number which any other number system can represent) and universal constructors. Some things, such as the jump to universality, apply to multiple types of universality. The jump has to do with universality itself rather than with computation specifically.

Healthy human minds are universal knowledge creators. Animal minds aren’t. This means humans can create any knowledge which is possible to create (they have a universal repertoire). This is the difference between being intelligent or not intelligent. Genes control this difference (with the usual caveats, e.g. that a fetal environment with poison could cause birth defects).

Among humans, there are also degrees of intelligence. E.g. a smart person vs. an idiot. Animals are simply unintelligent and don’t have degrees of intelligence at all. Why do animals appear somewhat intelligent? Because their genes contain evolved knowledge and code for algorithms to control animal behavior. But that’s a fundamentally different thing than human intelligence, which can create new knowledge rather than relying on previously evolved knowledge present in genes.

Because of the jump to universality, there are no people or animals which can create 20%, 50%, 80% or 99% of all knowledge. Nothing exists with that kind of partial knowledge creation repertoire. It’s only 100% (universal) or approximately zero. If you have a conversation with someone and determine they can create a variety of knowledge (a very low bar for human beings, though no animal can meet it), then you can infer they have the capability to do universal knowledge creation.

Universal knowledge creation (intelligence) is a crucial capability our genes give us. From there, it’s up to us to decide what to do with it. The difference between a moron and a genius is how they use their capability.

Differences in degrees of human intelligence, among healthy people (with e.g. adequate food), are due approximately 100% to ideas, not genes. Some of the main factors in early childhood idea development are:

  • Your culture’s anti-rational memes.
  • The behavior of your parents.
  • The behavior of other members of your culture that you interact with.
  • Sources of cultural information such as YouTube.
  • Your own choices, including mental choices about what to think.

The relevant ideas for intelligence are mostly unconscious and involve lots of methodology. They’re very hard for adults in our culture to change.

This is not the only important argument on this topic, but it’s enough for now.

This isn’t refuted in The Bell Curve, which doesn’t discuss universality. The concept of universal knowledge creators was first published in 2011, in The Beginning of Infinity. (FYI that book is by my colleague, and I contributed to the writing process.)

Below I provide some comments on The Bell Curve, primarily about how it misunderstands heritability research.

There is a most absurd and audacious Method of reasoning avowed by some Bigots and Enthusiasts, and through Fear assented to by some wiser and better Men; it is this. They argue against a fair Discussion of popular Prejudices, because, say they, tho’ they would be found without any reasonable Support, yet the Discovery might be productive of the most dangerous Consequences. Absurd and blasphemous Notion! As if all Happiness was not connected with the Practice of Virtue, which necessarily depends upon the Knowledge of Truth.
EDMUND BURKE A Vindication of Natural Society

This is a side note, but I don’t think the authors realize Burke was being ironic and was attacking the position stated in this quote. The whole work, called a vindication of natural society (anarchy), is an ironic attack, not actually a vindication.

Heritability, in other words, is a ratio that ranges between 0 and 1 and measures the relative contribution of genes to the variation observed in a trait.

This is incomplete because it omits the simplifying assumptions being made. From Yet More on the Heritability and Malleability of IQ:

To summarize: Heritability is a technical measure of how much of the variance in a quantitative trait (such as IQ) is associated with genetic differences, in a population with a certain distribution of genotypes and environments. Under some very strong simplifying assumptions, quantitative geneticists use it to calculate the changes to be expected from artificial or natural selection in a statistically steady environment. It says nothing about how much the over-all level of the trait is under genetic control, and it says nothing about how much the trait can change under environmental interventions. If, despite this, one does want to find out the heritability of IQ for some human population, the fact that the simplifying assumptions I mentioned are clearly false in this case means that existing estimates are unreliable, and probably too high, maybe much too high.

Note that the word “associated” in the quote refers to correlation, not to causality. Whereas the authors of The Bell Curve use the word “contribution” instead, which doesn’t mean “correlation” and is therefore wrong.

Here’s another source on the same point, Genetics and Reductionism:

high [narrow] heritability, which is routinely taken as indicative of the genetic origin of traits, can occur when genes alone do not provide an explanation of the genesis of that trait. To philosophers, at least, this should come as no paradox: good correlations need not even provide a hint of what is going on. They need not point to what is sometimes called a "common cause". They need not provide any guide to what should be regarded as the best explanation.

You can also read some primary source research in the field (as I have) and see what sort of “heritability” it does and doesn’t study, and what sort of limitations it has. If you disagree, feel free to provide a counter example (primary source research, not meta or summary), which you’ve read, which studies a different sort of IQ “heritability” than my two quotes talk about.

What happens when one understands “heritable” incorrectly?

Then one of us, Richard Herrnstein, an experimental psychologist at Harvard, strayed into forbidden territory with an article in the September 1971 Atlantic Monthly. Herrnstein barely mentioned race, but he did talk about heritability of IQ. His proposition, put in the form of a syllogism, was that because IQ is substantially heritable, because economic success in life depends in part on the talents measured by IQ tests, and because social standing depends in part on economic success, it follows that social standing is bound to be based to some extent on inherited differences.

This is incorrect because it treats “heritable” (as measured in the research) as meaning “inherited”.

How Much Is IQ a Matter of Genes?

In fact, IQ is substantially heritable. [...] The most unambiguous direct estimates, based on identical twins raised apart, produce some of the highest estimates of heritability.

This incorrectly suggests that IQ is substantially a matter of genes because it’s “heritable” (as determined by twin studies).

Specialists have come up with dozens of procedures for estimating heritability. Nonspecialists need not concern themselves with nuts and bolts, but they may need to be reassured on a few basic points. First, the heritability of any trait can be estimated as long as its variation in a population can be measured. IQ meets that criterion handily. There are, in fact, no other human traits—physical or psychological—that provide as many good data for the estimation of heritability as the IQ. Second, heritability describes something about a population of people, not an individual. It makes no more sense to talk about the heritability of an individual’s IQ than it does to talk about his birthrate. A given individual’s IQ may have been greatly affected by his special circumstances even though IQ is substantially heritable in the population as a whole. Third, the heritability of a trait may change when the conditions producing variation change. If, one hundred years ago, the variations in exposure to education were greater than they are now (as is no doubt the case), and if education is one source of variation in IQ, then, other things equal, the heritability of IQ was lower then than it is now.


Now for the answer to the question, How much is IQ a matter of genes? Heritability is estimated from data on people with varying amounts of genetic overlap and varying amounts of shared environment. Broadly speaking, the estimates may be characterized as direct or indirect. Direct estimates are based on samples of blood relatives who were raised apart. Their genetic overlap can be estimated from basic genetic considerations. The direct methods assume that the correlations between them are due to the shared genes rather than shared environments because they do not, in fact, share environments, an assumption that is more or less plausible, given the particular conditions of the study. The purest of the direct comparisons is based on identical (monozygotic, MZ) twins reared apart, often not knowing of each other’s existence. Identical twins share all their genes, and if they have been raised apart since birth, then the only environment they shared was that in the womb. Except for the effects on their IQs of the shared uterine environment, their IQ correlation directly estimates heritability. The most modern study of identical twins reared in separate homes suggests a heritability for general intelligence between .75 and .80, a value near the top of the range found in the contemporary technical literature. Other direct estimates use data on ordinary siblings who were raised apart or on parents and their adopted-away children. Usually, the heritability estimates from such data are lower but rarely below .4.

This is largely correct if you read “heritability” with the correct, technical meaning. But the assumption that people raised apart don’t share environment is utterly false. People raised apart – e.g. in different cities in the U.S. – share tons of cultural environment. For example, many ideas about parenting practices are shared between parents in different cities.
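A toy model (invented numbers, not data) of how that failed assumption plays out in the twins-reared-apart design: here genes contribute nothing to the trait, but the twins share a cultural factor even across households, and the MZ-apart correlation, which the direct method reads as heritability, comes out high anyway:

```python
import random

random.seed(2)

def twin_pair():
    # Shared culture (e.g. common parenting memes) affects both twins,
    # even though they're raised in different homes. Genes contribute 0.
    culture = random.gauss(0, 10)
    t1 = 100 + culture + random.gauss(0, 4)
    t2 = 100 + culture + random.gauss(0, 4)
    return t1, t2

pairs = [twin_pair() for _ in range(5000)]
xs, ys = [a for a, b in pairs], [b for a, b in pairs]
n = len(pairs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
r = cov / (sx * sy)
print(r > 0.8)  # True: a high "heritability" estimate with zero genetic effect
```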

Despite my awareness of these huge problems with IQ research, I still agree with some things you’re saying and believe I know how to defend them correctly. In short, genetic inferiority is no good (and contradicts Ayn Rand, btw), but cultural inferiority is a major world issue (and correlates with race, which has led to lots of confusion).

As a concrete reminder of what we’re discussing, I’ll leave you with an IQ test question to ponder:

Read my followup post: IQ 3


IQ 3

These are replies to Ed Powell discussing IQ. This follows up on my previous posts: IQ and IQ 2.

Thanks for writing a reasonable reply to someone you disagree with. My most important comments are at the bottom and concern a methodology that could be used to make progress in the discussion.

I think we both have the right idea of "heritable." Lots of things are strongly heritable without being genetic.

OK, cool. Is there a single written work – which agrees “heritable” doesn’t imply genetic – which you think adequately expresses the argument today for genetic degrees of intelligence? It’d be fine if it’s a broad piece discussing lots of arguments with research citations that it’s willing to bet its claims on, or if it focuses on one single unanswerable point.

I think you take my analogy of a brain with a computer too far.

It's not an analogy, brains are literally computers. A computer is basically something that performs arbitrary computations, like 2+3 or reversing the letters in a word. That’s not nearly enough for intelligence, but it’s a building block intelligence requires. Computation and information flow are a big part of physics now, and if you try to avoid them you're stuck with alternatives like souls and magic.
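To make “arbitrary computations” concrete, here’s a minimal sketch (Python, purely illustrative) of the two examples just named. The point is only that anything which can carry out computations like these is doing what a computer does:

```python
# Two of the computations named above, written out explicitly.

def add(a, b):
    # Sum two numbers, e.g. 2 + 3.
    return a + b

def reverse_word(word):
    # Reverse the letters in a word using a backwards slice.
    return word[::-1]

print(add(2, 3))            # prints 5
print(reverse_word("cat"))  # prints tac
```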

I don't pretend to understand your argument above, and so I won't spend time debating it, but you surely realize that human intelligence evolved gradually over the last 5 or so million years (since our progenitors split from the branch that became chimps), and that this evolution did not consist of a mutant ADD Gate gene and another mutant NOT Gate gene.

There are lots of different ways to build computers. I don't think brains are made out of a big pile of NAND gates. But computers with totally different designs can all be universal – able to compute all the same things.
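One standard illustration of that design-independence (a sketch, not a claim about how brains are actually wired): NAND gates alone suffice to build all of Boolean logic, and the NAND-built gates compute exactly the same functions as gates implemented any other way:

```python
# NAND is universal for Boolean logic: NOT, AND and OR can all be
# built from it, so a machine made only of NAND gates computes the
# same logic functions as one with native AND/OR/NOT hardware.

def nand(a, b):
    return not (a and b)

def not_gate(a):
    return nand(a, a)

def and_gate(a, b):
    return not_gate(nand(a, b))

def or_gate(a, b):
    return nand(not_gate(a), not_gate(b))

# The NAND-built gates agree with Python's built-in operators on
# every input, despite the different "designs".
for a in (False, True):
    for b in (False, True):
        assert and_gate(a, b) == (a and b)
        assert or_gate(a, b) == (a or b)
```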

Indeed, if intelligence is properly defined as "the ability to learn", then plenty of animals have some level of intelligence. Certainly my cats are pretty smart, and one can, among the thousands of cute cat videos on the internet, find examples of cats reasoning through options to open doors or get from one place to another. Dogs are even more intelligent. Even Peikoff changed his mind on Rand's pronouncement that animals and man are in different distinct classes of beings (animals obey instinct, man has no instinct and thinks) when he got a dog. Who knew that first hand experience with something might illuminate a philosophical issue?

I agree with Rand and I can also reach the same conclusion with independent, Popperian reasons.

I've actually had several dogs and cats. So I'm not disagreeing from lack of first hand experience.

What I would ask if I lacked that experience – and this is relevant anyway – is if you could point out one thing I'm missing (due to lack of experience, or for any other reason). What fact was learned from experience with animals that I don't know, and which contradicts my view?

I think you're not being precise enough about learning, and that with your approach you'd have to conclude that some video game characters also learn and are pretty smart. Whatever examples you provide about animal behaviors, I’ll be happy to provide parallel software examples – which I absolutely don’t think constitute human-like intelligence (maybe you do?).

Rand's belief in the distinct separation between man and animals when it comes to intellect is pretty contrary to the idea that man evolved gradually,

The jump to universality argument provides a way that gradual evolution could create something so distinct.

in the next few years the genetic basis of intelligence will in fact be found and we will no longer have anything to argue about. I don't think there's any real point arguing over this idea.

Rather than argue, would you prefer to bet on whether the genetic basis of higher intelligence will be found within the next 5 years? I'd love to bet $10,000 on that issue.

In any case, even if there was such a finding, there’d still be plenty to argue about. It wouldn’t automatically and straightforwardly settle the issues regarding the right epistemology, theory of computation, way to understand universality, etc.

We all know a bunch of really smart people who are in some ways either socially inept or completely nuts.

Yes, but there are cultural explanations for why that would be, and I don't think genes can control social skill (what exactly could the entire mechanism be, in hypothetical-but-rigorous detail?).

I know a number of people smarter than myself who have developed some form of mental illness, and it's fairly clear that these things are not unrelated.

Tangent: I consider the idea of "mental illness" a means of excusing and legitimizing the initiation of force. It's used to subvert the rule of law – both by imprisoning persons without trial and by keeping some criminals out of jail.

Link: Thomas Szasz Manifesto.

The point of IQ tests is to determine (on average) whether an individual will do well in school or work, and the correspondence between test results and success in school and work is too close to dismiss the tests as invalid, even if you don't believe in g or don't believe in intelligence at all.

Sure. As I said, I think IQ tests should be used more.

The tests are excellent predictors, especially in the +/- 3 SD area

Yes. I agree the tests do worse with outliers, but working well for over 99% of people is still useful!

The government has banned IQ tests from being used as discriminators for job fitness;

That's an awful attack on freedom and reason!

Take four or five internet IQ tests. I guarantee you the answers will be in a small range (+/- 5ish), even though they are all different. Clearly they measure something! And that something is correlated with success in school and work (for large enough groups).

I agree.

My one experience with Deutsch was his two interviews on Sam Harris's podcast

For Popper and Deutsch, I'd advise against starting with anything other than Deutsch's two books.

FYI Deutsch is a fan of Ayn Rand, an opponent of global warming, strongly in favor of capitalism, a huge supporter of Israel, and totally opposed to cultural and moral relativism (thinks Western culture is objectively and morally better, etc.).

I have some (basically Objectivist) criticism of Deutsch's interviews which will interest people here. In short, he's recently started sucking up to lefty intellectuals, kinda like ARI. But his flawed approach to dealing with the public doesn't prevent some of his technical ideas about physics, computation and epistemology from being true.

But if one doesn't believe g exists,

I think g is a statistical construct best forgotten.

or that IQ tests measure anything real,

I agree that they do, and that the thing measured is hard to change. Many people equate genetic with hard to change, and non-genetic with easy to change, but I don't. There are actual academic papers in this field which say, more or less, "Even if it's not genetic, we may as well count it as genetic because it's hard to change."

or that IQ test results don't correlate with scholastics or job success across large groups, then there's really nothing to discuss.

I agree that they do. I am in favor of more widespread use of IQ testing.

As I said, I think IQ tests measure a mix of intelligence, culture and background knowledge. I think these are all real, important, and hard to change. (Some types of culture and background knowledge are easy to change, but some other types are very hard to change, and IQ tests focus primarily on measuring the hard to change stuff, which is mostly developed in early childhood.)

Of course intelligence, culture and knowledge all correlate with job and school success.

Finally, I don't think agreement is possible on this issue, because much of your argument depends upon epistemological ideas of Popper/Deutsch and yourself, and I have read none of the source material. [...] I don't see how a discussion can proceed though on this IQ issue--or really any other issue--with you coming from such an alien (to me) perspective on epistemology that I have absolutely no insight into. I can't argue one way or the other about cultural memes since I have no idea what they are and what scientific basis for them exists. So I won't. I'm not saying you're wrong, I'm just saying I won't argue about something I know nothing about.

I'd be thrilled to find a substantial view on an interesting topic that I didn't already know about, that implied I was wrong about something important. Especially if it had some living representative(s) willing to respond to questions and arguments. I've done this (investigated ideas) many times, and currently have no high priority backlog. E.g. I know of no outstanding arguments against my views on epistemology or computation to address, nor any substantial rivals which aren't already refuted by an existing argument that I know of.

I've written a lot about methods for dealing with rival ideas. I call my approach Paths Forward. The basic idea is that it's rational to act so that:

  1. If I'm mistaken
  2. And someone knows it (and they're willing to share their knowledge)
  3. Then there's some reasonable way that I can find out and correct my mistake.

This way I don't actively prevent fixing my mistakes and making intellectual progress.

There are a variety of methods that can be used to achieve this, and also a variety of common methods which fail to achieve this. I consider the Paths-Forward-compatible methods rational, and the others irrational.

The rational methods vary greatly on how much time they take. There are ways to study things in depth, and also faster methods available when desired. Here's a fairly minimal rational method you could use in this situation:

Read until you find one mistake. Then stop and criticize.

You’ll find the first mistake early on unless the material is actually good. (BTW you're allowed to criticize meta mistakes, such as the author failing to say why his stuff matters, rather than only criticizing internal or factual errors. You can also stop reading at your first question, instead of criticism.)

Your first criticism (or question) will often be met with dumb replies that you can evaluate using knowledge you already have about argument, logic, etc. Most people with bad ideas will make utter fools of themselves in answer to your first criticism or question. OK, done. Rather than ignore them, you've actually addressed their position, and their position now has an outstanding criticism (or unanswered question), and there is a path forward available (they could, one day, wise up and address the issue).

Sometimes the first criticism will be met with a quality reply which addresses the issue or refers you to a source which addresses it. In that case, you can continue reading until you find one more mistake. Keep repeating this process. If you end up spending a bunch of time learning the whole thing, it's because you can't find any unaddressed mistakes in it (it's actually great)!

A crucial part of this method is actually saying your criticism or question. A lot of people read until the first thing they think is a mistake, then stop with no opportunity for a counter-argument. By staying silent, they're also giving the author (and his fans) no information to use to change their minds. Silence prevents progress regardless of which side is mistaken. Refusing to give even one argument leaves the other guy's position unrefuted, and leaves your position as not part of the public debate.

Another important method is to cite some pre-existing criticism of a work. You must be willing to take responsibility for what you cite, since you're using it to speak for you. It can be your own past arguments, or someone else's. The point is, the same bad idea doesn't need to be refuted twice – one canonical, reusable refutation is adequate. And by intentionally writing reusable material throughout your life, you'll develop a large stockpile which addresses common ideas you disagree with.

Rational methods aren't always fast, even when the other guy is mistaken. The less you know about the issues, the longer it can take. However, learning more about issues you don't know about is worthwhile. And once you learn enough important broad ideas – particularly philosophy – you can use it to argue about most ideas in most fields, even without much field-specific knowledge. Philosophy is that powerful! Especially when combined with a moderate amount of knowledge of the most important other fields.

Given limited time and many things worth learning, there are options about prioritization. One reasonable thing to do, which many people are completely unwilling to do, is to talk about one's interests and priorities, and actually think them through in writing and then expose one's reasoning to public criticism. That way there's a path forward for one's priorities themselves.

To conclude, I think a diversion into methodology could allow us to get the genetic intelligence discussion unstuck. I also believe that such methodology (epistemology) issues are a super important topic in their own right.

Elliot Temple | Permalink | Messages (9)

You're a Complex Software Project; Introspection is Auditing

(from this discussion)

you are a more complex software project than anything from Apple, IBM, etc.

your consciousness gets to audit the software and do maintenance and add features. the heart of the software was written in childhood and you don't remember much of it. think of it like a different team of programmers wrote it, and now you're coming in later.
you don't have full access to the source code for your audit. you can see source code for little pieces here and there, run automated tests for little pieces here and there, read some incomplete docs, and do manual tests for sporadic chunks of code.

and your attitude is: to ignore large portions of the limited evidence available to you about what the code does. that is, the best evidence available about what the code says is your own behavior. but you want to ignore that in favor of believing what you think the code does. you think the conclusions of your audit, which ignores the best evidence (your behavior – actual observations of the results of running code) and doesn't even know that it's a software audit or the circumstances of the audit, should be taken as gospel.

you find it implausible there are hostile software functions that could be running without your audit noticing. your audit has read 0.001% of the source code during the last year, but you seem to think the figure is 99%.

introspection skills mean getting better at auditing. this can help a ton, but there's another crucial approach: you can learn about what people in our culture are commonly like. this enables you to audit whether you're like that in particular ways, match behavior to common code, etc. b/c i know far more about cultural standard software (memes) than you, and also i know what the situation is (as just described and more) and you don't, i'm in a much better position to understand you than you are. this doesn't apply to your idiosyncrasies, i know even less than you about those, but i know that and avoid claims about the areas where i don't know. on the other hand, i can comment effectively when you write down the standard output (as judged by category and pattern, not the exact bits) of a standard software module, at length, and i recognize it.

for more info relating to intelligence, listen to my podcast about it.

Elliot Temple | Permalink | Messages (0)

Academia's Inadequacy

TheCriticalRat posted my article Animal Rights Issues Regarding Software and AGI on the Debate A Vegan SubReddit.

This post shares a discussion highlight where I wrote something I consider interesting and important. The main issue is about the inadequacy of academia.

The two large blockquotes are messages from pdxthehunted and the non-quotes are me. I made judgment calls about newline spacing for reproducing pdxthehunted's messages and I changed the format for the two footnotes.

This came up a few weeks ago, when u/curi was posing questions on this subreddit. I looked through some of Elliot's work then and did so again just now. I'm not accusing them of being here in bad faith--they seem like they are legitimately interested in thinking about this topic and are asking interesting questions/making interesting claims.

That being said, they also seem to have little or no formal education in philosophy of mind or AGI. All of their links to who they are circle back to their own commercial website/blog, where they sell their services as a rationalist philosopher/consultant. It appears that they are (mostly) self-taught. Their (supposed)[1] connection to David Deutsch is why I bothered even to look further.

I don't think you need to have a degree to understand even advanced tenets in philosophy of mind or artificial intelligence. The problem here is that Elliot seems to have written an enormous amount--possibly thousands of pages--but has never been published in any peer-reviewed journal (at least none that I have access to through my community college) and so their credibility is questionable. Judging from their previous interactions on this sub, Elliot seems to have created their own curriculum and field of expertise.

I was impressed by the scope and seriousness of their work (the little I took the time to read). Still, it's very problematic for debate: they seem to be looking for someone who has the exact same intellectual background as they do--but without any kind of standardization, it's very hard to know what that is without investing possibly hundreds of hours into reading his corpus. This is the benefit of academic credentials; we can engage with someone under the assumption that they know what they're talking about. Most of Elliot's citations and links are to their own blog--not to any peer-reviewed, actual science. I suspect that's why they've left the caveat that "blog posts are okay."

A very quick browse through Academic Search Premier found over 100 published peer-reviewed journal articles on nonhuman animals and general intelligence. I browsed the abstracts of the first three, all of which discuss general intelligence in nonhuman animals. General intelligence is hard to define--especially in a way that doesn't immediately bias it in favor of humans--but even looking at the usual suspects in cognition demonstrates that many animals possess it unless we move the goalposts to human-specific achievements like writing symphonies or building spacecraft (which of course leaves the vast majority of all humans who've ever existed out in the cold).

In short--not to be rude or dismissive--but the reason that animal rights activists aren't concerned about the "algorithms" that animals have that "give them the capacity to suffer" (forgive me if I'm misquoting) is that it is a non-issue. No serious biologists doubt that nonhuman animals (at least mammals and birds) can have preferences for or against different mental states and that those preferences can be frustrated or thwarted. Pain and suffering are fitness-selecting traits that allowed animals to avoid danger and seek nourishment and mates. I'm not an expert in any of your claimed domains; that being said, to believe that consciousness and the capacity to suffer evolved only in one species of primate demonstrates a shockingly naive understanding of evolution, philosophy of mind, cognitive science/neuroscience, and biology.

Similar questions can be asked about general intelligence. My answer to that is we don’t entirely know. We haven’t yet written an AGI. So what should we think in the meantime? We can look at whether all animal behavior is consistent with non-AGI, non-conscious, non-suffering robots with the same sorts of features and design as present day software and robots that we have created and do understand. Is there any evidence to differentiate an animal from non-AGI software? I’m not aware of any, although I’ve had many people point me to examples of animal behavior that are blatantly compatible with non-AGI programming algorithms.

There is no "scoop" here. There are a few serious philosophers I've read--Daniel Dennett, for instance--who I think make similar arguments as you're making here, which we can call the "animals as automata" meme. The very fact that you believe that cows show no more intelligence than a self-driving car makes me feel very suspicious that you don't know what you're talking about. Nick Bostrom basically states in his AI opus Superintelligence that if humans managed to emulate a rodent mind, we would have mostly solved human-level AGI.

To claim that there are "no examples" of an animal doing something that a non-AGI robot couldn't[2] do discredits your entire thesis--you're either woefully misinformed, or disingenuous. Again, I'm very impressed by your (Elliot's) obvious dedication to learning and thinking. Still, I don't think this argument is even to the point where it's refined enough to take seriously. There's so much wrong with it that betrays not just a lack of competence in adjacent disciplines but also an arrogance around the author's imagined brilliance that it feels awkward and unrewarding to engage with.

EDIT 12/2: [1] Connection to Deutsch--though not necessarily relevant to this argument--is not overstated.

[2] Changed would to couldn't

Suppose I'm right about ~everything. What should I do that would fix these problems?

Thanks for the response. Also, I checked the Beginning of Infinity and saw that you don't seem to be exaggerating your claim (obviously you know this--I'm mentioning it for any skeptics). Elliot Temple is not only listed in the acknowledgments of BOI, but they are given special thanks from the author. That's very cool, regardless of anything else. Congratulations. I'm hesitant to do too much cognitive work for you on how to fix your problems--it sounds like you're used to charging people a fair amount of money to do the same. Still, I engaged with you here, so I'll let you know what I think.

Read More

You need to become better read in adjacent fields--cognitive neuroscience, ethology, evolutionary biology, ethics--these are just a few that come up off the top of my head. If you're right about more or less everything, peer-reviewed research done by actual scientists in most of these fields should agree with your thesis. If it doesn't, make a weaker claim.


Right now, your argument is formatted as a blog post. Anyone with access to a computer is technically capable of self-publishing thousands of pages of their thoughts. Write an article and submit it to an academic journal for peer review. Any publication that survives the peer-review process will give you more credibility. I'm not saying that's fair, but it is a useful heuristic for nonexperts to decide whether or not you are worth their time. An alternative would be to see your blog posts cited in books by experts (for instance, Eliezer Yudkowsky has no formal secondary education, but his ideas are good enough that he is credited by other experts in his field).


As it currently stands, you're essentially making a claim and insisting that others disprove it. This, of course, is acceptable as a Reddit discussion or a blog post--but is not suitable for uncovering the truth. I can insist that my pet rock has a subjective experience and refuse to believe otherwise unless someone can prove it to me, but I won't be taken seriously (nor should I be). Could you design an experiment that tests a falsifiable claim about nonhuman animal general intelligence? (Or, alternatively, find one that has already been published demonstrating that only humans possess it?) What would it look like?

What computations, what information processing, what inputs or outputs to what algorithms, what physical states of computer systems like brains indicate or are consciousness? I have the same question for suffering too.

We don't know the answer to these questions. Staking your thesis on possible answers to open questions might be a way to stalemate internet debates, but won't deepen your or anyone else's understanding.


You're widely read and the depth of your knowledge/understanding in some areas is significant. You need to recognize that some people will have different foundations than yours--they might be very well-read on evolutionary biology--but have less of an understanding of Turing computability. Instead of rudely dismissing arguments that are outside of the disciplines you're most comfortable with, try to meet these people on their level. What do they have to teach you? What thinkers can they expose you to? Your self-curated curriculum is impressive but uneven and far from comprehensive. Try a little humility. Assuming you're right about everything, you should be able to communicate it to experts outside of your field.


I think that advice is good whether or not you're correct; if you are, people far more intelligent than I should start to recognize it. If you aren't, you might be able to clarify where you went wrong and either abandon your claim or reformulate it to make a weaker--but possibly true--version.

Lastly, I encourage anyone observing from the sidelines to use Google Scholar or similar if you have an interest in animal general intelligence. I linked an article above; here it is again. The article references 60 others and has been cited in 14. This does not mean that the authors' findings are replicable or ironclad, but again--it is a useful heuristic in deciding what kind of probability we want to assign to the likelihood it is on the right track, especially when the alternative is trying to read through hundreds of pages of random blog posts so that we can meet an interlocutor on their level.

To find that article, I searched for "general intelligence in animals" using Academic Search Premier. Pubmed and Google Scholar might find similar results. I filtered out all articles that were not subject to peer review or were published before 2012. It was the 4th search result out of over 50 published in the last seven years. Science may never be finished or solvable, but nonhuman animals' capacities to learn, have intentional states, preferences, and experience pain are not really still open questions in relevant disciplines.

If I'm right about ~everything, that includes my views of the broad irrationality of academia and the negative value of current published research in many of the fields in question.

For example, David Deutsch's static meme idea, available in BoI, was rejected for academic publication ~20 years earlier. Academia gatekeeps to keep out ideas they don't want to hear, and they don't really debate what's true much in journals. It's like a highly moderated forum with biased moderators following unwritten and inconsistent rules (like reddit but stricter!).

My arguments re animals are largely Deutsch's. He taught me his worldview. The reason he doesn't write it up and publish it in a journal is because (he believes that) it either wouldn't be published or wouldn't be listened to (and it would alienate people who will listen to his physics papers). The same goes for many other important ideas he has. Being in the Royal Society, etc., is inadequate to effectively get past the academic gatekeeping (to get both published and seriously, productively engaged with). I don't think a PhD and 20 published papers would help either (especially not with issues involving many fields at once). I don't think people would, at that point, start considering and learning different ideas than what they already have, e.g. learning Critical Rationalism so they could apply that framework to animal rights to reach a conclusion like "If Critical Rationalism is true, then animal rights is wrong." (And CR is not the only controversial premise I use that people are broadly ignorant of, so it's harder than that.) People commonly dismiss others, despite many credentials, if they don't like the message. I don't think playing the game of authority and credentials – an irrational game – will solve the problem of people's disinterest in truth-seeking. This view of academia is, again, a view Deutsch taught me.

Karl Popper published a ton but was largely ignored. Thomas Szasz too. There are many other examples. Even if I got published, I could easily be treated like e.g. Richard Lindzen who has published articles doubting some claims about global warming.

Instead of rudely dismissing arguments that are outside of the disciplines you're most comfortable with, try to meet these people on their level.

If I'm right about ~everything (premise), that includes that I'm right about my understanding of evolutionary biology, which is an area I've studied a lot (as has Deutsch). That's not outside my comfort zone.

I think that advice is good whether or not you're correct; if you are, people far more intelligent than I should start to recognize it.

We disagree about the current state of the world. How many smart people exist, how many competent people exist in what fields, how reasonable are intellectuals, what sort of things do they do, etc. You mention Eliezer Yudkowsky, who FYI agrees with me about something like this particular issue, e.g. he denies "civilizational adequacy", and says the world is on fire, in Hero Licensing. OTOH, he's also the same guy who took moderator action to suppress discussion of Critical Rationalism on his site because – according to him – it was downvoted a lot (factually there were lots of downvotes, but I mean he actually said that was his reason for taking moderator action – so basically just suppressing unpopular ideas on the basis that they are unpopular). He has publicly claimed Critical Rationalism is crap but has never written anything substantive about that and won't debate, answer counter-arguments, or endorse any criticism of Critical Rationalism written by someone else (and I'm pretty confident there is no public evidence that he knows much about CR).

The reason I asked about how to fix this is I think your side of the debate, including academic institutions and their alleged adequacy, is blocking error correction. It allows no reasonable or realistic way that, if I'm right, it gets fixed. FYI I've written about the general topic of how intellectuals are closed to ideas and what rational methods of truth seeking look like, e.g. Paths Forward. The basic theme of that article is about doing intellectual activities in such a way that, if you're wrong, and someone knows you're wrong, and they're willing to tell you, you don't prevent them from correcting you. Currently ~everyone is doing that wrong. (Of course there are difficulties like how to do this in a time-efficient manner, which I go into. It's not an easy problem to solve but I think it is solvable.)

Lastly, I encourage anyone observing from the sidelines to use Google Scholar or similar if you have an interest in animal general intelligence. I linked an article above; here it is again.

PS, FYI it's readily apparent from the first sentence of the abstract of that article that it's based on an intellectual framework which contradicts the one in The Beginning of Infinity. It views intelligence in a different way than we do, which must be partly due to some epistemology ideas which are not stated or cited in the paper. And it doesn't contain the string "compu" so it isn't engaging with our framework re computation either (instead it's apparently making unstated, uncited background assumptions again, which I fear may not even be thought through).

I guess you'll think that, in that case, I should debate epistemologists, not animal rights advocates. Approach one of the biggest points of disagreements more directly. I don't object to that. I do focus a lot on epistemology and issues closer to it. The animal welfare thing is a side project. But the situation in academic epistemology has the same problems I talked about in my sibling post and is, overall, IMO, worse. Also, even if I convinced many epistemologists, that might not help much, considering lots of what I was saying about computation is already a standard (sorta, see quote) view among experts. Deutsch actually complains about that last issue in The Fabric of Reality (bold text emphasized by me):

The Turing principle, for instance, has hardly ever been seriously doubted as a pragmatic truth, at least in its weak forms (for example, that a universal computer could render any physically possible environment). Roger Penrose's criticisms are a rare exception, for he understands that contradicting the Turing principle involves contemplating radically new theories in both physics and epistemology, and some interesting new assumptions about biology too. Neither Penrose nor anyone else has yet actually proposed any viable rival to the Turing principle, so it remains the prevailing fundamental theory of computation. Yet the proposition that artificial intelligence is possible in principle, which follows by simple logic from this prevailing theory, is by no means taken for granted. (An artificial intelligence is a computer program that possesses properties of the human mind including intelligence, consciousness, free will and emotions, but runs on hardware other than the human brain.) The possibility of artificial intelligence is bitterly contested by eminent philosophers (including, alas, Popper), scientists and mathematicians, and by at least one prominent computer scientist. But few of these opponents seem to understand that they are contradicting the acknowledged fundamental principle of a fundamental discipline. They contemplate no alternative foundations for the discipline, as Penrose does. It is as if they were denying the possibility that we could travel to Mars, without noticing that our best theories of engineering and physics say that we can. Thus they violate a basic tenet of rationality — that good explanations are not to be discarded lightly.

But it is not only the opponents of artificial intelligence who have failed to incorporate the Turing principle into their paradigm. Very few others have done so either. The fact that four decades passed after the principle was proposed before anyone investigated its implications for physics, and a further decade passed before quantum computation was discovered, bears witness to this. People were accepting and using the principle pragmatically within computer science, but it was not integrated with their overall world-view.

I think we live in a world where you can be as famous as Turing, have ~everyone agree you're right, and still have many implications of your main idea substantively ignored for decades (or forever. Applying Turing to physics is a better result than has happened with many other ideas, and Turing still isn't being applied to AI adequately). As Yudkowsky says, it's not an adequate world.

Update: Read more of this discussion at Discussing Animal Intelligence

Elliot Temple | Permalink | Messages (9)

Intelligence Isn't Speed

I explained on Reddit [one typo is fixed in this post] that intelligence isn't a matter of computing hardware speed.

Sounds like the IQ vs Universality thing is just two camps talking past each other.

Suppose we do believe in the basic premise of universality, that all computers are equally "powerful" in a specific way, namely that there's no problem a sophisticated computer can solve that a simple computer cannot, provided we just give the simple computer a long enough time frame to solve it in.

Fair enough. But surely we're also interested in how fast the computer can solve the problems. That's not a trivial factor, especially when we consider that human computers are prone to getting bored, frustrated, confused, or forgetful.

So maybe when we talk about IQ we're not talking about computational power, but maybe something like computational speed. Or, more likely, computational speed combined with some other personality traits.

I think computational universality helps change the primary point of interest (re intelligence) to software that is created and modified after birth. You think maybe it makes hardware speed the key place to look re intelligence. FYI, your view is something I've already considered and taken into account.

You also think some other (genetic) personality traits may be important to intelligence. I don't think so partly because of a different type of universality: universal intelligence (or universal learning, universal knowledge creating, universal problem solving, same things). Universalities are discussed in The Beginning of Infinity by David Deutsch. It's important, in these discussions, to keep the two types of universalities separate (universal computer; universal learning/thinking software). I won't go into this point further right now. I'm going to talk about the hardware speed issue.

Suppose my brain is 100% faster than yours (which sounds like an unrealistically high difference). You will still outperform me, by far, if you use a better algorithm than I do. E.g. if you use an O(N) algorithm to think about something while I'm using O(N^2).

That's called Big O notation, which basically describes how many steps it takes to complete the algorithm. N is the number of data points. In this example, you need time proportional to the amount of data. I need time proportional to the square of the amount of data. So for decent sized data sets, you win even if my hardware is twice as fast. E.g. with 10 data points, you win by a factor of 5. Taking 2 seconds per step, you need 10 * 2 = 20 seconds. I, doing steps in 1 second, need 10^2 = 100 seconds. How does it scale? With 100 data points, you need 200 seconds and I need 100^2 = 10,000 seconds. Now you won by a factor of 50. That factor will go up if there's more data. And the world has a lot of data.
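The arithmetic above can be sketched in code (a minimal illustration; the step times and data sizes are just the numbers from the example):

```python
def linear_seconds(n, seconds_per_step=2):
    # O(N) algorithm on the slower hardware (2 seconds per step)
    return n * seconds_per_step

def quadratic_seconds(n, seconds_per_step=1):
    # O(N^2) algorithm on hardware that's twice as fast (1 second per step)
    return n * n * seconds_per_step

for n in (10, 100):
    fast_algo = linear_seconds(n)   # better algorithm, slower hardware
    slow_algo = quadratic_seconds(n)  # worse algorithm, faster hardware
    print(n, fast_algo, slow_algo, slow_algo // fast_algo)
# 10 data points:  20 s vs 100 s     (factor of 5)
# 100 data points: 200 s vs 10,000 s (factor of 50)
```

The factor keeps growing with the data: at 1,000 data points it's 500.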

Exponential differences in Big O complexity between algorithms are common and routinely make a huge difference in processing time – far more than CPU speed. In software we write, lots of work goes into using algorithms that are only sub-optimal by a linear or constant amount.
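As a concrete illustration of an exponential-vs-linear gap (my hypothetical example, not from the original text): computing Fibonacci numbers by naive recursion takes a number of steps that grows roughly like 2^N, while a simple loop takes N steps, yet both give the same answer.

```python
def fib_naive(n, counter):
    # exponential: the recursion tree contains ~2^N function calls
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_loop(n):
    # linear: one loop iteration per step
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

calls = [0]
print(fib_naive(20, calls), calls[0])  # 6765, computed with 21,891 calls
print(fib_loop(20))                    # 6765, computed in just 20 steps
```

Same answer, wildly different step counts; no realistic hardware speedup closes a gap like that as N grows.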

If people think at different speeds, you should probably blame their thinking method (software) rather than their hardware for well over 99% of the difference. Especially because hardware variation between humans is pretty small.

But most differences in intelligence are not speed differences anyway. For example, often one human solves a problem and another doesn't solve it at all. The second guy doesn't solve it slower, he fails. He gets stuck and gives up, or won't even begin because he knows he doesn't understand how to do it. This is partly because of what knowledge people have or lack (learned information that wasn't inborn), and partly because of thinking methods (e.g. algorithms which could be fast or exponentially slow depending on how well they're designed). With bad algorithms, the time to finish can be a million years while a good algorithm can do the same task in minutes on a slower CPU.

There are other crucial non-hardware issues too, e.g. error correction. If you make a thinking mistake, can you recover from that, identify that something has gone wrong, find the problem, and fix it? Some ways of thinking can accomplish that pretty reliably for a wide variety of errors. But some ways of thinking are quite fragile to error. This leads to wildly different thinking results that aren't due to hardware speed.

I'll close with an explanation of these issues from David Deutsch, from my interview with him:

David: As to innate intelligence: I don't think that can possibly exist because of the universality of computation. Basically, intelligence or any kind of measure of quality of thinking is a measure of quality of software, not hardware. People might say, "Well, what hardware you have might affect how well your software can address problems." But because of universality, that isn't so: we know that hardware can at most affect the speed of computation. The thing that people call intelligence in everyday life — like the ability of some people like Einstein or Feynman to see their way through to a solution to a problem while other people can't — simply doesn't take the form that the person you regard as 'unintelligent' would take a year to do something that Einstein could do in a week; it's not a matter of speed. What we really mean is the person can't understand at all what Einstein can understand. And that cannot be a matter of (inborn) hardware, it is a matter of (learned) software.

Elliot Temple | Permalink | Messages (4)

Discussing Animal Intelligence

This post replies to pdxthehunted from Reddit (everything he said there is included in quotes below). There is also previous discussion before this exchange, see here. This post will somewhat stand on its own without reading context, but not 100%. Topics include whether animals can suffer, the nature of intelligence, and the flaws of academia.

[While writing this response, the original post was removed. I think that’s unfortunate, but what’s done is done. I’d still love a quick response—just to see if I understand you correctly.]

Hi, Elliot. Thanks for your response. I want to say off the bat that I don’t think I’m equipped to debate the issue at hand with you past this point. (Mostly based off your sibling post; I’m not claiming you’re wrong, but just that I think I—finally—realize that I don’t understand where you’re coming from, entirely, or possibly at all.) I’m willing to concede that—if you’re right about everything—you probably do need to have this conversation with programmers or physicists. If the general intelligence on display in the article I cited is categorically different from what you’re talking about when you talk about G.I. then I’m out of my depth.

Yes, what that article is studying is different and I don't think it should be called "general intelligence". General means general purpose, but the kind of "intelligence" in the article can't build a spaceship or write a philosophy treatise, so it's limited to only some cases. They are vague about this matter. They suggest they are studying general intelligence because their five learning tasks are "diverse". Being able to do 5 different learning tasks is a great sign if they are diverse enough, but I don't think they're diverse with respect to the set of all possible learning tasks; I think they're actually all pretty similar.

This is all more complicated because they think intelligence comes in degrees, so they maybe believe a mouse has the right type of intelligence to build a spaceship, just not enough of it. But their research is not about whether that premise (intelligence comes in degrees) is true, nor do they write philosophical arguments about it.

That being said, I’d love to continue the conversation for a little while, if you’re up for it, either here or possibly on your blog if that works better for you. I have some questions and would like to try and understand your perspective.

If I'm right about ~everything, that includes my views of the broad irrationality of academia and the negative value of current published research in many of the fields in question.

For example, David Deutsch's static meme idea, available in BoI, was rejected for academic publication ~20 years earlier. Academia gatekeeps to keep out ideas they don't want to hear, and they don't really debate what's true much in journals. It's like a highly moderated forum with biased moderators following unwritten and inconsistent rules (like reddit but stricter!).

My arguments re animals are largely Deutsch's. He taught me his worldview. The reason he doesn't write it up and publish it in a journal is because (he believes that) it either wouldn't be published or wouldn't be listened to (and it would alienate people who will listen to his physics papers). The same goes for many other important ideas he has. Being in the Royal Society, etc., is inadequate to effectively get past the academic gatekeeping (to get both published and seriously, productively engaged with). I don't think a PhD and 20 published papers would help either (especially not with issues involving many fields at once).

For what it’s worth, I think this is a fair criticism and concern, especially for someone—like you—who is trying to distill specific truths out of many fields at once. If your (and Deutsch’s) worldview conflicts with the prevailing academic worldview, I concede that publishing might be difficult or impossible and not the best use of your energy.

I asked for a solution but I'm happy with that response. I find it a very hard problem.

Sadly, Deutsch has given up on the problem to the point that he's focusing on physics (Constructor Theory) not philosophy now. Physics is one of the best academic fields to interact with, and one of the most productive and rational, while philosophy is one of the worst. Deutsch used to e.g. write about the implications of Critical Rationalism for parenting and education. The applications are pretty direct from philosophy of knowledge to how people learn, but the conclusions are extremely offensive to ~everyone because, basically, ~all parents and teachers are doing a bad job and destroying children's minds (which is one of the main underlying reasons for why academia and many other intellectual things are broken). Very important issues but people shoot messengers... The messenger shooting is bad enough that Deutsch refused me permission to post archived copies of hundreds of things he wrote publicly online but which are no longer available at their original locations. A few years earlier he had said he would like the archives posted. He changed his mind because he became more pessimistic about people's reactions to ideas.

I, by contrast, am pursuing a different strategy of speaking truth to power without regard for offending people. I don't want to hold back, but I also don't have a very large fanbase because even if someone agrees with me about many issues, I have like two dozen different ideas that would alienate many people, so pretty much everyone can find something to hate.

I don't think people would, at that point, start considering and learning different ideas than what they already have, e.g. learning Critical Rationalism so they could apply that framework to animal rights to reach a conclusion like "If Critical Rationalism is true, then animal rights is wrong." (And CR is not the only controversial premise I use that people are broadly ignorant of, so it's harder than that.) People commonly dismiss others, despite many credentials, if they don't like the message. I don't think playing the game of authority and credentials – an irrational game – will solve the problem of people's disinterest in truth-seeking. This view of academia is, again, a view Deutsch taught me.

Karl Popper published a ton but was largely ignored. Thomas Szasz too. There are many other examples. Even if I got published, I could easily be treated like e.g. Richard Lindzen who has published articles doubting some claims about global warming.

Fair enough.

I’m not going to respond to the rest of your posts line-by-line because I think most of what you’re saying is uncontroversial or is not relevant to the OP (it was relevant to my posts; thank you for the substantial, patient responses).

I think most people would deny most of it. I wasn’t expecting a lot of agreement. But OK, great.

For any bystanders who are interested and have made it this far, I think that this conversation between OP and Elliot is helpful in understanding their argument (at least it was for me).

Without the relevant CS or critical rationality background, I can attempt to restate their argument in a way that seems coherent (to me). Elliot or OP can correct me if I’m way off base.

The capacity for an organism to suffer may be binary; essentially, at a certain level of general intelligence, the capacity to suffer may turn on.

I don’t think there are levels of general intelligence, I think it’s present or not present. This is analogous to there not being levels of computers: it’s either a universal classical computer or it’s not a computer and can compute ~nothing. The jump from ~nothing to universality is discussed in BoI.

Otherwise, close enough.

(I imagine suffering to exist on a spectrum; a human’s suffering may be “worse” than a cow’s or a chicken’s because we have the ability to reflect on our suffering and amplify it by imagining better outcomes, but I’m not convinced that—if I experienced life from the perspective of a cow—that I wouldn’t recognize the negative hallmarks of suffering, and prefer it to end. My thinking is that a sow in a gestation crate could never articulate to herself “I’m uncomfortable and in pain; I wish I were comfortable and pain-free,” but that doesn’t preclude a conscious preference for circumstances to be otherwise, accompanied by suffering or its nonhuman analog.)

I think suffering comes in degrees if it’s present at all. Some injuries hurt more than others. Some bad news is more upsetting than other bad news.

Similarly, how smart people are comes in degrees when intelligence is present. They have the same basic capacity but vary in thinking quality due to having e.g. different ideas and different thinking methods (e.g. critical rationalist thinking is more effective than superstition).

Roughly there are three levels like this:

  1. Computer (brain)
  2. Intelligent Mind (roughly: an operating system (OS) for the computer with the feature that it allows creating and thinking about ideas)
  3. Ideas within the mind.

Each level requires the previous level.

Sand fails to match humans at level 1. No brain.

Apes fail to match humans at level 2. They run a different operating system with features more similar to Windows or Mac than to intelligence. It doesn’t have support for ideas.

Self-driving cars have brains (CPUs) which are adequately comparable to an ape or human, but like apes they differ from humans at level 2.

When Sue is cleverer than Joe, that’s a level 3 difference. She doesn’t have a better brain (level 1), nor a better operating system (level 2), she has better ideas. She has some knowledge he doesn’t. That includes not just knowledge of facts but also knowledge about rationality, about how to think effectively. E.g. she knows some stuff about how to avoid bias, how to find and correct errors effectively, how to learn from criticism instead of getting angry, or how to interpret disagreements as disagreements instead of as other things like heresy, bad faith, or “not listening”.

Small hardware differences between people are possible. Sue’s brain might be a 5% faster computer than Joe’s. But this difference is unimportant relative to the impact of culture, ideas, rationality, bias, education, etc. Similarly, small OS differences are possible but they wouldn’t matter much either.

There are some complications. E.g. imagine a society which extensively tested children on speed of doing addition problems in their head. They care a ton about this. The best performers get educated to be scientists and lower performers do unskilled labor. Someone with a slightly faster brain or slightly different OS might do better on those tests. Those tests limit the role of ideas. So, in this culture, a small hardware speed advantage could make a huge difference in life outcome including how clever the person is as an adult (due to huge educational differences which were caused by differences in arithmetic speed). But the same hardware difference could have totally different results in a different culture, and in a rational culture it wouldn’t matter much. What differentiates knowledge workers IRL, including scientists and philosophers, is absolutely nothing like the 99th percentile guys being able to get equal quality work done 5% faster than the 20th percentile guys.

Our actual culture has some stuff kinda like this hypothetical culture, but much more accidental and with less control over your life (there are many different paths to success, so even if a few get blocked, you don’t have to do unskilled labor). It also has similar kinda things based on non-mental attributes like skin color, height, hair color, etc, though again with considerably smaller consequences than the hypothetical where your whole fate is determined just by addition tests.

Back to my interpretation of the argument: Beneath a certain threshold of general intelligence, pain—or the experience of having any genetically preprogrammed preference frustrated—may not be interpreted as suffering in the way humans understand it and may not constitute suffering in any meaningful or morally relevant way (even if you otherwise think we have a moral obligation to prevent suffering where we can).

It’s possible that suffering requires uniquely human metacognition; without the ability to think about pain and preference frustration abstractly, animals might not suffer in any meaningful sense.

This is a reasonable approximation except that I think preferences are ideas and I don’t think animals have them at all (not even preprogrammed).

So far (I hope) all I’ve done is restate what’s already been claimed by Elliot in his original post. Whether I’ve helped make it any clearer is probably an open question. Hopefully, Elliot can correct me if I’ve misinterpreted anything or if I’ve dumbed it down to a level where it’s fundamentally different from the original argument.

This is where I think it gets tricky and where a lot of miscommunication and misunderstanding has been going on. Here is a snippet of the conversation I linked earlier:

curi: my position on animals is awkward to use in debates because it's over 80% background knowledge rather than topical stuff.

curi: that's part of why i wanted to question their position and ask for literature that i could respond to and criticize, rather than focusing on trying to lay out my position which would require e.g. explaining KP and DD which is hard and indirect.

curi: if they'll admit they have no literature which addresses even basic non-CR issues about computer stuff, i'd at that point be more interested in trying to explain CR to them.

I’m willing to accept that Elliot is here in good faith; nothing I’ve read on their blog thus far looks like an attempt to “own the soyboys” or “DESTROY vegan arguments.” They’re reading Singer (and Korsgaard) and are legitimately looking for literature that compares or contrasts nonhuman animals with AI.

The problem is—whether they’re right or not—it seems like the foundation of their argument requires a background in CR and theoretical computer science.


My view: if you want to figure out what’s true, a lot of ideas are relevant. Gotta learn it yourself and/or find a way to outsource some of the work. So e.g. Singer needs to read Popper and Deutsch or contact some people competent to discuss whether CR is correct and its implications. And Singer also needs to contact some computer people and ask them and try to meet them in the middle by explaining some of what he does to them so they understand the problems he’s working on, and then they explain some CS principles to him and how they apply to his problems. Something like that.

That is not happening.

It ought to actually be easier than that. Instead of contacting people, Singer or anyone else could look at the literature. What criticisms of CR have been written? What counter-arguments to those criticisms have CR advocates written? How did those discussions end? You can look at the literature and get a picture of the state of the debate and draw some conclusions from that.

I find people don’t do this much or well. It often falls apart in a specific way. Instead of evaluating the pro-CR and anti-CR arguments – seeing what answers what, what’s unanswered, etc. – they give up on understanding the issues and just decide to assume the correctness of whichever side has a significant lead in popularity and prestige.

The result is, whenever some bad ideas and irrational thinkers become prestigious in a field, it’s quite hard to fix because people outside the field largely refuse to examine the field and see if a minority view’s arguments are actually superior.

Also, often people just use common sense about what they assume would be true of other fields instead of consulting literature. So e.g. rather than reading actual inductivist literature (induction is mainstream and is one of the main things CR rejects), most animal researchers and others rely on what they’ve picked up about induction, here and there, just from being part of an intellectual subculture. Hence there exist e.g. academic papers studying animal intelligence that don’t cite even mainstream epistemology books or papers.

The current state of the CR vs. induction debate, in my considered and researched opinion, is there don’t actually exist criticisms of CR from anyone who has understood it, and there’s very little willingness to engage in debate by any inductivists. Inductivists are broadly uninterested in learning about a rival idea which they have not understood or refuted. I think ignoring ideas that no one has criticized is something of a maximum for a type of irrationality. And people outside the field (and in the field too) mostly assume that some inductivists somewhere did learn and criticize CR, though people usually don’t have links to specific criticisms, which is a problem. I think it’s important to have sources in other fields that aren’t your own so that if your sources are incorrect they can be criticized and corrected and you can change your mind, whereas if you just say “people in the field generally conclude X” without citing any particular arguments then it’s very hard to continue the discussion and correct you about X from there.

From my POV, (a) the argument that suffering may be binary vs. occurring on a spectrum is possible but far from settled and might be unfalsifiable. From my POV, it’s far more likely that animals do suffer in a way that is very different from human suffering but still ethically and categorically relevant.

That’s a reasonable place to start. What I can say is that if you investigate the details, I think they come out a particular way rather conclusively. (Actually the nature of arguments, and what is conclusive vs. unsettled – how to evaluate and think about that – is a part of epistemology; it’s one of the issues I think mainstream epistemology is wrong about. That’s actually the issue where I made my largest personal contribution to CR.)

If you don’t want to investigate the details, has anyone else done so as your proxy or representative? Has Singer or any other person or group done that work for you? Who has investigated, reached a conclusion, written it up, and you’re happy with what they did? If no one has done that, that suggests something is broken with all the intellectuals on your side – there may be a lot of them, but between all of them they aren’t doing much relevant thinking.

In some ways, the more people believe something and still no one writes detailed arguments and addresses rival ideas well, the more damning it is. In other words, CR has the excuse of not having essays to cover every little detail of every mainstream view because there aren’t many of us to write all that and we have ~no funding. The other side has no such excuse, yet that side, between all those people, has no representatives who will debate! They have plenty of people to field some specialists in refuting CR, but they don’t have any.

Sadly, the same pattern repeats in other areas, e.g. The Failure of the 'New Economics’ by Henry Hazlitt is a point-by-point book-length refutation of Keynes’ main book. It uses tons of quotes from Keynes, similar to how I’m replying to this comment using quotes from pdxthehunted. As far as I know, Hazlitt’s criticisms went unanswered. Note: I think Hazlitt’s level of fame/prestige was loosely comparable to Popper’s and greater than Deutsch’s; it’s not like he was ignored for being a nobody (which I’d object to too, but that isn’t what happened).

Large groups of people ignore critical arguments. What does it mean for intellectuals to rationally engage with critics and how can we get people to actually do that? I think it’s one of the world’s larger problems.

new_grass made a few posts that more eloquently describe that perspective; humans, yelping dogs, and so on evolved from a common ancestor and it seems unlikely that suffering is a uniquely human feature when so many of our other cognitive skills seem to be continuous with other animals.

New_grass says:


But this isn't the relevant proposition, unless you think the probability that general intelligence (however you are defining it) is required for the ability to suffer or be conscious is one. And that is absurd, given our current meager understanding of consciousness.

The relevant question is what the probability is that other animals are conscious, or, if you are a welfarist, whether they can suffer. And that probability is way higher than zero, for the naturalistic reasons I have cited.

But according to Elliot, our judgment of the conservatism argument hinges on our understanding of CR and Turing computability.

Does the following sound fair?

Yeah, I have arguments here covering other cases (the cases of the main issue being suffering or consciousness rather than intelligence) and linking the other cases to the intelligence issue. I think it’s linked.

If pdxthehunted had an adequate understanding of the Turing principle and CR and their implications on intelligence and suffering, their opinion on (a) would change; they would understand why suffering certainly does occur as a binary off/on feature of sufficiently intelligent life.

In short, yes. Might have to add a few more pieces of background knowledge.

Please let me know if I’ve managed to at least get a clearer view of the state of the debate and where communication issues are popping up.

Frankly, I’ve enjoyed this thread. I’ve learned a lot. I bought DD’s BOI a couple of years ago after listening to his two podcasts with Sam Harris, but never got around to reading it. I’ve bumped it up to next on my reading list and am hoping that I’m in a better position to understand your argument afterward.

Yeah, comprehensive understanding of DD’s two books covers most of the main issues. That’s hard though. I run the forums where people reading those books (or Popper) can ask questions (it’s this website and an email group with a 25 year history, where DD used to write thousands of posts, but he doesn’t post anymore).

Finally--if capacity for suffering hinges on general intelligence, is consciousness relevant to the argument at all?

To a significant extent, I leave claims about consciousness out of my arguments. I think consciousness is relevant but isn’t necessary to say much about to reach a conclusion. I do have to make some claims about consciousness, which some people find pretty easy to accept, but others do deny. These claims include:

  1. Dualism is false.
  2. People don’t have souls and there’s no magic involved with minds.
  3. Consciousness is an emergent property of some computations.
  4. Computation is a purely physical process that is part of physics and obeys the laws of physics. Computers are regular matter like rocks.
  5. Computation takes information as input and outputs information. Information is a physical quantity. It’s part of the physical world.
  6. Some additional details about computation, along similar lines, to further rule out views of consciousness that are incompatible with my position. Like I don’t think consciousness can be a property of particular hardware (like organic molecules – molecules with carbon instead of silicon) because of the hardware independence of computation.
  7. I believe that consciousness is an emergent property of (general) intelligence. That claim makes things more convenient, but I don’t think it’s necessary. It’s a stronger claim than necessary. But it’s hard to explain or discuss a weaker and adequate claim. There aren’t currently any known alternative claims which make sense given my other premises including CR.

One more thing. The “general intelligence” terminology comes from the AI field which calls a Roomba’s algorithms AI and then differentiates human-type intelligence from that by calling it AGI. The concept is that a Roomba is intelligent regarding a few specific tasks while a human is able to think intelligently about anything. I’d prefer to say humans are intelligent and a Roomba or mouse is not intelligent. This corresponds to how I don’t call my text editor intelligent even though, e.g., it “intelligently” renumbered the items in the above list when I moved dualism to the top. In my view, there’s quite a stark contrast between humans – which can learn, can have ideas, can think about ideas, etc. – and everything else which can’t do that at all and has nothing worthy of the name “intelligence”. The starkness of this contrast helps explain why I reach a conclusion rather than wanting to err on the side of caution re animal welfare. A different and more CR-oriented explanation of the difference is that all knowledge creation functions via evolution (not induction) and only humans have the (software) capacity to do evolution of ideas within their brains. (Evolution = replication with variation and selection.)
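The generic sense of evolution defined above (replication with variation and selection) can be sketched as a toy program. Everything here – the target string, the alphabet, the population size – is a hypothetical illustration of the concept, not anything from the original text:

```python
import random

random.seed(0)  # fixed seed so the example is deterministic

TARGET = "knowledge"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # selection criterion: how many characters match the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # variation: a copy with one randomly chosen character replaced
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from a random guess, then repeat: replicate the best candidate
# with variation, and select the fittest of parent plus offspring.
best = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(10_000):
    if best == TARGET:
        break
    offspring = [mutate(best) for _ in range(20)]  # replication with variation
    best = max(offspring + [best], key=fitness)    # selection (parent kept)
print(best)
```

The point of the sketch is only that the three ingredients – replication, variation, selection – are enough for knowledge of the target to accumulate, with no step that "induces" or foresees the answer.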

That’s just the current situation. I do think we can program an AGI which will be just like us, a full person. And yes I do care about AGI welfare and think AGIs should have full rights, freedoms, citizenship, etc. (I’m also, similarly, a big advocate of children’s rights/welfare and I think there’s something wrong with many animal rights/welfare advocates in general that they are more concerned about animal suffering than the suffering of human children. This is something I learned from DD.) I think it’s appalling that in the name of safety (maybe AGIs will want to turn us into paperclips for some reason, and will be able to kill us all due to being super-intelligent) many AGI researchers advocate working on “friendly AI”, which is an attempt to design an AGI with built-in mind control so that, essentially, it’s our slave and is incapable of disagreeing with us. I also think these efforts are bound to fail on technical grounds – AGI researchers don’t understand BoI either, neither its implications for mind control (which is an attempt to take a universal system and limit it with no workarounds, which is basically a lost cause unless you’re willing to lose virtually all functionality) nor its implications for super intelligent AGIs (they’ll just be universal knowledge creators like us, and if you give one a CPU that is 1000x as powerful as a human brain then that’ll be very roughly as good as having 1000 people work on something, which is the same compute power). This, btw, speaks to the importance of some interdisciplinary knowledge. If they understood classical liberalism better, that would help them recognize slavery and refrain from advocating it.

Elliot Temple | Permalink | Messages (28)