Intelligence Isn't Speed

I explained on Reddit [one typo is fixed in this post] that intelligence isn't a matter of computing hardware speed.


Sounds like the IQ vs Universality thing is just two camps talking past each other.

Suppose we do believe in the basic premise of universality, that all computers are equally "powerful" in a specific way, namely that there's no problem a sophisticated computer can solve that a simple computer cannot, provided we just give the simple computer a long enough time frame to solve it in.

Fair enough. But surely we're also interested in how fast the computer can solve the problems. That's not a trivial factor, especially when we consider that human computers are prone to getting bored, frustrated, confused, or forgetful.

So maybe when we talk about IQ we're not talking about computational power, but maybe something like computational speed. Or, more likely, computational speed combined with some other personality traits.

I think computational universality helps change the primary point of interest (re intelligence) to software that is created and modified after birth. You think maybe it makes hardware speed the key place to look re intelligence. FYI, your view is something I've already considered and taken into account.

You also think some other (genetic) personality traits may be important to intelligence. I don't think so partly because of a different type of universality: universal intelligence (or universal learning, universal knowledge creating, universal problem solving, same things). Universalities are discussed in The Beginning of Infinity by David Deutsch. It's important, in these discussions, to keep the two types of universalities separate (universal computer; universal learning/thinking software). I won't go into this point further right now. I'm going to talk about the hardware speed issue.

Suppose my brain is 100% faster than yours (which sounds like an unrealistically high difference). You will still outperform me, by far, if you use a better algorithm than I do. E.g. if you use an O(N) algorithm to think about something while I'm using O(N^2).

That's called Big O notation, which describes how the number of steps an algorithm takes grows with the amount of data. N is the number of data points. In this example, you need time proportional to the amount of data. I need time proportional to the square of the amount of data. So for decent sized data sets, you win even if my hardware is twice as fast. E.g. with 10 data points, you win by a factor of 5. Taking 2 seconds per step, you need 10 * 2 = 20 seconds. I, doing steps in 1 second, need 10^2 = 100 seconds. How does it scale? With 100 data points, you need 200 seconds and I need 100^2 = 10,000 seconds. Now you win by a factor of 50. That factor will go up if there's more data. And the world has a lot of data.
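
To make the arithmetic concrete, here's a small Python sketch using the hypothetical numbers above (2 seconds per step for the linear thinker, 1 second per step for the quadratic one):

```python
# Hypothetical numbers from the text: "you" run an O(N) algorithm on slow
# hardware (2 seconds per step); "I" run an O(N^2) algorithm on hardware
# that's twice as fast (1 second per step).

def linear_seconds(n, secs_per_step=2):
    return n * secs_per_step        # steps proportional to the data size

def quadratic_seconds(n, secs_per_step=1):
    return n ** 2 * secs_per_step   # steps proportional to the size squared

for n in (10, 100, 1000):
    you, me = linear_seconds(n), quadratic_seconds(n)
    print(f"N={n}: you {you}s, me {me}s, factor {me / you:.0f}")

# N=10: you 20s, me 100s, factor 5
# N=100: you 200s, me 10000s, factor 50
# N=1000: you 2000s, me 1000000s, factor 500
```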

Differences in Big O complexity between algorithms – sometimes even exponential ones – are common and routinely make a far bigger difference in processing time than CPU speed does. In software we write, lots of work goes into using algorithms that are sub-optimal by at most a linear or constant amount.

If people think at different speeds, you should probably blame their thinking method (software) rather than their hardware for well over 99% of the difference. Especially because hardware variation between humans is pretty small.

But most differences in intelligence are not speed differences anyway. For example, often one human solves a problem and another doesn't solve it at all. The second guy doesn't solve it slower, he fails. He gets stuck and gives up, or won't even begin because he knows he doesn't understand how to do it. This is partly because of what knowledge people have or lack (learned information that wasn't inborn), and partly because of thinking methods (e.g. algorithms which could be fast or exponentially slow depending on how well they're designed). With bad algorithms, the time to finish can be a million years while a good algorithm can do the same task in minutes on a slower CPU.

There are other crucial non-hardware issues too, e.g. error correction. If you make a thinking mistake, can you recover from that: identify that something has gone wrong, find the problem, and fix it? Some ways of thinking can accomplish that pretty reliably for a wide variety of errors. But some ways of thinking are quite fragile to error. This leads to wildly different thinking results that aren't due to hardware speed.
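
As a loose programming analogy (my illustration, not from the original post): a process that checks its own output can detect and recover from a faulty step, while a process that trusts its first answer cannot.

```python
# Loose analogy only: an error-correcting process checks its results and
# falls back to a more careful method when a check fails.

def fast_but_flaky_sort(items):
    # Hypothetical stand-in for an error-prone method.
    result = sorted(items)
    if len(items) > 3:
        result[0], result[1] = result[1], result[0]  # simulate a mistake
    return result

def is_sorted(items):
    return all(a <= b for a, b in zip(items, items[1:]))

def robust_sort(items):
    result = fast_but_flaky_sort(items)
    if not is_sorted(result):   # identify that something went wrong
        result = sorted(items)  # fix it with a reliable fallback
    return result

print(robust_sort([5, 3, 8, 1, 9]))  # [1, 3, 5, 8, 9] despite the flaky step
```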

I'll close with an explanation of these issues from David Deutsch, from my interview with him:

David: As to innate intelligence: I don't think that can possibly exist because of the universality of computation. Basically, intelligence or any kind of measure of quality of thinking is a measure of quality of software, not hardware. People might say, "Well, what hardware you have might affect how well your software can address problems." But because of universality, that isn't so: we know that hardware can at most affect the speed of computation. The thing that people call intelligence in everyday life — like the ability of some people like Einstein or Feynman to see their way through to a solution to a problem while other people can't — simply doesn't take the form that the person you regard as 'unintelligent' would take a year to do something that Einstein could do in a week; it's not a matter of speed. What we really mean is the person can't understand at all what Einstein can understand. And that cannot be a matter of (inborn) hardware, it is a matter of (learned) software.



Discussing Animal Intelligence

This post replies to pdxthehunted from Reddit (everything he said there is included in quotes below). There is also previous discussion from before this exchange; see here. This post will somewhat stand on its own without reading the context, but not 100%. Topics include whether animals can suffer, the nature of intelligence, and the flaws of academia.

[While writing this response, the original post was removed. I think that’s unfortunate, but what’s done is done. I’d still love a quick response—just to see if I understand you correctly.]

Hi, Elliot. Thanks for your response. I want to say off the bat that I don’t think I’m equipped to debate the issue at hand with you past this point. (Mostly based off your sibling post; I’m not claiming you’re wrong, but just that I think I—finally—realize that I don’t understand where you’re coming from, entirely (or possibly at all).) I’m willing to concede that—if you’re right about everything—you probably do need to have this conversation with programmers or physicists. If the general intelligence on display in the article I cited is categorically different from what you’re talking about when you talk about G.I. then I’m out of my depth.

Yes, what that article is studying is different and I don't think it should be called "general intelligence". General means general purpose, but the kind of "intelligence" in the article can't build a spaceship or write a philosophy treatise, so it's limited to only some cases. They are vague about this matter. They suggest they are studying general intelligence because their five learning tasks are "diverse". Being able to do 5 different learning tasks is a great sign if they are diverse enough, but I don't think they're diverse with respect to the set of all possible learning tasks, I think they're actually all pretty similar.

This is all more complicated because they think intelligence comes in degrees, so they maybe believe a mouse has the right type of intelligence to build a spaceship, just not enough of it. But their research is not about whether that premise (intelligence comes in degrees) is true, nor do they write philosophical arguments about it.

That being said, I’d love to continue the conversation for a little while, if you’re up for it, either here or possibly on your blog if that works better for you. I have some questions and would like to try and understand your perspective.

If I'm right about ~everything, that includes my views of the broad irrationality of academia and the negative value of current published research in many of the fields in question.

For example, David Deutsch's static meme idea, available in BoI, was rejected for academic publication ~20 years earlier. Academia gatekeeps to keep out ideas they don't want to hear, and they don't really debate what's true much in journals. It's like a highly moderated forum with biased moderators following unwritten and inconsistent rules (like reddit but stricter!).

My arguments re animals are largely Deutsch's. He taught me his worldview. The reason he doesn't write it up and publish it in a journal is because (he believes that) it either wouldn't be published or wouldn't be listened to (and it would alienate people who will listen to his physics papers). The same goes for many other important ideas he has. Being in the Royal Society, etc., is inadequate to effectively get past the academic gatekeeping (to get both published and seriously, productively engaged with). I don't think a PhD and 20 published papers would help either (especially not with issues involving many fields at once).

For what it’s worth, I think this is a fair criticism and concern, especially for someone—like you—who is trying to distill specific truths out of many fields at once. If your (and Deutsch’s) worldview conflicts with the prevailing academic worldview, I concede that publishing might be difficult or impossible and not the best use of your energy.

I asked for a solution but I'm happy with that response. I find it a very hard problem.

Sadly, Deutsch has given up on the problem to the point that he's focusing on physics (Constructor Theory) not philosophy now. Physics is one of the best academic fields to interact with, and one of the most productive and rational, while philosophy is one of the worst. Deutsch used to e.g. write about the implications of Critical Rationalism for parenting and education. The applications are pretty direct from philosophy of knowledge to how people learn, but the conclusions are extremely offensive to ~everyone because, basically, ~all parents and teachers are doing a bad job and destroying children's minds (which is one of the main underlying reasons for why academia and many other intellectual things are broken). Very important issues, but people shoot messengers... The messenger shooting is bad enough that Deutsch refused me permission to post archived copies of hundreds of things he wrote publicly online but which are no longer available at their original locations. A few years earlier he had said he would like the archives posted. He changed his mind because he became more pessimistic about people's reactions to ideas.

I, by contrast, am pursuing a different strategy of speaking truth to power without regard for offending people. I don't want to hold back, but I also don't have a very large fanbase because even if someone agrees with me about many issues, I have like two dozen different ideas that would alienate many people, so pretty much everyone can find something to hate.

I don't think people would, at that point, start considering and learning different ideas than what they already have, e.g. learning Critical Rationalism so they could apply that framework to animal rights to reach a conclusion like "If Critical Rationalism is true, then animal rights is wrong." (And CR is not the only controversial premise I use that people are broadly ignorant of, so it's harder than that.) People commonly dismiss others, despite many credentials, if they don't like the message. I don't think playing the game of authority and credentials – an irrational game – will solve the problem of people's disinterest in truth-seeking. This view of academia is, again, one Deutsch taught me.

Karl Popper published a ton but was largely ignored. Thomas Szasz too. There are many other examples. Even if I got published, I could easily be treated like e.g. Richard Lindzen who has published articles doubting some claims about global warming.

Fair enough.

I’m not going to respond to the rest of your posts line-by-line because I think most of what you’re saying is uncontroversial or is not relevant to the OP (it was relevant to my posts; thank you for the substantial, patient responses).

I think most people would deny most of it. I wasn’t expecting a lot of agreement. But OK, great.

For any bystanders who are interested and have made it this far, I think that this conversation between OP and Elliot is helpful in understanding their argument (at least it was for me).

Without the relevant CS or critical rationality background, I can attempt to restate their argument in a way that seems coherent (to me). Elliot or OP can correct me if I’m way off base.

The capacity for an organism to suffer may be binary; essentially, at a certain level of general intelligence, the capacity to suffer may turn on.

I don’t think there are levels of general intelligence, I think it’s present or not present. This is analogous to there not being levels of computers: it’s either a universal classical computer or it’s not a computer and can compute ~nothing. The jump from ~nothing to universality is discussed in BoI.

Otherwise, close enough.

(I imagine suffering to exist on a spectrum; a human’s suffering may be “worse” than a cow’s or a chicken’s because we have the ability to reflect on our suffering and amplify it by imagining better outcomes, but I’m not convinced that—if I experienced life from the perspective of a cow—I wouldn’t recognize the negative hallmarks of suffering, and prefer it to end. My thinking is that a sow in a gestation crate could never articulate to herself “I’m uncomfortable and in pain; I wish I were comfortable and pain-free,” but that doesn’t preclude a conscious preference for circumstances to be otherwise, accompanied by suffering or its nonhuman analog.)

I think suffering comes in degrees if it’s present at all. Some injuries hurt more than others. Some bad news is more upsetting than other bad news.

Similarly, how smart people are comes in degrees when intelligence is present. They have the same basic capacity but vary in thinking quality due to having e.g. different ideas and different thinking methods (e.g. critical rationalist thinking is more effective than superstition).

Roughly there are three levels like this:

  1. Computer (brain)
  2. Intelligent Mind (roughly: an operating system (OS) for the computer with the feature that it allows creating and thinking about ideas)
  3. Ideas within the mind.

Each level requires the previous level.

Sand fails to match humans at level 1. No brain.

Apes fail to match humans at level 2. They run a different operating system, with features more similar to Windows or Mac than to intelligence. That OS doesn’t have support for ideas.

Self-driving cars have brains (CPUs) which are adequately comparable to an ape or human, but like apes they differ from humans at level 2.

When Sue is cleverer than Joe, that’s a level 3 difference. She doesn’t have a better brain (level 1), nor a better operating system (level 2), she has better ideas. She has some knowledge he doesn’t. That includes not just knowledge of facts but also knowledge about rationality, about how to think effectively. E.g. she knows some stuff about how to avoid bias, how to find and correct errors effectively, how to learn from criticism instead of getting angry, or how to interpret disagreements as disagreements instead of as other things like heresy, bad faith, or “not listening”.

Small hardware differences between people are possible. Sue’s brain might be a 5% faster computer than Joe’s. But this difference is unimportant relative to the impact of culture, ideas, rationality, bias, education, etc. Similarly, small OS differences are possible but they wouldn’t matter much either.

There are some complications. E.g. imagine a society which extensively tested children on speed of doing addition problems in their head. They care a ton about this. The best performers get educated to be scientists and the lower performers do unskilled labor. Someone with a slightly faster brain or slightly different OS might do better on those tests. Those tests limit the role of ideas. So, in this culture, a small hardware speed advantage could make a huge difference in life outcome, including how clever the person is as an adult (due to huge educational differences which were caused by differences in arithmetic speed). But the same hardware difference could have totally different results in a different culture, and in a rational culture it wouldn’t matter much. What actually differentiates knowledge workers IRL, including scientists and philosophers, is nothing like the 99th percentile guys being able to get equal quality work done 5% faster than the 20th percentile guys.

Our actual culture has some stuff kinda like this hypothetical culture, but much more accidental and with less control over your life (there are many different paths to success, so even if a few get blocked, you don’t have to do unskilled labor). It also has some similar kinds of things based on non-mental attributes like skin color, height, and hair color, though again with considerably smaller consequences than the hypothetical where your whole fate is determined just by addition tests.

Back to my interpretation of the argument: Beneath a certain threshold of general intelligence, pain—or the experience of having any genetically preprogrammed preference frustrated—may not be interpreted as suffering in the way humans understand it and may not constitute suffering in any meaningful or morally relevant way (even if you otherwise think we have a moral obligation to prevent suffering where we can).

It’s possible that suffering requires uniquely human metacognition; without the ability to think about pain and preference frustration abstractly, animals might not suffer in any meaningful sense.

This is a reasonable approximation except that I think preferences are ideas and I don’t think animals have them at all (not even preprogrammed).

So far (I hope) all I’ve done is restate what’s already been claimed by Elliot in his original post. Whether I’ve helped make it any clearer is probably an open question. Hopefully, Elliot can correct me if I’ve misinterpreted anything or if I’ve dumbed it down to a level where it’s fundamentally different from the original argument.

This is where I think it gets tricky and where a lot of miscommunication and misunderstanding has been going on. Here is a snippet of the conversation I linked earlier:

curi: my position on animals is awkward to use in debates because it's over 80% background knowledge rather than topical stuff.

curi: that's part of why i wanted to question their position and ask for literature that i could respond to and criticize, rather than focusing on trying to lay out my position which would require e.g. explaining KP and DD which is hard and indirect.

curi: if they'll admit they have no literature which addresses even basic non-CR issues about computer stuff, i'd at that point be more interested in trying to explain CR to them.

I’m willing to accept that Elliot is here in good faith; nothing I’ve read on their blog thus far looks like an attempt to “own the soyboys” or “DESTROY vegan arguments.” They’re reading Singer (and Korsgaard) and are legitimately looking for literature that compares or contrasts nonhuman animals with AI.

The problem is—whether they’re right or not—it seems like the foundation of their argument requires a background in CR and theoretical computer science.

Yes.

My view: if you want to figure out what’s true, a lot of ideas are relevant. Gotta learn it yourself and/or find a way to outsource some of the work. So e.g. Singer needs to read Popper and Deutsch or contact some people competent to discuss whether CR is correct and its implications. And Singer also needs to contact some computer people and ask them and try to meet them in the middle by explaining some of what he does to them so they understand the problems he’s working on, and then they explain some CS principles to him and how they apply to his problems. Something like that.

That is not happening.

It ought to actually be easier than that. Instead of contacting people, Singer or anyone else could look at the literature. What criticisms of CR have been written? What counter-arguments to those criticisms have CR advocates written? How did those discussions end? You can look at the literature and get a picture of the state of the debate and draw some conclusions from that.

I find people don’t do this much or well. It often falls apart in a specific way. Instead of evaluating the pro-CR and anti-CR arguments – seeing what answers what, what’s unanswered, etc. – they give up on understanding the issues and just decide to assume the correctness of whichever side has a significant lead in popularity and prestige.

The result is, whenever some bad ideas and irrational thinkers become prestigious in a field, it’s quite hard to fix because people outside the field largely refuse to examine the field and see if a minority view’s arguments are actually superior.

Also, often people just use common sense about what they assume would be true of other fields instead of consulting literature. So e.g. rather than reading actual inductivist literature (induction is mainstream and is one of the main things CR rejects), most animal researchers and others rely on what they’ve picked up about induction, here and there, just from being part of an intellectual subculture. Hence there exist e.g. academic papers studying animal intelligence that don’t cite even mainstream epistemology books or papers.

The current state of the CR vs. induction debate, in my considered and researched opinion, is that there don’t actually exist criticisms of CR from anyone who has understood it, and there’s very little willingness to engage in debate by any inductivists. Inductivists are broadly uninterested in learning about a rival idea which they have not understood or refuted. I think ignoring ideas that no one has criticized is something of a maximum for a type of irrationality. And people outside the field (and in the field too) mostly assume that some inductivists somewhere did learn and criticize CR, though people usually don’t have links to specific criticisms, which is a problem. I think it’s important to have sources in fields that aren’t your own so that if your sources are incorrect they can be criticized and corrected and you can change your mind, whereas if you just say “people in the field generally conclude X” without citing any particular arguments then it’s very hard to continue the discussion and correct you about X from there.

From my POV, (a) the argument that suffering may be binary vs. occurring on a spectrum is possible but far from settled and might be unfalsifiable. From my POV, it’s far more likely that animals do suffer in a way that is very different from human suffering but still ethically and categorically relevant.

That’s a reasonable place to start. What I can say is that if you investigate the details, I think they come out a particular way rather conclusively. (Actually the nature of arguments, and what is conclusive vs. unsettled – how to evaluate and think about that – is a part of epistemology; it’s one of the issues I think mainstream epistemology is wrong about. That’s actually the issue where I made my largest personal contribution to CR.)

If you don’t want to investigate the details, has anyone else done so as your proxy or representative? Has Singer or any other person or group done that work for you? Who has investigated, reached a conclusion, written it up, and you’re happy with what they did? If no one has done that, that suggests something is broken with all the intellectuals on your side – there may be a lot of them, but between all of them they aren’t doing much relevant thinking.

In some ways, the more people believe something and still no one writes detailed arguments and addresses rival ideas well, the more damning it is. In other words, CR has the excuse of not having essays to cover every little detail of every mainstream view because there aren’t many of us to write all that and we have ~no funding. The other side has no such excuse, yet they’re the side that, between all those people, has no representatives who will debate! They have plenty of people to field some specialists in refuting CR, but they don’t have any.

Sadly, the same pattern repeats in other areas, e.g. The Failure of the 'New Economics’ by Henry Hazlitt is a point-by-point, book-length refutation of Keynes’ main book. It uses tons of quotes from Keynes, similar to how I’m replying to this comment using quotes from pdxthehunted. As far as I know, Hazlitt’s criticisms went unanswered. Note: I think Hazlitt’s level of fame/prestige was loosely comparable to Popper’s and greater than Deutsch’s; it’s not like he was ignored for being a nobody (which I’d object to too, but that isn’t what happened).

Large groups of people ignore critical arguments. What does it mean for intellectuals to rationally engage with critics and how can we get people to actually do that? I think it’s one of the world’s larger problems.

new_grass made a few posts that more eloquently describe that perspective; humans, yelping dogs, and so on evolved from a common ancestor and it seems unlikely that suffering is a uniquely human feature when so many of our other cognitive skills seem to be continuous with other animals.

New_grass says:

link

But this isn't the relevant proposition, unless you think the probability that general intelligence (however you are defining it) is required for the ability to suffer or be conscious is one. And that is absurd, given our current meager understanding of consciousness.

The relevant question is what the probability is that other animals are conscious, or, if you are a welfarist, whether they can suffer. And that probability is way higher than zero, for the naturalistic reasons I have cited.

But according to Elliot, our judgment of the conservatism argument hinges on our understanding of CR and Turing computability.

Does the following sound fair?

Yeah, I have arguments here covering other cases (the cases of the main issue being suffering or consciousness rather than intelligence) and linking the other cases to the intelligence issue. I think it’s linked.

If pdxthehunted had an adequate understanding of the Turing principle and CR and their implications on intelligence and suffering, their opinion on (a) would change; they would understand why suffering certainly does occur as a binary off/on feature of sufficiently intelligent life.

In short, yes. Might have to add a few more pieces of background knowledge.

Please let me know if I’ve managed to at least get a clearer view of the state of the debate and where communication issues are popping up.

Frankly, I’ve enjoyed this thread. I’ve learned a lot. I bought DD’s BOI a couple of years ago after listening to his two podcasts with Sam Harris, but never got around to reading it. I’ve bumped it up to next on my reading list and am hoping that I’m in a better position to understand your argument afterward.

Yeah, comprehensive understanding of DD’s two books covers most of the main issues. That’s hard though. I run the forums where people reading those books (or Popper) can ask questions (it’s this website and an email group with a 25 year history, where DD used to write thousands of posts, but he doesn’t post anymore).

Finally--if capacity for suffering hinges on general intelligence, is consciousness relevant to the argument at all?

To a significant extent, I leave claims about consciousness out of my arguments. I think consciousness is relevant but isn’t necessary to say much about to reach a conclusion. I do have to make some claims about consciousness, which some people find pretty easy to accept, but others do deny. These claims include:

  1. Dualism is false.
  2. People don’t have souls and there’s no magic involved with minds.
  3. Consciousness is an emergent property of some computations.
  4. Computation is a purely physical process that is part of physics and obeys the laws of physics. Computers are regular matter like rocks.
  5. Computation takes information as input and outputs information. Information is a physical quantity. It’s part of the physical world.
  6. Some additional details about computation, along similar lines, to further rule out views of consciousness that are incompatible with my position. Like I don’t think consciousness can be a property of particular hardware (like organic molecules – molecules with carbon instead of silicon) because of the hardware independence of computation.
  7. I believe that consciousness is an emergent property of (general) intelligence. That claim makes things more convenient, but I don’t think it’s necessary. It’s a stronger claim than necessary. But it’s hard to explain or discuss a weaker and adequate claim. There aren’t currently any known alternative claims which make sense given my other premises including CR.

One more thing. The “general intelligence” terminology comes from the AI field which calls a Roomba’s algorithms AI and then differentiates human-type intelligence from that by calling it AGI. The concept is that a Roomba is intelligent regarding a few specific tasks while a human is able to think intelligently about anything. I’d prefer to say humans are intelligent and a Roomba or mouse is not intelligent. This corresponds to how I don’t call my text editor intelligent even though, e.g., it “intelligently” renumbered the items in the above list when I moved dualism to the top. In my view, there’s quite a stark contrast between humans – which can learn, can have ideas, can think about ideas, etc. – and everything else which can’t do that at all and has nothing worthy of the name “intelligence”. The starkness of this contrast helps explain why I reach a conclusion rather than wanting to err on the side of caution re animal welfare. A different and more CR-oriented explanation of the difference is that all knowledge creation functions via evolution (not induction) and only humans have the (software) capacity to do evolution of ideas within their brains. (Evolution = replication with variation and selection.)

That’s just the current situation. I do think we can program an AGI which will be just like us, a full person. And yes I do care about AGI welfare and think AGIs should have full rights, freedoms, citizenship, etc. (I’m also, similarly, a big advocate of children’s rights/welfare, and I think there’s something wrong with many animal rights/welfare advocates in general in that they are more concerned about animal suffering than the suffering of human children. This is something I learned from DD.) I think it’s appalling that in the name of safety (maybe AGIs will want to turn us into paperclips for some reason, and will be able to kill us all due to being super-intelligent) many AGI researchers advocate working on “friendly AI”, which is an attempt to design an AGI with built-in mind control so that, essentially, it’s our slave and is incapable of disagreeing with us. I also think these efforts are bound to fail on technical grounds – AGI researchers don’t understand BoI either, neither its implications for mind control (which is an attempt to take a universal system and limit it with no workarounds, which is basically a lost cause unless you’re willing to lose virtually all functionality) nor its implications for super intelligent AGIs (they’ll just be universal knowledge creators like us, and if you give one a CPU that is 1000x as powerful as a human brain, that’ll be very roughly as good as having 1000 people work on something – it’s the same total compute power). This, btw, speaks to the importance of some interdisciplinary knowledge. If they understood classical liberalism better, that would help them recognize slavery and refrain from advocating it.



Vegan Debate

curi: The trait that differentiates humans from non-human animals, in a veganism-relevant way, is (general, universal) intelligence, which is the ability to learn (aka create knowledge), which is the ability to do evolution of ideas within one's mind.

This is a binary trait, not a matter of degree.

This is not a complete explanation, e.g. it doesn't say how that trait relates to other issues vegans may bring up like consciousness or suffering.

Vegans: What about mentally handicapped people? If they have less intellectual capacity than a cow, is it OK to kill them?

curi: Yes, in principle. They're (by premise) on the wrong side of the intelligence/non-intelligence asymmetry.

However, we should begin our discussion with cases which are easier to understand and potentially agree about, not hard cases or edge cases. If you understand and agree with my way of differentiating most humans from cows, then it'd make sense to discuss edge cases in detail.

Vegans: How do you tell if a normal person or cow is intelligent?

curi: Primarily behavior: people have intelligent conversations, write blog posts demonstrating that they understand TV show plots, act according to learned job skills, develop new science, etc. That is best explained by knowledge the person created in his mind rather than by genetic knowledge. Animals behave in simplistic, algorithmic ways which are best explained by the knowledge in their genes.

I think careful analysis of animal behavior, and trying to differentiate it from the capabilities of stuff like video game enemies and self-driving cars, is one of the more productive ways to continue this discussion. People have strong intuitions that animals are somewhat intelligent and are clearly different, in terms of intelligence, from current robots and "AI" software algorithms. Relatedly, people believe intelligence is a matter of degree. Looking at rigorous information about animal behavior from scientists, and carefully considering the simplest ways it could be achieved, can be informative.
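
To make the comparison concrete, here's a minimal sketch (my illustration, not from the discussion) of the kind of fixed stimulus-response algorithm a video game enemy runs. The claim above is that animal behavior is best explained by algorithms of this general kind, encoded in genes, rather than by intelligence:

```python
# Hypothetical sketch: a fixed stimulus -> response algorithm of the sort a
# video game enemy uses. The rules are fixed in advance; no new knowledge
# is created at runtime.

RULES = [
    (lambda s: s["predator_nearby"], "flee"),
    (lambda s: s["food_nearby"] and s["hunger"] > 5, "eat"),
    (lambda s: s["hunger"] > 8, "search_for_food"),
]

def choose_action(situation):
    for condition, action in RULES:
        if condition(situation):
            return action
    return "wander"

print(choose_action({"predator_nearby": False, "food_nearby": True, "hunger": 7}))
# -> "eat"
```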



Animal Welfare and The Problem of Design

This is an answer to Name That Trait which asks what trait differentiates humans from animals. The named trait should justify vegan-objectionable activities such as slaughtering animals for food.

Short answer: the trait is being a universal knowledge creator. This answer relies on lots of non-standard background knowledge such as The Beginning of Infinity.

This post gives a different argument which I think is easier to understand with less background knowledge. It will still require going over some background.

The Problem of Design

An important problem in the history of philosophy is the problem of design, famously argued by William Paley. It says some objects (such as an animal or pocket watch) have the appearance of design which requires explanation. Paley’s explanation was that a pocket watch has an intelligent, human designer, and animals were designed by God.

Plants, animals and pocket watches have the appearance of design. They’re complex. Stones, crystals, dirt and stars don’t. This is a big difference. Stones and stars are worth explaining in terms of fundamental physics like the big bang, but plants merit additional explanation. Plants, e.g., have chloroplasts which do photosynthesis; chloroplasts are nothing like rocks and wouldn’t be created randomly or purposelessly.

The above is widely accepted. What’s not widely known is that “appearance of design” is knowledge. Knowledge is information adapted to a purpose.

The underlying problem is how knowledge can be created starting with non-knowledge. Where can new knowledge come from? How can it originate?

This is a hard problem and not many answers have been proposed. The bad answers include magic, the idea that knowledge is just created sometimes out of thin air, and designers. Saying that a designer created the knowledge doesn’t explain how the designer created the knowledge (using intelligence – but how does intelligence work?), nor where that designer’s intelligence came from. If you say knowledge comes from God who already has tons of knowledge, then where did God come from?

A single good answer has been developed. It’s the only known answer that makes much sense. It’s the theory of evolution. Replication with variation and selection is able to adapt information to a purpose and thereby create new knowledge. The appearance of design, in plants and animals, was created by evolution.

Where did eyes come from? Evolution. Why does a rabbit run away from danger? It evolved to do that. Why are trees structured in an organized way with the leaves on top where they can better receive light? Because that structure has better survival and replication value for trees (survival and replication value is the short answer for what biological evolution selects for). Etc. This is widely accepted.

With this background in mind:

Intelligence

How does intelligence work and create new knowledge? I believe intelligence works by evolution, literally, not as an analogy. (Seriously I find that 90% of people assume I mean an analogy even though I just told them I didn’t.) This is not a mainstream view. It’s been developed by Critical Rationalist philosophers, especially David Deutsch.

Biological evolution does replication with variation and selection of genes. Intellectual evolution does replication with variation and selection of ideas. Genes and ideas are both things which it’s possible to make copies of – replicators – so evolution applies to them.

FYI, the view that evolution applies to replicators is a fairly standard view in the field even though most of the public is ignorant of it. It’s held by e.g. Richard Dawkins and is why he developed the idea of a “meme” (which means an idea that replicates). A meme plays the role in the evolution of ideas that a gene plays in the evolution of plants and animals.
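
For readers who want the mechanism spelled out, here is a minimal sketch (mine, for illustration) of evolution as replication with variation and selection. The “purpose” the information becomes adapted to is matching a target string:

```python
import random

# Minimal sketch of evolution: replication with variation (mutation) and
# selection. Knowledge here = information adapted to the purpose of
# matching TARGET.

TARGET = "knowledge"
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    i = random.randrange(len(s))                       # vary one letter
    return s[:i] + random.choice(LETTERS) + s[i + 1:]

population = ["".join(random.choice(LETTERS) for _ in TARGET) for _ in range(50)]
for generation in range(2000):
    best = max(population, key=fitness)
    if best == TARGET:
        break
    population.sort(key=fitness, reverse=True)         # selection
    survivors = population[:10]
    # replication with variation: survivors plus mutated copies of them
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(best, "after", generation, "generations")
```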

Name That Trait

Lots of animal behavior has the appearance of design (or the appearance of intelligence or purposefulness). This indicates knowledge is involved. I think that knowledge comes from the animal’s genes and was created by biological evolution. I think it’s this appearance of intelligent behavior that is the primary reason people (correctly) differentiate animals from rocks.

Human behavior also has the appearance of design, so what’s the difference? Humans create new knowledge that isn’t in their genes. Instead of relying only on biological evolution for knowledge, humans do intelligent evolution of ideas within their minds. This is a capacity that no animal has and explains why only humans were able to invent philosophy and science.

When an animal does intelligent-appearing behavior, the designer was biological evolution. When a human does intelligent-appearing behavior, the designer is usually a human being who created ideas using mental evolution of ideas.

Animals have one source of knowledge: genetic evolution. Humans have two sources of knowledge: genetic and memetic evolution.

People commonly assume that the appearance of design in animal behavior is an indicator of intelligence, while the appearance of design in an animal’s eyes and claws is not. The primary mechanism by which genes control animal behavior is through creating the animal’s brain according to a design detailed in the animal’s genes. The animal brain is a computer which the genes build and configure with behavioral algorithms. Humans work differently because they’re capable of doing evolution within their minds to create new algorithms, new behaviors, new ideas, etc.

Getting from these claims to a full case against animal welfare or rights requires additional arguments. I won’t detail them here but see this post for some explanation. The basic issue is that animals aren’t differentiated from rocks in a relevant way because genes (which are where the knowledge is) are not conscious and can’t suffer (like rocks), and animals behave according to algorithms in conceptually the same way as a robot like a self-driving car.

For more info, see e.g. Evolution and Knowledge, Evolution, and the books of David Deutsch and Richard Dawkins.

