Chapter XXVIII: Heuristics | The Philosophy Of Science by Steven Gussman [1st Edition]

        “Nobody would ever claim that a bacterium was a conscious strategist, yet bacterial parasites are probably engaged in ceaseless games of Prisoner's Dilemma with their hosts and there is no reason why we should not attribute Axelrodian adjectives—forgiving, non-envious, and so on—to their strategies... Needless to say, there is no suggestion that the bacteria work all this out in their nasty little heads!  Selection on generations of bacteria has presumably built into them an unconscious rule of thumb which works by purely biochemical means.
                Plants, according to Axelrod and Hamilton, may even take revenge, again obviously unconsciously.”
        – Richard DawkinsI

        
        “Culture is created by the communal mind, and each mind in turn is the product of the genetically structured human brain... Genes prescribe epigenetic rules, which are the neural pathways and regularities in cognitive development by which the individual mind assembles itself.  The mind grows from birth to death by absorbing parts of the existing culture available to it, with selections guided through epigenetic rules inherited by the individual brain.”
        – Edward O. WilsonII

        “... the definition of epigenetic rules is the best means to make important advances in the understanding of human nature.  Such an emphasis seems inescapable.  The linkage between genes and culture is to be found in the sense organs and programs of the brain... epigenetic rules are the automatic processes that extend from the filtering and coding of stimuli in the sense organs all the way to perception of the stimuli by the brain.  The entire sequence is influenced by previous experience only to a minor degree, if at all.”
        – Edward O. WilsonIII

        “... excellent predictions would be yielded by the hypothesis that the billiard player made his shots as if he knew the complicated mathematical formulas that would give the optimum direction of travel, could estimate by eye the angles etc., describing the location of the balls, could make lightning calculations from the formulas, and could then make the balls travel in the direction indicated by the formulas.  Our confidence in this hypothesis is not based on the belief that billiard players, even expert ones, can or do go through the process described; it derives rather from the belief that, unless in some way or other they were capable of reaching essentially the same result, they would not in fact be expert billiard players.”
        – Milton FriedmanIV

        “Humans have limited time and brainpower.  As a result, they use simple rules of thumb—heuristics—to help them make judgments.”
        – Richard Thaler (summarizing Amos Tversky and Daniel Kahneman)V

        Heuristics are rules of thumb that we use to try to navigate the world.VI  We have already encountered some heuristics in past chapters, such as assumptions of good faith, including the unfortunate need for trust in science, in-practice (for example, that the null hypothesis is that a reported experiment and its purported results were genuinely carried out unless evidence suggests otherwise).VII  Whereas common sense heuristics may often be some combination of intuitive and inducted beliefs about ontology, heuristics may also guide us about whether one can trust somebody or their arguments in lieu of being able to actually follow and verify said arguments (for example, a layperson might need to lean on heuristics to form their beliefs about physics if they do not or cannot study it).  Thus, I will focus here on those heuristics that have more of an epistemological flavor (and I will perhaps treat those that are more developed and less intuitive than common sense).  Common sense perhaps leans back towards system-one thinking, whereas heuristics may be skewed more towards system-two thinking.VIII  The heuristics I will focus on here tend to represent more indirect, circumstantial evidence to do with whether an idea “sounds” reasonable or whether a person is trustworthy (rather than actually evaluating the claim on its own merits).  Thus, they are largely for someone who does not have the expertise or the real ability to employ the strongest facets of the scientific method to the topic at hand, for themselves.  This is not only the position of laypeople, as experts will tend to be laypeople in more fields than not.  If one is not educated enough (that is, one lacks the prerequisite knowledge), yet nevertheless must decide whether something is more likely to be true or false, one must make use of heuristics.
Frankly, there is no good alternative to actually learning about the things you want to form beliefs on, but we live in the real world; sometimes, for practical reasons, those who are not educated (or even intelligent) enough to directly interface with the material nonetheless need to know whether some claim is more likely to be true or false.  This is often the case in medicine, as one wants to make somewhat informed decisions about one's own treatment—even those who have no innate interest in (or time for) learning about medicine.  It is an unfortunate fact that we all must at times employ such shortcuts in our evaluation of the world (though one hopes that some experts are doing a better job of looking more closely, on longer timescales).  The trick is to know when you are leaning on fallible heuristics, and when you are actually interfacing with the material itself.

        One such flawed heuristic is searching for a conflict of interest on the part of the claimant, that is, some ulterior motive which may account for their result.  Most commonly, this means tracing a money trail; one does not expect studies funded by oil companies (whose fiduciary responsibility is to make profits on oil) to conclude that different power sources are needed to solve anthropogenic climate change, for example.IX  Likewise, if a study claims that the best way to solve climate change is to become vegan, yet the study is funded by a special-interest lobby such as People For The Ethical Treatment Of Animals (PETA) (whose goal was always to convince people to go vegan, quite aside from and before any interest in climate change), this is cause for some skepticism.  The mechanism for a flawed result could be anything from fraudulent data to the presence of confirmation bias.  Another is the expertise or credential heuristic: one who does not know how to evaluate the claims of a certain field may lean on the ideas of someone else who does, and they may lean on that person's earned credentials to decide whether or not they are an expert.  Someone who has not studied (and perhaps cannot study) physics is most likely better off putting more weight onto the claims of someone with a PhD in physics than those of a random man off of the street.  Yet remember that heuristics are highly flawed, and one should not fully stake their beliefs and disbeliefs on them; there is no real alternative to understanding the hypothetical mechanisms and their predictions, and testing them against empirical evidence.  For one thing, a claimant's system could truly be set up to be truth-seeking despite the perverse incentives to put their thumb on the scale.  Returning to the conflict-of-interest heuristic, it could easily be that the special interest group got lucky and the facts do align with their ulterior motive!
As long as these researchers showed their work (and it is not fraudulent), they do deserve the true peer review of those competent enough to engage their ideas at a level beyond mere heuristics, quite aside from conflicts of interest.  But in a pinch, circumstantial evidence is worth considering.  Properly used, heuristics should perform better than chance for an otherwise uninformed person (but nowhere near as well as real analysis).  These are useful tools on the fly, but they can also easily lead to a profoundly unscientific epistemology when overused (and used injudiciously).  Leaning too heavily on an expertise heuristic leads to elitism or credentialism, for example.  There are such things as experts—not everyone knows as much as everyone else on a given topic—and there are broadly two reasons for this: some people are not as intelligent as other people, and some people have simply not committed a sustained study (let alone a contribution) to a topic.X  Once one has placed their trust in an expert based on a heuristic, one must understand that they have also signed onto the biases and errors of that person.  Most people have a general sense of this when it comes to life-or-death scenarios: patients will often do their best to do their own research, and many will get second opinions from other doctors so that they may combine expertise heuristics with whatever level of other philosophical foothold they may be able to latch onto.  We are all required to have a starting point, and to have some basic trust in other thinkers who have a reputation, who have proven themselves to be decent at what they do; otherwise we would all need to start from scratch and redo all the scientific research, replicate all findings, and understand everything to the greatest depth, all for ourselves.  This is of course not realistic (though I think the aspiration is the right goal for maximizing one's scientific abilities).
My personal favorite kind of scientist, and I think the greatest kind of scientist, does try to do something like this as best that they can, knowing that they must fail.  I recognize deep down that the scientific project is far greater than what any one person could possibly do themselves, and it is really a historical project of contribution by many people—but I believe this group effort works best when each node attempts to embody the full scientific ethic for themselves.  If only those deemed “experts” by our institutions are allowed to think or speak about a topic, if only those experts that are credentialed in their expertise or have the right degree from the right university even get a hearing, then much of scientific progress to this point would not have occurred.XI

        Along those lines, I often get annoyed when people make statements about, “the scientific consensus,” as they usually mean the sociological consensus among scientists about what is true and false.  That is not really what matters, because people, including experts, are fallible; those “scientists” might not be acting as scientists in the moment, their biases might be getting to them, the state of the field might be otherwise unhealthy, or the majority of experts in a field may simply be on the wrong track, in need of the fresh ideas of a relative outsider (which happens all of the time!).XII  It is the consensus of evidence that matters.  One wants to be able to look at and evaluate different lines of evidence that lead to the same conclusion; such a body of evidence would be convincing to any reasonable person performing a dispassionate analysis of it.  For example, those who want to convince the public of anthropogenic climate change should not spend their time telling us that the majority of climate scientists believe that global warming is real and human-caused; they should instead be telling us to look at the data (and providing the public with an education in how to evaluate such data for themselves): as it turns out, climate change follows one of the simpler scientific arguments, with the data producing fairly straight-forward graphs demonstrating increasing temperatures, atmospheric carbon-dioxide (CO2), or the positive relationship between the two.XIII  Child developmental psychologist Alison Gopnik recounts that even young children value a consensus of opinion: the more people who assent to something, the more likely the child is to assent, him or herself.XIV  This may partially be because our prehistoric world was relatively without specialization; for the most part, most people knew what everyone else knew, or had available to them the same information that everyone else had, and so if a majority of people read the evidence the same way, it was 
probably the best available interpretation (and had perhaps originally developed through the best minds coming together).  In fact, very young children also evaluate whether an adult seems confident in their answer or not (seemingly as an expertise heuristic), and this affects how they learn from that adult.XV  A fit belief is also not ultimately based on truth-value,XVI and may even function as an in-group signal; everyone believing the same thing, say a myth, allowed members to signal group membership status, supporting group cohesion and cooperation.XVII  This tribal instinct is of course epistemologically dangerous because it may kick in today; there may be situations where, in a given subculture, scientists have created an unhealthy field.  They may all believe a myth together, using it as an in-group litmus test to signal cohesion to each other, and worse, they are in a position for their expertise heuristic to decay into credentialist bias, believing their myth is a scientific discovery, despite insufficient (or falsifying!) evidence.

        How to solve such issues?  In-principle, one could go around and ask every single person on the planet, from homeless people to the greatest living scientists, what they think about a topic, say, quantum gravity.  From there, one could attempt to evaluate every single one of their claims, combining the best parts and throwing away the worst parts of each.  By definition, one would find the best (and worst, and everything in-between) available ideas on quantum gravity by doing so.  Now, if none of those interviewees, nor the combination of their ideas, possesses what is needed for a theory of quantum gravity (which is almost certainly the case, or one of them would have put it together already), then someone still needs to come up with something new.  But the point is that one can't even do this in-practice because of the opportunity costs involved: one cannot interview every last person on every last topic and synthesize all of those answers.  Any researcher doing a literature review needs to take this enormous problem and shrink it down to a more manageable problem, somehow.XVIII  The way we do that is through the use of heuristics; there is a scientific use for heuristics as a sort of necessary evil.  The first step is to identify which question(s) to tackle, based on what one is actually interested in (one is unlikely to do a good job analyzing a topic one is bored by—remember your level of effort and performance on all of those school projects where a topic was thrust upon you).  From there, one cannot again ask every last expert on a topic, let alone everyone in the world, and so one will lean on heuristics such as expertise, the credentials and (much better yet) achievements meant to signal such expertise, and one's own judgment of the quality of the thinkers on tap (such as one's ease in understanding their explanations).  
One might look for those who have written papers, or perhaps especially popular books, on the topic at hand; perhaps for people who have PhDs in the topic.  Then one might seek out their resources and get a sense of their quality and reputation.  The fact is, there might only be ten people in the entire population of the world who have something useful to say about a given esoteric scientific topic, and it is a needle-in-a-haystack problem to find them.  I again caution about the fallibility of such heuristics: they may (and, judging by the history of science, may be more likely to than many admit) completely miss the person who is actually going to solve the problem, because often a free-thinking gadfly who did not follow the beaten path is the one who solves a longstanding riddle, or solves a problem no one even recognized.XIX  If you solve it, in part using what you learned from all of the experts you “interviewed”, realize that if someone else had undertaken the same project, their heuristics would have missed you as a source.  We must always keep an open mind that someone may cut their teeth solving a problem from what appears to be an unlikely background.  We also need to take care to reward such people and, “let them in the club,” so to speak, once they have earned their keep; such a person should not be treated as a happy-go-lucky fluke.XX  When one forms a belief by applying heuristics to expert positions, yet is not capable of evaluating such positions (or else one would have their own expert opinion!), one should never feel very certain of this belief—that is, it should be taken to be even more provisional, and held with a higher degree of skepticism and equipoise than normal.  Until one has the working knowledge to understand and form their own genuine position on a topic, one should be aware of the fact that some of their beliefs are more ill-formed, and therefore uncertain, than they may realize without reflection.  
One should then be proportionately less arrogant and pushy when talking about such positions!  There is a delicate balance between full-blown scientific understanding and heuristics.

        Another situation in which heuristics are unavoidable is when making a decision about how much evidence or information one needs before pulling the trigger and making an atomic choice to believe (or act on) or disbelieve a given claim or its alternative.XXI  One may be largely in equipoise on the answer to a given research question, but may nevertheless face, for example, medical options that require betting on one or another alternative in the here-and-now (as different descriptive hypotheses for the biology of the ailment will prescribe different treatments).  The disease one is suffering from will certainly not wait for more evidence to come in so that you may form a firmer belief in how to tackle the situation!  We always have to make decisions whose actions and outcomes are predicated on some hypothesis being true or false, and yet we are only ever in contact with the provisional truth (worse, we are only ever in contact with our own personal ability to evaluate the available theory and evidence of our day).  At some point, one has to say, “okay, I've seen enough, I'm pulling the trigger,” on such-and-such an action.  Our descendants (provided we preserve our cumulative knowledge and culture through the ages) will always be in a better position to tackle the same questions because of the progressive nature inherent in the provisional nature of knowledge: the future will benefit from geniuses, evidence, and ideas we are not in possession of, today.  Our decisions and actions have beliefs implicitly built into them; couple that with the provisional nature of knowledge, and we are bound, at times, to make mistakes (though again, there will be no better alternative to the ontological results of the scientific method at a given time—only the updated version of the scientific body of knowledge in the future).  In the end, science is about making predictions at above-chance rates.  
But as a result of the provisional nature of knowledge, there will always be errors in which one turns out to have been correct for the wrong reasons, as well as errors in which one turns out to be incorrect despite utilizing the right reasoning.XXII  Many are confused by this, but one should prefer being wrong for the right reasons over being right for the wrong reasons—the reasons for this are twofold: 1. when one is right or wrong for the right reasons, one has understood the world which produces these results, and 2. the world is an iterated game in which one must use the method which is most likely to work out more often than not.  Imagine someone asks you whether to bet your money on a die landing on one, or on a die landing anywhere two-through-six.XXIII  Of course, one should always choose the latter option because it has a 5/6th chance of winning whereas the former only has a 1/6th chance.  Now, imagine the die is rolled and nevertheless, this time, it lands on one, and you lose.  You should not regret your choice!  There was no way, in-practice, to know that this particular roll would land on one instead of any other number; only a fool would place a bet on the losing odds, even in hindsight.XXIV  If you were to play the die game repeatedly, you would win 1/6th of the time when betting on one, and 5/6ths of the time when betting on two-through-six.  One might then think that it would be clever to mix in some bets on one—perhaps 1/6th of the time—to try to win more than 5/6ths of the games.  But this can only serve to reduce one's winnings in the long run, because one does not get to know, in-practice, which rounds will roll a one and which will not.  It is not just gambling that is a game of chance, however, and an iterated game does not need to be the same game of chance every time; every decision one makes offers the choice of using the scientific method—the best available understanding of the information about the system at hand—or not.  
Clearly, while one will not always be right, one should always opt for the scientific strategy to maximize their winnings in the long-run.XXV
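The arithmetic of the iterated die game can be checked with a short simulation (a minimal sketch of my own; the function and strategy names are illustrative, not drawn from any cited source).  Always taking the two-through-six bet should win about 5/6 ≈ 0.833 of rounds, while mixing in bets on one 1/6th of the time should win only about (1/6)(1/6) + (5/6)(5/6) = 26/36 ≈ 0.722:

```python
import random

def play(n_rounds, bet_strategy, seed=0):
    """Simulate n_rounds of the die game and return the fraction of wins.
    bet_strategy(rng) returns 'one' (wins only on a roll of 1) or
    'two-six' (wins on any roll of 2 through 6)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_rounds):
        roll = rng.randint(1, 6)       # the roll is fixed before the bet is placed
        bet = bet_strategy(rng)        # the bettor cannot peek at the roll
        if (bet == "one" and roll == 1) or (bet == "two-six" and roll != 1):
            wins += 1
    return wins / n_rounds

# Strategy 1: always take the 5/6 bet.
always_best = lambda rng: "two-six"

# Strategy 2: "probability matching" -- bet on one 1/6th of the time.
probability_match = lambda rng: "one" if rng.random() < 1 / 6 else "two-six"

n = 100_000
print(f"always bet two-through-six: {play(n, always_best):.3f}")        # roughly 0.833
print(f"mix in 1/6 bets on one:     {play(n, probability_match):.3f}")  # roughly 0.722
```

The simulation makes the chapter's point concrete: because the bettor never knows in advance which round will roll a one, every deviation from the best single bet trades a 5/6 chance for a 1/6 chance and lowers the long-run win rate.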

        Discussions of heuristics very commonly come to us from the field of behavioral economics, which is the descriptive marriage of economic and psychological theory, in which researchers described the deviations from (normative) classical economic theory in the behaviors of the average person due to errors and biases (and perhaps most of all, to the fitness-maximizing tendencies of the animal mind, which is sometimes at odds with the more narrow profit-maximizing goal of the economist).XXVI  In the view of evolutionary psychologists, the cash value of any behavior is not cash, but the degree to which such a trait contributes to the probability of leaving behind more viable offspring (more strictly, genetic replicates).XXVII  These heuristics or rules-of-thumb that the brain (a finite machine) is built to carry out, are meant to, on average, increase one's fitness.XXVIII  Sometimes that means being factually correct, and sometimes that means something else.XXIX  It is good to know when one is employing heuristics, and to what degree, to be sure that one has some justification for them, and to be transparent about what these are such that they may be scrutinized by review from one's peers.

        If one has an ulterior motive aside from truth-seeking, they are likely to employ dangerous heuristics in the service of those goals, much to the detriment of veracity.  This stands to hurt the quality of the body of knowledge produced and “contributed” to.  This bias could range from the building of one's career or reputation, to mere confirmation bias, to the telling of so-called “noble lies” to persuade one's opponents or the public into a false belief because of the (perceived) desirable consequences of a populace that holds such a belief (analytical philosopher Daniel C. Dennett terms this belief-in-belief).XXX  Any error and bias that is left in the wake of our judicious use of heuristics will be subject to the error-correcting mechanism built into science, to lower-case-p-r peer review.XXXI  Over time, many participants will analyze previous assumptions, bringing better theory and evidence to bear in attempts to falsify, establish, or replicate previous provisional results.  A society of thinkers thus works together (especially asynchronously, through time)XXXII to achieve a philosophical and evidentiary consensus on the state of knowledge.

        While free-thought is foundational, there are certain things I will not look up because I do not (yet) have the working knowledge on the topic required to make sense of the effectively random claims I am likely to encounter.  That is why this book is introductory on the foundations of so many topics: it assumes that one does not have prerequisite working knowledge to bring to bear, and in fact it aspires to provide some of those bases.


Footnotes:

0. The Philosophy Of Science table of contents can be found, here (footnotephysicist.blogspot.com/2022/04/table-of-contents-philosophy-of-science.html).

I. See The Selfish Gene by Dawkins (pp. 296, 435) which further cites “The Evolution Of Cooperation” by Robert Axelrod and William D. Hamilton (Science) (1981) (https://www.science.org/doi/10.1126/science.7466396) and The Evolution Of Cooperation by Robert Axelrod (Basic Books) (1984) (though I have not read these works).

II. See Consilience by E. O. Wilson (pp. 138).

III. See Consilience by E. O. Wilson (pp. 164).

IV. See Misbehaving by Thaler (pp. 45-46) which further cites “The Methodology of Positive Economics” by Milton Friedman (University Of Chicago Press) (1953) from Essays In Positive Economics by Friedman (though I have not yet read this work). Contemporary behavioral economists such as Thaler focus on the errors produced by heuristics, but of course they exist because they serve some useful function in some circumstance, see Misbehaving by Thaler (pp. 43-53) and Consilience by E. O. Wilson (pp. 223-227, 343) which further cites “Judgment under Uncertainty: Heuristics And Biases: Biases In Judgments Reveal Some Heuristics Of Thinking Under Uncertainty” by Amos Tversky and Daniel Kahneman (Science) (1974) (https://www.science.org/doi/10.1126/science.185.4157.1124), “On The Reality Of Cognitive Illusions” by Daniel Kahneman and Amos Tversky (Psychological Review) (1996) (https://pubmed.ncbi.nlm.nih.gov/8759048/), and The Foundations Of Primitive Thought by Christopher Robert Hallpike (Oxford University Press) (1979) (though I have not read these works).

V. See Misbehaving by Thaler (pp. 22) which further cites “Judgment under Uncertainty: Heuristics And Biases” by Tversky and Kahneman (https://www.science.org/doi/10.1126/science.185.4157.1124).

VI. See Misbehaving by Thaler (pp. 22-23, 25, 45-46); Consilience by E. O. Wilson (pp. 47, 138, 164, 171, 255, 226-227); The Selfish Gene by Dawkins (pp. 114-116, 296, 375-376); and The Extended Phenotype by Dawkins (pp. 199, 218-237, 267-269, 408).

VIII. For more on system-one and system-two thinking, see the “Reason” chapter; Thinking Fast And Slow by Daniel Kahneman (Farrar, Straus, And Giroux) (2011 / 2013) (though I have not yet read this work—and reader beware, I am under the impression that it discusses some experimental results in psychology that would later fail replication). Look forward to the “Psychology” chapter in the “Ontology” volume.

IX. Incidentally, psychologist Sean Duffy once assigned our college class a contrarian global warming piece by Lomborg (among other readings). I seem to remember being surprised yet interested by the piece, but then finding the claim that Lomborg was somehow in the pocket of oil companies (though it may have been a relatively small conflict of interest, such as owning stocks), and I informed Duffy. In hindsight, I suspect I was misled by climate-activists in the media—I cannot even find the claim against him, anymore. By now, Lomborg seems to me to be attempting a dispassionate economic cost-benefit analysis as pertains to climate policy. I believe this is likely to have been an example of heuristic failure on my part.

X. Though I am an under-credentialed independent scholar vying for attention, admittedly, I myself do not spend much time looking through the blogs of people like myself; most of the authors I read on scientific topics do indeed hold credentials such as PhDs (Twitter aside).

XI. See "Bret And Heather 10th DarkHorse Podcast Livestream: SARS-CoV2--Unintelligent Design?" by B. Weinstein with H. Heying (https://www.youtube.com/watch?v=FKtsx0fZzzQ) (6:40 – 8:24, 53:52 – 57:48); “Bret And Heather 11th DarkHorse Podcast Livestream: Choose Your Own Black Mirror Episode” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=mYQJSobQgAc) (57:49 – 1:02:52); “Bret And Heather 16th DarkHorse Podcast Livestream: Meaning, Notions, & Scientific Commotions” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=QvljruLDhxY) (0:59 – 51:37, 1:03:00 – 1:06:07); “Science is Open Mic Night - With Eric Weinstein - Bret and Heather 22nd DarkHorse Podcast Livestream” by B. Weinstein, E. Weinstein, and H. Heying (https://www.youtube.com/watch?v=e6_8W4E0W1w) (27:10 – 30:46); "Bret And Heather 54th DarkHorse Podcast Livestream: Lane Splitting In The Post-Election Era" by Bret Weinstein and Heather Heying (DarkHorse) (2020) (https://www.youtube.com/watch?v=tZMskLj1N0I) (48:09 – 50:42); “Bret And Heather 79th DarkHorse Podcast Livestream: #NotAllMice” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=bU63lsHA0y0) (47:27 – 51:04); “Bret And Heather 81st DarkHorse Podcast Livestream: Permission To Think” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=LoaKtBMk53Y) (15:31 – 38:34); “#84: Hey YouTube: Divide By Zero (Bret Weinstein & Heather Heying DarkHorse Livestream)” by B. Weinstein and H. Heying (https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5idXp6c3Byb3V0LmNvbS80MjQwNzUucnNz/episode/QnV6enNwcm91dC04NzMwNTYw?sa=X&ved=0CAUQkfYCahcKEwiY2uSX-Pn6AhUAAAAAHQAAAAAQcg) (40:38 – 1:23:45); “Excellence” by E. Weinstein (https://www.edge.org/response-detail/23879) from What Should We Be Worried About? edited by Brockman; and The Demon Haunted World by Sagan (pp. 302-303).

XII. See "Bret And Heather 46th DarkHorse Podcast Livestream: RBG, Scalia, And The Court Supreme" by Bret Weinstein and Heather Heying (DarkHorse) (2020) (https://www.youtube.com/watch?v=hAutXNjqbNY) (40:00 – 49:55). See also “Bret And Heather 65th DarkHorse Podcast Livestream: Because Science” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=VerirOAHgUU) (19:12 – 26:51).

XIII. For such graphs, see The Physics Of Climate Change by Krauss.

XIV. See The Gardener And The Carpenter by Gopnik (pp. 119-120, 130).

XV. See The Gardener And The Carpenter by Gopnik (pp. 119-123).

XVI. See the “Empiricism” and “Intellectual Honesty” chapters which further cite “Patternicity” by Shermer (https://michaelshermer.com/sciam-columns/patternicity/) and "Truer Perceptions Are Fitter Perceptions" by Hoffman (https://www.edge.org/response-detail/25450) in This Idea Must Die edited by Brockman (pp 467-468). Look forward to the “Psychology” and “Economics” chapters in the “Ontology” volume.

XVII. See "#119 — Hidden Motives: A Conversation With Robin Hanson" by Sam Harris and Robin Hanson (Making Sense) (2018) (https://www.samharris.org/podcasts/making-sense-episodes/119-hidden-motives) (31:52-39:47).

XVIII. See Exploratory Programming For The Arts And Humanities by Montfort (pp. 37) and “Bret And Heather 6th Live Stream: Death And Peer Review - DarkHorse Podcast” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=zc6nOphi0yE) (30:40 – 59:56).

XIX. See "Bret And Heather 10th DarkHorse Podcast Livestream: SARS-CoV2--Unintelligent Design?" by B. Weinstein with H. Heying (https://www.youtube.com/watch?v=FKtsx0fZzzQ) (6:40 – 8:24, 53:52 – 57:48); “Bret And Heather 11th DarkHorse Podcast Livestream: Choose Your Own Black Mirror Episode” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=mYQJSobQgAc) (57:49 – 1:02:52); “Bret And Heather 16th DarkHorse Podcast Livestream: Meaning, Notions, & Scientific Commotions” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=QvljruLDhxY) (0:59 – 51:37, 1:03:00 – 1:06:07); “Science is Open Mic Night - With Eric Weinstein - Bret and Heather 22nd DarkHorse Podcast Livestream” by B. Weinstein, E. Weinstein, and H. Heying (https://www.youtube.com/watch?v=e6_8W4E0W1w) (27:10 – 30:46); “Bret And Heather 54th DarkHorse Podcast Livestream: Lane Splitting In The Post-Election Era” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=tZMskLj1N0I) (48:09 – 50:42); “Bret And Heather 79th DarkHorse Podcast Livestream: #NotAllMice” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=bU63lsHA0y0) (47:27 – 51:04); “Bret And Heather 81st DarkHorse Podcast Livestream: Permission To Think” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=LoaKtBMk53Y) (15:31 – 38:34); “#84: Hey YouTube: Divide By Zero (Bret Weinstein & Heather Heying DarkHorse Livestream)” by B. Weinstein and H. Heying (https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5idXp6c3Byb3V0LmNvbS80MjQwNzUucnNz/episode/QnV6enNwcm91dC04NzMwNTYw?sa=X&ved=0CAUQkfYCahcKEwiY2uSX-Pn6AhUAAAAAHQAAAAAQcg) (40:38 – 1:23:45); “Excellence” by E. Weinstein (https://www.edge.org/response-detail/23879) from What Should We Be Worried About? edited by Brockman; and The Demon Haunted World by Sagan (pp. 302-303).

XX. See "Bret And Heather 22nd DarkHorse Podcast Livestream: Don't #ShutDownSTEM" by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=h0n6Q0o-bVg) (32:00 – 40:57) and “Bret And Heather 81st DarkHorse Podcast Livestream: Permission To Think” by B. Weinstein and H. Heying (https://www.youtube.com/watch?v=LoaKtBMk53Y) (15:31 – 38:34).

XXI. For some information on the psychology of decision making, see “How Do People Make Decisions? (THE SAAD TRUTH_67)” by Gad Saad (The Saad Truth) (2015) (https://www.youtube.com/watch?v=SHyjfu8CrtU).

XXII. This is distinct from, but related to, the concept of type-1 errors (false positives) and type-2 errors (false negatives); see "Type I And Type II Errors" by Phil Rosenzweig (Edge / Harper Perennial) (2017 / 2018) (https://www.edge.org/response-detail/27070) in This Idea Is Brilliant edited by Brockman (pp. 447-449, 514).

XXIII. For more on the topic of iterated games, see the “Statistics, Probabilities, And Games” chapter.

XXIV. Again, I stress in-practice. In-principle, as discussed in the “In-Principle And In-Practice” chapter, if you had a time machine that brought you back to exactly that die-roll, you should bet on one, knowing now that that is where it will land. But of course, if we were to do a true re-wind of all conditions like this, you wouldn't be able to change a thing: you would bet on two-through-six (or whatever you had bet on) again, just as clockwork-sure as the die will land on one, again. In the real world, in-practice, the next die roll will include all sorts of unknown, different initial conditions at each roll which are the true source of the apparent randomness, and you will always be left with no recourse but to bet on two-through-six.
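(A minimal simulation, not from the text, illustrating the in-practice point above: absent knowledge of the initial conditions, betting on two-through-six wins at the 5/6 rate that makes it the only rational wager. The trial count and seed are arbitrary choices of mine.)

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

TRIALS = 100_000
# The "two-through-six" bet wins whenever the die does NOT land on one.
wins = sum(1 for _ in range(TRIALS) if random.randint(1, 6) != 1)

print(f"win rate: {wins / TRIALS:.3f} (expected about {5/6:.3f})")
```

Note that the pseudo-random generator here plays the role of the unknown initial conditions: deterministic under the hood, but practically unpredictable to the bettor.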

XXV. For more on games of chance, see the “Statistics, Probabilities, And Games” chapter. See also the “Approximation” chapter, which mentions Thinking In Bets by Duke (though I have yet to read this volume).

XXVI. See Misbehaving by Thaler (though I have so far only read through pp. 102).

XXVII. For an excellent classic on the neo-Darwinian synthesis, see The Selfish Gene by Dawkins. For evolutionary psychology in particular, see The Blank Slate by Pinker and The Ape That Understood The Universe by Stewart-Williams.

XXVIII. See Consilience by E. O. Wilson (pp. 138, 164, 171, 225-227); The Selfish Gene by Dawkins (pp. 114-116, 296, 375-376); The Extended Phenotype by Dawkins (pp. 199, 218-237, 267-269, 408); The Gardener And The Carpenter by Gopnik (pp. 119-123, 130); and Misbehaving by Thaler (pp. 22-23). I am sure I heard B. Weinstein emphasize the “on average” fitness maximizing component of evolutionary psychology (which is due to the role of evolved-in rules-of-thumb, in-practice) on The DarkHorse Podcast, but I do not know in which episode.

XXIX. See the “Empiricism” and “Intellectual Honesty” chapters, which further cite “Patternicity” by Shermer (https://michaelshermer.com/sciam-columns/patternicity/) and "Truer Perceptions Are Fitter Perceptions" by Hoffman (https://www.edge.org/response-detail/25450) in This Idea Must Die edited by Brockman (pp. 467-468). Look forward to the “Psychology” and “Economics” chapters in the “Ontology” section.

XXX. See the “Descriptive Theory And Normative Theory” chapter. See also The God Delusion by Dawkins (pp. 13, 20-21, 25) which further cites Breaking The Spell: Religion As A Natural Phenomenon by Daniel C. Dennett (Penguin Books) (2006) (though I have not yet read this work); The Moral Landscape by Harris (pp. 38, 46, 146, 174-175); and “The Four Horsemen: Discussions with Richard Dawkins” by Richard Dawkins, Daniel C. Dennett, Sam Harris, and Christopher Hitchens (CFI) (2007) (https://centerforinquiry.org/store/product/the-four-horsemen-discussions-with-richard-dawkins-episode-1-dvd/) (8:15-9:01, 16:26-16:34, 1:1:00-1:51:57) (Note: I feel very bad, as I originally listened to this discussion on YouTube while at work. As I am against piracy, this is out-of-character for me! I am also not sure if the YouTube upload I just accessed to search its transcript function is the same one I had listened to a few years ago. As a result, I cited the official DVD copy, here (using time-stamps from the YouTube upload), but I feel caught between a rock and a hard place because while I would prefer not to propagate further watching of a bootlegged copy of someone else's work, I also want to give credit where it is due—I obviously found this YouTube copy useful. My imperfect compromise for now is that I will include this person's YouTube username in an HTML comment surrounding this footnote in the version of this chapter uploaded to my blog, technically giving this person a nod, while placing at least some practical friction in the way of suggesting watching an illegitimate copy).

XXXI. See for example Cosmos by Sagan and Druyan (pp. xviii, 94, 194); The Demon Haunted World by Sagan (pp. 20-22, 230, 274-275, 414, 423); Cosmos: Possible Worlds by Druyan (pp. 75); and The Coddling Of The American Mind by Lukianoff and Haidt (pp. 109). See also the “peer review” chapter.

XXXII. See Cosmos by Sagan (pp. 295-297).

Comments

  1. Change Log:
    Version 1.00 1/10/23 7:30 AM
    - Fixes:
    "CH 28
    FN 1 [CHECK]
    , 435) which further cites “The Evolution Of Cooperation” by Robert Axelrod and William D. Hamilton (Science) (1981) (https://www.science.org/doi/10.1126/science.7466396) and The Evolution Of Cooperation by Robert Axelrod (Basic Books) (1984) (though I have not read these works).
    * Chapters are finally hyperlinked!
    FN 11 [CHECK]
    "Bret And Heather 54th DarkHorse Podcast Livestream: Lane Splitting In The Post-Election Era" by Bret Weinstein and Heather Heying (DarkHorse) (2020) (https://www.youtube.com/watch?v=tZMskLj1N0I) (48:09 – 50:42)
    FN 12 [CHECK]
    "Bret And Heather 46th DarkHorse Podcast Livestream: RBG, Scalia, And The Court Supreme" by Bret Weinstein and Heather Heying (DarkHorse) (2020) (https://www.youtube.com/watch?v=hAutXNjqbNY) (40:00 – 49:55)
    FN 22 [CHECK]
    "Type I And Type II Errors" by Phil Rosenzweig (Edge / Harper Perennial) (2017 / 2018) (https://www.edge.org/response-detail/27070) in This Idea Is Brilliant edited by Brockman (pp. 447-449, 514)
    FN 23 [CHECK]
    Ch link
    FN 25 [CHECK]
    Ch link
    FN 28 [CHECK]
    I am sure
    Un-red
    FN 30 [CHECK]
    Un-red"
    - Changed title to "1st Edition"

    1. Version 1.01 1/10/23 11:27 AM
      Fixed:
      xxviii fn v re-stated that I hadn't read a source [CHECK]
      fn xvi says "section" instead of "volume"

    2. Version 1.02 2/12/23 2:25 PM
      - Fixed Dawkins' quote to bring in line with Print Version 1.02

