Chapter III: Logic | The Philosophy Of Science by Steven Gussman [1st Edition]

        “Why?”

        “Well, because some things are, and some things are not.”

        “Why?”

        “Well because things that are not can't be!”

        “Why?”

        “Because then nothing wouldn't be!   You can't have fucking nothing isn't!   Everything is!”

        “Why?”

        “'Cause if nothing wasn't, there'd be fucking all kinds of shit that we don't—like giant ants with top-hats, dancing around. There's no room for all that shit.”

        – Louis C.K.I


        At the base of epistemology is logic.  These are the most basic laws of reality that cannot be broken—things like 'you cannot both be and not be something'; that is, that contradiction is not allowed.  In logic, we sometimes label propositions with letters like A and B (generally, this is known as naming a variable).  We then use the exclamation point to mean NOT: so !A means NOT A.II  To denote that two things cannot be equal, we sometimes use the following symbol: ≠.  Therefore a formal way to state the necessity of logical coherence would be:

A ≠ !A
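
        Readers with a computer-science bent can see the same law at work in a few lines of code.  The following is only an illustrative sketch in Python (where the logical NOT is spelled “not” rather than “!”), with variable names of my own choosing:

        # The law of non-contradiction: no proposition shares a truth value with its own negation.
        for A in (True, False):
            assert A != (not A)  # holds for both possible truth values of A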

        Much stems from this simple law!  The need for consistency across our arguments (something a child can understand yet even adults achieve only with difficulty) is a consequence of the fact that the world is the way that it is, whatever that way may be.   It establishes that there may only be one correct answer to a question, and that our job is then to choose among competing alternatives to find it.   It allows for falsification because it disallows one from claiming that empirical evidence in favor of an alternative is somehow not falsification of their preferred idea (look forward to more on falsification in the “Empiricism” chapter).   Coupling this law with the obvious fact that:

A = A

reinforces that A must be equal only to the value of A (even if that value also goes by some other label, B), and not also to some other value (which is not allowed to go under the name A, either); that is, logic precludes dualism, pluralism, and relativism—the belief that multiple contradictory alternatives can simultaneously be accurate descriptions of reality.  The following:III

        A = 10

        B = 10

        ∴ A = B

is perfectly fine because A and B are variables which hold the same value.   But if A = 10 and B = 20, then one cannot say A = B for the same reason they cannot say A = 10 and A = 20.  Mathematically (and math emerges from logic), you can easily see this with numbers, for example:IV

        10 = 10

        10 ≠ 20

If someone tells you that 10 equals 20, you cannot just agree to disagree on the grounds that this is their opinion: they are wrong.V

        Another note about consistency: it applies to both ontology and epistemology.  Of course one must be consistent when making separate ontological claims (if you claim in physics that the special theory of relativity sets a universal speed limit, then your biology hypothesis may not require information that travels faster than that, for example).  But one must also be consistent epistemologically (and methodologically): it is arguably more important to hold all claims and all thinkers to the same standards than to get the rest of those standards right (or rather, consistency is among the most important of all of those standards).VI  Double standards lead to lopsided bodies of knowledge that massage the dominant group through confirmation bias (the tendency to more easily believe what we want to believe, or what we're incentivized to believe).VII  (It may be that different fields have different thresholds of evidence that they can realistically meet, but one mustn't take this idea too far, and certainly everything within a field must be consistent.)  Beware proponents of competing philosophies who wax enlightened about how the provisional nature of knowledge places limits on the scientific method's abilities; it is worth considering the baseline: each of us believes things in our day-to-day lives on far worse evidentiary standards, and no known alternative philosophy produces anywhere near the level of certainty (however weak you take that to be in absolute terms) that science does.  Taking the full picture into account, science goes from 'just un-falsified models' to the closest thing to the truth we are ever in possession of!

        In formal logic, there are two ways of finding answers to questions: induction and deduction.  Induction is when you assume something to be generally true because you've observed it to be true in many previous cases: you might multiply many different numbers by zero and conclude that because the result was always zero, the product of any number with zero is probably zero.  Such a "proof" would look something like this:VIII

        0 × 0 = 0

        1 × 0 = 0

        2 × 0 = 0

        …

        99 × 0 = 0

        ∴ x × 0 = 0
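
        In code, this kind of induction amounts to spot-checking many cases and generalizing.  A minimal sketch in Python (the sample of 0 through 99 mirrors the listing above and is, of course, only a finite sample):

        # Inductive "proof" by example: check many cases, then generalize.
        # A finite sample can never rule out an unseen counterexample.
        for x in range(100):     # x = 0, 1, 2, ..., 99
            assert x * 0 == 0    # every sampled case comes out zero
        # By induction, one concludes that x * 0 == 0 probably holds for any number x.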

        Deduction is when one takes two (or more) previously known propositions or axioms, and demonstrates that the consideration of these facts at the same time (in other words, their interaction) necessarily implies a new proposition.  A version of the classic example is this:IX

        All Englishmen are human

        George Boole is an Englishman

        ∴ George Boole is human
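
        The same deduction can be rendered in code as a check on set membership; the following toy sketch in Python is mine (the particular sets are illustrative, not a formal logic engine):

        # "All Englishmen are human": the set of Englishmen is a subset of the set of humans.
        humans = {"George Boole", "Dmitri Mendeleev"}
        englishmen = {"George Boole"}

        assert englishmen <= humans          # first premise (subset relation)
        assert "George Boole" in englishmen  # second premise (membership)
        # Deduction: membership in the subset necessarily implies membership in the superset.
        assert "George Boole" in humans      # therefore, George Boole is human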

In the case of mathematics, deduction means rigorously, analytically proving a theorem from first principles, rather than making assumptions from a few examples: deduction shows (assuming those previous axioms we already know) that a theorem could not be any other way.   For multiplication by zero, this might look like:X

        x × 0 = x × (0 + 0)                      (since 0 = 0 + 0)

        x × (0 + 0) = (x × 0) + (x × 0)          (distributing x over the sum)

        x × 0 = (x × 0) + (x × 0)                (combining the two lines above)

        0 = x × 0                                (subtracting x × 0 from both sides)

        ∴ x × 0 = 0
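
        A computer algebra system applies the same sort of ring rules symbolically, so the conclusion holds for the symbol x itself rather than for sampled values.  By way of contrast with the inductive loop above, here is a sketch using the third-party SymPy library (assuming it is installed):

        from sympy import symbols, simplify

        x = symbols('x')
        # SymPy reduces x * 0 by algebraic rules akin to the proof above,
        # returning the exact symbolic result 0; no particular numbers are tested.
        assert simplify(x * 0) == 0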

        Logicians, mathematicians, and philosophers tend to claim that deduction is a firmer basis for proof, and therefore more fundamental, than induction.  Certainly, one could not get away with naively extrapolating from a few example calculations in lieu of formal deduction in a math class (believe me—I've tried!), let alone in a mathematics journal.  There are good reasons for this.  Mathematics and logic are formal symbolic systems in which "the universe" is essentially at your fingertips: their very project is to deduce all they can from elegant first principles, and they have the practical means for doing so.  Furthermore, considering again the multiplication-by-zero question: given that there seem to be infinitely many numbers, any sample used to induce your result is technically an infinitesimal proportion of the actual group, and therefore always susceptible to the criticism of being a small sample (and the tendency to test contiguous numbers makes it susceptible to the criticism of being a biased, non-representative sample to boot).XI  But then scientists, who in the right context might pay lip service to the strength of deduction over induction, nevertheless rely most heavily on empirical, observational evidence.  Even if they manage to make a claim by fairly strict derivation from well-known laws of physics, the idea is only considered a hypothesis until it is checked against experiments (or otherwise passive observations of the world)—at which point, if the prediction comes true, it is considered a fact and the framework that made the prediction is considered a theory (more on this in the “Laws And Facts, Theories And Data” chapter).

        Whither the importance of deduction?  In truth, it is a mistake to conclude that deduction is more powerful than induction.  Deduction is only more powerful given that you already assume the propositions the new theorem is being deduced from.  When one examines those prior propositions, one finds that they are themselves contingent on other propositions: a networked tree of logic built over time (this is the body of mathematical and logical knowledge).  Each theorem is itself the product of prior deduction (or otherwise induction).  If you follow this deductive pathway up to the root node of the tree, the ancestor of all knowledge, you of course find fundamental propositions: axioms.  But what are axioms?  They certainly weren't deduced (what would they have been deduced from?)!  And one had better hope they're not merely assumed.  The only option left is induction: those things which appear to always and everywhere be true, with no known exceptions (such as A ≠ !A).  This flips the conventional wisdom about logic on its head: induction is fundamental, and deduction can only emerge from it, later.  A Sceptic might then throw his arms up and proclaim that all of knowledge is uselessly uncertain, but this would be unwise; as mentioned earlier, one must be consistent with the standards one holds: I know far more people who will doubt powerful scientific results on the grounds that science isn't all that than I do people who actually walk around uncertain about whether their brother is in fact their brother, for example.

        Induction requires no such prerequisites, and unlike deduction, it can pull itself up by its own bootstraps.XII  The fact is that if you assume induction works and utilize it, you will find over time that it does indeed tend to work; that is the application of induction to induction.  Some might argue that this is circular—which is usually a logical flaw—but I propose that the use of induction to support the use of induction is the only place where such self-use is not circular in a bad way: it amounts to the fact that one can show that empirical evidence works to elucidate the world by checking empirically whether empirical evidence has tended to work to elucidate the world in the past (more on this topic in the “Empiricism” chapter).  When someone questions the strength of the philosophy of science (whether in general or in its application to a specific problem), it is perfectly reasonable to point out their revealed preferences from the past: everything from their cell-phone to any medical interventions they have received was discovered through the same process; this is both an appeal to nested induction and to consistency on the part of one's interlocutor.

        Because we must use mathematics as the language of science (more on this in the upcoming “Mathematics” chapter), it is good that it be on the firm footing of first-principles deduction.  The science itself will need to be confirmed by induction, but we should hope for the firmest of languages to express such things in, so that we can essentially take the basic veracity of the language itself for granted.  I also want to be careful not to understate the true strength of deduction, even in science.  We verified what we know of the periodic table of the elements ultimately through induction.  But empirical patterns guided chemist Dmitri MendeleevXIII to the theoretical taxonomy that is the periodic table.  From there, predictions could be made, such as the existence of a chemical element in a particular place on the table, and its likely properties by proximity to others.  This guidance is how we knew to empirically look for (or otherwise engineer) the existence of such elements.XIV  Furthermore, quantum physicists have been able to derive some features of the periodic table, and chemistry generally, from the underlying physics!  This is deduction, though largely after induction got us the answer sooner and more simply.  Nevertheless, it is an astounding contribution to knowledge and puts these induced results on much firmer footing, being a separate, more deductive line of evidence leading to the phenomena.  (More on the topic of theoretical taxonomy in the “Laws And Facts, Theories And Data” chapter.)

        So science is a delicate dance between many pairs: induction and deduction; passive observation and experiment; theory and engineering.  Whereas full-blown contradiction is illegal, much insight lies in the tensions between concepts.  Ontologically, we know that deduction is king, in principle: the cosmos is the way it is, some coherent object or process whose facets are perfectly consistent and emerge from first-principles reductions.  But epistemologically—and, not being omniscient gods but fallible men, this is our only line into discovery—induction is king, because if one is in possession of faulty axioms or logic, one may come to the wrong conclusions: we simply must continually check our ideas, statistically, against nature as she is.


Footnotes:

0. The Philosophy Of Science table of contents can be found here (footnotephysicist.blogspot.com/2022/04/table-of-contents-philosophy-of-science.html).

I. This bit can be found on YouTube, see “Louis C.K. - Kid's Questions” uploaded by user fohsiao (https://www.youtube.com/watch?v=Tf17rFDjMZw), which further cites an origin, see One Night Stand, S5:E1, entitled “Louis C.K.” (HBO) (2005) (https://tv.apple.com/us/episode/louis-ck/umc.cmc.65cscycr5kf7nxv11s5sgcls4).

II. My computer science background may be showing, here—a formal logician is more likely to use a different symbol, such as a bar above the expression (Ā), to express the logical NOT. Incidentally, I have found that many of those authors who appear to me to best understand the philosophy of science, writ-large, do have computer science backgrounds (including evolutionary biologist Richard Dawkins and evolutionary economist Gad Saad, for example)—I will explore why that may be in the “Computation” chapter.

III. Note that mathematicians use the symbol ∴ to mean “therefore”.

IV. The fact that 10 ≈ 11 (with ≈ being the symbol for “about equal to”), for example, will be explained later in the “Approximation” chapter—suffice it to say for now that approximations allow us to make less precise claims but not to actually break with logical foundations.

V. Of course, if you actually encounter someone so silly in the real world, I do encourage you to “agree to disagree” in the sense of getting as far away from this person as possible without wasting time on them; but internally, in scholarly idea-space, you must realize that this is not a serious “opinion” nor even a matter of “opinion”.

VI. For a nice discussion of the importance of logical consistency (and its use as a tool) in science, see "Bret and Heather 83rd DarkHorse Podcast Livestream: Doing Science in an Emergency" by Bret Weinstein and Heather Heying (DarkHorse) (2021) (https://youtu.be/pQiv8I9Peqk) (12:38 - 24:50).

VII. Social psychologist Lee Jussim calls these, “selective calls for rigor,” and argues that they are a mechanism for “idea laundering” (a concept I first heard about from evolutionary biologist Bret Weinstein), see as examples Jussim's December 2nd, 2021 tweet (https://twitter.com/PsychRabble/status/1466434372630982668?s=20) and Weinstein's October 3rd, 2018 tweet (https://twitter.com/BretWeinstein/status/1047603733083828224?s=20).

VIII. Multiplication by zero may not be the best example since actually multiplying any given number by zero may be a non-trivial thing to do. But the application of logic could perhaps get one there—ask “how much is zero sets of five?” Put that way, the answer is pretty clearly zero.

IX. I found that logician George Boole was from England from his entry in Encyclopaedia Britannica, see “George Boole” (Encyclopaedia Britannica) (1999 / 2021) (https://www.britannica.com/biography/George-Boole) (though I have not read this entire entry).

X. See the “Proofs Of Elementary Ring Properties” entry in Wikipedia (https://en.wikipedia.org/wiki/Proofs_of_elementary_ring_properties#Multiplication_by_zero) (retrieved 7/19/2021). This proof requires knowledge of basic algebraic manipulation, which will be discussed in the “Mathematics” chapter.

XI. For more on inference from samples, see the “Methodology” and “Statistics, Probabilities, And Games” chapters.

XII. Sam Harris has on multiple occasions argued on his podcast, Making Sense (formerly known as the Waking Up podcast), that 'science, like any philosophy, must ultimately' “pull itself up by its bootstraps” (though I do not know in which specific episodes he does so).

XIII. I fetched the great man's name from the Encyclopaedia Britannica's entry for “Dmitri Mendeleev” (Encyclopaedia Britannica) (1998 / 2021) (https://www.britannica.com/biography/Dmitri-Mendeleev) (though I have not read this article).

XIV. As examples, the elements germanium, promethium, hafnium (naturally isolated), and uranium (experimentally synthesized) were all correctly predicted by the periodic table, see Elemental: How The Periodic Table Can Now Explain (Nearly) Everything by Tim James (Abrams Press) (2019) (pp. 58-59, 64-71, 97-100, 146, and 165). See also Chemistry: A Very Short Introduction by Peter Atkins (Oxford University Press) (2013 / 2015) (at least pp. 15 and 86) and Our Mathematical Universe: My Quest For The Ultimate Nature Of Reality by Max Tegmark (Vintage Books) (2014) (p. 256).
