Chapter XXVI: Approximation | The Philosophy Of Science by Steven Gussman [1st Edition]

        “And, now, Socrates, there bursts upon him that wondrous vision which is the very soul of the

        beauty he has toiled so long for.  It is an everlasting loveliness which neither comes nor goes,

        which neither flowers nor fades, for such beauty is the same on every hand, the same then as now,

        here as there, this way as that way, the same to every worshipper as it is to every other... subsisting

        of itself and by itself in an eternal oneness, while every lovely thing partakes of it in such sort that,

        however much the parts may wax and wane, it will be neither more nor less, but still the same

        inviolable whole.”

        – PlatoI


        Approximation is the act of settling for an imperfect answer.  Ultimately, every claim lands somewhere on an approximation spectrum.  In elementary school, you were probably taught to make “ballpark estimates” and “educated guesses,” which was meant to communicate to you that while your answer may not be very precise, its accuracy can still be “in the right ballpark” (of course, if you're playing baseball, the difference between a home-run and a strike takes place entirely inside of the ballpark!).II

〰〰

        There are a few ways scientists attempt to quantify their error and uncertainty.  One is the percent error.  For example, if you are in possession of the empirically correct answer, and you want to know how well your theory predicts the value, you can use the following equation:III

|T – E| / |E|

where T is the theoretical value and E is the empirical value.  For example, imagine you theoretically model some facet of the world as behaving by the function f(t) = t² and so at time t = 3 seconds, f(3) = 9.  Now imagine that you empirically measure whatever f(t) is meant to represent in the real world, and you find that it is actually of magnitude 8.00.  Depending on your use of this information, this may not be terribly far off.  This can be quantified by the percent error like so:

|9.00 – 8.00| / |8.00| = 0.125 = 12.5%
The percent error cannot tell you in absolute terms whether your theoretical model is useful, because 12.5% will be a “small” percent error for some purposes and a “large” percent error for others (if I arbitrarily told you to guess the integer I had in mind between 1 and 100, and you chose 9 when I was thinking of 8, that would be surprisingly close; but if I said to guess the real number I had in mind between 7.5 and 9.5, 8.0 would not be considered surprisingly “near” 9.0).  For a less trite example, imagine trying to figure out the correct dose to maximize the effectiveness of a medicine while minimizing the risky side-effects—there exist many real-world cases where ten milligrams of difference can be a seriously “large” difference.  Returning to the example wherein your theoretical model found f(3) = 3² = 9 but in which the real answer was measured as 8.00, regardless of accuracy, one should nevertheless be curious as to why they possess an approximation of this kind rather than something closer to predicting reality.  In this case, perhaps the real world follows g(t) = 2ᵗ such that g(3) = 2³ = 8.  This is important to know, because while t² and 2ᵗ approximate each other to some level of accuracy (given some sensitivity to precision) over a given positive input domain, outside of that domain these functions diverge extremely and will give very different answers; it is a very different model of the world whether a physical process follows one or the other, in-principle, even if there are situations in-practice in which the differences are negligible.  Take a look for yourself:IV


                                        [Graph: f(t) = t² (gray) and g(t) = 2ᵗ (black), plotted on the same axes.]
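To make the formula concrete, here is a minimal Python sketch of my own (an illustration, not something from the text) that computes the percent error |T – E| / |E| for the worked example above and shows how quickly t² and 2ᵗ drift apart outside the narrow domain where they happen to agree:

```python
def percent_error(theoretical: float, empirical: float) -> float:
    """Return |T - E| / |E| as a fraction (multiply by 100 for a percentage)."""
    return abs(theoretical - empirical) / abs(empirical)

def f(t: float) -> float:
    """The theoretical model from the example: f(t) = t^2."""
    return t ** 2

def g(t: float) -> float:
    """The function the 'real world' follows in the example: g(t) = 2^t."""
    return 2 ** t

print(percent_error(f(3), g(3)))  # 0.125, i.e. the 12.5% from the text

for t in (2, 3, 4, 10, 20):
    print(t, f(t), g(t), f"{percent_error(f(t), g(t)):.2%}")
# The functions agree exactly at t = 2 and t = 4, but by t = 10 the percent error
# is about 90%, and by t = 20 about 99.96%: the two models diverge badly outside
# a narrow domain even though they approximate each other within it.
```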

There are also ways to estimate uncertainty even when an easy comparison with the right answer is not available.  Here, scientists will calculate error bars, which they do by taking the standard error for their distribution of values.  Essentially, when you take the mean of a given data-set, the standard error associated with it is equal to the standard deviation of the data-set divided by the square root of the sample size; an error bar is then your mean plus-or-minus 1.96 times this standard error (because to achieve a confidence interval of 95%, meaning the range within which you are 95% sure the real value falls, you take 1.96 standard errors on either side of the mean):V
                                σx = σ / √N
                                CFI95% = ±1.96σx

where σx (pronounced “sigma sub-x”) is the standard error, σ (pronounced “sigma”) is the standard deviation, N is the sample size, and CFI95% is the confidence interval.  Let's take a real-world example.  In 2021, I took an informal poll of some hundred of my friends asking about covid-19 vaccine side-effects that they perceived themselves, or people they knew, to have (or have not) experienced (N = 111).VI  I did not include error bars in the findings, so let's endeavor to rectify this, in part.  What I had deemed “severe” side-effects (things like vomiting) summed to 6, and dividing that by 111 gives the mean / roughly the percent chance of “severe” side-effects:
                        μ = 6 / 111 = 0.054 = 5.4%
where μ (Greek letter mu, pronounced “mew”) is the mean.  The standard deviation associated with this measure is given by:VII
                        σ = √(Σ(xᵢ – μ)² / N)

where xᵢ are each of the individual values.  In our case, there were only three possible values for the variable: I considered 0 (the person did not have a “serious” side-effect), 0.5 (it's debatable whether the person's side-effect fell into “serious” or some other category), and 1 (this person had a “serious” side-effect).  That means that the (xᵢ – μ)² term can also have only one of three values:
                        (0 – 0.054)² = 0.002916
                        (0.5 – 0.054)² = 0.198916
                        (1 – 0.054)² = 0.894916
Because there were only two respondents I deemed 0.5, five I deemed 1, and the remaining 104 were deemed 0, we can sum these squares simply:

        Σ(xᵢ – μ)² = (104 × 0.002916) + (2 × 0.198916) + (5 × 0.894916) = 5.175676
Now we can simply finish the standard deviation equation:
                        σ = √(5.175676 / 111) ≈ 0.2159
Equipped with the standard deviation, we can now estimate the standard error and the 95% confidence interval:
                σx ≈ 0.2159 / √111 ≈ 0.0205

                CFI95% ≈ ±1.96 × 0.0205 ≈ ±0.0402
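As a check on the arithmetic above, here is a minimal Python sketch of my own (not from the text) that recomputes the mean, standard deviation, standard error, and 95% confidence interval directly from the respondent counts (104 scored 0, two scored 0.5, five scored 1):

```python
import math

# Survey data as described in the text: 104 respondents scored 0, two scored 0.5,
# and five scored 1 (N = 111).
data = [0.0] * 104 + [0.5] * 2 + [1.0] * 5

N = len(data)                                             # 111
mu = sum(data) / N                                        # ~0.054, i.e. 5.4%
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / N)   # population standard deviation, ~0.216
std_err = sigma / math.sqrt(N)                            # standard error, ~0.0205
ci_95 = 1.96 * std_err                                    # half-width of the 95% confidence interval, ~0.040

print(f"mean = {mu:.3f}, sigma = {sigma:.4f}, SE = {std_err:.5f}, 95% CI = ±{ci_95:.5f}")
```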

Thus, while the mean we seemed to empirically gather was that about 5.4% of people can expect a “serious” side-effect from the covid-19 vaccines, we can now see that with 95% confidence, that value is 5.4% ± 4.0% (in other words, somewhere between 1.4% and 9.4%).  Though our definitions surely diverge greatly, one can look at the clinical trials for the covid-19 vaccines and see that this result likely passes the sniff test: Moderna's three ways of categorizing serious side-effects average to 3.7%, Pfizer's definition of “serious” or “severe” side-effects caught 2.6%, and J&J reported 0.4% “serious” side-effects in their vaccinated group.VIII  Given that many more people took the mRNA twins than the classical J&J vaccine, these numbers agree pretty well with my own far less careful look (in addition, their studies included a placebo-control group in which these adverse-event rates tended to be quite similar to those in the vaccine-experiment group, casting some doubt upon causation and implying that real vaccine side-effects are on the lower side.  In my own survey, one respondent whom I nevertheless counted as “severe” for vomiting admitted that they had stayed up drinking heavily the night before).

〰〰

        The fact is, we may never be in a situation where we are in perfect possession of all of the laws of nature, the initial conditions, and the empirical observations (there will tend to be a discrepancy, however small, between our measurement and nature—sometimes caused by the measurement process).IX  For this reason, there is likely to always be a gulf between epistemology and ontology: science is the ongoing process of reaching ever more accurate, and more precise, answers to our questions.  But let us not exaggerate this gulf: it is certainly far smaller than the gulf between reality and the body of knowledge produced by any other philosophy!  The truth is that when we interface with the world, when it comes time to cite our beliefs or even act on them, we must be working from a place of imperfection, ultimately collapsing a probability down to a final decision (poker-playing cognitive psychologist Annie Duke calls this “thinking in bets”).X  For many engineering purposes, the relevant science has gone well beyond the accuracy and precision practically needed; very often the focus on ever-improving our understanding is, at least at first, purely academic.  To this day, much of what NASA and other space-flight agencies and companies do, they do by using Newton's theory of gravitation—a theory which we now know to be an approximation of Einstein's general theory of relativity (itself expected to be an approximation of a deeper theory of quantum gravity which we are not yet in possession of).XI  As alluded to earlier, those keeping score of a baseball game will need to measure distances at a precision well smaller than the size of a baseball diamond (to differentiate a strike from a home-run, for example); by contrast, if one's normative goal is to locate which city a baseball diamond is inside of, they may be able to use far less precise measurements (one could cut their map up into city-sized cells and find which cell the baseball diamond's latitude and longitude fell inside of).  Remember: sometimes (depending on one's needed level of precision), we treat 1.4 ≈ 1 and other times, even 9 ≈ 1.XII

        In science, we use an nth approximation system.  The beginning step is the 0th approximation (often lovingly called a back-of-the-envelope calculation or a back-of-the-napkin calculation), and it is a rough order-of-magnitude estimate to see if one's hypothesis is anywhere near being a candidate for describing the real world (by first checking the lowest useful precision—the order of magnitude).  For example, after the covid-19 vaccine was released to the public, the CDC used the VAERS system to collect claims of deaths potentially caused by it.XIII  mRNA vaccines being a new technology, and the clinical trials having been absolved of their usual longitudinal-data hurdle under emergency authorization, some were worried about the potential for side-effects, and even deaths.XIV  Out of curiosity, I decided to see if the number of deaths expected, by coincidence, to occur in such a way as to appear to have been caused by the vaccine was near the number reported to VAERS.  So I did a rough estimate of the number of people (taking into account the higher-aged skew of vaccinated individuals) expected to die within six weeks of their first dose (the full period between getting the first shot and the second shot setting in).  This calculation ignores that one of the three vaccines is merely a one-shot regimen—in back-of-the-envelope calculations we allow ourselves such liberties if we do not think they will have an effect on the scale of an order of magnitude.  Indeed, I got an order-of-magnitude match in mid-2021: ~2,000 coincidental deaths versus VAERS's ~6,000.XV  If I had done a more precise calculation with higher certainty, we could be worried about why there appeared to be ~3x as many deaths as expected, but here we assume the calculation is at best accurate to order-of-magnitude precision, and so it is considered a successful prediction, and it lightly suggests that the VAERS-reported deaths may be mere coincidences and not caused by the vaccines at all.  In this preliminary probe, the figure would have to be off by at least a factor of ten before suggesting that something out-of-the-ordinary was afoot (which also means that this calculation alone is only good for detecting quite severe death-rates, yet we would indeed care about smaller risks than that, in reality).

〰〰

        To demonstrate a back-of-the-envelope calculation, let's endeavor to update our estimate of coincident deaths and see what it looks like today, in late 2022.  First, let's ask how many Americans died in 2020 (a year which did include covid-19 but largely did not include covid-19 vaccination): 3,358,814.XVI  Now we have to be careful about counting the deaths in 2021 and 2022, because we don't want to treat potential vaccine deaths as “normal” or “expected”—at the same time, we cannot just multiply the 2020 figure by three because we know there were many more deaths caused by covid-19 in those later years (because there were many more cases).  The deaths attributed to covid-19 in 2020, 2021, and 2022 are as follows: 346,037; 478,009; and 264,014, respectively.XVII  We should also keep in mind that there may be long-term trends in deaths to account for, covid-19 aside; the growth per year of deaths between 2014 and 2019 was, on average, +1.687171%.XVIII  Compounding this growth from the number of deaths in 2019 predicts the following death tolls:

                                2019: 2,854,838
                                2020: 2,903,004
                                2021: 2,951,983
                                2022: 3,001,788
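For those following along, here is a small Python sketch of my own (not from the text) of the compounding step, projecting the 2019 toll forward at the quoted average growth rate:

```python
# Project the 2019 death toll forward using the +1.687171% average annual growth
# rate quoted above (both figures are taken from the text).
deaths_2019 = 2_854_838
growth_rate = 0.01687171

for years_ahead, year in enumerate((2020, 2021, 2022), start=1):
    projected = deaths_2019 * (1 + growth_rate) ** years_ahead
    print(year, round(projected))
# Reproduces, to within rounding, the 2,903,004 / 2,951,983 / 3,001,788 projections above.
```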
As expected, covid-19 outpaced those numbers readily.  What happens if we merely take the 2020 death toll and factor in the known covid-19 deaths on top of it each year (despite the fact that covid-19 is only the third-leading cause of death, so this approach fails to count any growth in heart disease and cancer deaths):

                                2021: 3,836,823
                                2022: 3,622,828
                                TOT: 7,459,651

At first it seems strange that deaths might actually decrease between 2021 and 2022, but this is perhaps not that absurd when you consider both that the pandemic is on its way out and that excluding December (which hasn't occurred yet) may well exclude a particularly deadly winter-holiday month.  This at least provides for us a somewhat realistic view of how many deaths may have occurred in the past two years including covid-19 but excluding potential vaccination deaths.  The chance of dying during those two years was approximately 7,459,651 / 331,900,000 = 2.248%.XIX  About 653,000,000 covid-19 vaccination doses have been given during roughly this same time-frame.XX  Since so much of the population has now been vaccinated, we can safely avoid controlling for variables such as age.  Let's define a “coincident” death (one that may look like a vaccination death and then be reported to VAERS) as any death that occurs within two weeks of a vaccination dose (since the vaccine takes about two weeks to do its work, and since this is probably the time-frame in which a connection would cross many people's minds).  There are 52 weeks in a year, and so 52 two-week periods over two years.  Therefore the chance of death over those two years divided by the 52 two-week periods should provide an estimate for the chance of coincidentally dying in any two-week period:

                    PDeath = 2.248% / 52 = 0.04322%
Finally, we can multiply the number of doses administered by this probability to estimate the number of “coincident” deaths expected:
        DCoincident = 653,000,000 × 0.04322% = 282,000

Now, let's check in on where the VAERS-reported death count lies by this point in time:XXI
                                DVAERS = 17,640
Our back-of-the-envelope calculation finds not an order-of-magnitude match, but actually that an order of magnitude more potential candidate deaths were expected than actually got reported.  Does this demonstrate that the vaccine is not killing anyone?  No.  But as a preliminary probe, it suggests that the vaccine is not causing many deaths, and that it may not be fruitful for me to look further into it.  I can identify at least one potential source of error: there is the potential for a selection effect under-counting vaccine deaths because, by now, many people have had up to four shots (meaning they survived earlier shots and were more likely to be made of the stuff to survive future shots—by definition, anyone the vaccine killed could not have received further shots).  If you assume that everyone got four shots (which is of course not true), then perhaps you can divide DCoincident by four to control for survivor bias:
                    D'Coincident = 282,000 / 4 = 70,500

which yields an order-of-magnitude (though still significantly larger) match with VAERS.  Any attempt to control for neglected variables in our death probability would at minimum need to reduce the chance four-fold.  This is not necessarily that high of a hurdle to make it over, but considering that such a large proportion of the population has been vaccinated (and that the elderly are probably more likely to be vaccinated, anyway), I do not think it is likely that increasing the accuracy of this estimate would suggest that the vaccine is deadly, after all.  If these calculations had gone the other way, suggesting that the covid-19 vaccine was indeed causing substantial numbers of deaths, the preliminary result would likely have sent me down a longer process of more accurate estimates to figure out what was going on, causally.
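The entire chain of arithmetic above fits in a few lines; here is a Python sketch of my own that strings it together using only the figures quoted in the text (including the blunt four-fold survivor-bias correction):

```python
# Back-of-the-envelope estimate of "coincident" deaths, using the figures from the text.
deaths_2021 = 3_358_814 + 478_009            # 2020 baseline toll plus 2021 covid-19 deaths
deaths_2022 = 3_358_814 + 264_014            # 2020 baseline toll plus 2022 covid-19 deaths
total_deaths = deaths_2021 + deaths_2022     # ~7,459,651 over the two years

us_population = 331_900_000
p_death_two_years = total_deaths / us_population          # ~2.248%

two_week_periods = 52                                     # 104 weeks / 2
p_death_two_weeks = p_death_two_years / two_week_periods  # ~0.0432% per two-week window

doses = 653_000_000
coincident = doses * p_death_two_weeks                    # ~282,000 expected "coincident" deaths
survivor_adjusted = coincident / 4                        # blunt correction for multi-dose survivor bias
vaers_reported = 17_640

print(round(coincident), round(survivor_adjusted), vaers_reported)
# Roughly 282,000 (or ~70,500 after the survivor-bias correction) expected versus
# 17,640 reported, which is the order-of-magnitude comparison made in the text.
```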

〰〰

        The ontological model behind a 0th approximation / back-of-the-envelope calculation is referred to as a toy model because it is a very simple, to-the-point mechanical model of the system at play.  Here, one takes a simple picture of an idealized version of a system where only the largest contributing mechanisms come into play—it is easily drawn and easily explained.  Such pictures are very important to science and have been to a degree lost as scientific projects have become less concerned with mechanical philosophy or ontology, generally.  From sketches of evolutionary family trees, to Feynman diagrams shedding light on the theory of quantum electrodynamics (QED), the deep relationship between mathematical geometry and the physical world helps us to understand the grand machine that is the cosmos.  Oftentimes, one obtains their back-of-the-envelope calculation by first imagining (or even drawing) a picture of their toy-model explanation for the system being proposed as producing the phenomenon.  Other times, one may be more directly extending previously known theory, making simplifying assumptions along the way; here, it is worth pausing to reflect on the picture such assumptions and calculations are painting, and how realistic it is or isn't.XXII  Though there exists no literal, ontological world of forms as the ancient Greek philosopher Plato argued in favor of, the concept is useful if it is viewed epistemologically as the collection of idealized toy models of facets of the world: there may exist no perfect sphere anywhere in the cosmos, but many objects clearly come very close, and so the tractable mathematical descriptions of such simpler objects, when brought to bear on imperfect reality, perform extremely well in describing it.XXIII

        There is even some argument to be made that there exist slightly useful -1st approximations in the form of what I call pure philosophical arguments: when one knows that there are roughly two competing causes of an effect, and can be relatively sure which one out-competes the other, before doing any calculation, one might be able to make a qualitative prediction about the future.  For example: it is thought that if the ratio of dark matter density to dark energy density is on either side of a critical point, the universe will expand forever or eventually collapse back in on itself like a big bang in reverse, often called a big crunch (if, however, these densities are perfectly balanced, it should remain stable without expanding nor shrinking over time).XXIV  If one happens to know that there is more of one than the other, they can easily conclude that the universe will ultimately expand or contract—without being able to put any numerical magnitude on the time-scales involved.  More practical examples of this concern both illegal immigration at the U.S.'s southern border and the prevention of covid-19: building a border wall and masking have both been politically polarizing, yet both utilize the same basic argument that a physical barrier is likely to cut down on what may pass beyond such a boundary (again, this merely gives a likely effect-direction without being able to quantify the actual effect-size, nor the cost-benefit analyses concerned).XXV  Perhaps the best argument in favor of this simple kind of model is the second law of thermodynamics so beloved by Eddington: it merely states that the probability is highly in favor of entropy increasing over time, based on Boltzmann's simple micro-state arguments (particularly in its classical version, it makes no precise mathematical predictions about the rate of increase on its own).  Of course, the lower the value of N in one's Nth approximation, the more skeptical / less certain one should be of their result.  In the case of these pure philosophical arguments, there is zero precision; one cannot be terribly sure of the accuracy of the direction given either, but it should be better than a random guess if the underlying assumptions are any good.

        But what if you want to get serious about solving your problem?  Back-of-the-envelope calculations tend to be used largely to tell you whether a question is worth investigating further, and sinking resources such as your time into.  The real answers begin when you commit to a real first approximation.  Here, you are submitting your work as an answer to be taken quite seriously, and you are more open to methodological criticism because you are allowing yourself fewer liberties.  The attempt is to identify the largest known contributor to predicting the effect, and then perform the calculation.  Next, you might take the second-highest known contributing factor into account for a second approximation.  The second approximation must include the contributing factor from the first approximation as well, of course.  In this way, one can perform any Nth approximation, in which, strictly speaking (although researchers may be looser in practice), each further approximation is supposed to make less of a contribution to changing the answer: one is supposed to get diminishing returns from each revision, such that one may stop early depending on one's purposes (based on the level of precision needed).  This is not always easy to know ahead of time, of course.  But generally speaking, if the first approximation gives ~75% of the answer and the second approximation adds ~15%, bringing it up to a total of ~90%, then a third approximation can only add less-than-or-equal-to ~15% (and not just because there is only 10% left to be explained, but because the marginal causation incorporated into the second approximation explained a marginal ~15% of the effect, and each new factor is supposed to be weaker than the last).XXVI  If we think in terms of calculus, an ∞th approximation is the analytical answer (not an approximation at all): as N approaches infinity, the result approaches perfect precision (and accuracy).XXVII
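As an illustration of my own (a stand-in example, not one from the text), the Taylor series for eˣ behaves exactly like this Nth-approximation scheme: each successive term contributes less than the one before, and the truncated sum approaches the exact, analytical answer as N grows:

```python
import math

def nth_approximation(x: float, n: int) -> float:
    """Sum of the first n + 1 terms of the Taylor series for e^x about 0."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

exact = math.exp(1.0)
for n in range(6):
    approx = nth_approximation(1.0, n)
    print(n, round(approx, 6), f"percent error = {abs(approx - exact) / exact:.2%}")
# Each step adds a smaller correction than the last: the percent error falls from
# ~63% to ~26%, ~8%, ~1.9%, ~0.37%, ~0.06%, approaching zero as N grows.
```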

        Sometimes, knowing that there will be error in one's estimate, one purposefully biases their estimate towards under- or over-counting (particularly when working with a normative goal): this is called a conservative estimate because one is being conservative in their evaluation.  One may do this even in a descriptive context if one wants to bias their answer in favor of the null hypothesis.XXVIII  Imagine, for example, that one is attempting to account for the gravity associated with dark matter (an invisible attractive force in the universe, of unknown origin) with, say, black holes (the hypothesis being that the ordinary gravity of very many black holes is actually causing the net effect that looks like dark matter to astrophysicists).  In the case of calculating what percent of a galaxy's mass can be accounted for by black holes, one will need to estimate the average mass of a black hole and the average number of black holes in a galaxy.  In doing so, one will need to make assumptions and will sometimes have a range to work with, rather than a single answer.  One could take the mean value of these and see how it works to try and establish whether or not the approach is viable.  But if one wanted to perform a conservative estimate, one would choose their assumptions, or choose from their range of available values, the value that is least likely to help their case (one should always avoid doing the opposite, which would be to cherry-pick values that are most likely to help one's case).  In the present example, since one is skeptical that black holes will account for dark matter (the null hypothesis being that a given explanation is not correct until proven otherwise), one might bias towards the lower end of both the number of black holes and the average black hole mass—that way, if the calculation broadly works, they can be more certain that they are onto something (this may be thought of as a built-in falsification attempt).  However, we typically go with the best available values when performing such descriptive estimates; it is when we have a normative goal that conservative estimates come most in handy.  For one, if one's hypothesis seems pretty strong in the general case, they may attempt to persuade skeptics and holdouts by demonstrating that even a conservative version of the estimate works well.XXIX  One example from my life is that when I am counting calories to lose weight, there is often a lot of wiggle room with regards to the calorie count of a given meal.  If one uses a database, for example, there are often multiple entries for the same meal with slightly different calorie counts, none of which specifies the particular brands or weights of the exact, say, grilled cheese sandwich that one ate.  If I were reporting for a scientific study on how many calories I ate, I would probably take the median value to try and be as accurate as possible.  But because my main normative goal was weight-loss (via caloric restriction), I instead might choose (within reasonable variance) one of the higher calorie values in the database, because if I am tending to over-count my caloric intake, I am more likely to successfully meet my restricted goal and consequently lose weight.  (Notice that I said “within reasonable variance”—one must still be careful that the accuracy of their value bears some resemblance to reality, otherwise one might end up eating dangerously fewer calories than they have recorded.)
Conservative estimates are meant to maximize goal-seeking (including persuasion) by handicapping one's estimate to demonstrate that a description still holds, or otherwise to ensure that a normative goal is met despite error and bias that might unwittingly place one outside of their valued goal.
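As a toy sketch of that idea (all of the numbers below are hypothetical placeholders of my own, not real astrophysical values), a conservative estimate picks the least favorable end of each uncertain range and then checks whether the conclusion still survives:

```python
# Conservative versus central estimates, with purely hypothetical placeholder ranges
# (these are illustrative numbers of my own, not real astrophysical measurements).
black_holes_per_galaxy = (1e7, 1e9)   # hypothetical range: (low, high)
mass_per_black_hole = (5.0, 50.0)     # hypothetical range, in solar masses
required_mass = 1e12                  # hypothetical dark-matter mass to account for, in solar masses

def central(rng: tuple) -> float:
    """Mid-range ('best available') value."""
    return (rng[0] + rng[1]) / 2

def conservative(rng: tuple) -> float:
    """Least favorable end of the range; here, the low end."""
    return rng[0]

for label, pick in (("central", central), ("conservative", conservative)):
    accounted = pick(black_holes_per_galaxy) * pick(mass_per_black_hole)
    print(f"{label}: {accounted / required_mass:.2%} of the required mass accounted for")
# If even the conservative pick covered the required mass, the hypothesis would pass
# its built-in falsification attempt; with these made-up numbers, it does not.
```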

        One has to be careful to reach the right balance when performing their estimations.  If one is too rigid and will settle for no less than a perfectly analytical answer, they will never have a calculation fit to publish.XXX  On the other hand, if one blows out one's allowed error bars all the way, every quantity is approximately equal to every other quantity, which is perfectly useless.


Footnotes:

0. The Philosophy Of Science table of contents can be found here (footnotephysicist.blogspot.com/2022/04/table-of-contents-philosophy-of-science.html).

I. See The Dream Of Reason by Gottlieb (pp. 179-180, 451, 462) which further cites Symposium by Plato (385-370 B.C.) by way of The Collected Dialogues Of Plato edited by Edith Hamilton and Huntington Cairns (Princeton University Press) (1963) (pp. 562) (while I have not read this collection, it is not unlikely that I read at least some of it for a Philosophy 101 course in college). The estimated publication date for Symposium comes from “Symposium (Plato)” (Wikipedia) (accessed 11/28/2022) (https://en.wikipedia.org/wiki/Symposium_(Plato)) which further cites The Symposium and the Phaedrus: Plato's Erotic Dialogues by William S. Cobb (State University Of New York Press) (1993) (pp. 11) and The Pregnant Male As Myth And Metaphor In Classical Greek Literature by David D. Leitao (Cambridge University Press) (2012) (pp. 183) (though I am not familiar with either of these works).

II. For more on the difference between accuracy and precision, refer back to the “Empiricism” chapter.

III. Where the terms enclosed by “|” symbols are to be treated as absolute values—that is, positive even if the term evaluates with a negative sign on its own. See “Percentage Error: The Difference Between Approximate And Exact Values, As A Percentage Of The Exact Value” (Math Is Fun) (2017) (https://www.mathsisfun.com/numbers/percentage-error.html).

IV. This graph was produced using a very useful internet program called Desmos (https://www.desmos.com/calculator). The gray line represents f(t) = t² and the black line represents g(t) = 2ᵗ. One can view and manipulate these graphs in real-time: https://www.desmos.com/calculator/1q6eladali (I encourage you to zoom in and out to increase or decrease the precision, and pan back and forth to see to what degree their outputs diverge given different inputs, especially considering an infinite graph cannot be easily captured in a single image; note that the colors online are red and blue rather than the gray and black used for print).

V. See “Error Bar” (Wikipedia) (accessed 12/1/2022) (https://en.wikipedia.org/wiki/Error_bar) which further cites “Standard Deviation, Standard Error: Which 'Standard' Should We Use?” by George W. Brown (JAMA / American Journal Of Diseases Of Children) (1982) (https://jamanetwork.com/journals/jamapediatrics/article-abstract/510667) (though I have not yet read this article) and “Standard Error” (Wikipedia) (accessed 12/1/2022) (https://en.wikipedia.org/wiki/Standard_error) which further cites “Standard Deviations And Standard Errors” by Douglas G. Altman and J. Martin Bland (BMJ) (2005 / 2010 / 2015) (https://www.bmj.com/content/331/7521/903) and The Cambridge Dictionary Of Statistics by Brian S. Everitt and Anders Skrondal (Cambridge University Press) (2003 / 2010) (though I have not read this Wikipedia article in full nor any of the works that article cited). For more on these topics, see the “Mathematics”, “Statistics, Probabilities, And Games”, and “Methodology” chapters.

VI. The only place I published these results was in a June 25th, 2021 Twitter thread: https://twitter.com/schwinn3/status/1408325896293990400?s=20&t=KK4vLhzkjNpow1L1HowxdA, with mention in a July 7th, 2021 Instagram post: https://www.instagram.com/p/CRDIrZVg8mJ/?utm_source=ig_web_copy_link.

VII. See “Standard Deviation” (Wikipedia) (accessed 12/1/2022) (https://en.wikipedia.org/wiki/Standard_deviation) (though I have not read this article in its entirety).

VIII. See “Efficacy And Safety Of The mRNA-1273 SARS-CoV-2 Vaccine” by Baden et al. (https://www.nejm.org/doi/full/10.1056/nejmoa2035389); “Safety And Efficacy Of The BNT162b2 mRNA Covid-19 Vaccine” by Polack et al. (https://www.nejm.org/doi/full/10.1056/nejmoa2034577); and “Safety And Efficacy Of Single-Dose Ad26.COV2.S Vaccine Against Covid-19” by Sadoff et al. (https://www.nejm.org/doi/full/10.1056/NEJMoa2101544).

IX. See Modern Physics by Serway, Moses, and Moyer (pp. 175-177).

X. In fact, she wrote a book of that name, see Thinking In Bets: Making Smarter Decisions When You Don't Have All The Facts by Annie Duke (Portfolio) (2018) (though I have yet to read this volume).

XI. See "Effective Theory" by Lisa Randall (Edge / Harper Perennial) (2017 / 2018) (https://www.edge.org/response-detail/27044) in This Idea Is Brilliant edited by Brockman (pp. 218).

XIII. See “Selected Adverse Events Reported after COVID-19 Vaccination” (CDC) (https://www.cdc.gov/coronavirus/2019-ncov/vaccines/safety/adverse-events.html).

XIV. See for example "How To Save The World, In Three Easy Steps." by B. Weinstein, R. Malone, and S. Kirsch (https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5idXp6c3Byb3V0LmNvbS80MjQwNzUucnNz/episode/QnV6enNwcm91dC04NjgzNzgz?ep=14).

XV. I reported the result in the following June 24th, 2021 Twitter thread: https://twitter.com/schwinn3/status/1408266549606195205. Such a calculation would of course need to be brought up to date (taking into account how many people are now vaccinated, the new age-skew, and the current number of VAERS-reported deaths, but I use it here as an example of a back-of-the-envelope calculation).

XVI. See “Provisional Mortality Data — United States, 2020” by Farida B. Ahmad, MPH et al. (CDC) (2021) (https://www.cdc.gov/mmwr/volumes/70/wr/mm7014e1.htm).

XVII. See “COVID-19 Tracker” (Microsoft) (accessed 12/1/2022) (https://bing.com/covid/local/unitedstates), including the “Graphs” tab (https://bing.com/covid/local/unitedstates?vert=graph).

XVIII. See “United States Indicators: Total Deaths” (PRB) (2021) (https://www.prb.org/usdata/indicator/deaths/table/) which further cites the CDC. I was originally going to use ten years of data, but I switched to five to be conservative in my estimate as it appeared that deaths grew by unusually little in the most recent two years of the data.

XX. See “Selected Adverse Events Reported after COVID-19 Vaccination” (CDC) (accessed 12/1/2022) (https://www.cdc.gov/coronavirus/2019-ncov/vaccines/safety/adverse-events.html).

XXI. See "Selected Adverse Events Reported after COVID-19 Vaccination" (https://www.cdc.gov/coronavirus/2019-ncov/vaccines/safety/adverse-events.html).

XXII. As mentioned in previous chapters, I believe that it was the failure to take such concerns seriously that has misled the quantum physicists to believe they have a more complete and fundamental theory than they do, and that conversely it was their opponents' (such as Einstein's) concerns with the ontology, or lack thereof, quantum physics implied which allowed them to see the issue.

XXIII. See The Dream Of Enlightenment by Gottlieb (pp. 196-197, 226-228). I was also under the impression that Peterson has stated this connection (likely in relation to psychologist Carl Jung's related concept of archetypes), but I cannot find it.

XXIV. See the "Omega" chapter in Cosmic Numbers: The Numbers That Define Our Universe by James D. Stein (Basic Books) (2011) (pp. 185-201); Astrophysics For People In A Hurry by Neil deGrasse Tyson (W. W. Norton & Company, Inc.) (2017) (pp. 94, 99-100, 102-103, 105-110); The Elegant Universe by Greene (pp. 234-235); Phenomenal Physics: A Totally Non-Scary Guide To Physics And Why It Matters by Isaac McPhee (Metro Books) (2016) (pp. 138-139); Our Mathematical Universe by Tegmark (pp. 99-100, 366-368); On Gravity by Zee (pp. 135-138); The Endless Universe: Beyond The Big Bang—Rewriting Cosmic History by Paul J. Steinhardt and Neil Turok (Broadway Books) (2007) (pp. 6, 9-10, 35-36, 42-45, 47, 51-52, 59, 62-64, 66, 198, 253); Fashion, Faith, And Fantasy by Penrose (pp. 5, 217, 221, 224-227, 253-253, 272, 281-282); The Great Unknown by du Sautoy (pp. 213-215, 365); A Brief History Of Time by Hawking (pp. 41-42, 45-48, 119, 189); Brief Answers To The Big Questions by Hawking (pp. 32, 63); The Cosmic Web: Mysterious Architecture Of The Universe by J. Richard Gott (Princeton University Press) (2016) (pp. 16-19, 23-27, 82-83, 96, 193-226); and Astronomy: A Self-Teaching Guide: Eighth Edition by Dinah L. Moché (Wiley General Trade) (1978 / 1981 / 1987 / 1993 / 2000 / 2004 / 2009 / 2015) (pp. 188-189, 192-193) (though I have not yet finished this work).

XXV. See my first December 13th, 2021 Twitter thread: https://twitter.com/schwinn3/status/1470456019193151492?s=20&t=07IWv66C4HRlZ_7QvP7JtA, and my second December 13th, 2021 Twitter thread: https://twitter.com/schwinn3/status/1470484800062566413?s=20&t=07IWv66C4HRlZ_7QvP7JtA. I chose these examples so as to place political bias on display: both arguments are effectively the same and yet, typically, the political left claims border-walls do not impede border-crossings while masks do impede viral contraction, and the right, vice-versa.

XXVI. In practice, if one finds that what they had attempted as their third approximation accounts for more of the effect than their second approximation had, then strictly they should reconsider and switch which one is labeled the second and third approximation. Technically, accuracy should not change much between approximations, as we try not to compromise on it as much as we do on precision; as mentioned before, an imprecise but accurate answer can be quite useful, whereas a highly precise but inaccurate answer is entirely meaningless.

XXVII. This is true in-principle, but in-practice, we are likely only to asymptotically approach a perfect understanding, replete with diminishing returns. See the “Mathematics” chapter.

XXVIII. In the above estimation of “coincident” deaths, you witnessed me take a few measures to get a conservative estimate (such as dividing the answer by four at the end as a blunt over-correction for survivor bias on repeated dosings).

XXIX. I again refer to the blunt control for survivor bias in the above “coincident” death estimation. I got a result and then probed the limits of it by seeing if it still held true under less favorable circumstances (removing the effects of multiple-dose survivor bias).

XXX. A great related article is “Effective Theory” by Randall (https://www.edge.org/response-detail/27044) in This Idea Is Brilliant edited by Brockman (pp. 217-219).
