There is a logical flaw in the statistical methods used across experimental science. This fault is not a minor academic quibble: it underlies a reproducibility crisis now threatening entire disciplines. In an increasingly statistics-reliant society, this same deeply rooted error shapes decisions in medicine, law, and public policy with profound consequences. The foundation of the problem is a misunderstanding of probability and its role in making inferences from observations.

Aubrey Clayton traces the history of how statistics went astray, beginning with the groundbreaking work of the seventeenth-century mathematician Jacob Bernoulli and winding through gambling, astronomy, and genetics. Clayton recounts the feuds among rival schools of statistics, exploring the surprisingly human problems that gave rise to the discipline and the all-too-human shortcomings that derailed it. He highlights how influential nineteenth- and twentieth-century figures developed a statistical methodology they claimed was purely objective in order to silence critics of their political agendas, including eugenics.

Clayton provides a clear account of the mathematics and logic of probability, conveying complex concepts accessibly for readers interested in the statistical methods that frame our understanding of the world. He contends that we need to take a Bayesian approach―that is, to incorporate prior knowledge when reasoning with incomplete information―in order to resolve the crisis. Ranging across math, philosophy, and culture, Bernoulli’s Fallacy explains why something has gone wrong with how we use data―and how to fix it.

30 reviews for Bernoulli’s Fallacy: Statistical Illogic and the Crisis of Modern Science

  1.

    Racha

    I don’t have any background in higher math, but I’m a huge fan of Aubrey Clayton’s accessible, engaging, enlightening writing in the Boston Globe. I doubted that I would have the background to understand this book, but the author took me with him every step of the way throughout the intellectual journey meticulously laid out in this incredible book. I can’t speak to how you might experience it as a mathematician, but I can highly recommend it for people who aren’t.

  2.

    Brendan

    This book should be added to the reading list of anyone teaching statistics.

  3.

    Crawford C. Crews

    Clayton’s account of the history of probability theory is lucid and compelling. The special attention given to the many abuses of statistics in service of bigoted, oppressive schemes and projects helps to make clear just how much is at stake in this arena. The critical analysis of the failings of frequentism will persuade all but the most doctrinaire acolytes of this school of thought that something is rotten in the Kingdom of Galton and Pearson. The defense of Bayesianism is rigorous enough to satisfy serious students of probability and clear enough to engage and persuade interested layfolk. Perhaps Clayton’s most impressive accomplishment here is the production of a text that is genuinely entertaining, indeed often hilarious, without sacrificing the quantitative precision and clarity necessary to offer as profound and convincing a diagnosis as he does of statistics’ ills and of the prescription that can cure them.

  4.

    Kurt Z. House

    Understanding the long fight within the statistics community over how to infer probabilities is both a riveting history as well as essential to fully appreciating the modern crisis of reproducibility. The author provides a lucid narrative that both explains the controversy’s core misunderstanding and illuminates the historic path dependency that enabled the institutionalization of that misunderstanding.

    Further, I’ve long thought there was an unfortunately large gap between scientific literature/text books and popular science. The gap is most salient when considering the near total dearth of even basic math in popular science—which is terribly unfortunate.

    Bernoulli’s Fallacy brilliantly straddles that gap by providing the reader with the mathematical tools to fundamentally understand the core controversy between frequentists and Bayesians.

    Strongly recommended.

  5.

    Rich

    An eloquent comparison of Bayesian and Frequentist inference.

  6.

    Edoardo Angeloni

    The author discusses two great laws of statistics: Bayes’ law and the law of large numbers. The success of the discipline owes much to these results, and nineteenth-century research built on them; indeed, they give a good approximation to reality, and the agreement between theory and experiment rests on them. In the twentieth century, Savage, de Finetti, and others took up the question of subjective versus objective probability, which put into crisis those same laws that had been fundamental in the 1800s.

  7.

    Arthur R. Silen

    In his new book, Bernoulli’s Fallacy, author Aubrey Clayton explains clearly and in great detail why the so-called ‘frequentist school’ of statistical research and accounting embodies a fatal omission, a flaw in reasoning that obfuscates and confuses both scientific researchers and ordinary people in matters involving social science, economics, law, or psychology, not least because researchers often have hidden agendas that bias data selection, characterization, and analysis. It’s a bit complicated to describe in a short review, but the bottom line concerns one of the cardinal tenets of scientific research: that experiments must be falsifiable and that their products and conclusions must be replicable, meaning that two or more different researchers can develop data sets from a specified population, apply the same criteria for statistical analysis, and generally come up with the same answers and conclusions. Clayton cites empirical studies of research papers published within the past twenty years showing that the claims and results of more than fifty percent of certain categories of statistical research, much of it involving human behavior studies, cannot be replicated when the experiment is repeated by another researcher attempting to achieve the same result. This has serious implications for follow-on research, because the data and the methodology become unreliable. More importantly, a fundamental assumption of the so-called ‘frequentists’ is that outliers should be excluded from the populations of entities being studied. In plain language, this offers a perfect opportunity to ‘cook the books’, biasing a study that has been advertised as scrupulously neutral in its data selection.

    Evidence of this invidious bias is found in the career histories of the founders of modern statistics, Francis Galton, Karl Pearson, and Ronald Fisher, all Britons, who individually and collectively codified and professionalized the modern discipline of statistics. These men, along with many others, were avid proponents of the pseudoscience known as eugenics, the idea that certain desirable social qualities (generally found in well-off, well-educated middle-class people) were heritable, meaning that those traits could be passed on to their offspring. This was in the century that preceded the study of human cell biology and the double helix. What the world got was bigotry dressed up as hard science, and Galton, Pearson, and Fisher worked their will to ensure that their views of human heredity were integral to their conception of statistical science and Gaussian probability. Distasteful as these views are, at one time they were embedded in law and public policy in Great Britain, the Commonwealth countries, and the United States. Garden-variety racism gained a whole new aura of academic respectability from the 1890s through the 1930s, until ‘scientific racism’ in Germany saw its ultimate denouement in the death camps. In the United States, many states had laws permitting forced sterilization of people who were believed to be feebleminded. This is what happens when a system of accounting is allowed to exist with little consideration given to the collateral consequences of what amounts to circular reasoning and self-fulfilling prophecy.

    Testing for statistical significance, specifically the eponymous ‘null hypothesis significance test’, appears to have been one of those things that Pearson and Fisher cooked up to burnish their reputations for mathematical innovation, and most practitioners, knowing little about it and not thinking through its implications in any detail, simply accepted it as long as it did not impede getting their research published. Problems arose with uncontrollable rates of false positives in studies, largely attributable to base rate neglect, a cognitive bias that comes from not comparing the outcome of a test or experiment with that result’s relative frequency in the outside world. Daniel Kahneman, in his magisterial book Thinking, Fast and Slow, touched on this family of biases in his description of what he called the ‘Linda problem’ (too much information about a hypothetical person named Linda, biasing the reader’s answer to a question about Linda’s current employment status). It is also at work in the famously controversial ‘Monty Hall problem’ from Let’s Make a Deal, another example often used to show that people fail to think through the hypothetical situations they are presented with, because their assumptions are anchored so deeply in the story line and not in the real world.

    The one thing that Galton, Pearson, and Fisher insisted upon was that the prior (a priori) probability of any theory being tested be isolated from prior human experience, and that researchers not use their own common sense and prior knowledge to frame the subject of inquiry. The idea was that the researcher should proceed from a posture of either complete ignorance of, or indifference to, extrinsic knowledge about the proposition being tested. The term ‘statistical significance’ arose and came into wide use without anyone ever questioning whether that measure of statistical coherence actually measured anything meaningful.

    Clayton advocates the notion that everything has a history, and not everything is immediately countable. The notion that social scientists are at an implicit disadvantage compared with those who do physics and chemistry gained traction in academic circles, leading to absurd efforts to mathematize research in political science, sociology, economics, and so on, because that was the only way to get one’s work published in a reputable academic journal and thus ascend the rungs of the academic career ladder.

    Learned journals and textbooks now find themselves burdened with published articles and papers whose provenance is suspect, either because the research findings they report cannot be replicated in subsequent experiments (more than 50 percent in some fields), or because the studies themselves were, as Clayton describes them, Type III errors, in which an observed phenomenon was real in a statistical sense but did not actually support the scientific theory it was supposed to, or was something idiosyncratic to the experiment that yielded data of no use to anyone else. The problem with base rate neglect is that it prompts people to jump to conclusions, sometimes via what is called the availability bias, and this can lead to catastrophic consequences in the context of a criminal prosecution. The prosecutor’s fallacy occurs when law enforcement uses statistical infrequency (rarity) to argue that, because one or more infrequent observations coincide in the accused’s behavior or in the circumstances of a death, the conclusion had to be robbery or murder, since the confluence of the accused’s characteristics compelled no other conclusion. A posterior look at the facts of the cases Clayton describes indicates that the prosecutors took everything at face value without probing whether significant facts had been overlooked, because a statistician told them the confluence of observations was too rare to be ignored, without ever matching it against the relative likelihoods established by the base rate. Sticking with just the data you have generates false positives with marked frequency. In other words, investigators need to do a better job, which is hard in a world influenced by the speed with which ‘Law and Order’ closes out its cases within the space of an hour, with commercial breaks.
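    A minimal numerical sketch of the base-rate point made above, using invented numbers (a rare condition and an imperfect test; none of these figures come from the book or the review):

```python
# Hypothetical numbers illustrating base rate neglect: a condition with a
# 1-in-1,000 base rate and a test that is 99% sensitive with a 2% false-positive rate.
base_rate = 0.001           # P(condition)
sensitivity = 0.99          # P(positive | condition)
false_positive_rate = 0.02  # P(positive | no condition)

# Bayes' rule: P(condition | positive)
p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

print(f"P(positive result)             = {p_positive:.4f}")
print(f"P(condition | positive result) = {p_condition_given_positive:.3f}")
# Only about 5% of positive results here reflect the condition; the other ~95% are
# false positives, because the base rate is so low. Ignoring that base rate is the
# same error that drives the prosecutor's fallacy described above.
```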

    This is an important book. Statistics is not an easy subject to learn; and making sense of what comes out can be even more difficult. I like the idea of teaching Bayesian reasoning as a fundamental attribute of someone who has learned something in school that is intrinsically valuable. The world of Gaussian statistics and bell curves has much less applicability in real life. Learning to think probabilistically goes against the grain of wanting to know something with absolute accuracy; but that is a comfortable illusion fostered by our general unwillingness to go beneath and underneath what we’re seeing out in the world. Thinking is hard work, and it’s hard to get out of our comfort zone.

    I commend Aubrey Clayton for writing a highly readable, technically accurate book about an important skill that I am still developing.

  8.

    John R. Meyers

    I’m a semi-pro student of probability and have never really bridged the gap (in my mind) between frequentism and Bayesianism. This is the first author to convincingly reconcile them. If you have taken a side in this battle, lay down your sword and read this book.

  9.

    JIZreview

    I am enjoying the book but take issue with the catchy title, Bernoulli’s Fallacy. Bernoulli posed the following question:

    Given an urn that has been filled with white and black pebbles in the ratio 3:2, how many pebbles have to be removed and replaced in the urn in order to have a 99.9% certainty that the experimental ratio will fall in the range 3:2 plus or minus 5%?

    This is a well-constructed question with a unique answer, because Bernoulli knew the actual ratio of white to black stones in the urn. He did not attempt to answer the trickier question of determining an unknown ratio from the experimental ratio obtained by sampling the pebbles in the urn. Laplace did that calculation much later.
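    For what it’s worth, here is a minimal sketch of that forward calculation, under my own reading of "3:2 plus or minus 5%" as the observed fraction of white pebbles landing in [0.58, 0.62]; other readings of the tolerance change the answer, and SciPy is assumed to be available:

```python
import math
from scipy.stats import binom  # assumes SciPy is installed

p_true = 0.6          # true fraction of white pebbles (ratio 3:2)
lo, hi = 0.58, 0.62   # one reading of "3:2 plus or minus 5%" as a band on that fraction
target = 0.999        # Bernoulli's standard of "moral certainty"

n = 100
while True:
    k_lo = math.ceil(lo * n)
    k_hi = math.floor(hi * n)
    # Probability that the observed fraction falls inside [lo, hi] after n draws
    coverage = binom.cdf(k_hi, n, p_true) - binom.cdf(k_lo - 1, n, p_true)
    if coverage >= target:
        break
    n += 100  # coarse search; shrink the step for a sharper threshold

print(f"Roughly n = {n} draws give coverage {coverage:.4f}")
```

    The exact binomial calculation needs far fewer draws than the conservative bound Bernoulli himself derived; either way, the direction of the inference runs from a known ratio to the behavior of samples, not the reverse.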

    The question comes down to whether or not Bernoulli thought he was actually putting limits on an unknown ratio. If that was the case, the title is acceptable. My reading of Bernoulli’s mind is that he was in fact doing exactly what he said he was doing.

    Bernoulli clearly does not need me to defend him, but I thought Clayton took a cheap shot at Bernoulli.

  10.

    David Hirsh

    Quite possibly the most important text any physician may read outside of their actual specialty. To not incorporate the lessons taught here is to proceed as if in a dimly lit corridor of the hospital.

  11.

    Clay Garner

    “Rejecting” or “accepting” a hypothesis is not the proper function of statistics and is, in fact, dangerously misleading and destructive. The point of statistical inference is not to produce the right answers with high frequency, but rather to always produce the inferences best supported by the data at hand when combined with existing background knowledge and assumptions.’’

    ‘Answers are not inferences’! Key idea here. Why?

    “Science is largely not a process of falsifying claims definitively, but rather assigning them probabilities and updating those probabilities in light of observation. This process is endless. No proposition apart from a logical contradiction should ever get assigned probability 0, and nothing short of a logical tautology should get probability 1.’’

    Definite conclusions are not scientifically possible!

    “The more unexpected, surprising, or contrary to established theory a proposition seems, the more impressive the evidence must be before that proposition is taken seriously.’’

    Knowledge isn’t wisdom. Data isn’t understanding.

    Therefore . . .

    “It is impossible to “measure” a probability by experimentation. Furthermore, all statements that begin “The probability is …” commit a category mistake. There is no such thing as “objective” probability.’’

    Judgement can’t be found using mathematics.

    How bad is this problem?

    “The methods of modern statistics—the tools of data analysis routinely taught in high schools and universities, the nouns and verbs of the common language of statistical inference written in journals, the theoretical results of thousands of person-years’ worth of effort—are founded on a logical error.’’

    “Logical error”! How serious?

    “These methods are not wrong in a minor way, in the sense that Newtonian physics is technically just a low-velocity, constant-gravitational approximation to the truth but still allows us successfully to build bridges and trains. They are simply and irredeemably wrong. They are logically bankrupt, with severe consequences for the world of science that depends on them.’’

    Wow! What’s needed?

    “However, I don’t want to risk understating the size of the problem or its importance. The problem is enormous; addressing it will require unwinding over a century of statistical thought and changing the basic vocabulary of scientific data analysis. The growth of statistical methods represents perhaps the greatest transformation in the practice of science since the Enlightenment.’’

    ‘Problem enormous’. How should we feel?

    “The suggestion that the fundamental logic underlying these methods is broken should be terrifying. Since I was first exposed to that idea almost fifteen years ago, I’ve spent nearly every day thinking, reading, writing, and teaching others about probability and statistics while living with the dual fears that this radical proposal I’ve committed myself to could be wrong and that it could be right.’’

    1. WHAT IS PROBABILITY?
    2. THE TITULAR FALLACY
    3. ADOLPHE QUETELET’S BELL CURVE BRIDGE
    4. THE FREQUENTIST JIHAD
    5. THE QUOTE-UNQUOTE LOGIC OF ORTHODOX STATISTICS
    6. THE REPLICATION CRISIS/OPPORTUNITY
    7. THE WAY OUT

    Clayton also presents the background of the disaster . . .

    “This also brings us into the age of evolution, the main driver behind the development of most of the new statistical tools. In this setting, in contrast to the lyric descriptions of the average man we saw from Quetelet, the quantification of human differences started to take on a menacing undertone, infused with racism, ableism, and settler colonialism.’’

    Man, this is really . . . bad. How bad?

    “One possible explanation is that they were equally dogmatic about what they wanted to do with statistics, for which they needed to assert an authority founded on what they claimed was objective truth. In a continuation of the trend we’ve already observed, their methods became more cloaked in objectivity as statistics gained more political importance, until by the end the stakes were such that they couldn’t allow any hint of subjectivity. Galton, Pearson, and Fisher were the mathematical equivalent of religious fundamentalists, and they claimed to follow a strictly literal reading of their holy texts.’’

    Closed minds don’t produce reliable science. What impact?

    “It was during Fisher’s lifetime that the eugenics movement attained its most horrific final form in Nazi Germany. Elements of Adolf Hitler’s eugenics “project” were, in fact, descended from Galton’s in a surprisingly direct manner and therefore cousins of Fisher’s, by way of America.’’

    I was stunned. Science and statistics were purposely warped by desire to twist evidence for political motives. Especially genocide and extermination of undesirables.

    Man-o-man!

    And Clayton explains this in great detail. Many pages. Just . . . just . . . overwhelming!

    As he writes in the introduction, this misuse, this broken science, this warped technique controls modern thought.

    Terrible!

    Clayton writes on two levels. One, for academics with previous understanding and deep background. Two, giving history and explanation for serious, committed general reader (I only understood this part).

    Nevertheless, one of the most important books I’ve read in years. Explains a lot of modern decisions that seem so puzzling.

    Work deserves ten stars!

    Hundreds and hundreds of notes (linked)

    Great!

    Hundreds of references in the bibliography (not linked)

    Tremendous scholarship!

    Detailed index (linked)

    No photographs

  12.

    Texas Slim

    As long as I can remember I have been skeptical of frequentist statistical reasoning. The significance criterion, p < .05, simply made no sense to me.

    The Bayesian approach seemed much more sound. It simply sets out axioms that define what we mean by ‘probability’ and then applies the derived theorems (e.g., Bayes’ theorem) to calculate conditional probabilities.
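    As a tiny illustration of that recipe (toy numbers of my own, not the book’s): with a uniform prior on a coin’s bias, Bayes’ theorem turns the flip counts directly into a posterior distribution over the bias, which answers the conditional-probability question a p-value does not.

```python
from scipy.stats import beta  # assumes SciPy is installed

# Hypothetical data: 14 heads in 20 flips of a coin with unknown bias.
heads, flips = 14, 20

# Uniform Beta(1, 1) prior on the bias; the posterior is then Beta(1 + heads, 1 + tails).
posterior = beta(1 + heads, 1 + flips - heads)

print(f"Posterior mean bias:    {posterior.mean():.3f}")
print(f"P(bias > 0.5 | data):   {1 - posterior.cdf(0.5):.3f}")
print(f"95% credible interval:  {posterior.interval(0.95)}")
```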

    I recently came across this book, which thoroughly lays out the flaws of, and the damage done by, the frequentist approach. For anyone interested in learning from data, it is a must-read.

    I also would like to mention he is a fellow PhD grad from UC Berkeley math.

  13.

    John

    I learned probability and statistics during my engineering education using the frequentist techniques discussed in this book. But I was somewhat uncomfortable with how procedures were applied.
    This book addressed many of the sources of my discomfort. I found the background history of Galton, Pearson, Neyman, and Fisher especially interesting.
    I wish there had been more on topics related to computer-based techniques such as resampling, bootstrapping, importance sampling, etc.
    Overall, this was a very worthwhile read.

  14.

    Abigail

    to read.

  15.

    Peter & Inge Crosby

    I have worked in the medical device industry for decades, using probability, statistics and data analysis in my daily life. It never quite made complete sense until this book tied it all together.
    The historical account of the late 19th- and early 20th-century statisticians, and how they corrupted our understanding of the world, is fascinating.
    Recommended reading for anyone who needs and uses statistics.

  16.

    Barry F. Smith

    Aubrey Clayton explains, in simple language and with a minimum of mathematical notation and concepts, why the Bayesian interpretation of probability is a useful tool for science and why frequentist interpretations often lead to bad science.

  17.

    David Jacobs

    This helped me to think about how information is presented and expressed. If your doctor, or YouTube, or a friend tells you “this study just showed that ________ is true”, it’s good to understand the why and how of that statement. There is some math involved but much of the book can be understood from the narrative.

  18.

    Peter Cotton

    The author describes his work as “propaganda to be dropped behind enemy lines” and I will leave it to the devout frequentists to opine on the effectiveness, so framed. I can only give my own subjective response to this book, as someone who always had some inner-ear discomfort trying to take on board traditional hypothesis testing.

    For me this book was an enjoyable listen. Some highlights for me included the discussion of everything that could go wrong with a laboratory experiment and the tangles this creates for frequentists; the discussion of now-arcane attempts to define frequency in a rigorous yet non-trivial way (i.e., the set of stochastic processes consistent with a prescribed asymptotic frequency); and Ronald Fisher’s own later-life rumination on unmistakably Bayesian ideas and their relationship to MLE.

    The closing discussion is also quite powerful, where the author makes an impassioned case for coming clean on our priors (as compared to pretending they don’t exist, or worse, giving credence to the ridiculous).

    So, high marks … in part just for the humor. That is not to discount the possibility of someone writing an equally wonderful response from the frequentist perspective.

  19.

    M

    The “Fallacy” in the title is this: observed data can be used to judge a hypothesis based solely on how likely or unlikely the data would be if the hypothesis were true. The author, Aubrey Clayton, calls it Bernoulli’s Fallacy because Jacob Bernoulli’s Ars Conjectandi is devoted to determining how likely or unlikely an observation is given that a hypothesis is true. What we need is, not the probability of the data given the hypothesis, but the probability of the hypothesis given the data.
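    A small sketch of that distinction, with an invented alternative hypothesis and invented priors (nothing here comes from the book): data that are unlikely under a hypothesis can still leave that hypothesis as the most probable explanation.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of k successes in n independent trials with success probability p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Observation: 9 heads in 10 flips.
heads, flips = 9, 10

p_data_if_fair = binom_pmf(heads, flips, 0.5)    # P(data | coin is fair), about 0.01
p_data_if_biased = binom_pmf(heads, flips, 0.9)  # P(data | coin lands heads 90% of the time)

# Assumed priors: trick coins are rare.
prior_fair, prior_biased = 0.99, 0.01

# Bayes' rule: P(fair | data)
posterior_fair = (p_data_if_fair * prior_fair) / (
    p_data_if_fair * prior_fair + p_data_if_biased * prior_biased
)

print(f"P(9 heads in 10 | fair coin) = {p_data_if_fair:.4f}")   # 'unlikely' data
print(f"P(fair coin | 9 heads in 10) = {posterior_fair:.3f}")   # yet fair remains probable
```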

    In the preface, Clayton describes the Bayesian vs Frequentist schism as a “dispute about the nature and origins of probability: whether it comes from ‘outside us’ in the form of uncontrollable random noise in observations, or ‘inside us’ as our uncertainty given limited information on the state of the world.” Like Clayton, I am a fan of E.T. Jaynes’s “Probability Theory: The Logic of Science”, which presents the argument (proof really) that probability is a number representing a proposition’s plausibility based on background information — a number which can be updated based on new observations. So, I am a member of the choir to which Clayton is preaching.

    And he is preaching. This is one long argument against classical frequentist statistics. But Clayton never implies that frequentists dispute the validity of the formula universally known as “Bayes’s Rule”. (By the way, Bayes never wrote the actual formula.) Disputing the validity of Bayes’s Rule would be like disputing the quadratic formula or the Pythagorean Theorem. Some of the objections to Bayes/Price/Laplace are focused on “equal priors”, a term which Clayton never uses. Instead, he says “uniform priors”, “principle of insufficient reason”, or (from J.M.Keynes) “principle of indifference”.

    Clayton is not writing for readers like me. If he were, he would have included more equations and might have left out the tried, true, but familiar Monty Hall Problem, Boy or Girl Paradox, and standard examples of the prosecutor’s fallacy (Sally Clark and People v. Collins). But even with these familiar examples, he provides a more nuanced presentation. For example, under certain assumptions about the Monty Hall problem, it does you no good to switch doors. Clayton also provides notes and references, so I can follow threads for more detail.
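    Since the Monty Hall caveat is easy to check by brute force, here is a quick simulation under one reading of those "certain assumptions": a host who knows where the car is and always opens a goat door (switching helps) versus a host who opens an unchosen door at random and merely happens to reveal a goat (switching does not help).

```python
import random

def switch_win_rate(informed_host, trials=200_000):
    """Fraction of games, among those where a goat is revealed, in which switching wins."""
    goat_shown = 0
    switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        others = [d for d in range(3) if d != pick]
        if informed_host:
            # Host knows the car's location and always opens a goat door.
            opened = random.choice([d for d in others if d != car])
        else:
            # Host opens an unchosen door at random; discard games where the car is revealed.
            opened = random.choice(others)
            if opened == car:
                continue
        goat_shown += 1
        switch_to = next(d for d in range(3) if d not in (pick, opened))
        switch_wins += (switch_to == car)
    return switch_wins / goat_shown

print("Informed host, switch wins:           ", round(switch_win_rate(True), 3))   # about 2/3
print("Random host (goat shown), switch wins:", round(switch_win_rate(False), 3))  # about 1/2
```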

    I appreciate that the book is also available in audio. The narrator is fine, but I find that I need the print version too.

    As someone already interested in probability theory and statistics, I highly recommend this book. I can’t say how individuals less into the topic would like it.

  20.

    Dr Steve Hickey

    Listened to the audiobook and immediately ordered a print copy. It is a condemnation of current statistical methods turning the scientific literature into unreproducible crud.

    The author makes the reasonable argument that Bayesian methods would be more rational and solve many of the problems.

  21.

    Brian

    Highly recommended. I listened to the audiobook. I may buy a printed copy now to keep in my home library.

  22.

    Richard O. Michaud

    This is an important book about 21st-century statistical analysis and a severe critique of 20th-century statistical procedure and theory. The essential message, that Bayes’ theorem provides the central framework of modern statistical analysis, is made with force and credibility. But in later chapters the text wanders a great deal into how and why modern statistics took so many wrong turns in the 20th century, turns that severely limited its applicability and contributed to many social statistical fallacies. While some of the book can be annoyingly pedantic, it is nevertheless, when brilliant, indeed brilliant. Thoughtful readers indoctrinated with classical 20th-century statistical theory will be able to fast-read and skip over what is well known, and in turn be dazzled in many cases by Clayton’s erudition and insights from the great heroes of statistical history. The key insight, that statistical analysis is logical inference under uncertainty (extending Aristotle and many great logicians), is brilliant. Highly recommended in spite of obvious limitations.

  23.

    Charles hudson

    What a wonderfully informative and important book this is. Is probability only about the occurrence of events, or more about the information surrounding those events? Frequentism, or Bayesianism? The book is a delight to read and a nightmare of cases – legal and medical – where the interpretation of statistics and probability was misused. And is being misused to this day. As a physicist, I have always loved this subject because so much of probability theory in practice is counterintuitive. The human brain prefers quick, binary solutions to problems, not conclusions based on contingencies, alternate hypotheses, and the like. Highly recommended.

  24.

    qirong29

    This book gives strong arguments for a Bayesian probabilistic approach against a frequentist one. The debate between these two interpretations has been going on for a long time, and both approaches date from the 18th century, each from a paper published posthumously: Ars Conjectandi (1713) by Jacob Bernoulli and “An Essay towards Solving a Problem in the Doctrine of Chances” (1763) by Thomas Bayes.

  25.

    P. Kapinos

    This is not a beach read. But, it is still very accessible to those interested. Comprehensive. I downloaded the audiobook as well and read it while listening to get the most out of it.

    The topic is VERY important in today’s world, as we base so much policy and so many decisions on purported evidence that someone passed along to someone else. And there is so much academic fraud, which is also big business, built on the same errors. Few people want to be involved in finding the truth about how we search for “truth”.

  26.

    André Gargoura

    Fundamental if you like debates about established-but-shaky theories, such as the “frequentist” approach to probability, and want to learn more about the Bayesian approach and its advantages.

    This kind of discussion is usually not found in books on statistics, which are often crammed with nebulous concepts and terms that neither the authors nor their readers really understand, resulting in a general muddle and a lot of mistaken pseudo-inferences.

    Clayton clears the way to understanding the idea of probability as ultimately nothing more than a codification of our ability to reason with less than perfect information, and to understand what we’re doing!

  27.

    Darren Hennig

    I am about halfway through so far, but the author has nailed many failures of modern scientific and sociological methods (among other societal aspects), where potentially incorrect use of statistical data handling may skew results. I think this is a superb book so far, and it has gotten me rethinking the way our modern society, nay our whole planet, works!

    I wish that those using and collecting data would read this! It would solve a LOT of problems that are solvable, and prevent new ones from cropping up.

    Recommended, and came quickly and well packed.

  28.

    Dee

    If you ever took a statistics class and wondered why it seems like a hodgepodge of miscellaneous techniques, this book will help you understand why, and outlines a better approach.

  29.

    Jorge A. Miranda Jr.

    Easily the best book on Bayesian statistics. The author skillfully breaks down where statistics went wrong and what we can do to fix it. If you have never heard of Bayesian statistics, this is a great introduction. And if you’re well versed in it, you’ll enjoy the genealogy of null hypothesis significance testing. After reading this book, I find it difficult, possibly impossible to go back to doing statistics the frequentist way.

  30.

    WilliamH

    Very much enjoyed this tour de force. I did not try to follow all of the math — not the time or energy — but found the overarching message fascinating and ultimately highly persuasive. In my own career I’ve determined that 89.7% of all statistics are simply made up. This book shows a good part of the reason why.
