Measure for Measure

Our scientific understanding is powerful to the degree that it corresponds with the actual world of experience. How well our understanding achieves this fit is judged by practical considerations: can we use it to predict the outcomes of our investigations into what remains unknown, or to control events in the real world? Understanding increases as we work to tune the correspondence between our bodies of knowledge and the actual structural, energetic and informational patterns of the universe. Our rockets get where they’re going, our rounds fire straight.

Today we enjoy the fruit of three or four centuries of methodical investigation into how the things we perceive around us behave. This body of knowledge comes packaged in the form of conceptual models having, more often than not, a mathematical expression. The math carries the distinctive philosophy of the whole endeavor. These models are built on the ability to measure something. By measuring, human beings are able to transcend their subjectivity and achieve a precision that is readily communicable. This objectivity, for all the flak it has taken in modern critiques of the philosophy of the scientific method, remains very real.

Remember the different types of yes and no we looked at last week? This is similar. One person says the stick is short, another that it is of medium length. Each is honestly reporting what they experience from within the inner jungle of their prior contexts. The power of the math is seen when they both agree the stick measures 11 inches.

So measuring things has this useful characteristic: an ability to demonstrate an aspect of things that can be agreed on by anyone of sound mind and body. The assertion that a stick is 11 inches long is quickly verified or falsified by anyone with a ruler, in any country, at any time, regardless of their political or religious beliefs, the weather outside or an infinity of other variables. Access to the stick in question and a ruler marked with the agreed-upon (yet ultimately arbitrary) units is all that is required.

It would not be off the mark to describe these last few centuries of scientific exploration as an ever more extensive and subtle scramble to learn how to measure the mysterious events that surround us. Eventually the Chevalier de Méré, after a particularly bruising loss at dice with friends, wondered if there might be a way to measure the seemingly random. Asking his mathematically inclined friend Blaise Pascal to look at the problem sparked a fire that grew into the branches of mathematics we today associate most closely with science: probability and statistics. Try to imagine the first time something as seemingly random as tossing a pair of dice began to show its generalized behavior: that it was not random in the aggregate, only in the individual throws.
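That shift is easy to see for yourself. Here is a minimal sketch in Python (the seed and trial counts are my own choices for illustration): any single throw of a pair of dice is unpredictable, yet the relative frequency of rolling a seven settles toward the theoretical 6/36 as the throws pile up.

```python
import random

# Roll a pair of dice many times and watch the relative frequency
# of a total of 7 settle toward the theoretical 6/36 ~= 0.1667.
random.seed(1)          # fixed seed so the run is repeatable
trials = 100_000
sevens = 0
for i in range(1, trials + 1):
    total = random.randint(1, 6) + random.randint(1, 6)
    if total == 7:
        sevens += 1
    if i in (10, 100, 1_000, 10_000, 100_000):
        print(f"after {i:>7} throws: frequency of 7 = {sevens / i:.4f}")
print(f"theoretical probability = {6 / 36:.4f}")
```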

Here was something new. Not a measurement that could be confirmed by someone else with a single reading, but one that required reproducing a series of events. Additionally, in any given series the actual outcome might differ from the predicted one, but over enough trials the pattern emerges. Everything about this type of metric made its proper use, and its proper understanding, a bit tricky. Today we manage to work with these probabilities very effectively through the use of confidence intervals and margins of error. Probability is not as easy to use as a ruler but is just as objective and precise in its own way.
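To make the idea of a margin of error concrete, here is a hedged sketch of the standard normal-approximation confidence interval for a proportion; the counts below are invented for illustration, not taken from any real experiment.

```python
import math

# Normal-approximation confidence interval for a proportion:
# estimate ± z * sqrt(p * (1 - p) / n)
successes = 1689        # e.g. throws that came up seven (invented figure)
n = 10_000              # total throws
p_hat = successes / n
z = 1.96                # z-value for a 95% interval
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"estimate {p_hat:.4f} ± {margin:.4f} "
      f"(95% CI {p_hat - margin:.4f} to {p_hat + margin:.4f})")
```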

This act of measuring things can become surprisingly complex. The length of a shoreline depends on the scale of ‘ruggedness’ you choose to resolve, as Mandelbrot taught us. Length itself changes under relativistic conditions. But these are dwarfed by a more basic fact about measurements as they occur in the real world: few are in perfect accord with theoretical predictions. They are close enough, where ‘close enough’ is well defined, and that is good enough. It has to be; it is all we have to work with.

For example, let us imagine an experiment with electronic circuits. We are to measure the resistance of a circuit as per Ohm’s law: resistance = voltage / current. Simple algebra gives the expected resistance, say 9 ohms. Using a multimeter we carefully take the measurement and find 8.89 ohms. Build the same circuit a few more times and measure each one’s resistance. Now maybe you find 9.20, 8.922 and so on. This spread of measurements arises from the details of the actual, specific circuit being tested, details that are abstracted away in the simplicity of Ohm’s law. The purity of the metal and the quality of the components are just two of the details that might be relevant in any particular case; there are thousands upon thousands of others.
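A rough way to picture this spread is to simulate it. The sketch below assumes a 9 volt, 1 amp circuit and lumps all those thousands of unaccounted-for details into a small random wobble (the 1.5% figure is purely an assumption for illustration); it is not a claim about how real components behave, only about how a spread of readings arises.

```python
import random
import statistics

# Ohm's law gives the ideal resistance R = V / I; each "built" circuit
# adds its own small deviations, modeled here as a Gaussian wobble.
random.seed(7)
voltage = 9.0      # volts
current = 1.0      # amps, so the ideal resistance is 9 ohms
ideal_r = voltage / current

measurements = []
for _ in range(5):
    measured = ideal_r * random.gauss(1.0, 0.015)   # ~1.5% component scatter
    measurements.append(round(measured, 3))

print("ideal resistance:", ideal_r, "ohms")
print("measured:", measurements)
print("mean:", round(statistics.mean(measurements), 3),
      "std dev:", round(statistics.stdev(measurements), 3))
```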

With an actual measurement we encounter reality in all its uniqueness, where more details and more evidence are included by the very nature of the circumstances. Measurement is the bridge between theory and observation. It is writing a reality check. The data gathered will either conform to the expected result, increasing our confidence in the theoretical model, or it will not. Given these spreads of observational data, the question of just how close a measured value needs to be to the one predicted by theory to still be considered confirmation becomes critical. And it is just here that a funny thing happened on the way to the circus…

It turns out that when you take a set of independent observations like this, they disperse in that familiar pattern, the bell curve:

[Figure: the bell curve. Dark blue is less than one standard deviation from the mean; for the normal distribution this accounts for about 68% of the set, while two standard deviations from the mean (medium and dark blue) account for about 95%, and three standard deviations (light, medium and dark blue) account for about 99.7%.]
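Those 68%, 95% and 99.7% figures can be checked in a few lines of Python; the sketch below simply draws a large sample from a normal distribution and counts how much of it lands within one, two and three standard deviations of the mean.

```python
import random
import statistics

# Empirical check of the 68-95-99.7 rule for a normal distribution.
random.seed(3)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mu = statistics.mean(sample)
sigma = statistics.stdev(sample)
for k in (1, 2, 3):
    inside = sum(1 for x in sample if abs(x - mu) <= k * sigma)
    print(f"within {k} standard deviation(s): {inside / len(sample):.3%}")
```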


Regular readers will recognize the shape from last week. As more and more of the evidence is included, a spread of sorts arises. We are trying on systems thinking by including more and more of the relevant detail, training ourselves to sense the shape of questions and answers as they appear to us in the real world.

So what is a probability? First let’s get an intuitive grasp of the concept. The prolific, gentlemanly “Prince of Mathematicians” (Bell 1937), Carl Friedrich Gauss, at one time concerned himself with the errors that accompany astronomical observations. He published a few comments whose importance Laplace immediately recognized; Laplace developed them and laid the foundation for modern probability theory. An astronomer records the latitude and longitude of a star’s location. Each observation differs in each direction from previous observations by some amount. How then should we consider this situation? For centuries the worry was that the errors of the individual observations would multiply. Yet astronomers such as Tycho Brahe had been averaging their observations for centuries, having seemingly discovered by empirical means that instead of multiplying out of control the errors tended to cancel out. It was Gauss who gave the mathematical proof that this is indeed the case. In a small comment he derived what we today call the Gauss–Laplace curve. Almost everyone is familiar with this figure; it is the normal or bell curve ubiquitous throughout statistics. Families of such curves are referred to as probability density functions.
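Here is a quick sketch of what those astronomers stumbled onto: assume some ‘true’ star position and a spread of observational error (both numbers below are invented for illustration), and watch the error of the average shrink as observations accumulate.

```python
import random
import statistics

# Each observation carries its own error, but the average of many
# observations hugs the true value more tightly than any single reading.
random.seed(11)
true_position = 42.0          # arbitrary coordinate, for illustration only
error_sd = 0.5                # spread of a single observation (assumed)

for n in (1, 10, 100, 1000):
    observations = [true_position + random.gauss(0.0, error_sd) for _ in range(n)]
    avg = statistics.mean(observations)
    print(f"{n:>4} observations: average = {avg:.3f}, "
          f"error of the average = {abs(avg - true_position):.3f}")
```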

Instead of saying that the star is really at the mid-point, this curve describes the spread of uncertainty inherent in the collection of observations. The actual position can be anywhere within the scope of the curve, though each position carries a different degree of probability. Here a probability is a measure of uncertainty, both in our measurements and in our understanding of causes. At other times a probability might measure an objective characteristic of the external world, as for example when measuring radioactive decay. Probability as a distribution was an amazing insight, one that would play a fundamental role in the evolution of modern quantum mechanics, where probability waves are used as a model of atomic structure.

Concern with the size of errors in collected data is the province of sampling theory and its significance tests. The correct hypothesis is known – the position of the star as determined by many previous observations and my star chart. The question concerns the data. Do the observations I record with my newly aligned telescope indicate that it is properly calibrated? This is the type of question that concerned the creators of probability theory in the 18th and 19th centuries. They wanted to capture what could be said about the data to be expected when drawing randomly from a population. This is familiar to anyone who has taken a course in statistics. Every course introduces the ubiquitous, if morbid, Urn: an Urn contains 50 white balls and 20 red; what are the chances of drawing at least one red ball if 5 balls are drawn from the Urn and not replaced? The hypothesis is known – the contents of the Urn – and what we want to know is the distribution of the evidence we can expect from sampling it.
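For the curious, the Urn question can be worked out directly: the chance of no red ball in five draws without replacement is C(50,5)/C(70,5), so the chance of at least one red is one minus that. A few lines of Python confirm the arithmetic.

```python
from math import comb

# 50 white and 20 red balls, draw 5 without replacement.
# P(at least one red) = 1 - P(no red) = 1 - C(50,5)/C(70,5)
white, red, drawn = 50, 20, 5
p_no_red = comb(white, drawn) / comb(white + red, drawn)
print(f"P(no red in {drawn} draws)          = {p_no_red:.4f}")
print(f"P(at least one red in {drawn} draws) = {1 - p_no_red:.4f}")
```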

Sampling is the only means available to investigate the enormous complexity of the biosphere. The richness of the specifically existing actual objects and relationships exceeds our grasp any other way. But the roots of probability run even deeper than that. Many of the neurophysiological processing algorithms our senses use seem to rely on probability as well. It is not just the measuring but that which does the measuring too; both are intimately and inescapably entwined with probabilities. Perhaps the best-known example is the human eye’s blind spot, where the optic nerve passes through the retina, yet we do not see a black spot, void of anything. Instead the network of neurons involved in processing optic signals interpolates what it expects would be in the external environment if it could see that spot and fills it in with pure imagination. The brain performs a fundamentally probabilistic operation, guessing what is most probably there where it cannot actually see. An Amazon reviewer of Vision and Brain: How We Perceive the World put it well when they wrote, “human vision is a highly efficient guessing machine.” Indeed some researchers find that the roots of probability run even deeper than our sensory processing, all the way down into how our brains do what they do. Bayesian Brains: Probabilistic Approaches to Neural Coding provides an approachable overview for those interested in taking a deeper look.

It should be obvious why these matters are important to the concerns of this blog. The majority of the evidence about the ecological crises presents itself to us in terms of probability. The IPCC report on climate change includes a detailed treatment of the terms it uses for dealing with uncertainty. It is worth a substantial quote:

“Three different approaches are used to describe uncertainties each with a distinct form of language. Choices among and within these three approaches depend on both the nature of the information available and the authors’ expert judgment of the correctness and completeness of current scientific understanding.

Where uncertainty is assessed qualitatively, it is characterised by providing a relative sense of the amount and quality of evidence (that is, information from theory, observations or models indicating whether a belief or proposition is true or valid) and the degree of agreement (that is, the level of concurrence in the literature on a particular finding). This approach is used by WG III through a series of self-explanatory terms such as: high agreement, much evidence; high agreement, medium evidence; medium agreement, medium evidence; etc.

Where uncertainty is assessed more quantitatively using expert judgement of the correctness of underlying data, models or analyses, then the following scale of confidence levels is used to express the assessed chance of a finding being correct: very high confidence at least 9 out of 10; high confidence about 8 out of 10; medium confidence about 5 out of 10; low confidence about 2 out of 10; and very low confidence less than 1 out of 10.

Where uncertainty in specific outcomes is assessed using expert judgment and statistical analysis of a body of evidence (e.g. observations or model results), then the following likelihood ranges are used to express the assessed probability of occurrence: virtually certain >99%; extremely likely >95%; very likely >90%; likely >66%; more likely than not > 50%; about as likely as not 33% to 66%; unlikely <33%; very unlikely <10%; extremely unlikely <5%; exceptionally unlikely <1%.”
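One way to get a feel for the quoted likelihood scale is to treat it as a simple lookup from an assessed probability to the IPCC phrase. The sketch below is my own reading of the quoted thresholds (the categories overlap in the original, so this picks the strongest applicable term); the function name and structure are not from the report.

```python
# Likelihood thresholds transcribed from the quoted IPCC scale.
LIKELIHOOD_SCALE = [
    (0.99, "virtually certain"),
    (0.95, "extremely likely"),
    (0.90, "very likely"),
    (0.66, "likely"),
    (0.50, "more likely than not"),
    (0.33, "about as likely as not"),
    (0.10, "unlikely"),
    (0.05, "very unlikely"),
    (0.01, "extremely unlikely"),
    (0.00, "exceptionally unlikely"),
]

def likelihood_term(probability: float) -> str:
    """Return the strongest IPCC phrase whose threshold the probability exceeds."""
    for threshold, term in LIKELIHOOD_SCALE:
        if probability > threshold:
            return term
    return "exceptionally unlikely"

print(likelihood_term(0.97))   # extremely likely
print(likelihood_term(0.40))   # about as likely as not
```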

We saw in an earlier post how calculus provided science with a useful set of tools for creating mathematical models of events in a world of constant change. Probability provides an equally critical foundation for modern science, supplying the methods needed to logically interpret the meaning of the data gathered. Through the rigor only mathematics can provide, a consensus has been reached for these numerical operations, one no less objective in principle than the one we found with the ruler measuring the 11-inch stick.
