Life defies the dehumanising cut-off points of the bell curve

The global mono-cult pretends that all aspects of life can be categorised and understood in terms of normality – by the hump of the bell curve. But the living planet does not conform to anthropocentric normality; it is chaotic, and it is beautifully and awesomely diverse.

Normality is a product of the industrial era

The discipline of statistics and the term normality are cultural products of the industrial era. They are steeped in the Newtonian understanding of the physical laws of motion discovered in the 17th century, which paved the way for formalising the engineering of mechanical machines and for the development of industrial factories.

From Wikipedia:

Some authors attribute the credit for the discovery of the normal distribution to de Moivre, who in 1738 published in the second edition of his The Doctrine of Chances the study of the coefficients in the binomial expansion of (a + b)^n.

… Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function. Carl Friedrich Gauss discovered the normal distribution in 1809 as a way to rationalize the method of least squares.

In 1823 Gauss published his monograph “Theoria combinationis observationum erroribus minimis obnoxiae” where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used M, M′, M′′, … to denote the measurements of some unknown quantity V, and sought the most probable estimator of that quantity: the one that maximizes the probability φ(M − V) · φ(M′ − V) · φ(M′′ − V) · … of obtaining the observed experimental results. In his notation φΔ is the probability density function of the measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values. Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter, is the normal law of errors ...

However, by the end of the 19th century some authors had started using the name normal distribution, where the word “normal” was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as typical, common – and thus normal. Peirce (one of those authors) once defined “normal” thus: “…the ‘normal’ is not the average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain circumstances.” Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution.

Many years ago I called the Laplace–Gaussian curve the normal curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another ‘abnormal’.
— Pearson (1920)
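
The property Gauss derived, that under a normal law of errors the most probable value of the unknown quantity is the arithmetic mean of the measurements, can be checked numerically. The following Python sketch is my own illustration, not part of the quoted text; the example value for the unknown quantity and the error spread are arbitrary assumptions. It evaluates the log-likelihood over a grid of candidate values and compares the maximiser with the arithmetic mean.

```python
# A minimal sketch: under a normal (Gaussian) error law, the value of V that
# maximises the likelihood phi(M - V) * phi(M' - V) * ... of the measurements
# is their arithmetic mean. The example values below are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(42)
true_value = 10.0                                      # the unknown quantity V
measurements = true_value + rng.normal(0.0, 0.5, 100)  # M, M', M'', ... with normal errors

def log_likelihood(v, sigma=0.5):
    # log of the product of normal error densities, dropping terms independent of v
    return -0.5 * np.sum(((measurements - v) / sigma) ** 2)

candidates = np.linspace(9.0, 11.0, 20001)             # grid of candidate estimates
best = candidates[np.argmax([log_likelihood(v) for v in candidates])]

print(best)                 # the maximum-likelihood estimate of V
print(measurements.mean())  # the arithmetic mean: the two agree to grid precision
```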

The living planet does not conform to normality, it is chaotic

I grew up in the 1970s and 80s, and studied mathematics at a time when chaos theory was being developed and it became practical to explore it with the help of digital computers and numerical algorithms. By that time it was clear that:

There are limits to which sequences of events and the behaviour of complex adaptive systems can be modelled numerically. No increase in computing power will ever allow the behaviour of complex adaptive systems to become predictable, and therefore fully comprehensible to human minds.

From Wikipedia:

Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measure imprecision and simple “noise” was considered by chaos theorists as a full component of the studied systems …

The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi’s laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called “randomly transitional phenomena”. Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.

Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborators Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz’s discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions.

In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the “Noah effect” (in which sudden discontinuous changes can occur) and the “Joseph effect” (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published “How long is the coast of Britain? Statistical self-similarity and fractional dimension”, showing that a coastline’s length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales (“self-similarity”) is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory ...

As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always potential difficulty in distinguishing real chaos from chaos that is only in the model. Hence both constraint in the model and/or duplicate time series data for comparison will be helpful in constraining the model to something close to the reality, for example Perry & Wall 1984 …

Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships …

Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior …

Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable …
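
Two of the quoted ideas, the repeated iteration of a simple formula and Lorenz's discovery that rounding the initial conditions changes the long-term outcome completely, can be made concrete in a few lines of Python. The sketch below is my own illustration, not taken from the quoted text: the logistic map parameter r = 4.0 and the number of steps are arbitrary choices, and Lorenz's rounded value is borrowed only as an analogy (he was running a weather model, not the logistic map).

```python
# A minimal sketch of sensitive dependence on initial conditions: iterate the
# logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4.0) from two
# starting points that differ only by 3-digit rounding, as in Lorenz's printout.
# (Lorenz used a weather model, not the logistic map; this is only an analogy.)

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

full = logistic_trajectory(0.506127)  # the "6-digit" initial condition
rounded = logistic_trajectory(0.506)  # the same value rounded to 3 digits

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: {full[step]:.6f} vs {rounded[step]:.6f} "
          f"(difference {abs(full[step] - rounded[step]):.6f})")
# Within a few dozen iterations the two trajectories bear no resemblance to each
# other, even though they started only 0.000127 apart.
```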

The unavoidability of chaos is a threat to anyone who is invested in systems of control and in maintaining social power gradients.

Imposing normality on an unpredictable world

Looking back with 20/20 hindsight, the social developments in the WEIRD world since the invention of digital computers can be summarised as a desperate brute-force attempt by the physically (via fossil fuels) and socially (via the globalisation of finance and capital flows) powered-up institutions of the industrial paradigm to deny the existence of chaos and to impose the normality needed to achieve predictable profits for corporations and predictable capital gains for investors.

The pre-internet wave of computing in the 1980s and 1990s, which automated industrialised production processes for material goods and the related logistics, mostly focused on processes and material flows on the factory floor. This era gave birth to the concept of continuous improvement and to Six Sigma techniques in industrialised production. It provided a broad field in which normal distributions proved useful in reducing manufacturing errors and quality deficits in the material goods produced. Many of the factors that define the quality of industrially produced goods relate directly to the Newtonian physical laws of motion and to stochastic processes with few variables, and can thus be neatly “controlled” by applying the normalising cut-off points of the bell curve.
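
As a rough illustration of why this works on the factory floor, the expected fraction of parts falling outside symmetric cut-off points on a normally distributed quality characteristic can be computed directly. The short Python sketch below is my own example and assumes a perfectly centred process measured in units of its standard deviation; the conventional Six Sigma figure of 3.4 defects per million additionally allows for a 1.5 sigma drift of the process mean.

```python
# A small sketch: for a normally distributed quality characteristic, the expected
# fraction of parts outside symmetric cut-off points at +/- k standard deviations.
# Assumes a perfectly centred process; conventional Six Sigma accounting also
# allows for a 1.5 sigma drift of the process mean.
from statistics import NormalDist

process = NormalDist(mu=0.0, sigma=1.0)  # the quality characteristic in units of sigma

for k in (1, 2, 3, 6):
    fraction_outside = 2 * process.cdf(-k)  # both tails of the bell curve
    print(f"+/- {k} sigma cut-off: {fraction_outside:.2e} of output out of spec "
          f"(about {fraction_outside * 1_000_000:.3g} per million)")
```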

But even at that time honest and astute practitioners of scientific management like W. Edwards Deming and Harrison Owen clearly saw the limitations of the industrial paradigm, especially in terms of the living humans that are an integral part of the design and operation of any modern corporation, and especially within the context of a competitive, chaos-blind, control-obsessed economic ideology.

“Pay for merit, pay for what you get, reward performance. Sounds great, can’t be done. Unfortunately it can not be done, on short range. After 10 years perhaps, 20 years, yes. The effect is devastating. People must have something to show, something to count. In other words, the merit system nourishes short-term performance. It annihilates long-term planning. It annihilates teamwork. People can not work together. To get promotion you’ve got to get ahead. By working with a team, you help other people. You may help yourself equally, but you don’t get ahead by being equal, you get ahead by being ahead. Produce something more, have more to show, more to count. Teamwork means work together, hear everybody’s ideas, fill in for other people’s weaknesses, acknowledge their strengths. Work together. This is impossible under the merit rating / review of performance system. People are afraid. They are in fear. They work in fear. They can not contribute to the company as they would wish to contribute. This holds at all levels. But there is something worse than all of that. When the annual ratings are given out, people are bitter. They can not understand why they are not rated high. And there is a good reason not to understand. Because I could show you with a bit of time that it is purely a lottery.”
– W. Edwards Deming (1984)

The internet-era wave of computing from the mid-1990s onwards can be understood as a doubling down on keeping the myth of industrialised normality alive with brute force, by imposing it on the anthropocentric social realm. In The End of the Billionaire Mindset, Douglas Rushkoff refers to hyper-normalisation in the digital realm as ‘auto-tuning’.

The internet provided the technological infrastructure, the development of smartphones made internet access quasi-ubiquitous, and the rise of social media enabled corporations to seize the means of communication and collaboration. This in turn enabled digital algorithms to ingest, normalise, and disseminate gigatons of user-produced content in ways that best serve the interests of digitised capital.

If you are culturally well adjusted to modern society, your sense of “normality” is shaped by the things you don’t notice and by the things that you take for granted. “Normality” is like the air you breathe as a mammal, or the water that you’d be swimming in if you were a fish. The hump of the bell curve is the digital God of Normality.

What started as “big data” morphed into the “new oil” of a seemingly limitless digital realm – pushing away any niggling concerns about limits to growth in the physical realm – and then morphed again and was sold to power-addicted investors as “artificial intelligence”.

“Artificial intelligence” is best understood as the computation of a mono-cultural hyper-normalised view of the world that is explicitly designed to be addictive to individuals and profitable for corporations.

Karl Marx’s critique of capitalism was correct: ownership of the means of production defined the locus of social and economic power in the early industrial era. But he could not foresee the extent to which the digitisation of large parts of all forms of human communication would allow some corporations to effectively seize control of the means of communication and collaboration.

The global mono-cult, which continuously perpetuates itself in the anthropocentric digital realm by projecting a hyper-real image of the world in which corporations are “in control”, is increasingly in stark contrast to the ecological state of the world in the physical realm, in which everything is “out of control”.

A world yet to come

We can describe our overall direction of travel as: From artificial scarcity towards ecologies of abundant care. Manish Jain talks about the shift from deadlihoods to alivelihoods. Adebayo Akomolafe talks about a world yet to come.

I don’t think that the vibe here is “Let’s get to a solution and get with it”. I think we’re staying with the trouble of these questions. And somehow, navigating, meandering, Autistically sometimes, this vortex, or these vortices of these questions, will enable new kinds of sensibilities to sprout, and then we will suddenly realise we are different.
– Adebayo Akomolafe

Reflecting deeply on the relational nature of life allows us to become reacquainted with the lower and upper limits of human scale. Along the way we also begin to re-appreciate the limits of human comprehensibility and sense making.

Being at ease in an unpredictable world

The main difference between modern emergent human scale cultural species and prehistoric human scale cultural species lies in the language systems and communication technologies that are being used to coordinate activities and to record and transmit knowledge within cultural organisms, between cultural organisms, and between cultural species.

The proliferation of trauma in industrialised societies is a reflection of the scarcity of genuinely safe de-powered relationships. The path back towards safe social environments is a bottom up approach, focused on small teams, households, and whānau – the exact opposite of the corporate controlled, competitive, and super human scale social media environments that have infiltrated human lives over the last 20 years. Small is beautiful.

Humans all over the world need to address multiple existential threats, without any delay, within a time frame of a few years to decades. This is only possible by framing life in terms of collaborative niche construction, a self-organising process that relies on timeless practices for co-creating good company:

  1. The conception of life as a collaborative game that involves trust, mutual aid and learning
  2. Shared biographical information, which helps us understand prior experiences and trauma
  3. Joint experiences, which allow us to appreciate the extent to which various situations are experienced in similar or different ways, and which give us insights into the cognitive lens of the other person
  4. Regular sharing of new experiences and observations, which allows us to learn more about the cognitive lens and the values of the other person
  5. Asking for advice, which allows us to acknowledge our own limitations, extend trust, and appreciate the knowledge and unique capability of the other person
  6. Being asked for advice, which signals trust and which gives us feedback on how the other person perceives our level of knowledge and domain specific competency
  7. The development of relationships and trust takes time

Over time this self-organising process results in unique relationships of deep trust between people, and in unique cultural microcosms between pairs of people that provide us with a baseline of safety. In human scale groups, over time, these practices result in new adaptive paradigmatic frameworks that are tailored around the unique needs of the members of a specific ecology of care.

Within good company (smaller than 50 people), everyone is acutely aware of the competencies of all the others, and transparency and mutual trust enable wisdom and meta knowledge (who has which knowledge and who entrusts whom with questions or needs in relation to specific domains of knowledge) to flow freely. This allows the group to rapidly respond intelligently, creatively, and with courage to all kinds of external events.

Humans are not the first hyper-social species on this planet. Insects such as ants offer great examples of successful collaborative niche construction at immense scale over millions of years.

Evolutionary biologist David Sloan Wilson observes that small groups rather than individuals are the primary organisms of human societies. This should provide all of us with food for thought and it has massive implications for the cosmolocal future of our species.

It turns out that lived experience in nurturing and maintaining mutual trust at human scale is the key ingredient for being at ease in a seemingly unpredictable world. Being able to rely on each other is at the core of the evolutionary heritage of our species. Mutual trust is a biophilic ecological phenomenon of emergent local predictability that is not limited to humans.

Somehow the Wonder of Life Prevails – Mark Kozelek & Jimmy LaValle
