“Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid.” – Valery Legasov (from the HBO series “Chernobyl”)
The collapse of the Soviet Union in 1991 is usually attributed to a combination of factors, including the war in Afghanistan, Chernobyl, civic unrest, low oil prices, the inherent weakness of the Soviet economy*, and/or the failure of Gorbachev’s Perestroika reforms. However, these traditional culprits were only symptoms of a more intractable problem facing the country: the decision-making capabilities of a highly centralized command structure are for shit.
We spend a lot of time talking about decision-making theories in the business world but all of them boil down to a sequence of observation, thought, decision, and action. Observations and other bits of information serve as inputs to some sort of decision/evaluation/analysis process which, in turn, leads to the implementation of some task or plan. All of this is wrapped up in a network of feedback loops which allows the decision-maker to keep tabs on both positive and negative outcomes of prior decisions.
One example of such a decision-making framework – there are many – is John Boyd’s OODA loop (Observe + Orient + Decide + Act). The OODA loop explores the complex interaction of inputs in the high-pressure decision-making environment of military conflict.
There is a heavy emphasis on the connections between different aspects of the model, with the expectation that a person wielding this tool will learn, adapt, and act so rapidly that their adversaries are left confused and unable to respond effectively. In fact, it might be more accurate to describe this as a model for learning instead of a model for decision-making. This relationship between learning and decision-making is crucial.
Writing about decision-making in the context of the world of professional poker, Maria Konnikova emphasizes the iterative aspect of this learning process:
“Each time you act, you have to reassess based on what is now known versus what was known before. You need to have a process, a system, a plan – one that evolves with feedback. If you don’t, how will you know whether the outcome … is the result of skill or luck?”
The best learning/decision-making systems are self-correcting, using new information to better understand the circumstances and make course corrections. Capitalism, democracy, and the scientific method are all supposed to work this way – getting feedback through markets, elections, and experimental results.
But what if you don’t like the results? Or you’re worried that a particular outcome will favor your rivals or lead to a loss of power or status? The temptation to game the system at this point – to try and control the learning/decision-making framework – can be hard to resist. For businesses, this can lead to special-interest lobbying, rent-seeking behavior, and outright fraud. For scientists it might mean resorting to plagiarism, the fabrication of results, or other scientific misconduct. For politicians it may mean support for voter suppression, partisan gerrymandering, or the introduction of propaganda designed to influence public opinion.
All of these transgressions disrupt the feedback mechanisms that are essential to learning and good decision-making. And while the perpetrators of these approaches may think they are doing their organizations a favor, the steady accumulation of falsehoods becomes harder and harder to sustain over time.
In his biography of John Boyd, Robert Coram writes about the dangers of this gradual disassociation from reality:
“[I]f our mental processes become focused on our internal dogmas and isolated from the unfolding, constantly dynamic outside world, we experience mismatches between our mental images and reality. Then confusion and disorder and uncertainty not only result but continue to increase. Ultimately, as disorder increases, chaos can result.”
In a review of HBO’s “Chernobyl” series, which explores the Soviet reaction to the 1986 accident at the Chernobyl nuclear power plant, Masha Gessen noted how government disinformation efforts actively discouraged people from learning from their mistakes:
“The Soviet system of propaganda and censorship existed not so much for the purpose of spreading a particular message as for the purpose of making learning impossible, replacing facts with mush, and handing the faceless state a monopoly on defining an ever-shifting reality.”
In the case of Chernobyl, lies designed to give the appearance of strength and infallibility prevented people from recognizing problems early and taking actions that would have averted disaster. By the late 1980s, similar deceptive policies had seeped into every facet of Soviet society, creating a toxic buildup of misinformation that led to dysfunction and – ultimately – dissolution.
“Chernobyl was not just the story of a disastrous testing accident in a Soviet nuclear power plant. It was the product of how endemic arrogance, negligence, careerism, and authoritarianism created a system that allowed that disaster. It was the Soviet Union in a microcosm, a deadly outcrop of decades of political failure and negligence that would ultimately help bring down the entire nation.”
By emphasizing comfortable lies over inconvenient truths, the country could no longer learn. Its decisions were based on fabrications, its actions reinforced by its own destructive behavior. To put it simply, the Soviets were playing Russian roulette with reality and they lost.
Image credit: Anna Kinde
Series: The Short-Circuiting of the American Mind