SPEECH BY MR PETER HO, SENIOR ADVISOR, CENTRE FOR STRATEGIC FUTURES, AT THE DSTL HORIZON SCANNING AND FUTURES SYMPOSIUM ON TUESDAY, 15TH JANUARY 2013, AT THE DEFENCE ACADEMY, SHRIVENHAM.
“FORESIGHT AND THE FUTURE OF GOVERNANCE”
Introduction
On a cool Saturday morning, the 17th of December 2010, in Sidi Bouzid, Mohamed Bouazizi set himself on fire in terminal protest against the authorities for harassing him and preventing him from making a living. This act sparked demonstrations and riots throughout Tunisia. The flames of dissent and revolution spread like wildfire to other parts of the Arab world. Four governments collapsed (Tunisia, Egypt, Libya and Yemen), three countries saw changes to their governments (Kuwait, Bahrain and Oman), and Syria descended into civil war.
It would have taken even the bravest analyst a huge leap of imagination to predict the Arab Spring, such as it was. Truth, as it is often said, is stranger than fiction. The British historian and politician, H A L Fisher, concluded in 1935 that “men wiser and more learned than I have discerned in history a plot, a rhythm, a predetermined pattern. These harmonies are concealed from me. I can see only one emergency following another … and only one safe rule for the historian: that he should recognise in the development of human destinies the play of the contingent and the unforeseen.”
In other words, we shall continue to be surprised.
The Arab Spring has spawned a growth industry. There are now countless political and social scientists, historians and Arabists all trying to explain the causes of the Arab Spring. Many will find convincing reasons as to why these events unfolded as they did. But all this will be in retrospect. It is in the very nature of such post-mortem analysis that thinking and explanation must proceed backwards. That explanations after the fact are the norm for strategic surprises like the Arab Spring underlines the lack of any simple and understandable patterns in the complex world that we live in. The Danish philosopher Søren Kierkegaard observed, “life is understood backwards, but must be lived forwards.”
Retrospective Coherence
This captures the concept of “retrospective coherence”. The current state of affairs always makes sense when we look backwards. But it is only one of many patterns that could have emerged, any one of which would have been equally logical. While what we are today is the result of many actions and decisions taken along the way, retrospective coherence says that even if we were to start again, and take the same actions and make the same decisions, there is no certainty that we would end up in the same situation.
Undoubtedly, there is a fascinating “what if” question arising from the drama of the Arab Spring. What if Mohamed Bouazizi had not set himself on fire? Or what if he had survived the self-immolation? Would there then have been an Arab Spring?
The fact of the matter is that we cannot really answer such “what if” questions. Retrospective coherence tells us that hindsight does not necessarily translate into foresight. Simply because we can provide an explanation for why the current state of affairs has arisen does not mean that we are in a position to forecast the next political drama or catastrophe. Instead, they always seem to be lurking somewhere, hidden from view, just over the horizon, to surprise us when we least expect it.
This propensity to agonise over and analyse surprising and shocking events such as the Arab Spring satisfies the emotional need for answers to questions like “what if” and “why”. But such illumination will not necessarily help us anticipate or avoid the next strategic shock. The future is neither inevitable nor immutable. Applying the lessons of history is not enough to guide us down the right path into the future. Indeed, given what I said earlier about retrospective coherence, it is doubtful whether a single right path even exists.
Complexity
Retrospective coherence arises because of “complexity”.
“Complex” is not the same as “complicated”. An engineering system is merely complicated. It could be a missile or an aeroplane or a telecommunications satellite. Its inner workings may be difficult for a layman to understand. But it is designed to perform certain pre-determined functions that are repeatable, in stable patterns. It embodies the Newtonian characteristics of predictable cause and effect.
In contrast, a complex system will not necessarily behave in a repeatable and pre-determined manner. Cities are complex systems, as are human societies. The earth’s ecology is also a complex system. Political systems are complex. Countries are complex. The world as a whole is complex and unordered. There are many overlapping definitions of complexity, but many of them agree that complex systems are characterised by “emergent” outcomes that are not always predictable ex ante.
Black Swans
Black swans, as described by Nassim Nicholas Taleb, are rare, hard-to-predict events with a large impact. Black swans result, at least in part, from complexity.
Connections and interactions within a complex system are extremely difficult to detect, inexplicable without hindsight, and emergent. The agents are countless. In a complex system, we cannot assume that cause and effect are linked such that the output can be determined from the input, in which one step leads predictably to the next. The problem is that we do often make such an assumption, and then we are surprised, even shocked when things do not unfold as planned.
Black swans cannot be ignored. Although they happen infrequently, black swan events develop very fast, catching governments, societies and nations unprepared and severely challenged to find a rapid and cohesive response.
Complexity and the Dismal Science
The last decade and a half has seen three economic black swans – the Asian Financial Crisis of 1997/98, the global economic and financial crisis of 2008/09, and the on-going Eurozone crisis. Economics, the dismal science, has become even more dismal because it appears to have failed to anticipate these black swans.
The economist Paul Ormerod explains the problem. He wrote that “in orthodox economic theory, the agents involved in any particular market … are presumed to be able to both gather and process substantial amounts of information efficiently in order to form expectations on the likely costs and benefits associated with different courses of action, and to respond to incentives and disincentives in an appropriate manner. … The one thing these hypothetical individuals do not do … is to allow their behaviour to be influenced directly by the behaviour of others … and their tastes and preferences are assumed to be fixed, regardless of how others behave.”
What Ormerod is saying is that traditional economics does not take into sufficient account the complexity of the real world. In the real world, taking terminology from complexity science, agents – people – are not independent actors. They are interdependent – interacting and influencing one another in complex and emergent ways.
Complexity and Cumulative Effects
The Tohoku earthquake of 2011 surprised us. But why were we surprised? There are always earthquakes in Japan, one of the most seismically active regions in the world. Nor did the tsunami in itself surprise anyone. After all, tsunami is a Japanese word.
Part of the reason is that although the risk of an earthquake is known, it is very difficult to assess when it is going to occur. The geophysicist and earthquake expert Robert Geller wrote in 1997 that, “earthquake research has been conducted for over 100 years with no obvious successes. Claims of breakthroughs have failed to withstand scrutiny. Extensive searches have failed to find reliable precursors … reliable issuing of alarms of imminent large earthquakes appear to be effectively impossible.” Predicting tsunamis is just as difficult an exercise.
The chain of events, beginning with the earthquake, followed by the tsunami, which then damaged the Fukushima nuclear power plant causing a meltdown and radiation leakage, was the result of complex interconnectivities, combined in this case with significant human failures including outright negligence and what Margaret Heffernan called “wilful blindness”. It was therefore highly unpredictable.
The reality is that it is extremely difficult to estimate the cumulative effects of such complex events. It makes preparing for unforeseen situations an exercise fraught with difficulty. It also adds to the challenges of governments operating in complex situations.
Wicked Problems
Unfortunately, complexity not only generates black swans, but also gives rise to what the political scientist Horst Rittel called “wicked problems”. Wicked problems have no immediate or obvious solutions. They are large and intractable issues. They have causes and influencing factors that are not easily determined ex ante. They are highly complex problems because they contain many agents interacting with one another in sometimes mystifying ways. They have many stakeholders who not only have different perspectives on the wicked problem, but who also do not necessarily share the same goals.
Tackling one part of a wicked problem is more likely than not going to lead to new issues in other parts. Satisfying one stakeholder could well make the rest unhappy. A key challenge for governments is to move the many stakeholders towards a broad alignment of perspectives and goals. But this requires patience and a lot of skill at stakeholder engagement and consensus building.
Pandemics are an example of a wicked problem at a global level. So are aging populations in the developed world. Sustainable economic development, which is not unconnected to the triangular problem of food, water and energy security, is an enormous wicked problem. The 2011 London riots were a wicked problem because of their spontaneous and self-organising nature. Needless to say, the Eurozone crisis is a wicked problem. The list goes on without end.
In our increasingly inter-connected and globalised world, such wicked problems do not manifest in a singular fashion. Their impact, like that of the Arab Spring, can and will be felt around the world, in many forms and in many dimensions – political, economic, social and others.
The Rise of Complexity
Stephen Hawking said, “The twenty-first century will be the century of complexity”. The huge leaps forward in technology over the last half century – in computing, nicely encapsulated in Moore’s Law, and in telecommunications and the internet – combined with innovations in transportation such as the shipping container and commercial jet aircraft, have catalysed globalisation and led to vastly increased trade as well as movement of people around the world.
But the resulting connections and feedback loops have in turn greatly increased complexity at the global level. There is every reason to believe that globalisation will continue unabated, and with it, complexity will grow.
Complexity and Governments
The growing complexity of the world is something that governments should not ignore. The rise of complexity will generate more uncertainty, increase the frequency of black swans and other strategic surprises, and create more wicked problems. In other words, complexity will cause big headaches for governments.
Those governments that learn to manage complexity, and how to govern effectively in complex operating environments, will gain a strategic competitive advantage. While they cannot avoid black swans altogether, they will be in a better position to subdue the impact of strategic surprise and reduce uncertainty. They will also be better placed to exploit opportunities ahead of the rest. Professor Kees van der Heijden, the Dutch scenario planner said, “There are winners because there is uncertainty. Without uncertainty there can be no winners. Instead of seeing uncertainty as a problem, we should start viewing it as the basic source of our future success.”
But governments usually ignore the complexity of their operating environment. They will deal with wicked problems as if they are amenable to simple and deterministic, even linear, policy prescriptions.
The temptation to take this approach is understandable. It seems intellectually easier, requires fewer resources, and may actually lead to positive outcomes – but only in the short run. At the risk of generalisation, governments tend to focus on immediate problems. They would rather defer expenditure on something that may or may not happen.
This tendency to place less weight on future risks and contingencies, and more on present costs and benefits, is a common cognitive bias called hyperbolic discounting. Many, if not all, governments indulge in it.
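The bias can be made concrete with a little arithmetic. The sketch below compares a standard exponential discount factor with a hyperbolic one; the parameter values are illustrative assumptions, not estimates. The hyperbolic curve falls steeply over the near term and then flattens, which is why a distant risk feels almost costless to defer while a present cost looms large.

```python
# Illustrative comparison of exponential vs hyperbolic discounting.
# The parameter values (delta, k) are arbitrary, chosen only to show the shapes.

def exponential_discount(t, delta=0.95):
    """Standard exponential discount factor: delta ** t."""
    return delta ** t

def hyperbolic_discount(t, k=0.25):
    """Hyperbolic discount factor: 1 / (1 + k * t)."""
    return 1.0 / (1.0 + k * t)

# The hyperbolic discounter devalues the near future much faster:
for t in (0, 1, 5, 20):
    print(f"t={t:>2}: exponential={exponential_discount(t):.3f}, "
          f"hyperbolic={hyperbolic_discount(t):.3f}")
```

At t=1 the hyperbolic factor has already dropped to 0.8 against the exponential 0.95 – next year's flood defences are heavily discounted against this year's budget.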
One example of hyperbolic discounting at work is climate change. Governments understand the theoretical need to consider the effects of global warming on future generations, but tend to discount those effects and place greater emphasis on the current costs of mitigation and adaptation – leading to policies that are suboptimal if one adopts a “long view”.
Governance in Complexity
So what can governments do to improve the way they manage complexity, and at the same time mitigate the effects of the various cognitive biases that afflict them?
Clearly, changes must be made to the way governments organise themselves. Their toolbox must be enlarged.
Bounded Rationality
In a hierarchy, the leader at the top receives all the information and makes the decisions. But, under stress, such as during a black swan event, hierarchies can be unresponsive – even dangerously dysfunctional – because there are decision-making bottlenecks at the top. As hierarchies, governments are at risk of paralysis when confronted by a black swan or a strategic shock. The world that governments operate in today is too complex and too fast changing for the people at the top to have the full expertise and all the answers to call all the shots. This is because of what Nobel economist Herbert Simon called bounded rationality.
In decision-making, the rationality of an individual is constrained by the information that he has and the finite time he has to make a decision. Our limited cognitive ability to access and process information further circumscribes our rationality. This means that the decision-maker cannot possibly make a rational and optimal choice. Instead he will very often choose a course of action that is somewhat acceptable, but not optimal.
Bounded rationality is particularly salient when dealing with black swans and wicked problems. The decision-maker at the top is either surprised and all his cognitive synapses saturated, or he lacks sufficient bandwidth to comprehend the full scope of the problem.
Whole-of-Government
In such situations, the natural approach is to break a problem down into smaller parts, and then leave it to each agency to make its own decentralised and bounded decisions. An example of this is the free market, in which individuals making their own decisions produce better outcomes for the system as a whole than a centrally planned economy does.
Another example is the British Empire, clearly a complex system. Whitehall ran a sprawling global empire by leaving it to officials in the far-flung colonies to do essentially as they pleased. There was really no central colonial policy. Instead they entrusted the empire to a small group of administrators selected for shared common values of class and education, and a strong belief that Britain had a civilising role to play. It was undoubtedly messy, but I suspect that the strength of the British imperial system in those days of empire was a fault-tolerant – or safe-fail – attitude, which I shall talk about later on. It was not fault-free, nor was it expected to be. But the system tolerated and worked around faults. All this before the concept of bounded rationality was understood.
Another approach is to adapt to operating within the complex environment, by taking an agent’s perspective to detect problems and to identify strategic opportunity. This is the whole-of-government approach. People – the agents – from different organisations, from within and outside government, are brought together and pool their knowledge in order to discover potential solutions to wicked problems. Using groups within and outside government, it effectively multiplies the system’s collective cognitive capabilities and mental processing power. It harnesses the capabilities of the many to overcome the limitations of the few.
Of course, cooperative mechanisms need to be set up to enable the sharing of information and to strengthen collective action. But the usual safeguards against dominance and groupthink must be in place. A good example of this was demonstrated during the Cuban missile crisis, when President John Kennedy set up an executive committee – ExComm – in which his brother, Attorney General Robert Kennedy, played a leading role, to deal with the barrage of inputs, views and demands emanating from a multitude of sources on both sides of the confrontation. It was an antidote extracted from the lessons of the earlier Bay of Pigs disaster, which was a prime example of the dangers of bounded rationality.
In fact, whole-of-government is intuitively the right way to go, because insight and good ideas are not the monopoly of a single decision-maker, agency or government acting alone. The whole-of-government approach looks messy and antithetical to the tidy organisation of a conventional hierarchy, because it injects complexity into the policy process.
In an insightful commentary, Yaneer Bar-Yam, a systems scientist, wrote that “the most basic issue for organisational success is correctly matching the system’s complexity to its environment.” Whole-of-government is not meant to simplify, but to ensure that the complexity of the government matches the complexity of the operating environment.
But while the whole-of-government approach may be an imperative, it is not easily achieved. Governments, like any large hierarchical organisation, tend to optimise at the departmental level rather than at the whole-of-government level.
Therefore, vertical silos need to be broken down, so that information can flow horizontally to reach other departments. Agents and agencies should not just have access to information that they “need to know” but should know enough so that each component of the larger organisation can respond to issues and challenges as they arise.
An environment that encourages the spontaneous horizontal flow of information will enlarge and enrich the worldview of all departments. This in turn improves the chances that connections hidden by complexity, as well as emergent challenges and opportunities, are discovered early. Breaking down these silos is a necessary but Sisyphean effort.
Managing Complexity
In a complex operating environment, governments should aim to reduce the frequency of black swans and strategic shocks. An orientation towards thinking about the future in a systematic way is the right approach. Some of us call this process foresight, or futures thinking. But it is most certainly not about predicting the future, which is impossible.
Instead, foresight methodologies seek to gather data and make sense of it so that people can think in different and new ways about the future. The data might be analysed using qualitative or quantitative techniques, or both. Information emerging from this analysis and interpretation allows an organisation to better understand its past and present, including its latent assumptions and biases in perceiving the world. Understanding these provides the basis for using foresight methods to explore potential futures. This is the fundamental reason for historical analyses. They provide a way of making sense of an uncertain and complex future environment.
The practice of foresight in government is really about identifying the factors that will shape the future. This is so that policy makers can devise strategies and formulate policies to maintain positive trajectories and shift negative ones in a more positive direction. The goal is to shape the future, not to predict what it will be.
Scenario Planning
Governments often have to make big decisions, and develop plans and policies, under conditions of incomplete information and uncertain outcomes. It is not possible to prepare exhaustively for every contingency. Instead, the approach should be to reduce uncertainty where possible.
The “search and discover” approach is an important option in this regard. The military calls this approach the OODA loop (observe, orientate, decide, act), which is a recurring cycle of decision-making that acknowledges and exploits the uncertainty and complexity of the battlefield.
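As a thumbnail, the cycle can be sketched as a recurring loop in code. This is only an illustration of the loop's shape – the observe, orient, decide and act functions here are placeholder assumptions, not a model of any real military or government process.

```python
# A minimal sketch of the OODA loop as a recurring decision cycle.
# The four stage functions are deliberately simplistic placeholders.

def observe(signals):
    """Gather the next raw signal from the environment."""
    return signals.pop(0)

def orient(signal, mental_model):
    """Interpret the signal against the current mental model."""
    return {"signal": signal, "model": mental_model}

def decide(assessment):
    """Choose an action given the oriented assessment."""
    return f"respond to {assessment['signal']}"

def act(action, log):
    """Execute the action; its effects feed the next cycle's observations."""
    log.append(action)

signals = ["weak signal", "emerging trend", "sudden shock"]
log, mental_model = [], "baseline assumptions"

while signals:  # the cycle repeats for as long as the situation evolves
    act(decide(orient(observe(signals), mental_model)), log)

print(log)  # one action per completed cycle
```

The essential point is the feedback: each pass through the loop acts on incomplete information and then observes the result, rather than waiting for a complete picture that never arrives.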
Scenario planning is one way of carrying out the OODA loop, in the sense that it projects futures based on our understanding of the operating environment today. Used intelligently, it can be a very important tool for planning, and can help overcome cognitive biases by challenging our mental models. Scenarios are one of the most popular and persuasive methods used to provide a plausible description of what might happen in the future. They assist in the selection of strategies through the identification of possible futures. Scenarios make people aware of the problems, uncertainties, challenges and opportunities that such an environment would present, opening up their imagination and initiating learning processes.
For the past two decades, the Singapore government has been using scenario planning to consider national scenarios. Our experience is that scenario planning better informs policies, plans and even budgets about the challenges and opportunities that could arise in the future.
But this approach is insufficient in a complex unordered environment, because scenario planning cannot adequately account for hidden connections and interactions. As traditionally practiced, scenario planning is limited by its focus on what is logical, such as using a systems map to plot the inter-relationships among driving forces. Scenario planning tends to undervalue the impact of the irrational on future outcomes.
In this regard, methods that focus on the non-rational drivers of change should also be part of the governance toolbox. They include back-casting (future backwards helps us to understand how shifts in values and principles can drive change), policy-gaming (which is akin to military war-gaming, but applied to the civilian policy context to condition policy-makers to complex and uncertain situations, and to help them confront their cognitive biases), and horizon scanning (which is the process of detecting emerging trends, threats and opportunities).
Applying new concepts and tools to complement scenario planning for strategic anticipation will be vital. I will now cover a few areas that may be further explored to augment foresight work.
Sentiment Analysis
First, sentiment analysis – a broad area of natural language processing, computational linguistics and text mining that aims to determine the attitude of a speaker or writer towards a particular topic – is growing in relevance with the emergence of new media. Sentiment analysis can potentially help to improve public service and government policies by monitoring public opinion from behind the scenes, helping policy-makers gauge the public’s pulse. In Singapore, we are experimenting with sentiment analysis as part of the effort to “sense-make” and characterise online sentiments in social media. Through such experiments, we are learning how social media data can help us better understand issues of concern to people, and the effect of policies on them.
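As an illustration of the basic idea, here is a deliberately naive lexicon-based scorer. The word lists and scoring rule are invented for this sketch; real systems, including any government deployment, would use far richer language models than simple word counting.

```python
# A toy lexicon-based sentiment scorer - a simplified sketch of the idea,
# not how any production system works. The word lists are illustrative.

POSITIVE = {"good", "great", "support", "improve", "helpful"}
NEGATIVE = {"bad", "angry", "unfair", "worse", "frustrating"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: net fraction of positive vs negative words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("The new policy is great and very helpful"))  # positive
print(sentiment_score("This is unfair and frustrating"))            # negative
```

Aggregated over millions of posts, even crude scores like these can surface shifts in public mood – which is precisely the kind of weak signal horizon scanning looks for.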
Big Data
According to IBM, we create 2.5 quintillion bytes of data every day – so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts on social media sites, digital pictures and videos, purchase transaction records, and mobile phone GPS signals, just to name a few. This data, nowadays collectively called big data, refers to large and complex data sets, which are often multidimensional, longitudinal, and digitally generated. With enhanced computational capacity, it is possible to distil vast amounts of data in new ways to find meaningful correlations to emerging patterns and trends in our environment.
One example is LIVE Singapore, a project that is part of the Future Urban Mobility research initiative. It provides people with access to real-time information about their city through dynamic visualisations. Hopefully this research will eventually enable planners and policy-makers, and even ordinary people to take more efficient decisions that are more in tune with the environment.
RAHS
Some of you might be aware of Singapore’s Risk Assessment and Horizon Scanning programme, or RAHS. It is a major initiative to strengthen Singapore’s horizon scanning capability, through the deployment of a computer-based suite of tools, particularly in searching for weak signals that could evolve into sudden shocks.
Technological advances like sentiment analysis and big data hold exciting possibilities for the evolution of RAHS. For example, enterprise search engines are getting better; they can ingest a larger variety of sources, and have better extraction tools and more sophisticated visualisations. They function as useful sieves through which large amounts of data can be filtered.
RAHS is now moving beyond conventional horizon scanning into sentiment analysis and narrative capture – which augments traditional survey techniques – aided by big data tools for synthesis and analysis. But I should add that RAHS is based on a belief that technology ultimately serves the analyst in creating added value. It cannot and will never displace human analysis.
Experimentation and Risk Management
Conventional efforts to model complex systems, such as the Club of Rome’s famous model of economic and population growth, have not proven very useful, and often get it wrong. Unlike in a complicated system, the components of a complex system interact in ways that defy a deterministic, linear analysis. In complex operating environments, exploration and experimentation are more valuable than predictions of analytical models.
So rather than plan exhaustively for every contingency before we move, we must be prepared to experiment, even if we cannot be entirely certain of the outcome. The approach is to probe, sense patterns, and to act, even in the absence of complete information. We must learn to operate not in a “fail-safe” mode, but instead to operate in a “safe-fail” mode. Pilot programmes, prototypes and “beta versions” should be the norm in dealing with wicked problems. If they succeed, then they can be expanded. If they fail, then the damage is limited.
Governments must also be able to manage the risk that is a natural result of operating in complexity. Big decisions will have to be made under conditions of incomplete information and uncertain outcomes. There will always be threats to national outcomes, policies and plans, because no amount of analysis and forward planning will eliminate the volatility and uncertainty that exists in a complex world. These threats constitute strategic risk.
In Singapore, the government is developing a unique Whole-of-Government Integrated Risk Management (WOG-IRM) framework – a governance chain that begins with risk identification and assessment at the strategic level, to monitoring of risk indicators, and finally to resource mobilisation and behavioural changes to prepare for each anticipated risk. WOG-IRM also plays an imperfect but important role in discovering the inter-connections among risk factors. This in turn helps to reduce some of the complexity.
Conclusion
The rise of complexity in the world today throws up enormous challenges for governments around the world. Black swans will confront them, and they will have to deal with wicked problems. Foresight will help governments to better deal with complexity and its challenges. But fundamental changes are needed to the organisation of government, and the toolbox for governance needs to be overhauled. A new mindset – whole-of-government – should be nurtured. The future of governance lies in such systems-level coordination, to facilitate better foresight and futures thinking.
But I should conclude by recounting Winston Churchill’s astute advice on the essential quality of a good government leader: “It is the ability to foretell what is going to happen tomorrow, next week, next month, and next year. And to have the ability afterwards to explain why it didn’t happen.”
Thank you.