One way to identify the learning pathways referred to on our sense-making page is to see them as movements, or learning journeys, on the modified Cynefin diagram below.
For any complex problem there are many stakeholders, many points of view and much uncertainty. What do we do? How do we act? We see four requirements.
First, we must recognise that we are all in this together – an easy thing to say, a very difficult thing to do. When we are faced with genuinely unexpected situations we should expect to collaborate, share ideas, pool resources and encourage creative thinking. We can act only on the basis of our collective understanding of the problem – there are many unknowns and potential surprises. If we use the figure above to help us identify a learning journey, then we want to move our collective understanding towards the lower right. Note that we are referring only to changing our understanding. We are not yet changing the reality – that will change, as it will, in the time we take to make a decision, and subsequently through the decisions and actions that we eventually take.
The second requirement is to recognise the many different stakeholders and to respect their many different points of view and models of understanding. Collaboration can propel us on a learning journey into unknown territory. Nevertheless we all need to agree that, through due diligence and a duty of care to each other, in a democratic society we want to decide and act by choosing the best models we can find for the problem. As the problem is almost certainly multidisciplinary, some of the models will be in harmony but some will conflict. We have to realise collectively that the only way to come to a decision and act is to attempt to a) agree a common purpose; b) agree shared values; and c) negotiate the ‘best’ decisions and actions.
Thirdly, progress on our learning journey will require us to use all of the learning power at our disposal. This means, above all, keeping our minds open to new ideas, interpreting the dependability of evidence positively, and making creative decisions. As we have said, we should not necessarily expect to solve the situation but rather to move it along towards our collective purpose.
Fourth, when things go wrong our first reaction should not be to look for someone to blame. Of course, if someone is to blame then we should root out incompetence, negligence and corruption. The point here is that we should also recognise that sometimes no single person is at fault. People can get trapped inside their specialist departments, social groups, teams or pockets of knowledge – their silos.
Guarding against this kind of systemic failure implies a thorough collection and testing of evidence, a sound assessment of the uncertainty, and the exercise of practical foresight and wisdom. An important point is that we have to allow people to admit uncertainties and make mistakes when there are genuine unknowns. The need in these circumstances is to recognise the problem early and to act promptly to rectify it. If people are incentivised to conceal their mistakes then the situation will deteriorate. If we act on a decision and it turns out to be ineffective or just plain wrong then there will be consequences. A crucial requirement, however, is trust – and trust has to be earned, not least through honest disagreement.
What follows are five typical strategies, among the infinite number of possible ways of moving down and to the right on the figure at the top of this page. We should remember that every model must be tested appropriately to maximise the dependability of the evidence that the model is fit for purpose. If the model proves to be insufficient (as it may well be) then the consequential learning (unknowable in advance) will be invaluable. That learning will be of new knowledge, new skills or capabilities.
- Complex to complicated – we move by assuming that our complex issues have features of being complicated. We look for layers of generality and abstraction, and then identify the connections in and between those layers and the processes that describe the interactions. In doing so we have to assess our degree of confidence that our models of these interacting processes are sufficiently dependable for our purposes. This level of confidence in our modelling is a key distinction between complexity and complicatedness. An example is the traditional approach to the construction of a bridge. We organise the problem into layers of processes (foundations, superstructure, design, construction, maintenance, decommissioning etc.) and we are confident that we can model and manage each of those processes dependably. This transformation from complex to complicated is an essential step for the efficient management of construction. Another less clearly dependable and more contentious example is evolutionary theory, where layers of organisms (atoms, molecules, cells, tissues, organs, sub-systems, living entities, families etc.) are modelled as though they have no designed or predetermined purpose – those that change and become better aligned to their environment are the ones that survive and hence pass their genes to the next generation.
- Complicated to tame – we move by simplifying the number of layers of our complicated problem and focusing on one (or a limited few) of them. We find simple models for the processes in each layer, with precise and dependable information. An example of a tame model is the way we model the behaviour of a concrete beam in a building using linear elastic theory, when we know that the stress–strain relationship of concrete is non-linear. Our beam is then modelled as part of a more complicated structure of beams and columns, which in turn is part of a whole building with many other requirements (such as costs, building services, architectural spaces etc.). In other words, as we move our thinking upwards from tame to complicated, we become aware of important emergent properties in higher layers, arising from interactions between processes in lower layers, which may not have been part of our tame modelling. For example, unmanned operations, such as fly-by-wire or some urban railways, require very high levels of reliability. This means that all the relevant processes have to be tamed to become sufficiently dependable.
- Complex to tame – we move by focusing on only one aspect of a problem and modelling it sufficiently dependably that we can be confident in our conclusions. We offer that solution to others in our team, who consider their aspects of the problem, often quite independently. Without sufficient awareness of the need to collaborate, this strategy runs the risk of the silo effect.
- Complex to contingent – we move by recognising that our problem depends on context. We have to use experience and judgement to make decisions. We may have rules of thumb and partial models, but we are acutely aware that they may not be directly applicable to our situation and may address only part of the totality of the issues involved. An example is the kind of decision making that company boards face when setting the strategic aims of their organisations. The partial models that such decision makers may or may not rely on may be statistical (such as market analyses, or forecasts of material resources such as the amount of oil in a prospective field) or non-linear, as in deterministic chaos theory. An example of the latter is weather forecasting, where the very large finite element models of the atmosphere exhibit chaotic behaviour, so that simulations with very slightly different initial conditions produce quite different outcomes. Such models are difficult to interpret for our particular situation, and experience of similar situations is invaluable. Nevertheless, when a weather forecaster identifies a certain pattern in the Atlantic jet stream he or she can predict a period of settled weather, but may not be able to say whether it will rain at midday tomorrow.
- Contingent to tame – we move by relying on rules and models that vary from quite complicated to very simple. For example, the 2,779 pages of the Lloyds Register Rules and Regulations for the Classification of Ships are so complicated that only expert naval architects can use them, whereas a rule such as where to drill a hole near the edge of a steel plate is so simple that anyone can follow it. Most of the traditional models of classical physics (as interpreted into engineering science) are tame, but require varying degrees of expertise to use. Of course, practitioners are aware of this and use their judgement and experience to interpret the models in practical decision making.
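The sensitivity to initial conditions mentioned in the weather-forecasting example above can be illustrated with a minimal sketch. This uses the logistic map, one of the simplest systems exhibiting deterministic chaos, rather than any real atmospheric model; the starting values are chosen purely for illustration. Two trajectories that begin a billionth apart soon differ completely, which is why point forecasts fail even when the model is fully deterministic.

```python
# Deterministic chaos in the logistic map x -> r*x*(1 - x) with r = 4.
# Two trajectories starting 1e-9 apart diverge to order-one differences
# within a few dozen iterations, even though the rule is fully deterministic.

def logistic_trajectory(x0, r=4.0, steps=100):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000)
b = logistic_trajectory(0.300000001)  # differs only in the 9th decimal place

divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"difference at step 0:    {divergence[0]:.1e}")
print(f"largest difference seen: {max(divergence):.2f}")
```

Like the forecaster reading the jet stream, we can still make useful statistical statements about such a system (its long-run behaviour is well characterised) even when predicting a particular step far ahead is hopeless.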
Three practical examples that are not usually described as learning journeys, but would benefit from being envisaged in this way, are the observational method pioneered in geotechnical engineering [Peck 1969], disaster management [UNISDR 2015] and the many variations of common auditing procedures.
The observational method was developed in response to the massive uncertainty in our understanding of the behaviour of engineering soils and of local and regional geology and hydrogeology. It is a specific example of systems thinking, with three stages of contingency planning. First, as in geotechnical engineering generally, it considers the properties and performance of the ground at both macro and micro scales, set within the local and regional geology and hydrogeology. Second, it examines potential interactions between the parts of the system, such as soil–structure interaction. Third, it actively uses feedback from performance to reflect on and learn more about the system, and then responds accordingly.
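The feedback stage of the observational method can be thought of as a monitor–compare–respond loop: observed behaviour is checked against the predicted range, and a pre-planned contingency is triggered when observations fall outside it. The sketch below is purely illustrative – the settlement thresholds, readings and responses are invented for the example, not taken from any real project or from Peck's formulation.

```python
# Illustrative sketch of the observational method's feedback loop.
# All numbers are hypothetical: the predicted settlement range and the
# monitoring readings are invented for the example, not real site data.

PREDICTED_SETTLEMENT_MM = (5.0, 20.0)  # design prediction: acceptable range

def review_reading(settlement_mm, predicted=PREDICTED_SETTLEMENT_MM):
    """Compare an observed settlement against the predicted range and
    return the pre-planned response for that outcome."""
    low, high = predicted
    if settlement_mm > high:
        return "trigger contingency: slow excavation, add support, re-model"
    if settlement_mm < low:
        return "review model: ground stiffer than assumed, update parameters"
    return "within prediction: continue and keep monitoring"

# A hypothetical sequence of monitoring readings as the work proceeds.
for reading in [6.2, 11.8, 23.5]:
    print(f"{reading:5.1f} mm -> {review_reading(reading)}")
```

The essential point is that both outcomes outside the predicted range lead to learning: an exceedance triggers the prepared contingency, while better-than-expected performance prompts a model update.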
Preparedness for and response to disasters, whether natural or man-made, is another practical example of the need to learn our way through a problem. By definition disasters are unique, and so our collective response to them requires us to be adaptable and flexible. The stages of reducing hazards (mitigation and preparedness), assuring prompt assistance to victims (responsiveness) and achieving rapid restoration (recovery) are not easy. Grundy has outlined six useful steps or stages in the learning journey: a) identify the hazards and risks; b) identify weaknesses; c) retrofit for resilience against all hazards; d) plan emergency response procedures; e) educate the community to understand and implement the procedures; f) rehearse emergency responses regularly. In each of these phases, individually and collectively, we need to go through single, double or triple learning loops – making changes to people and systems and trying again.
Auditing is normally thought of as an official inspection of the financial accounts of an organization by an independent body. The purpose is to ascertain that the statements give a true, fair and properly maintained view of the financial activities of that organization. However, well-run companies use audit procedures to learn as much as they can about possible improvements. First, audits can be extended to include performance, operations, risks, energy flows and conformance to quality and safety standards. Second, the six stages of auditing – engagement, planning, testing, analyzing, reporting and summarizing – can be extended to include learning points.