WATER SECURITY DEPENDENCY MODELLING
The shared goal for both Managers and Regulators is to
“Control (Maintain) the Quantity, Quality and Sustainability of Supply” – (QQSOS)
Where to Manage means being able to monitor what is actually happening and
- to intervene responsively, or strategically,
- being aware in advance of the likely effects of intervening.
And to Regulate means to ensure that the statutory requirements and objectives are
- in place, and
- enforced.
Different catchments will have different arrangements to facilitate this, depending on size, geography, regulatory regimes, number and variety of stakeholders, cultures, etc.
– COMPLICATED AS WELL AS UNIQUE?
Although having essentially the same components, the (total) catchment systems will be very different in practice and need to be modelled carefully to reflect this.
Traditionally this has been done by producing a detailed (and very variable) hydrological model of the catchment, which gives an indication of the flows expected under different conditions.
But these systems are always more complicated in practice and more than purely hydrological influences are in play.
To enable us to include these additional features, a dependency modelling front end will bring in both non-physical and external influences as identifiable and quantifiable factors in the catchment management requirements.
Another issue is the accessibility, ease of use and accuracy of these models. You may have a very impressive spreadsheet, but how often is it used in real situations and in real time?
The minimum objective of this project is therefore to produce a simpler, but potentially more powerful, (Neural Network) flow model that can more easily be compared and calibrated against actual (observed) behaviour, and that allows more rapid and accurate prediction of the effects of changes.
As both sets (actual and model) of data will be stored in the cloud, this will allow continuous comparison of predicted and observed behaviour to allow potential issues to be flagged up (System rather than individual component Alarms).
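As an illustrative sketch of such a system-level alarm (all names, numbers and the tolerance are hypothetical), the continuous predicted-vs-observed comparison might look like:

```python
# Illustrative sketch: flag indices where observed flow deviates from the
# model prediction by more than a fractional tolerance (a "System Alarm").
# The predicted series stands in for Neural Network model output; all
# numbers and names are hypothetical.

def flag_anomalies(predicted, observed, tolerance=0.2):
    """Return indices where |observed - predicted| / predicted > tolerance."""
    alarms = []
    for i, (p, o) in enumerate(zip(predicted, observed)):
        if p > 0 and abs(o - p) / p > tolerance:
            alarms.append(i)
    return alarms

# Hourly flows (m3/s): cloud-stored observations vs model predictions.
predicted = [10.0, 12.0, 11.5, 9.0]
observed = [10.5, 12.1, 15.0, 8.8]
print(flag_anomalies(predicted, observed))  # index 2 deviates by ~30%
```

The point of the comparison running continuously against the cloud store is that the alarm is raised on system behaviour, not on any individual component.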
The objective could be phrased as “to maintain statutory flows (Quantity and Quality), while satisfying the demands of the Reserves, Soil Moisture, Ecosystem, Abstractors, Consumers and Strategic Goals (e.g. the possibility of exporting to other catchments)”.
We can then further model this extended system as a Bayesian Belief Network of the effects of these competing demands, which must be balanced to achieve the overall objective (in other words, our standard dependency model).
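A toy sketch of such a belief network, with purely illustrative structure and probabilities, shows how the states of competing demands combine into the probability that the overall objective is met:

```python
# Toy Bayesian Belief Network: competing demands (nodes with prior
# probabilities of being satisfied) feed the overall objective through a
# simple conditional probability table. All numbers are illustrative.
from itertools import product

def p_objective_met(priors, p_given_all=0.95, p_otherwise=0.3):
    """Marginalize over all demand states: the objective is very likely
    met only when every demand is satisfied."""
    names = list(priors)
    total = 0.0
    for states in product([True, False], repeat=len(names)):
        p_state = 1.0
        for name, satisfied in zip(names, states):
            p_state *= priors[name] if satisfied else 1 - priors[name]
        total += p_state * (p_given_all if all(states) else p_otherwise)
    return total

priors = {"reserves": 0.9, "soil_moisture": 0.8, "ecosystem": 0.85}
print(round(p_objective_met(priors), 3))  # -> 0.698
```

A real network would have conditional tables per demand rather than a single table at the objective; the sketch only shows the balancing idea.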
These external dependency entities and their status will then be interfaced/ coupled to the real time data from the catchment that will be accessed and stored in our private “cloud”.
There will also be an ability to take the same set of data from the outputs of the Neural Network flow model directly into the dependency model so that any changes made to key parameters are transmitted not just as dependency status but also as quantitative predictions such as critical flows at key locations.
The pilot will take the simpler (!) of the two catchments (the Afan, with one dominant Manager/Abstractor/Consumer – Tata Steel) and develop a pilot application. This will mean designing and interfacing the different components of the real-time catchment management system, developing the visual and interactive aspects, and testing the concepts for real practicability with a real consumer to whom QQSOS (quality, quantity and sustainability of supply) is absolutely critical. This will require:-
- Actual data from rain gauges, levels, flows, quality, etc. in real time (cloud)
- Historical and aerial survey data to “teach” the Neural Network model
- A full dependency model of the system
- Interfaces between the models to allow real-time monitoring (alarms) and intervention effect prediction and observation/comparison/calibration
The lessons learned from the pilot can be incorporated in the design and implementation of the more complex Usk Catchment model.
We will develop and start to implement this approach in the next quarter, as this strategic overview of the whole “system” is crucial to the coordination and effectiveness of the work packages.
SCHEMATIC REAL TIME MODELLING FRAMEWORK
[Schematic: real-time monitoring (“Monitor – What is?”) feeding an Interactive Visual Display]
TAXONOMY – WHAT ARE WE TALKING ABOUT?
Systems in general are created for a purpose; they have an Objective in mind. Making this happen could be called the Design Intent. Achieving this goal depends on the successful operation of a number of critical FUNCTIONS.
A Function (as defined here) can be carried out by mechanical/electrical components, human beings, or organizations. It is the delivery of the successful Outcome that defines the operation of the Function.
To carry out its role and produce a successful OUTPUT, a function needs to have its own critical needs satisfied. The FRAM methodology groups these ASPECTS into six types of dependencies.
- Input (I): that which the function processes or transforms, or that which starts the function,
- Precondition (P): conditions that must exist before a function can be executed,
- Resource (R): that which the function needs or consumes to produce the output,
- Control (C): how the function is monitored or controlled,
- Time (T): temporal constraints affecting the function (with regard to starting time, finishing time or duration),
- Output (O): that which is the result of the function, either an entity or a state change.
The Function is thus generally represented as a hexagon with its six Aspects, although the number of aspects actually in play will vary from function to function.
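As a sketch, a Function and its six Aspects can be carried as a simple record; the field names follow the I/P/R/C/T/O taxonomy above, while the example contents are illustrative only:

```python
# Sketch: a FRAM Function as a record carrying its six Aspects (I/P/R/C/T/O).
# Field contents below are illustrative examples, not real catchment data.
from dataclasses import dataclass, field

@dataclass
class FramFunction:
    name: str
    inputs: list = field(default_factory=list)         # I: starts/feeds the function
    preconditions: list = field(default_factory=list)  # P: must hold first
    resources: list = field(default_factory=list)      # R: needed or consumed
    controls: list = field(default_factory=list)       # C: monitoring/control
    time: list = field(default_factory=list)           # T: temporal constraints
    outputs: list = field(default_factory=list)        # O: results produced

maintain_flow = FramFunction(
    name="Maintain statutory flow",
    inputs=["gauged river flow"],
    preconditions=["abstraction licence in force"],
    resources=["reservoir storage"],
    controls=["regulator flow thresholds"],
    time=["continuous"],
    outputs=["compliant downstream flow"],
)
print(maintain_flow.outputs)
```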
The Output(s) produced by the System come from the ACTIVITIES of one or more of these Functions and the interaction of their different Aspects. These may in turn depend on other functions in the system, or on external dependencies.
A successful outcome may need a sequence of Activities to occur as a defined PROCESS; component activities are hence time-sliced “maps” (INSTANTIATIONS) of system activity.
This collection of functions and their dependencies define the SYSTEM under study. Its boundaries need extend only as far as the identification (and quantification) of its critical ASPECTS and their external dependencies requires. This allows us to assemble the System Map to follow the interactions necessary for the different Activities in the Processes.
The diagram below shows a sequence of activities in a Swedish Medical Process – Birthing
This System has an Output which may also be an Aspect required by a Function in a “higher” System. It could thus be considered as a higher Function in that overarching system.
This “fractal” aspect of the approach is very appealing, as in real life almost everything seems to be “systems within systems of systems”. Our particular System can then be developed as a NODE in the bigger scheme of things, either reporting upwards or drilling down to finer detail as required.
It therefore does not matter which Function we start with in a system; they all need to be developed, and each will in turn determine which additional Functions and Aspects we need for that Activity.
Similarly, each Function, Activity and System analyzed can be retained, shared and refined to be as wide-ranging as necessary – truly enterprise-wide risk management?
The relationship of the dependencies of the Output of a function and its Aspects can in turn, usefully be structured as a conventional Dependency Model. To identify/ distinguish it as a “Structured FRAM” and a potential “cog in a wider system wheel”, we have called this quantitative FRAM a SWAN – (a System Wide Analysis Node!)
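A minimal quantitative sketch of a SWAN follows; it assumes, purely for illustration, that the leaf Aspects are independent and all required, so the Output success probability is their product (real dependency models may use other combinators):

```python
# Minimal quantitative SWAN sketch: a function's Output success probability
# computed from its leaf Aspect probabilities, assuming independent leaves
# that are all required (multiplicative combination). Values illustrative.

def output_success(aspect_probs):
    """Probability the Output is delivered, given each Aspect's probability."""
    p = 1.0
    for prob in aspect_probs.values():
        p *= prob
    return p

aspects = {"Input": 0.99, "Precondition": 0.95, "Resource": 0.9,
           "Control": 0.98, "Time": 0.97}
print(round(output_success(aspects), 3))  # -> 0.805
```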
Building and quantifying these models is best done with a HAZOP-structured team (Facilitator, Secretary, Experts, Operators – max 8). The Facilitator and Secretary will prepare the sessions by compiling design/operational data, procedures and, if necessary, interviews to establish context and supporting documentation.
The Secretary’s responsibility is to record the outcomes on a standard template report showing deviations/consequences and recommendations for follow-up, systematically in a Node/Process/Activity/Function/Aspects hierarchy. For a particular Node, the team will then:-
BUILD THE SYSTEM MAP
- Define what the system has to do – Design Intent
- Identify the Processes/ Activities needed to achieve it
- Define the sequence of activities, in the process
- Start with a simple activity and identify the functions needed
- Map these functions on to a system space (Map)
ANALYZE THE FUNCTIONS
- Pick a Function and identify and assign the Aspects required for that activity (the software will specify and uniquely label the required Aspects as a dependency model)
- Develop the dependency model and identify any external inputs needed.
- Take the next Function, Activity, and Process until complete.
- Label this “collection” of Dependency Models in one “Organization” as a “System”; and changes to any one function will propagate throughout this system automatically.
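The propagation step above can be sketched as follows; function names, leaf values and the multiplicative combination rule are all illustrative assumptions:

```python
# Sketch of automatic propagation through a "System": each function's
# success probability multiplies its leaf values (floats) and the results
# of upstream functions (referenced by name). Evaluating in topological
# order means a change to any one function flows through the whole system.

def propagate(functions, order):
    """functions: name -> list of dependencies (floats or function names).
    order: names in topological order. Returns name -> success probability."""
    probs = {}
    for name in order:
        p = 1.0
        for dep in functions[name]:
            # dep is either another function's name or a leaf probability
            p *= probs[dep] if isinstance(dep, str) else dep
        probs[name] = p
    return probs

system = {
    "gauge_data": [0.99],
    "flow_model": ["gauge_data", 0.95],
    "release_decision": ["flow_model", 0.9],
}
print(propagate(system, ["gauge_data", "flow_model", "release_decision"]))
```

Changing the leaf value under "gauge_data" and re-running shows the effect reaching "release_decision" without any manual re-wiring, which is the behaviour the "System" labelling is meant to guarantee.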
DETERMINE THE CRITICAL INTERACTIONS/ INTERDEPENDENCIES
- Systematically examine Instantiations (Time slices) of critical activities and identify Aspects which have potential variabilities (Deviations from norm) of concern. For each “Leaf” dependency a probability distribution can then be assigned.
- With the dependency models this is done by a standard “what if” analysis of identified issues. The sensitivity to variability in key aspects can be shown quantitatively.
- Record the significant Deviations, Consequences, Recommendations (Barriers?)
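The “what if” sensitivity test can be sketched as below, again assuming the simple multiplicative combination of independent leaves; the aspect names and values are hypothetical:

```python
# "What if" sensitivity sketch: degrade one leaf Aspect and report the
# resulting drop in output success probability. Names/values illustrative.

def output_success(aspect_probs):
    p = 1.0
    for v in aspect_probs.values():
        p *= v
    return p

def sensitivity(aspects, leaf, degraded_value):
    """Drop in success probability when `leaf` is degraded to `degraded_value`."""
    varied = dict(aspects, **{leaf: degraded_value})
    return output_success(aspects) - output_success(varied)

aspects = {"rain_gauge": 0.99, "telemetry": 0.97, "staff_on_call": 0.9}
print(round(sensitivity(aspects, "telemetry", 0.5), 4))  # -> 0.4188
```

Running this for each leaf in turn ranks the Aspects by how strongly their variability moves the output, which is the quantitative view of "deviations of concern".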
LOOK FOR UNEXPECTED RESONANCES
- For each system (collection of Functions/“Organization”), run a Monte Carlo analysis to test how random distributions of Aspect values might interact to cause problems (Black Swans), shown as anomalies in the output success probabilities. Record these as deviations, as above.
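A minimal Monte Carlo sketch of this resonance test follows; the distributions, nominal Aspect values and the anomaly threshold are all illustrative assumptions:

```python
# Monte Carlo resonance sketch: sample each Aspect around a nominal value,
# combine multiplicatively, and count runs where the output success
# probability falls below a threshold (candidate "Black Swan" anomalies).
# Distributions, nominal values and threshold are illustrative only.
import random

def resonance_rate(n=10000, threshold=0.5, seed=42):
    """Fraction of sampled instantiations whose output falls below threshold."""
    rng = random.Random(seed)
    anomalies = 0
    for _ in range(n):
        # Three aspects, each a clipped Gaussian around its nominal value.
        aspects = [min(1.0, max(0.0, rng.gauss(mu, 0.15)))
                   for mu in (0.95, 0.9, 0.85)]
        p_out = aspects[0] * aspects[1] * aspects[2]
        if p_out < threshold:
            anomalies += 1
    return anomalies / n

print(resonance_rate())
```

The interest is less in the rate itself than in which sampled combinations of otherwise-acceptable Aspect values jointly push the output below threshold.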
REFERENCES
- Erik Hollnagel, FRAM: The Functional Resonance Analysis Method, Ashgate Press.
- Sammy Shamoun, FRAM Analysis in a Department of Obstetrics, Munich FRAMily Meeting, 2013.