The instability of moraine-dammed proglacial lakes creates the potential for catastrophic glacial lake outburst floods (GLOFs) in high-mountain regions. In this research, we use a unique combination of numerical dam-breach and two-dimensional hydrodynamic modelling, employed within a generalised likelihood uncertainty estimation (GLUE) framework, to quantify predictive uncertainty in model outputs associated with a reconstruction of the Dig Tsho failure in Nepal. Monte Carlo analysis was used to sample the model parameter space, and morphological descriptors of the moraine breach were used to evaluate model performance. Multiple breach scenarios were produced by differing parameter ensembles associated with a range of breach initiation mechanisms, including overtopping waves and mechanical failure of the dam face. The material roughness coefficient was found to exert a dominant influence over model performance. The downstream routing of scenario-specific breach hydrographs revealed significant differences in the timing and extent of inundation. A GLUE-based methodology for constructing probabilistic maps of inundation extent, flow depth, and hazard is presented and provides a useful tool for communicating uncertainty in GLOF hazard assessment.

Glacier recession is occurring globally as a result of recent climatic change (Oerlemans, 1994; Kaser et al., 2006; Zemp et al., 2009; Bolch et al., 2011, 2012). The exposure of terminal and lateral moraine complexes is becoming increasingly commonplace as a result of glacier recession, particularly in high-mountain regions (Hambrey et al., 2008). Moraines reflect the historical maximum extent of a given glacier, and are typically composed of poorly consolidated glacial material. A latero-terminal moraine can present a physical barrier to drainage of glacial meltwater and, in such cases, result in the formation of a moraine-dammed lake (Costa and Schuster, 1988) and create the potential for a glacial lake outburst flood (GLOF) hazard (e.g. Clague and Evans, 2000; Benn et al., 2012; Westoby et al., 2014a; Worni et al., 2014).

Moraine-dammed lakes form through one of two mechanisms: recession of the
glacier terminus and ponding of water in the proglacial moraine basin (Frey
et al., 2010; Westoby et al., 2014b) or via the coalescence and expansion of
supraglacial ponds on heavily debris-covered glaciers (Reynolds, 1998, 2000;
Richardson and Reynolds, 2000; Benn et al., 2001, 2012; Thompson et al.,
2012). Following expansion, such lakes are capable of impounding volumes of
water in excess of 10

Breaching of a moraine dam can result in the generation of a GLOF (Lliboutry et al., 1977; Vuichard and Zimmerman, 1987; Clague and Evans, 2000; Kershaw et al., 2005; Harrison et al., 2006; Osti and Egashira, 2009; Worni et al., 2012, 2014; Westoby et al., 2014a, b). These sudden-onset floods represent high-magnitude, low-frequency catastrophic phenomena that have enormous potential for geomorphological reworking of channel and floodplain environments (Cenderelli and Wohl, 2003; Worni et al., 2012; Westoby et al., 2014b). They can pose significant hazards, destroying in-channel and riparian assets, including hydroelectric power facilities and trekking routes, and impacting settlements with ensuing loss of life (Vuichard and Zimmerman, 1987; Lliboutry et al., 1977; Watanabe and Rothacher, 1996). Of the various glacial hazards, GLOFs have particularly far-reaching, distal impacts, with destruction often reported tens or hundreds of kilometres downstream of their source (Vuichard and Zimmerman, 1987; Clague and Evans, 2000; Richardson and Reynolds, 2000).

A number of approaches to modelling GLOFs have been presented; these are summarised by Westoby et al. (2014a). At the outset, it is worth highlighting that our modelling workflow is not intended for GLOF analysis at catchment or basin scales; rather, it represents a logical next step following a broader hazard assessment screening exercise. Such exercises should, in the first instance, identify and quantify the risk posed by individual glacial lakes in a region through multi-criteria hazard analysis (e.g. RGSL, 2003; Reynolds, 2014), and may also involve the application of DEM-based flood-routing models to provide rapid, first-pass assessments of likely patterns of inundation (e.g. Huggel et al., 2004) as the basis for subsequent detailed GLOF analysis.

Most GLOF modelling approaches typically employ an empirical or numerical dam-breach model to derive a breach outflow hydrograph, and couple this to a numerical hydrodynamic model to simulate the downstream propagation of the flood wave. Such loose model coupling, or “process chain” simulation, is now a well-established practice and has been applied for reconstructive (e.g. Osti and Egashira, 2009; Klimeš et al., 2014; Westoby et al., 2014b; Worni et al., 2014) and predictive GLOF modelling studies (e.g. Xin et al., 2008; Worni et al., 2013).

Dam-breach modelling typically relies on an empirical or analytical formulation (e.g. Walder and O'Connor, 1997). Empirical models return a single breach hydrograph descriptor, usually peak discharge or flood time to peak, to which a hydrograph is fitted. Such models are derived from historical case study data and use simple geometric descriptors of the moraine or lake, such as dam height and lake surface area, as the sole inputs. Their robustness may be questionable because of their explicit reliance on the case-study data and their representativeness. Analytical models are similar to their empirical counterparts, but require that the user know the final breach dimensions and formation time, to which an analytical solution for the rate of breach growth is fitted. This approach suffers equally where limited data are available to describe the breach and its rate of formation (i.e. in particular where dams have yet to fail and the breach form and dimensions must be estimated a priori).
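As an illustrative sketch (not part of the original study), an empirical model of this kind reduces to a power-law regression on simple geometric descriptors, to which a hydrograph is then fitted. The coefficients `a` and `b` below are hypothetical placeholders, not values from any published regression:

```python
import numpy as np

def empirical_peak_discharge(dam_height_m, lake_volume_m3, a=1.3e-4, b=0.60):
    """Generic power-law regression of the form Qp = a * (V * H)**b.

    `a` and `b` are hypothetical placeholder coefficients; published
    empirical models fit them to historical dam-failure case studies.
    Returns peak discharge in m^3 s^-1.
    """
    return a * (lake_volume_m3 * dam_height_m) ** b

def fitted_hydrograph(q_peak, t_peak_s, duration_s, n=100):
    """Fit a simple triangular hydrograph to the predicted peak discharge."""
    t = np.linspace(0.0, duration_s, n)
    rising = q_peak * t / t_peak_s
    falling = q_peak * (duration_s - t) / (duration_s - t_peak_s)
    return t, np.minimum(rising, falling).clip(min=0.0)
```

The single-descriptor output (here, peak discharge) is the root of the robustness problem discussed above: the entire hydrograph shape is assumed rather than simulated.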

Physically based numerical models represent the current state of the art in dam-breach modelling, coupling numerical erosion and sediment-transport models with a 1-D or 2-D flow hydraulics solver to simulate breach expansion and channel flow (Worni et al., 2012; Westoby et al., 2014a, b). Despite their advantages, these models are not yet in widespread use by the glacial hazard community, with recent studies still preferring to adopt established empirical approaches (e.g. Byers et al., 2013). In large part, this slow rate of adoption may reflect the high data requirements of physically based models. For example, these simulations require knowledge of the mean particle size (or a full particle-size distribution), internal angle of friction, cohesion, and porosity, amongst other physical attributes of the dam material. Such data are scale-dependent and at best require detailed field or laboratory investigation, which may be logistically challenging in remote, high-altitude environments and may be impossible to obtain for some reconstruction studies.

Numerous sources of predictive uncertainty exist in the parameterisation of contemporary numerical dam-breach and hydrodynamic models when used in either reconstructive or predictive GLOF simulation (Westoby et al., 2014a). Reconstructions of historic events where no field-based or published data exist to describe the geometric and material characteristics of the dam represent the most poorly constrained case. However, the characteristic compositional heterogeneity of moraines implies that even data-rich scenarios are likely to undersample the actual system complexity. Moreover, reconciling the often disparate spatial scales of processes, field observations, and effective model parameters presents a persistent ambiguity even under optimal conditions. For such ill-conditioned problems, an increasingly popular approach is to accept uncertainty in the a priori model parameters, and rather than seek the optimal parameter set through calibration, numerical methods are used to quantify the associated predictive uncertainty and present simulation output probabilistically (Beven, 2005).

To date, few studies have sought to explore the predictive uncertainty of numerical dam-breach models and link this to the downstream consequences in terms of flood wave propagation and inundation (e.g. Wang et al., 2012; Worni et al., 2012; Westoby et al., 2014b). In this paper we address this research gap through the development and demonstration of a unifying framework for cascading uncertainty in a GLOF model chain, illustrated for a reconstruction of a major historical outburst flood in the Nepalese Himalaya. We demonstrate how numerical models representing components of the GLOF process can be coupled to provide probabilistic predictions of the breaching process and downstream flood propagation using generalised likelihood uncertainty estimation (GLUE; Beven and Binley, 1992).

Pervasive uncertainty surrounds almost all aspects of environmental modelling, and may be broadly classified as either aleatory (irreducible, arising from inherent natural variability) or epistemic (reducible, arising from incomplete knowledge of the system).

Similar sources of uncertainty exist in the dam-breach and flood-routing components of the GLOF model chain (Westoby et al., 2014a). In the case of the former, appreciable, and predominantly epistemic, sources of uncertainty surround the establishment of initial conditions (e.g. dam geometry, reservoir hypsometry), dam material parameterisation (e.g. grain-size distribution curves, material porosity, density, cohesion, roughness coefficients, internal angle of friction) and the establishment of computational constraints (e.g. model time step and grid discretisation).

The construction of high-resolution digital terrain models (DTMs) using novel geomatics technologies such as terrestrial laser scanning and low-cost, structure-from-motion photogrammetry has the potential to help constrain these uncertainties (e.g. Westoby et al., 2012, 2014b). For example, these data can be used to extract accurate models of the cross-sectional geometry of a moraine dam and the bathymetry of a (drained) lake basin. However, the material characteristics of the dam structure are more difficult to sample effectively, requiring logistically challenging fieldwork (e.g. Hanson and Cook, 2004; Osti et al., 2011; Worni et al., 2012). Furthermore, the heterogeneity of moraine sediments is so high that field observations inevitably undersample this variability, so that even the most detailed field measurements must result in significant spatial averaging.

The predictive performance of linked hydrodynamic models is strongly influenced by model dimensionality (e.g. Alho and Aaltonen, 2008; Bohorquez and Darby, 2008), grid discretisation strategies (e.g. Sanders, 2007; Huggel et al., 2004), DEM quality or the frequency of cross-section data (e.g. Castellarin et al., 2009), and the parameterisation of in-channel and floodplain roughness coefficients (Wohl, 1998; Hall et al., 2005; Pappenberger et al., 2005, 2006). Recent studies have undertaken various forms of sensitivity analyses and uncertainty estimation to quantify the effects of uncertainty on numerical dam-breach model output (e.g. IMPACT, 2004; Xin et al., 2008; Dewals et al., 2011; Zhong et al., 2011; Worni et al., 2012).

The uncertainties surrounding model parameterisation are likely to translate into varying characteristics of the breach-outflow hydrograph, specifically peak discharge, time to peak, and the hydrograph form. When these data are then used subsequently as the upstream boundary condition for hydrodynamic modelling, their effects can be manifest in significantly different simulations of downstream flood propagation. This can give rise to strong variations in the celerity of the flood wave, time-varying flow depths and velocities, and variations in inundation extent. Importantly, it is these data that form the basis for flood hazard assessment. Consequently, it is essential not just to better quantify the predictive uncertainty associated with the component models but also to extend this to consider how uncertainty propagates through the model chain and ultimately might influence the development of effective flood mitigation and management strategies (Beven et al., 2014).

One consequence of the parametric and structural complexity of many environmental models is the possibility to obtain similar or even identical model outputs through different combinations of input parameters, initial conditions, and model structures (Beven and Binley, 1992; Beven and Freer, 2001; Beven, 2005). Beven and Binley (1992) termed this behaviour model "equifinality". The concept is linked to the more generic form of landscape equifinality used in geomorphology, in which interactions between different processes can give rise to similar landscapes or landform assemblages through differing but equally plausible genetic mechanisms (e.g. Nicholas and Quine, 2010; Stokes et al., 2011). Both forms of equifinality have their origins in systems theory (von Bertalanffy, 1968) and have been identified and quantified in a range of geoscience settings (e.g. Beven and Binley, 1992; Kuczera and Parent, 1998; Romanowicz and Beven, 1998; Beven and Freer, 2001; Blazkova and Beven, 2004; Hunter et al., 2005; Hassan et al., 2008; Vasquez et al., 2009; Vrugt et al., 2009; Franz and Hogue, 2011).

For event-specific reconstructions, calibration of input parameters is often undertaken to identify optimal model parameter sets (e.g. Kidson et al., 2006; Cao and Carling, 2002). Such calibration typically involves a search of the model parameter space to identify optimal model performance with respect to a set of observations (e.g. Refsgaard, 1997; Beven and Freer, 2001; Hunter et al., 2005; Westerberg et al., 2011). However, a problem emerges when the calibration data contain insufficient information to uniquely identify a single optimal parameter set, which may highlight the presence of multiple optima within the parameter space corresponding to different mechanistic routes to the same end result. This equifinality is particularly problematic in the case of complex, spatially distributed models where the number of model parameters vastly outweighs the information content of the calibration data, which are often spatially averaged model responses, such as flood wave celerity or peak discharge.

Multiple strategies can be adopted to minimise these effects, ranging from reducing model complexity to increasing the variety of calibration criteria (for example, involving multiple data streams). However, where an imbalance between model parameterisation and calibration data persists, Beven and Binley (1992) argued that any combination of model inputs that reproduces the observed outcome, within acceptable limits, must be considered equally likely as a simulator of the system under investigation. This principle underpins a quantitative method for assessing model performance and presenting results in a probabilistic framework, termed GLUE (Beven and Binley, 1992; Beven, 2005). In this article, we adopt this numerical approach to quantify the predictive uncertainty associated with a physically based dam-breach model for the reconstruction of a historical outburst flood in the Nepalese Himalaya. In light of the considerable uncertainty that surrounds dam-breach model parameterisation, the application of the GLUE method enables quantification of the influence that this uncertainty exerts over derived dam-breach outflow hydrographs. Our approach to probabilistic dam-breach modelling is described below.
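The core GLUE idea described above can be sketched in a few lines: each Monte Carlo run is scored against observations, runs outside the limits of acceptability receive zero weight, and the remaining behavioural runs are weighted to form a probability distribution over model outputs. The linear weighting scheme below is one admissible likelihood function among many, chosen here purely for illustration:

```python
import numpy as np

def glue_weights(errors, threshold):
    """Convert model-vs-observation errors into GLUE likelihood weights.

    Runs whose error exceeds `threshold` are non-behavioural (weight 0);
    behavioural runs receive a weight that decreases linearly with error
    and is normalised to sum to 1 across the ensemble.
    """
    errors = np.asarray(errors, dtype=float)
    w = np.where(errors <= threshold, 1.0 - errors / threshold, 0.0)
    total = w.sum()
    if total == 0.0:
        raise ValueError("all parameter ensembles are non-behavioural")
    return w / total
```

Any simulation output (e.g. discharge at a given time step) can then be summarised probabilistically as a weighted average or weighted quantile over the behavioural runs.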

Simulations of incipient breach development were based on HR BREACH, a physically based, numerical dam-breach model. This simulation tool predicts the progressive growth of a dam breach initiated by either the overtopping or piping of non-cohesive and cohesive embankment materials (Morris et al., 2008; Westoby, 2013; Westoby et al., 2014b). The model employs physically based hydraulic, sediment-erosion and discrete embankment stability modules to calculate the evolving breach geometry and associated drainage outflow hydrograph (Morris et al., 2008). This approach offers a significant advance over simplified, empirically derived breaching models or analytical methods that fit the pattern of breach expansion to user-defined final breach dimensions and formation time.

Breach enlargement is simulated through the interaction of two mechanisms:
(i) continuous hydraulic erosion based on either equilibrium
sediment-transport equations or erosion-depth equations and (ii) discrete
mass failures as a consequence of side-slope instability (Mohamed et al.,
2002). In this study, erosional processes were modelled using the
erosion-depth equation for non-cohesive embankments, after Chen and Anderson (1986):

Subaerial flows across and through the evolving breach are represented using
a steady-state one-dimensional flow model (Mohamed et al., 2002; Morris et
al., 2008). This model combines a weir discharge equation and a simplified
version of the Saint-Venant equations (Chanson, 2011) to simulate breach
flow and flow descending the distal face of the dam, respectively. A
variable weir discharge coefficient (

Application of the model requires definition of the following key boundary and initial conditions: (i) geometric descriptors of the dam, specifically proximal and distal dam face slope, dam length, crest width, spillway dimensions (width and height), and downstream valley slope; (ii) lake or reservoir hypsometry based on either stage–area or stage–volume curves. In addition, parameter range estimates and their likelihood distributions (either uniform, linear, or triangular), and computational settings (model time step, total run time, and minimum wetted depth threshold) must be defined. The version of the model used in this study incorporates a Monte Carlo parameter sampling routine to automatically search the model parameter set and generate multiple realisations of the potential drainage hydrographs and predicted breach geometry at specific time steps. The input scenarios modelled are described in Sect. 5.3.
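The reservoir hypsometry input mentioned above is, in essence, a monotonic stage–volume (or stage–area) lookup. A minimal sketch, using hypothetical stage–volume pairs rather than the surveyed Dig Tsho bathymetry, might look like:

```python
import numpy as np

# Hypothetical stage-volume pairs (lake level in m a.s.l., stored volume
# in m^3); a real curve would be derived from surveyed lake bathymetry.
stage = np.array([4336.0, 4341.0, 4346.0, 4351.0, 4356.0])
volume = np.array([0.0, 8.0e4, 3.1e5, 7.9e5, 1.6e6])

def volume_at_stage(h):
    """Interpolate stored volume for a given lake level from the curve."""
    return np.interp(h, stage, volume)

def stage_at_volume(v):
    """Invert the curve: lake level at which a given volume remains stored."""
    return np.interp(v, volume, stage)
```

During a simulated breach, the model repeatedly inverts this curve to update the lake level as the escaping discharge draws the stored volume down.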

The Dig Tsho moraine-dammed lake complex is located at the head of the
Langmoche Valley in the western sector of Sagarmatha (Mt. Everest) National
Park in the Khumbu Himal, Nepal (Fig. 1; 27

The moraine possesses steep (25–30

The Dig Tsho moraine dam and upper reaches of the Langmoche Khola.

On 4 August 1985 an ice avalanche from the receding Langmoche Glacier
traversed the steep (

Existing observational data for the site include first-hand descriptions of
the condition of the glacial lake prior to, and immediately following,
moraine dam failure in 1985, as well as a description of the pre-GLOF nature
of the valley-floor sedimentology immediately downstream (Vuichard and
Zimmerman, 1987). From these observations, an estimate of the total mass of
material removed from the breach was calculated as 9

In order to quantify the predictive uncertainty associated with the coupled
modelling system used here, a sequential workflow was developed that
involved the following steps:

(i) Topographic data were derived from digital terrain modelling using terrestrial structure-from-motion (SfM) photogrammetry to reconstruct pre-existing moraine and floodplain topography and extract glacial lake bathymetry and dam geometry data.

(ii) A priori parameter values for the numerical dam-breach model were estimated using material properties obtained from field investigation and from the published literature.

(iii) A series of potential breach initiation scenarios were hypothesised to account for uncertainty in the principal driving factors behind failure (overtopping and mechanical failure).

(iv) For each scenario, the multiple model outputs were weighted using a multi-criterion likelihood statistic based on a comparison of the modelled and observed final breach geometries. This provided a means to weight the ensemble of predicted discharges at each time step and derive probabilistic hydrograph forecasts for each scenario.

(v) Two-dimensional hydrodynamic modelling was used to simulate the downstream propagation of the probabilistic outflow hydrographs, using upper and lower 95 % confidence intervals to describe the range of potential flows. These simulations provide a range of estimates for the potential inundation extent, flow velocity, and flow depth data for use as input to probabilistic GLOF hazard mapping.

The following sections describe each method in detail.
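The hydrograph-weighting and confidence-interval steps in the workflow above can be sketched as a weighted-percentile computation over the ensemble of predicted discharges at each time step. This is an illustrative implementation, not the study's exact code:

```python
import numpy as np

def weighted_percentile(values, weights, q):
    """Percentile of `values` under GLUE likelihood `weights` (q in [0, 100])."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(q / 100.0, cdf, v)

def hydrograph_bounds(discharge_ensemble, weights, lo=2.5, hi=97.5):
    """Lower/upper confidence hydrographs from an (n_runs, n_steps) ensemble."""
    ens = np.asarray(discharge_ensemble, dtype=float)
    lower = np.array([weighted_percentile(col, weights, lo) for col in ens.T])
    upper = np.array([weighted_percentile(col, weights, hi) for col in ens.T])
    return lower, upper
```

The resulting lower and upper hydrographs then serve as the upstream boundary conditions for the two-dimensional flood-routing simulations.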

Bathymetric data describing

Three topographic models are required for this analysis: (i) a reconstruction of the pre-failure moraine dam and lake basin; (ii) detailed models of the breach topography to determine the position, width, length, and slope of the breach and reconstruct the drainage volume; and (iii) the downstream floodplain topography. These models were derived using terrestrial photogrammetry, based on novel structure-from-motion and multi-view stereo (SfM-MVS) algorithms. A full description of the image processing is beyond the scope of this paper and the reader is referred to more comprehensive accounts elsewhere (James and Robson, 2012; Westoby et al., 2012; Fonstad et al., 2013; Javernick et al., 2014). Briefly, SfM-MVS involves the reconstruction of 3-D point cloud data from highly redundant photographic data sets that have a high degree of image overlap. Unlike traditional photogrammetry, SfM uses feature tracking between images and bundle adjustment methods to recover the camera alignment and reconstruct the 3-D scene geometry (Lowe, 2004; Snavely, 2008; Snavely et al., 2008).

In this study, we use published high-resolution reconstructions of the pre- and post-flood moraine topography derived by Westoby et al. (2012). These terrain models were used to quantify the moraine geometry and lake basin hypsometry (Fig. 2) for the establishment of initial conditions for numerical dam-breach modelling.

A topographic model of the downstream floodplain was obtained using similar
methods. The analysis was based on two photosets, incorporating 226 and
303 photographs obtained from south- and north-facing transects along the valley
flanks. Photographs were acquired with a Panasonic DMC-G10 (12 MP) digital
camera. Automatic focusing and exposure settings were enabled during
photograph acquisition. The freely available software bundle SFMToolkit3
(Astre, 2010) was used to process the input images using feature extraction
(Lowe, 2004) and sparse and dense SfM-MVS reconstruction algorithms.
Following manual outlier removal and editing, final dense point clouds
numbered 4.8

Georeferenced point cloud data were merged and decimated to improve data
handling whilst preserving sub-grid statistics using the
C
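The decimation step described above, reducing a dense point cloud to a regular grid while preserving sub-grid statistics, can be sketched as follows. This is a simplified stand-in for the actual processing software, keeping only a per-cell mean elevation and sub-grid roughness:

```python
import numpy as np

def decimate_to_grid(points, cell_size):
    """Decimate an (n, 3) point cloud onto a regular grid.

    Each cell keeps the mean elevation plus the sub-grid standard
    deviation, so local surface roughness survives the data reduction.
    Returns a dict mapping (col, row) cell indices to (mean_z, std_z).
    """
    xy = np.floor(points[:, :2] / cell_size).astype(int)
    cells = {}
    for key, z in zip(map(tuple, xy), points[:, 2]):
        cells.setdefault(key, []).append(z)
    return {k: (float(np.mean(v)), float(np.std(v))) for k, v in cells.items()}
```

The retained sub-grid statistics can later inform, for example, spatially distributed roughness parameterisation in the hydrodynamic model.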

The first step in the GLUE workflow is to establish the parameters for inclusion and their respective ranges. In the absence of any prior information regarding parameter and range choice, all available input parameters and their entire simulation range should be included. In practice, complete uncertainty regarding parameter and range choice is unlikely, since a combination of initial sensitivity analyses, modelling guidelines, and basic intuition and reasoning can typically be used to assist in constraining their choice (Beven and Binley, 1992). The parameters and their ranges used in this study (including computational settings) are displayed in Table 1.

Information regarding initial, or a priori, parameter distributions is also required, reflecting the modeller's prior knowledge of the parameter space (Beven and Binley, 1992). In the absence of any information pertaining to a priori parameter probability distributions at Dig Tsho, uniform distributions were used for stochastic sampling of all available parameter spaces.

In many instances, the specific mechanisms of moraine dam failure are poorly documented, often as a result of a lack of direct observation. Post-event glaciological and geomorphological analysis of a moraine, its parent glacier, and their surroundings may provide clues as to the triggering event that causes moraine failure or a stand-alone GLOF event (e.g. Hubbard et al., 2005; Osti et al., 2011). However, a degree of uncertainty may still surround the establishment of the precise trigger(s) and the nature of dam failure.

Parameter ranges and geometric characteristics of the Dig Tsho
moraine dam used for input to the HR BREACH numerical dam-breach model.
Input parameter ranges were established from a combination of initial
experimentation and parameter sensitivity analysis (E), in situ field observation (F)
and data from similar sites stated in the literature (L). All dam
geometry data were extracted from the SfM-DTMs of the moraine and floodplain
(SfM-DTM), with the exception of downstream Manning's

Our approach to dam-breach modelling comprises the simulation of three modes of breach initiation, namely (i) a control scenario where lake waters have risen gradually to a point where the overtopping discharge is large enough to trigger sustained down-cutting and breach development; (ii) breach initiation by a series of overtopping waves, such as those resulting from the rapid input of a mass of rock or ice, that traverse the lake and overtop the moraine, thereby initiating its failure; and (iii) instantaneous mechanical failure of the dam face. All three modes are documented triggers for moraine-dam failure (Vuichard and Zimmerman, 1986; Richardson and Reynolds, 2000; Worni et al., 2012). The trigger mechanism for the failure of the Dig Tsho moraine is generally accepted to be repeated overtopping and down-cutting of the moraine by waves generated from an ice avalanche entering the lake (Vuichard and Zimmerman, 1986). The purpose of including two additional breach initiation scenarios was to explore whether equifinal final breach morphologies would be produced from a variety of breach initiation scenarios. Scenario design is described in the following sections.

A control scenario was formulated in which breach formation was initiated through down-cutting of a predefined spillway. Inclusion of an existing spillway conforms to pre-GLOF observations of the Dig Tsho moraine by Vuichard and Zimmerman (1986), and we note that many extant moraine-dammed lakes are drained in this manner (e.g. Hambrey et al., 2008). This down-cutting is a result of flow produced by the pressure head associated with our specified initial lake level (which mirrored the reconstructed dam crest elevation of 4356 m a.s.l.). The modelled spillway measured 0.5 m wide and 0.5 m deep and extended from the upstream end of the moraine crest to the dam toe. We note that, in the absence of detailed observations of the spillway prior to dam failure (Vuichard and Zimmerman, 1987), these dimensions are hypothetical and do not necessarily replicate the precise spillway conditions prior to dam failure in 1985.

In addition to a control scenario (DT_control), two styles of system perturbation scenario were introduced to the dam-breach models to explore and quantify the impact of system-scale perturbations on model output. These perturbations were (i) the introduction of overtopping waves of varying magnitude and (ii) the instantaneous removal of material from the downstream face of the dam immediately prior to breach development.

The failure of Dig Tsho has been attributed to the overtopping of the
terminal moraine by waves produced by an ice avalanche from the receding
Langmoche Glacier (Vuichard and Zimmerman, 1987). Presently, most numerical
dam-breach models, including HR BREACH, are unable to explicitly simulate
the dynamic effects of avalanche–lake interactions. This necessitated an
inventive yet relatively simple approach to reconstructing overtopping
behaviour in HR BREACH. Instead of simulating the passage of a series of
gradually attenuating displacement or seiche waves and dynamic interaction
with the dam structure, a solution was devised which involved the rapid
increase and subsequent decrease of the lake water surface elevation. These
artificial water level variations prompted short-lived overtopping of the
dam structure at predefined intervals. Temporary increases in lake level
were achieved through systematic variation of reservoir inflow to introduce
maximum overtopping discharges of 4659, 3171, and 1809 m³ s⁻¹.
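The workaround described above, raising and lowering the lake level through systematic variation of reservoir inflow, can be sketched as a time-varying inflow series with short-lived pulses superimposed on a base flow. All magnitudes below are illustrative, not the values used in the study:

```python
import numpy as np

def perturbed_inflow(t, base_inflow, pulse_times, pulse_peak, pulse_width):
    """Reservoir inflow series (m^3 s^-1) with short-lived pulses superimposed.

    Each Gaussian pulse briefly raises the lake level, mimicking the
    overtopping of the dam crest by a displacement wave. Magnitudes here
    are hypothetical placeholders.
    """
    q = np.full_like(t, base_inflow, dtype=float)
    for t0 in pulse_times:
        q += pulse_peak * np.exp(-0.5 * ((t - t0) / pulse_width) ** 2)
    return q
```

Feeding such a series to the reservoir boundary produces the intended sequence of short-lived overtopping events without requiring the breach model to resolve avalanche–lake wave dynamics explicitly.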

Richardson and Reynolds (2000) suggested that the initial failure of a
moraine dam may be “explosive” in nature. This explosive force is reflected
by the mass of individual transported clasts, which can exceed

HR BREACH is currently unable to simulate “explosive” or rapid, large-scale rotational mass failures. In order to simulate the instantaneous removal of material from the dam structure, three mass-removal scenarios were developed. HR BREACH requires that the user specify an initial breach spillway in the dam crest and downstream face of the dam. We use these spillway dimensions to represent the mass of morainic material “instantaneously” removed from the crest and distal face of the dam. However, this is far from an ideal numerical realisation of the precise mechanics of catastrophic, near-instantaneous dam failure, and represents a highly simplified approximation.

The default spillway dimensions used in the control and overtopping experiments were 0.5 m wide and 0.5 m deep. Perturbed spillway cross-sectional dimensions were 1, 3, and 5 m

Although a range of stochastic sampling methods are available for use in the GLUE workflow (e.g. Metropolis, 1987; Iman and Helton, 2006), perhaps the most widely used approach is Monte Carlo analysis (e.g. Beven and Binley, 1992; Kuczera and Parent, 1998; Aronica et al., 1998, 2002; Blazkova and Beven, 2004). With the Monte Carlo method, values from individual parameter spaces are sampled in a truly random manner, thereby eliminating any subjectivity which might be introduced at this stage. The method is fully capable of accounting for predefined probability distribution data, making it a simple yet effective tool for rapidly generating the random parameter ensembles required for model input. However, with a poorly defined prior distribution, or a small number of model simulations, point clustering in specific regions of the parameter space may occur. Clustering can be easily avoided by undertaking a straightforward investigation into patterns of histogram convergence of an output variable (or variables), whereby the minimum number of simulations required for the production of an acceptable level of convergence is established. Equally, alternative stochastic sampling methods, including Latin hypercube sampling, may be employed (e.g. Hall et al., 2005; Iman and Helton, 2006). When faced with multi-dimensional problems, the Latin hypercube method can be used to partition the probability distribution of each input parameter, before proceeding to sample from each partition. This method thereby avoids the clustering that is associated with an insufficient number of Monte Carlo samples and ensures that the final parameter ensembles are representative of the full sampling space of each parameter.
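The two sampling strategies discussed above can be sketched for a set of uniform parameter ranges. The bounds below are hypothetical placeholders (not the values in Table 1); the Latin hypercube variant guarantees one sample per equal-probability stratum of each parameter, avoiding the clustering that plain Monte Carlo can exhibit at small sample sizes:

```python
import numpy as np

rng = np.random.default_rng(42)

def monte_carlo_sample(bounds, n):
    """Plain Monte Carlo: independent uniform draws within each parameter range."""
    lo, hi = np.asarray(bounds, dtype=float).T
    return lo + (hi - lo) * rng.random((n, len(bounds)))

def latin_hypercube_sample(bounds, n):
    """Latin hypercube: one draw from each of n equal-probability strata
    per parameter, with the strata shuffled independently per column."""
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(bounds)
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n  # stratified in [0, 1)
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return lo + (hi - lo) * u
```

Each row of the returned array is one parameter ensemble, ready to be passed to the dam-breach model as a single realisation.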

Schematic diagrams of initial (pre-flood) and an example of
post-flood geometry for the Dig Tsho moraine dam (left and right columns,
respectively), as modelled by HR BREACH. Initial dam face slope angle is
represented as a ratio of the form 1 :

In this study, changes in the output histograms became minimal after the execution of 1000 model runs, indicating sustained convergence. This number of simulations was therefore deemed sufficient for stochastic sampling. However, we note that this number might not be directly transferable to other sites, and we recommend that modellers undertake their own preliminary convergence analyses to establish a sufficient number of simulations.
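The convergence check described above can be made concrete by comparing the normalised histogram of an output variable (e.g. peak discharge) at successive run-count checkpoints; once the histogram stops drifting, additional runs add little information. A minimal sketch:

```python
import numpy as np

def histogram_convergence(output, bins=20, checkpoints=(250, 500, 750, 1000)):
    """Maximum change in the normalised output histogram between
    successive run-count checkpoints; small values indicate convergence."""
    edges = np.histogram_bin_edges(output, bins=bins)
    prev = None
    drift = []
    for n in checkpoints:
        h, _ = np.histogram(output[:n], bins=edges)
        h = h / h.sum()
        if prev is not None:
            drift.append(float(np.abs(h - prev).max()))
        prev = h
    return drift
```

In practice the checkpoint spacing and acceptable drift threshold are judgement calls specific to each application.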

Model evaluation is achieved through quantification of how well a parameter ensemble performs at reproducing a series of observable system-state variables, or “likelihood measures”. Parameter ensembles that are unable to do so are deemed to be non-behavioural and are assigned a likelihood score of zero. In contrast, ensembles that reproduce these variables within acceptable limits are deemed to be behavioural and accepted for further analysis. It is not uncommon for all ensembles to be rejected (e.g. Parkin et al., 1996; Freer et al., 2002), thereby suggesting that it is the model structure that is incapable of reproducing the observed data, instead of individual parameter combinations (Beven and Binley, 1992). Such a situation may be overcome by widening the limits of acceptability, but at the potential cost of decreasing confidence in any newly accepted ensembles (Beven and Binley, 1992).

Behavioural ensembles were assigned positive likelihood values in the range 0–1, where 1 represents an ensemble that is capable of perfectly replicating the observed data. Likelihood functions are specific to each likelihood measure used for model evaluation. Three likelihood measures were used to evaluate model performance: (i) final upstream breach depth (LH1); (ii) the residual sum of squared errors of the final longitudinal elevation profile of the breach (LH2); and (iii) the location of the critical flow constriction (LH3, Fig. 3). These morphological variables are directly quantifiable by comparing final modelled breach geometry with that extracted from the SfM-DTM of the breached moraine dam. Dam breaching is a fully three-dimensional problem that involves progressive backwasting, mass slumping, and down-cutting of morainic material by escaping lake waters. The likelihood measures described above are two-dimensional approximations of the breach geometry, and were deemed appropriate descriptors of the breaching process since in combination they describe the breach end states in the vertical and both horizontal dimensions and relate to the modelled and observed lateral, longitudinal, and vertical expansion of the breach.

An observed final upstream breach depth of 16

The second likelihood measure is a direct comparison between observed
(post-GLOF) and modelled elevation profiles of the breach thalweg. This
measure represents a distributed method of quantifying the performance of HR
BREACH in producing post-GLOF thalweg elevation profiles that replicate that
observed in the field. Thalweg elevation data were directly extracted from
the SfM-DTM (Fig. 12a; Westoby et al., 2012). The residual sum of squares
(RSS) method was used to quantify the deviation between the observed and
modelled data:
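The RSS comparison, and one GLUE-style way of mapping it to a likelihood, can be sketched as follows (synthetic thalweg profiles stand in for the SfM-DTM and HR BREACH output; the inverse-RSS likelihood is an illustrative choice, not necessarily the exact function used in the study):

```python
import numpy as np

# Synthetic stand-ins for the observed (SfM-DTM) and modelled breach
# thalweg profiles, sampled at common chainage points (m).
chainage = np.linspace(0.0, 400.0, 81)
observed = 4375.0 - 0.04 * chainage                       # invented profile
modelled = observed + np.random.default_rng(1).normal(0.0, 0.5, chainage.size)

# Residual sum of squares between observed and modelled elevations.
rss = float(np.sum((modelled - observed) ** 2))

def rss_likelihoods(rss_values):
    """Map lower RSS to higher likelihood via an inverse-error form,
    rescaled over the behavioural ensemble to sum to unity."""
    inv = 1.0 / np.asarray(rss_values)
    return inv / inv.sum()

# Example: three ensembles whose RSS values differ by factors of two.
weights = rss_likelihoods([rss, 2.0 * rss, 4.0 * rss])
```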

Choice of an observed value for the location of the critical flow constriction was complicated by the asymmetry of the observed breach planform, whereby flow constrictions on either side of the breach are offset by a distance of approximately 40 m. This asymmetry is most likely a function of complex flow hydraulics and patterns of erosion of the moraine during development and expansion of the breach. Specifically, the wide grain-size distribution of the moraine, which comprises material ranging from silts and sands to boulders with intermediate diameters greater than 5 m, combined with its unconsolidated nature, causes the breach enlargement process to differ markedly from the systematic, largely uniform style of expansion typically modelled by numerical breach models. In particular, side-wall detachment and emplacement of large boulders in the breach thalweg may serve to impede or divert breach flow, such that the breach planform adjusts in response to locally altered flow directions and magnitudes. This behaviour may be one explanation for the observed breach asymmetry. Whilst our numerical model accounts for undercutting and mechanical failure of the breach walls, it is unable to resolve flow obstructions caused by individual large clasts.

Use of a trapezoidal likelihood function would have been possible, whereby
any modelled values that fell within the observed constriction “offset
zone” would be assigned a likelihood of 1. However, such an approach was
deemed inappropriate, because it would render a significant proportion of
parameter ensembles as absolutely behavioural. Instead, a triangular
likelihood function was used. The mid-point of the observed offset was used
as the central, observed value, and a range of
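A triangular likelihood of the kind described can be written directly; the 505–585 m behavioural window quoted in Fig. 9 implies a centre of 545 m and a half-width of 40 m (sketch only):

```python
def triangular_likelihood(x, centre, half_width):
    """Likelihood of 1 at the observed value, tapering linearly to zero
    at centre +/- half_width; values outside are non-behavioural (0)."""
    return max(0.0, 1.0 - abs(x - centre) / half_width)

# The 505-585 m behavioural window (Fig. 9) implies a centre of 545 m
# and a half-width of 40 m.
centre, half_width = 545.0, 40.0
lh3 = triangular_likelihood(560.0, centre, half_width)  # constriction at 560 m
```

Unlike the rejected trapezoidal function, no simulation scores 1 unless its modelled constriction coincides exactly with the mid-point of the observed offset.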

Where multiple likelihood measures are used, it is necessary to arrive at a
final, global likelihood value for each behavioural parameter ensemble.
Bayesian updating represents a statistically robust method for combining
multiple likelihood values. It is able to account for the influence of prior
likelihood values on the generation of updated values as more data become
available. In the context of this study, the initial prior likelihood value
relates to final breach depth. Likelihood values from the second
(breach-elevation profile) and third (location of the critical flow
constriction) are subsequently combined with this initial likelihood through
implementation of Bayesian updating, which is summarised as (modified from
Lamb et al., 1998)
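The updating step can be sketched as a repeated multiply-and-renormalise operation (schematic form only; the likelihood values below are invented for illustration):

```python
import numpy as np

def bayes_update(prior, new):
    """Combine a prior likelihood with a new measure-specific likelihood
    by multiplication, renormalising the posterior to sum to unity
    (schematic form of the Lamb et al., 1998, updating equation)."""
    posterior = np.asarray(prior) * np.asarray(new)
    return posterior / posterior.sum()

# Invented likelihoods for four behavioural ensembles against the three
# measures: final breach depth, thalweg-profile RSS, and constriction.
lh1 = np.array([0.9, 0.6, 0.4, 0.1])
lh2 = np.array([0.8, 0.7, 0.2, 0.5])
lh3 = np.array([0.5, 0.9, 0.6, 0.3])

global_lh = bayes_update(bayes_update(lh1, lh2), lh3)
```

Because the combination is multiplicative, an ensemble that performs moderately well against every measure can outrank one that excels against only a single measure.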

Final likelihood values associated with each behavioural parameter ensemble reflect the confidence of the modeller in the ability of each ensemble to reproduce an observed data set. Considering the cumulative distribution of these global likelihood values as a probabilistic function facilitates an assessment of the degree of uncertainty associated with the behavioural predictions (Beven and Binley, 1992). These data are referred to as cumulative distribution functions (CDFs).

DT_control cumulative density function (CDF) data for

Measure-specific likelihood values for each behavioural ensemble were re-scaled to sum to unity. The final measure can be treated as a surrogate for true probability, but cannot be used for subsequent statistical inference (Hunter et al., 2005). Weighted and re-scaled ensembles are ranked and plotted as a CDF curve (Fig. 4), from which cumulative prediction limit data can be extracted (Beven and Binley, 1992). The generation of weighted CDFs is unique to the GLUE approach and represents a multivariate, additive method that accounts for ensemble performance.
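The re-scaling, ranking, and percentile extraction can be sketched as follows (likelihoods and peak discharges are invented for illustration):

```python
import numpy as np

# Invented global likelihoods and peak discharges (m^3 s^-1) for five
# behavioural parameter ensembles.
likelihood = np.array([0.10, 0.30, 0.25, 0.20, 0.15])
peak_q = np.array([1200.0, 1650.0, 1900.0, 2300.0, 2800.0])

# Rank by output value, rescale weights to unity, accumulate into a CDF.
order = np.argsort(peak_q)
weights = likelihood[order] / likelihood.sum()
cdf = np.cumsum(weights)

def percentile(p):
    """Output value at cumulative probability p on the stepwise CDF."""
    idx = min(np.searchsorted(cdf, p), len(cdf) - 1)
    return float(peak_q[order][idx])

q50, q95 = percentile(0.50), percentile(0.95)
```

Prediction limits at any chosen exceedance probability follow directly from the same stepwise CDF.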

The shallow-water flow model ISIS 2-D (Halcrow, 2012) was used to simulate
GLOF propagation. The model includes alternating direction implicit (ADI)
and MacCormack total variation diminishing (TVD) two-dimensional solvers for
hydrodynamic simulation. The ADI scheme solves the shallow-water equations
(SWEs) over a regular grid of square cells. Water depth is calculated at
cell centres, and flow discharges at cell boundaries. The SWEs are solved by
subdividing the computation into

In contrast, the MacCormack-TVD scheme uses predictor and corrector steps to
compute depth and discharge for successive time steps. A TVD term,
Var(

Hillshaded DTM of the Langmoche Valley floor (0–2.2 km from
breach), produced using terrestrial photography from the valley flanks in
combination with structure-from-motion photogrammetric processing
techniques. Elevation data were extracted at 1 m

A comparison was carried out to assess which of the two-dimensional solvers
would be more appropriate for GLOF simulation. The comparison comprised the
simulation of a single dam-breach hydrograph across a reconstructed digital
terrain model of the Langmoche Khola (Fig. 5). A 4 m

Routing of a breach hydrograph of 5 h duration took approximately 2.5 and
0.5 h of simulation time for the TVD and ADI solvers, respectively. The
computational burden of the ISIS 2-D TVD solver far exceeds that of the ADI
scheme for identical model setups, owing to the finer temporal resolution
required by the TVD solver. Depth-based inundation maps for the results of
each solver were created in ISIS Mapper^{®} and exported for
display in ArcGIS (Fig. 6). A difference image of inundation was also
created. Floodwaters follow the channel thalweg for both solvers during the
first hour of simulation. Thereafter, increasing flow stage results in the inundation of a wide
reach between 1.1 and 1.7 km. Total inundation of this particular reach is
achieved by 01:15 h for the ADI solver, and

Inundated area is greater for the ADI solver for all time steps (Fig. 6).
Maximum difference in inundation extent (122 816 m

The results support the use of the two-dimensional TVD solver for GLOF simulation. The lack of any significant numerical instability, otherwise prevalent in the ADI results, is the predominant advantage of the TVD solver. Although processing times are considerably lengthier for the TVD solver, these were deemed acceptable. The solver used in this study simulates only clear-water flows, with no consideration of sediment entrainment, transfer, and depositional dynamics, including any impact on flow rheology. By default, our breach model does not output time-step-specific sediment outflow discharges. It is possible to undertake a crude calculation of time-varying sediment production through the interpolation and differencing of successive breach cross-sectional geometries (see Fig. 7 in Westoby et al., 2014a). These data would in theory be suitable for input to multi-phase hydrodynamic modelling.
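The crude sediment-production calculation described above can be sketched as follows (schematic V-shaped cross-sections and a hypothetical breach length and output interval, not the Dig Tsho geometry):

```python
import numpy as np

x = np.linspace(0.0, 120.0, 241)   # across-breach distance (m)

def breach_section(depth, half_width):
    """Schematic V-shaped breach cut into a level 40 m dam crest."""
    crest = 40.0
    return crest - np.maximum(0.0, depth * (1.0 - np.abs(x - 60.0) / half_width))

# Breach geometry at two successive output times (deepening and widening).
z_t1 = breach_section(depth=5.0, half_width=20.0)
z_t2 = breach_section(depth=9.0, half_width=30.0)

# Eroded cross-sectional area between the time steps (m^2), by
# trapezoidal integration of the elevation difference.
d = z_t1 - z_t2
area_eroded = float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(x)))

# A representative breach length and output interval (both hypothetical)
# convert this to a mean sediment outflow rate over the interval.
breach_length, dt = 150.0, 600.0            # m, s
qs_mean = area_eroded * breach_length / dt  # m^3 s^-1
```

Repeating this differencing for every pair of successive output times yields the time-varying sediment production alluded to above, which could in principle feed a multi-phase flow model.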

Comparison of GLOF inundation in the upper reaches of the Langmoche Khola using the ISIS 2-D alternating implicit direction (ADI) and total variation diminishing (TVD) solvers at selected time steps. Also shown is a difference image of inundation, where black and grey shading corresponds to areas inundated exclusively by the TVD and ADI solvers, respectively. See Fig. 1a for location. Inset tick marks spaced at 200 m intervals.

Scatter, or “dotty”, plots of likelihood values for HR BREACH input
parameters for behavioural Monte Carlo simulations of DT_control, conditioned
on final breach depth. The bottom two panels represent the Manning's

In addition, the DTM that was used to represent the floodplain domain immediately downstream of the moraine dam reflects post-GLOF valley-floor topography. As such, derived maps of inundation and flow depth should not be taken as indicative of the passage of the 1985 GLOF.

The following sections present the results of our probabilistic, coupled dam-breach–GLOF simulation experiments. The performance of the dam-breach model at reproducing observed breach geometry is first evaluated (Sect. 6.1.1) before attention turns to issues associated with the extraction of useful, probabilistic breach hydrograph data for use as input to GLOF modelling (Sects. 6.1.2 and 6.1.3). A variety of approaches for translating the impact of dam-breach model parameter uncertainty into probabilistic maps of GLOF inundation and hazard are presented in Sect. 6.2.

Simulations that were deemed non-behavioural were assigned a likelihood
value of 0 and not considered for further analysis (Table 2). Analysis of
parameter-specific likelihood data reveals that weak correlations exist for
all input parameters except Manning's

Whilst the number of simulations retained for the overtopping scenarios was broadly similar, the rather more extended range possessed by the instantaneous mass-failure scenarios reflects an inverse correlation between the initial volume of material removed and the number of simulations retained as behavioural. Full hydrographs were obtained for each of the retained simulations. The range of behavioural peak discharges was of similar magnitude to previous estimates for the Dig Tsho GLOF (Vuichard and Zimmerman, 1987; Cenderelli and Wohl, 2001, 2003), and of equivalent or lower magnitude than palaeo-GLOFs reported from other regions (e.g. Clague and Evans, 2000; Huggel et al., 2004; Kershaw et al., 2005; Worni et al., 2012).

The maximum and minimum range of behavioural

Final centreline breach elevation profiles for the Dig Tsho simulations. Black: observed profile; grey: modelled (behavioural) profiles.

Final breach planforms (grey lines) for all modelled scenarios. Precise locations of critical flow constriction are highlighted by black dots. Simulations possessing flow constriction locations that fell between the red dashed lines (distance 505–585 m) were deemed to be behavioural and assigned positive likelihood values.

Maximum and minimum behavioural likelihood scores, after the data had been conditioned using the RSS of the modelled elevation profile of the breach thalweg, varied from 0.97 to 0.014. Within this range, the distribution of likelihood scores was comparable for the control scenario and all overtopping scenarios (0.970, 0.969, 0.969, and 0.970 for DT_control and the min, mid, and max overtopping scenarios, respectively). Whilst the maximum likelihood score for DT_instant_1 was almost as high (0.821), this value decreases with increasing volume of mass removed (0.819 and 0.744 for DT_instant_3 and DT_instant_5, respectively). All of these scenarios possessed minimum likelihood scores appreciably lower than those of the control and overtopping scenarios, and within these ranges likelihoods were distributed relatively evenly. Accordingly, the instantaneous mass-removal scenarios, particularly DT_instant_3 and DT_instant_5, exhibited the poorest performance against this likelihood measure.

Behavioural elevation profiles are displayed in Fig. 8. The well-defined
break in slope which is identifiable in the observed data at

Modelled planforms are broadly similar in form both between and within each scenario (Fig. 9). Observed and modelled planforms gradually taper towards a flow constriction, beyond which the breach width expands to form a bell-shaped exit. Flow-constriction location varies considerably, with a substantial number of parameter ensembles deemed non-behavioural following conditioning using this likelihood measure (Table 2). The majority of non-behavioural simulations located the flow constriction upstream of the behavioural limit (Fig. 9). However, no discernible relationship between input parameters and flow constriction location was identified in the parameter-specific likelihood data. Only seven DT_control parameter ensembles were retained (0.7 % of the original simulation pool), following conditioning on this likelihood measure, and only two (0.2 %) of the DT_instant_5 simulations remained. Further reductions in the number of retained simulations were imposed for all scenarios (Table 2).

Scenario-specific behavioural simulation count for individual likelihood measures. Note: all simulations deemed to be behavioural after application of LH1 (final breach depth) were retained for conditioning on LH2 (breach thalweg elevation profile).

CDF curves were extracted from behavioural, scenario-specific likelihood
data (Fig. 4). Using these data, time-step-specific percentile discharges
were extracted and combined to construct probabilistic breach outflow
hydrographs for each scenario (Fig. 10). Similarities between the percentile
hydrographs for each scenario are striking, particularly between data
conditioned on modelled final upstream breach depth (LH1) and modelled
upstream breach depth and breach centreline elevation profile data
(LH1
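The assembly of time-step-specific percentile hydrographs from likelihood-weighted behavioural simulations can be sketched as follows (synthetic gamma-shaped hydrographs and random weights stand in for HR BREACH output):

```python
import numpy as np

rng = np.random.default_rng(7)
n_behavioural, n_steps = 50, 120            # illustrative counts
t = np.linspace(0.0, 5.0, n_steps)          # time (h)

# Synthetic behavioural hydrographs: gamma-shaped rise and recession with
# randomised peak magnitude and timing (stand-ins for HR BREACH output).
peaks = rng.uniform(800.0, 2500.0, n_behavioural)
t_peak = rng.uniform(0.8, 2.0, n_behavioural)
q = peaks[:, None] * (t / t_peak[:, None]) * np.exp(1.0 - t / t_peak[:, None])

# Likelihood weights for the behavioural ensembles, rescaled to unity.
w = rng.uniform(0.0, 1.0, n_behavioural)
w /= w.sum()

def percentile_hydrograph(q, w, p):
    """Weighted percentile discharge at every time step (per-step CDF)."""
    out = np.empty(q.shape[1])
    for j in range(q.shape[1]):
        order = np.argsort(q[:, j])
        cdf = np.cumsum(w[order])
        out[j] = q[order, j][min(np.searchsorted(cdf, p), len(cdf) - 1)]
    return out

q5, q50, q95 = (percentile_hydrograph(q, w, p) for p in (0.05, 0.50, 0.95))
```

Because each time step is treated independently, the resulting percentile series need not conserve the outflow volume, or mirror the shape, of any single behavioural hydrograph, which is consistent with the limitations discussed later in this section.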

Individual overtopping waves are preserved for each percentile in the
relevant scenarios (Fig. 10). Median (50th) percentile hydrographs
exhibit slightly more variation, both between scenarios and
following conditioning using the second likelihood measure. This conditioning step
results in a decrease in median percentile

The most noticeable impact on percentile hydrograph form is caused by the
additional conditioning of the data on the final likelihood measure.
Mass-removal scenarios appear to be affected to a far lesser degree than the
control and overtopping scenarios. The exception is DT_instant_5, where discharges for
50th and 5th percentile hydrographs are increased in the first

Scenario-specific maximum (

Total percentile hydrograph outflow volumes for individual scenarios and following clustering.

Percentile hydrographs derived from behavioural Dig Tsho
simulations, for successive likelihood updating steps (“LH1”, “LH1

Crucially, percentile hydrograph form is dictated by the time-step-specific
CDF data. In turn, CDF form is determined by variations in the likelihood of
individual behavioural hydrographs, and the cumulative distribution of their
associated discharges (for each time step). The vast number of simulations
which, following conditioning on flow constriction location, were
subsequently deemed to be non-behavioural (Table 2) serves to alter the form
of scenario-specific CDFs. This effect is particularly dramatic for
DT_control, where the number of behavioural simulations reduces from 76 to 7 following
conditioning on final breach depth, breach centreline elevation profile, and
flow constriction location (LH1

Issues of mass conservation arose with the extraction of behavioural,
percentile-derived breach hydrographs. Both 5th and 50th percentile
hydrographs for all scenarios consistently under-predicted the
volume of lake water released during breach development
(

From a hydrodynamic modelling perspective, an additional and equally important observation is the form of the percentile hydrographs, which generally do not mirror the form of any of the behavioural hydrographs used as input. When combined, issues of mass conservation and the unrepresentative form of the percentile breach hydrographs render them largely unsuitable for use as an upstream boundary condition for subsequent hydrodynamic modelling.

In an effort to further refine the behavioural simulations and improve the
representativeness of the derived percentile hydrographs, all behavioural
data were clustered, regardless of their inclusion of any modelled system
perturbations (Fig. 11). Data clustering was undertaken to characterise
“styles” of breaching, such as those typified by low peak
discharge magnitude and a lengthy time to peak, or by high peak discharge and a
short, sharp rise to peak. Clustering used
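One way to implement such clustering is a k-means partition of simple hydrograph descriptors; the study's clustering was performed in MATLAB (Fig. 11 caption), so the following numpy sketch on synthetic peak-discharge and time-to-peak descriptors is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Descriptors per behavioural hydrograph: (peak discharge, time to peak).
# Two synthetic "styles" are generated: high-magnitude/short-duration
# and low-magnitude/long-duration (values are illustrative only).
a = np.column_stack([rng.normal(2200.0, 150.0, 30), rng.normal(1.0, 0.15, 30)])
b = np.column_stack([rng.normal(900.0, 120.0, 30), rng.normal(3.0, 0.30, 30)])
X = np.vstack([a, b])

# Standardise so both descriptors contribute comparably to the distance.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

def kmeans(Z, k, rng, iters=50):
    """Minimal Lloyd's-algorithm k-means returning cluster labels."""
    centres = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        d = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centres = np.array([Z[labels == i].mean(axis=0)
                            if (labels == i).any() else centres[i]
                            for i in range(k)])
    return labels

labels = kmeans(Z, 2, rng)
```

Standardising the descriptors before clustering prevents the numerically larger discharge values from dominating the Euclidean distance.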

Data clustering. ^{®};

Clusters are broadly defined by

Cluster membership is not as clear-cut as might be anticipated (Fig. 11). The clustering results imply that pigeonholing the breaching scenarios by hydrograph type, or style, is virtually impossible; the exceptions to this rule are the instantaneous mass-removal scenarios, which almost exclusively produce high-magnitude, short-duration hydrographs. This overlap implies that factors other than the initiation mechanism are required to explain the similarity in the range of hydrograph forms produced by each scenario.

Percentile hydrographs were also extracted from the clustered data.
Deviations between observed and modelled median percentile

Inundation extent and flow depth distribution for selected time steps for the DT_control, DT_overtop_max, and DT_instant_5 optimal flood events. See Fig. 1a for location. Inset tick marks spaced at 200 m intervals.

Clustering was largely unsuccessful at improving the utility of percentile-based breach hydrographs for use as hydrodynamic input. This result necessitated the exploration of alternative methods for cascading likelihood-weighted estimates of dam-breach parameter ensemble performance through to the simulation and mapping of GLOF inundation and hazard.

Deterministic approaches to flood reconstruction require the identification of the optimal model, and its subsequent use for predictive flood-forecasting. To illustrate the variability between scenario-specific optimal hydrograph routing patterns, the optimal hydrographs for DT_control, DT_overtop_max and DT_instant_5 were used as upstream input for simulation in ISIS 2-D. Maps of inundation extent and flow depth (Fig. 12) reveal prominent inter-scenario differences in the spatial extent of inundation as the respective hydrographs and GLOF floodwaters progress downstream. Variations include the initial downstream transmission of the DT_overtop_max overtopping wave, which triggers rapid inundation of the entire reach. However, the initially high flow stages are not maintained, rising again only with the increasing breach discharge associated with breach expansion. Use of the DT_instant_5 hydrograph produces spatial and temporal patterns of inundation and wetting front travel time similar to those of DT_overtop_max (Fig. 12).

Probabilistic maps of inundation extent and flow depth were constructed through the retention and evaluation of scenario-specific and likelihood-weighted breach hydrographs. In the example presented herein, we simulated the propagation of 76 individual moraine-breach hydrographs using the ISIS 2-D TVD solver (with an 8 m topographic grid discretisation and a 0.04 s time step), representing the behavioural DT_control parameter ensembles after conditioning on final breach depth. However, the method is equally applicable to the use of several, hundreds, or thousands of individual simulations. For each time step, per-cell CDF curves of flow depth were assembled, from which percentile flow depths were extracted and plotted (Fig. 13). Given the inherent uncertainty surrounding the precise mode of moraine-dam failure and outflow hydrograph form, these data effectively convey the resulting variability in likelihood-weighted predictions of reconstructed inundation extent, whilst preserving time-step-specific percentile flow depths. Because of the nature of their construction, these data do not relate to a specific event or hydrograph but instead provide an indication as to the potential uncertainty in GLOF inundation extents and flow depths associated with a range of behavioural breach hydrographs.
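The per-cell CDF construction can be sketched as follows (synthetic depth rasters and weights; the grid dimensions are arbitrary, although the simulation count matches the 76 behavioural DT_control ensembles):

```python
import numpy as np

rng = np.random.default_rng(11)
n_sims, ny, nx = 76, 40, 60      # 76 behavioural hydrographs; grid is arbitrary

# Synthetic flow-depth rasters (m) for one output time step: one grid per
# behavioural simulation, with shallow cells treated as dry (depth 0).
depths = rng.gamma(2.0, 0.6, (n_sims, ny, nx))
depths[depths < 0.8] = 0.0

# Likelihood weight per simulation, rescaled to sum to unity.
w = rng.uniform(0.2, 1.0, n_sims)
w /= w.sum()

def percentile_depth_map(depths, w, p):
    """Per-cell weighted percentile depth extracted from the per-cell CDF."""
    order = np.argsort(depths, axis=0)
    sorted_d = np.take_along_axis(depths, order, axis=0)
    cdf = np.cumsum(w[order], axis=0)
    idx = np.minimum((cdf < p).sum(axis=0), depths.shape[0] - 1)
    return np.take_along_axis(sorted_d, idx[None, ...], axis=0)[0]

d05, d50, d95 = (percentile_depth_map(depths, w, p) for p in (0.05, 0.50, 0.95))
```

Repeating the extraction for every output time step yields the time-evolving percentile inundation maps of Fig. 13.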

The final output of a GLOF hazard assessment comprises the production of maps of flood hazards, conditioned by one or more directly quantifiable flood-intensity indicators (e.g. Aronica et al., 2012). Whilst inundation depth is arguably the most significant flood-intensity indicator for predicting monetary losses associated with individual flood events (Merz and Thieken, 2004; Vorogushyn et al., 2010, 2011), its combination with flow velocity is regarded as an improved indicator of hazard to human life (Aronica et al., 2012).
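A depth- and velocity-conditioned hazard classification can be sketched as below; note that the thresholds are illustrative placeholders and not the published class boundaries of Aronica et al. (2012):

```python
import numpy as np

def hazard_class(depth, velocity):
    """Map flow depth (m) and velocity (m s^-1) to four hazard classes
    (0 low ... 3 very high) using depth and depth*velocity thresholds.
    The thresholds are illustrative placeholders, not the published
    class boundaries of Aronica et al. (2012)."""
    dv = depth * velocity
    h = np.zeros_like(depth, dtype=int)
    h[(depth > 0.5) | (dv > 0.5)] = 1
    h[(depth > 1.0) | (dv > 1.0)] = 2
    h[(depth > 2.0) | (dv > 2.0)] = 3
    return h

# Example cells spanning the four classes.
depth = np.array([0.2, 0.8, 1.5, 3.0])
velocity = np.array([0.5, 1.0, 1.2, 2.0])
classes = hazard_class(depth, velocity)
```

Applied per cell to percentile flow-depth and velocity grids, a function of this form yields probabilistic hazard maps of the kind shown in Fig. 14.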

GLUE-based percentile maps of inundation for DT_control. 5th, 50th, and 95th percentiles maps of inundation represent the water depth that would be exceeded with 95, 50, and 5 % probability, respectively. See Fig. 1a for location. Inset tick marks spaced at 200 m intervals.

Percentile flood-hazard maps, based on a global hazard index forwarded by Aronica et al. (2012). Such data facilitate a probabilistic evaluation of the evolution of GLOF flood hazard. See Fig. 1a (this paper) for location, and Fig. 1b in Aronica et al. (2012) for description of the hazard index.

A unifying, “end-to-end” framework for probabilistic GLOF reconstruction incorporating high-resolution photogrammetry and probabilistic, GLUE-based numerical dam-breach and hydrodynamic modelling approaches.

A global hazard index proposed by Aronica et al. (2012) was used to
construct maps of GLOF hazard (Fig. 14). Taking probabilistic flow depth and
velocity data as input, probabilistic GLOF hazard maps were produced for the
DT_control scenario. Four hazard classes are defined and shaded for distinction
(

We have demonstrated that the propagation, or cascading, of the parametric
uncertainty and equifinality through the dam-breach and hydrodynamic
modelling components of the GLOF model chain is not only possible but may
also be of considerable value to flood-risk practitioners. A key contribution of
the research is the demonstration that the predictive limits of numerical
models, in this instance applied to the reconstruction of historical moraine
breaching, can be quantified through the use of a weighted, probabilistic
modelling framework (Fig. 15). Our approach illustrates how parametric
uncertainties may be propagated through the GLOF model chain when the output
from one model is used as the input to another. The approach can therefore
be used to isolate the most sensitive components of a predictive
system – e.g. Manning's

Our results highlight the primary influence of the material roughness of
the moraine in dictating HR BREACH parameter ensemble, and therefore
breach-hydrograph, performance. Specifically, behavioural Manning's

The observation that broadly similar behavioural peak discharges are associated with different modes of breach initiation is particularly significant (Table 3). This finding suggests that additional factors, such as the sampling ranges for the various input parameters and the model boundary conditions (dam geometry and lake bathymetry), exert an overriding influence on this breach-hydrograph characteristic. However, breach initiation mode is a primary control over downstream GLOF wetting-front travel times and inundation extent, as illustrated in Fig. 12, which reinforces the need for moraine-dam failure modelling exercises to consider, wherever possible, the complete range of potential breach-initiation scenarios at the model design stage.

The physically based nature of the dam-breach model used in this research represents a notable improvement over empirical or analytical models. However, many of the geometric and material characteristics of the moraine and lake complex remain highly simplified. This simplification is a necessary compromise related to our use of a dam-breach model whose primary intended application is the investigation of (relatively) simple artificial earthen or concrete dam constructions. Alternative dam-breach models, including unstructured mesh-based variants (e.g. Worni et al., 2012), have been demonstrated to perform well at reproducing historical moraine-dam failure dynamics. Coupled with stochastic parameter sampling functionality, such models would represent a powerful addition that could be easily incorporated into the framework we present herein (Fig. 15).

The model chain presented in this article is parametrically complex, and there remains an imbalance between the parametric degrees of freedom in the model setup and the objective descriptors of post-GLOF geometry used to evaluate model performance. This imbalance is common to many areas of numerical modelling in the Earth surface sciences, and to virtually all dam-breach modelling efforts, not only moraine-dam breaching. The value of the GLUE method in this respect is its suitability for actively exploring and quantifying the extent of uncertainty and equifinality in dam-breach model output that results from poorly constrained model input parameters. This research has provided specific tools, in the form of probability-based maps of GLOF inundation and hazard, to communicate the effects of model uncertainty to potential end users in a manner that is both open and objective.

Whilst the extraction of percentile maps of inundation extent and flow depth is not necessarily an entirely new concept (e.g. McMillan and Brasington, 2007, 2008; Vorogushyn et al., 2010, 2011), the application to GLOF reconstruction presented here is an original and novel one. This approach represents a significant improvement in the effective communication of the likelihood associated with a range of moraine-dam failure scenarios. Significantly, our probability-based flood hazard maps (Fig. 14) can be visualised in a GIS environment, and provide a clear picture of patterns of flood hazard zonation, at varying levels of confidence and at successive model time steps, that would prove useful to disaster risk managers.

The likelihood of multiple GLOFs occurring from an individual moraine-dammed
lake complex is low. In most cases, the breached moraine dam can comfortably
accommodate relict lake discharges. Therefore, the identification and use of
posterior parameter distribution data for predictive GLOF forecasting is of limited
utility if these ranges prove to be highly site-specific. The identification
of a suite of universal or region-specific material characteristics and
their probabilistic distributions would facilitate their use in predictive
GLOF simulation efforts. We believe that the Dig Tsho failure is
representative of comparable glacial lake systems in the region, and as such
our results are of value for extension of the technique to similar glaciated
regions, with the caveat that a first-pass assessment of GLOF hazard should
be undertaken to identify “priority” glacial lakes (Huggel et al., 2004). In
identifying the importance of constraining specific parameters for
dam-breach model parameterisation (e.g. Manning's

Probabilistic approaches have clear advantages over the deterministic approaches traditionally used for GLOF reconstruction such as the use of palaeohydraulic techniques for at-a-point or reach-scale peak discharge estimation (e.g. Cenderelli and Wohl, 2003; Kershaw et al., 2005; Bohorquez and Darby, 2008). Probabilistic methods embrace and attempt to convey the influence of uncertainty and equifinality in model input on subsequent output. It might be argued that their value outweighs the additional processing time required for their implementation, which may involve the execution of hundreds or thousands of individual simulations.

In considering the source of uncertainty in the GLOF modelling process, we have focused on its influence over the moraine dam-failure process. However, numerous additional sources of uncertainty are present at various stages in the workflow, such as the reconstruction of lake-basin bathymetry, and merit further investigation (Westoby et al., 2014a). The logistical impracticalities of identifying and addressing all sources of uncertainty in the GLOF model chain are currently a significant hindrance to applied modelling efforts. However, simple sensitivity analyses remain of value to quantify the impacts of individual sources of uncertainty on numerical dam-breach and hydrodynamic simulation, and might be incorporated straightforwardly into our modelling framework (Fig. 15).

This paper has outlined and presented results from a workflow for cascading uncertainty and equifinality through the glacial lake outburst flood (GLOF) model chain using a combination of advanced, physically based numerical dam-breach and hydrodynamic models. Dam material roughness is the dominant influence on outflow hydrograph form. Morphological characteristics of a GLOF breach are appropriate measures for assessing the performance of individual simulations, or parameter ensembles, at reproducing observed breach morphology. Breach morphology is reproducible by parameter ensembles associated with differing breach-initiation scenarios, lending support to the adoption of probabilistic, as opposed to deterministic, methods for dam-breach outburst-flood reconstruction. We also demonstrate an effective approach for cascading dam-breach simulation likelihood data through to the construction of probability-based maps of GLOF inundation extent and flow depth, and the subsequent derivation of event-specific maps of flood hazard.

M. J. Westoby was funded by a NERC Open CASE award (NE/G011443/1) in partnership with
Reynolds International Ltd. Additional funding to support field activities
from the Department of Geography and Earth Sciences Postgraduate
Discretionary Fund (Aberystwyth University) is duly acknowledged.
J. Balfour, P. Cowley, S. Doyle, H. Sevestre, C. Souness, R. Taylor, and guides
and porters from Summit Trekking, Kathmandu, assisted with data collection
in the Khumbu Himal. HR Wallingford Ltd and the Halcrow Group Ltd are
thanked for the provision of academic licences for the use of HR BREACH and
ISIS 2-D, respectively. GeoEye imagery (Fig. 1b) was provided free of
charge by the GeoEye Foundation. ASTER GDEM data were downloaded free of
charge from ^{®}.
We thank the associate editor, Katharine Huntington, for handling the
manuscript, and the reviewers for providing constructive comments on initial
and re-submitted iterations.
Edited by: K. W. Huntington

^{®}: MATLAB (version 7.6), available at: