Articles | Volume 12, issue 3
https://doi.org/10.5194/esurf-12-691-2024
© Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.
Geomorphic risk maps for river migration using probabilistic modeling – a framework
Download
- Final revised paper (published on 08 May 2024)
- Preprint (discussion started on 13 Dec 2023)
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on egusphere-2023-2190', Keith Beven, 19 Dec 2023
- AC1: 'Reply on RC1', Omar Wani, 18 Mar 2024
- RC2: 'Comment on egusphere-2023-2190', Anonymous Referee #2, 06 Feb 2024
- AC2: 'Reply on RC2', Omar Wani, 18 Mar 2024
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Omar Wani on behalf of the Authors (18 Mar 2024): Author's response, Author's tracked changes, Manuscript
ED: Publish as is (21 Mar 2024) by Sagy Cohen
ED: Publish as is (21 Mar 2024) by Tom Coulthard (Editor)
AR by Omar Wani on behalf of the Authors (01 Apr 2024)
This paper represents a valuable first attempt to provide a practical approach to the probabilistic modelling of meander planform migration, with an application to satellite data for the Ucayali River in the Amazon basin. As such, it is a bit outside my normal expertise, but I do have more experience of trying to apply probabilistic methods to deterministic knowledge in hydrological and other environmental applications. One of the things that is lacking here is any recognition of the past discussions of epistemic and aleatory uncertainties in environmental applications and their consequences for model testing and uncertainty quantification. The main lesson learned, in fact, is that there is no right answer – what comes out depends on the assumptions made. Here, I suspect that a professional statistician would be reasonably happy with the assumptions made, since the problem has been shoehorned into a formal statistical framework (with consequent discussion in the paper about the possibility of variability of parameters in time and space).
But are those assumptions correct? A comment on Figure 6 suggests that “the parameters gravitate towards the same values” (L347). Even in the hypothetical case, where the assumptions are mostly met by definition, this is surely not the case – there is a move of the migration coefficient away from the true value. Is the result therefore being biased by the likelihood function? The differences for the actual application are even more marked (the later discussion is more realistic in this respect).
But in Figure 8, the model actually seems to fail since the observed channel moves outside of the uncertainty bounds. This makes me wonder if it was necessary to restrict the model in both time (why only to 1995, that is nearly 30 years ago now, what about the data since?) and space (why only 4 meanders?) to demonstrate some degree of success.
So one of the questions (again with much discussion elsewhere but not here) is how far the uncertainty component is actually compensating for the model deficiencies, and at what point the underlying model should be considered invalid (see the discussion of model invalidation in Beven and Lane, HP 2022, and references therein).
Certainly other possibilities for model evaluation and uncertainty estimation would be possible within a less formal Bayesian framework (and not only ABC with a similar formal likelihood). The additive formulation with imposed smoothing here, for example, implicitly imposes a correlation structure in the spatial random component (Figure 6) that might not be stationary and could perhaps be better considered explicitly, given that the random component might not be independent of the parameter set as assumed in Eqn. 2.
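To illustrate that point with a minimal sketch (not the authors' formulation; the coordinates, window length, and covariance parameters below are hypothetical): smoothing an independent random perturbation along the channel is equivalent to imposing one particular, fixed correlation structure on the spatial error, whereas drawing the error from an explicit covariance model makes the correlation length a stated, inspectable choice.

```python
import numpy as np

# Positions of channel nodes along the centreline (illustrative only)
s = np.linspace(0.0, 10.0, 200)          # streamwise coordinate (hypothetical units)
rng = np.random.default_rng(0)

# (a) Implicit approach: white noise followed by moving-average smoothing.
#     The smoothing window silently fixes the correlation length of the error.
white = rng.normal(0.0, 1.0, s.size)
window = np.ones(11) / 11.0
eps_smoothed = np.convolve(white, window, mode="same")

# (b) Explicit approach: draw the spatial error directly from a Gaussian
#     process with an exponential covariance, so the correlation length (ell)
#     and variance (sigma) are explicit parameters that could vary or be estimated.
sigma, ell = 1.0, 0.5                    # hypothetical hyperparameters
dist = np.abs(s[:, None] - s[None, :])
cov = sigma**2 * np.exp(-dist / ell)
eps_explicit = rng.multivariate_normal(np.zeros(s.size), cov)

# Both versions produce spatially correlated errors; only (b) states the
# structure openly.
def lag1_corr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1_corr(eps_smoothed), lag1_corr(eps_explicit))
```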
In fact, each model run has its own set of residuals that will not necessarily have common structure or parameters. To assume that they do certainly simplifies the analysis – but it is again an assumption (not “by definition” as stated on L206 – other choices would be possible).
One of the questions that has been discussed in the hydrological literature is whether the input data might be disinformative in model evaluations. In fact in your case there are no real inputs as such apart from the initial plan form, but there is perhaps a certain possibility here in terms of the sequencing of meander-forming events relative to the times at which images are available. Experience suggests that sometimes extreme events can be significant in migration (perhaps less so in this Amazon case for a 10-year period?). But this will be an additional epistemic uncertainty associated with the modelling assumptions.
So, in conclusion, I suggest that some revision of the paper is needed to reflect some of the issues raised above, both in querying the choice of assumptions as the methods are presented, and in the discussion (especially in how epistemic uncertainties are being formulated as if they are purely aleatory). I would very much like to see extension to more meanders and longer time scales (surely the data are available) as I suspect that this might reveal more limitations of the assumptions – but I accept that might not be possible. This is already a useful first attempt at uncertainty estimation of such a problem.
[As an aside, perhaps for future studies you might consider a limits of acceptability approach to model evaluation?]
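For what that might look like in practice, here is a minimal sketch (not tied to the authors' code; the function name, tolerances, and toy data are hypothetical): each observed bank position gets its own acceptability limits, reflecting observational and epistemic error set a priori, and a simulation is retained as behavioural only if it falls within those limits at (nearly) every evaluation point.

```python
import numpy as np

def limits_of_acceptability(simulated, observed, tol, min_fraction=1.0):
    """Return a boolean mask of behavioural model runs.

    simulated : (n_runs, n_obs) array of simulated bank positions
    observed  : (n_obs,) array of observed bank positions
    tol       : (n_obs,) acceptability half-widths, set a priori from
                observational + epistemic error (hypothetical values here)
    min_fraction : fraction of observation points that must lie within
                   the limits for a run to be accepted
    """
    within = np.abs(simulated - observed[None, :]) <= tol[None, :]
    return within.mean(axis=1) >= min_fraction

# Toy usage: 1000 parameter samples evaluated at 20 observation points.
rng = np.random.default_rng(1)
observed = rng.normal(0.0, 1.0, 20)
simulated = observed[None, :] + rng.normal(0.0, 0.5, (1000, 20))
tol = np.full(20, 0.8)

behavioural = limits_of_acceptability(simulated, observed, tol, min_fraction=0.95)
print(f"{behavioural.sum()} of {len(behavioural)} runs retained")
```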
Keith Beven
Some other comments
Equation 2. Theta should be included in g(), even if you then later assume independence
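That is, if Eq. 2 currently has the additive form on the left below, writing the random component as conditional on the parameters (right) makes the independence assumption an explicit, testable choice rather than an implicit one (notation here is schematic, not copied from the manuscript):

```latex
% Schematic only: d = data, f = migration model, \theta = parameters,
% \epsilon = additive random component with density g.
d = f(\theta) + \epsilon, \quad \epsilon \sim g(\epsilon)
\qquad \longrightarrow \qquad
d = f(\theta) + \epsilon, \quad \epsilon \sim g(\epsilon \mid \theta)
```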
L135. The thing about epistemic errors, including model structural errors due to oversimplification, is that they are not necessarily systematic – that is what makes strong statistical assumptions often difficult to justify.
L141. The equifinality thesis has quite a long history in hydrology (see e.g. Beven, 2009, Environmental Modelling – An Uncertain Future?)
L154. There is a lot of experience with model evaluations and uncertainty estimation of flood risk maps (some mentioned in the Beven and Lane paper)
L157. Follow some parametric distribution in the limit. You can assume that, of course, but this is a nonlinear model subject to epistemic uncertainties, so this will not necessarily follow in time or space.
L374/5. Going out in the field. An interesting comment, since you are not using a process model and you can get satellite images (and therefore explicitly quantify actual patterns of residuals) relatively frequently – so what would you actually measure? Might it not be better to suggest allowing data assimilation to update the forecasts (or will that be the next paper, using more up-to-date images)?
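A minimal sketch of what that updating could look like in time (importance reweighting of parameter samples each time a new image becomes available; the function name, the Gaussian weighting, and the arguments are illustrative, not the paper's method):

```python
import numpy as np

def assimilate_image(params, weights, simulate, observed, obs_sigma):
    """Reweight parameter samples when a new satellite image arrives.

    params    : (n, p) array of parameter samples (e.g. migration coefficients)
    weights   : (n,) current importance weights
    simulate  : callable mapping one parameter vector to predicted bank
                positions at the image date (placeholder for the model)
    observed  : observed bank positions extracted from the new image
    obs_sigma : assumed observation error std (illustrative Gaussian choice)
    """
    preds = np.array([simulate(p) for p in params])
    resid = preds - observed[None, :]
    log_like = -0.5 * np.sum((resid / obs_sigma) ** 2, axis=1)
    new_w = weights * np.exp(log_like - log_like.max())
    return new_w / new_w.sum()
```

The reweighted samples would then condition the forecast to the next image date; resampling could be added if the weights degenerate.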
This could obviously be done in space too, working from bend to bend (L388ff) rather than as a spatially distributed inverse problem with all the interactions between bend parameter sets – since the best prior estimate of the distribution of parameters for each bend should be that of the upstream bend (unless there is information otherwise).
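A sketch of that sequential scheme in space (again a toy example with hypothetical names and numbers, not the authors' inference): the posterior parameter samples from one bend serve as the prior for the next bend downstream, so information travels from bend to bend without solving one large joint inverse problem.

```python
import numpy as np

rng = np.random.default_rng(2)

def update_for_bend(prior_samples, observed_shift, sigma=0.2):
    """Toy per-bend update: the 'model' here predicts a centreline shift equal
    to the sampled migration-rate parameter, so samples are reweighted against
    the observed shift and resampled (illustrative Gaussian weighting only)."""
    w = np.exp(-0.5 * ((prior_samples - observed_shift) / sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(prior_samples.size, size=prior_samples.size, p=w)
    return prior_samples[idx]

# Work downstream, bend by bend: each bend's posterior is the next bend's prior.
prior = rng.normal(1.0, 0.5, 5000)          # broad prior for the first bend
observed_shifts = [1.2, 1.1, 0.9, 1.0]      # hypothetical per-bend observations
for k, shift in enumerate(observed_shifts, start=1):
    posterior = update_for_bend(prior, shift)
    print(f"bend {k}: mean {posterior.mean():.2f}, sd {posterior.std():.2f}")
    prior = posterior                        # carry information downstream
```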
L438. But the observations are informative in the hypothetical case because the assumptions are consistent. In the real case they are not, but the observations might still be informative – for example in showing that your model is wrong (as suggested by Figure 8).
Reference
Beven, K. J. and Lane, S., 2022. On (in)validating environmental models. 1. Principles for formulating a Turing-like test for determining when a model is fit for purpose. Hydrological Processes, 36(10), e14704, https://doi.org/10.1002/hyp.14704.