31 May 2023
Status: this preprint is currently under review for the journal ESurf.

Short Communication: Motivation for standardizing and normalizing inter-model comparison of computational landscape evolution models

Nicole M. Gasparini, Katherine R. Barnhart, and Adam M. Forte

Abstract. This manuscript is a call to the landscape evolution modeling community to develop benchmarks for model comparison. We illustrate the use of benchmarks in illuminating the strengths and weaknesses of different landscape evolution models (LEMs) that use the stream power process equation (SPPE) to evolve fluvial landscapes. Our examples compare three different modeling environments—CHILD, Landlab, and TTLEM—that apply three different numerical algorithms on three different grid types. We present different methods for comparing the morphology of steady-state and transient landscapes, as well as the time to steady state. We illustrate the impact of time step on model behavior. There are numerous scenarios and model variables that we do not explore, such as model sensitivity to grid resolution and boundary conditions, or processes beyond fluvial incision as modeled by the SPPE. Our examples are not meant to be exhaustive. Rather, they illustrate a subset of best practices and practices that should be avoided. We argue that all LEMs should be tested in systematic ways that illustrate the domain of applicability for each model. A community effort beyond this study is required to develop model scenarios and benchmarks across different types of LEMs.
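To make the modeled process concrete, below is a minimal, hypothetical 1D sketch of the stream power process equation (SPPE), dz/dt = U − K A^m S^n, integrated with an explicit forward-Euler step. It is not the numerical scheme of CHILD, Landlab, or TTLEM, and all parameter values are illustrative only; it simply shows the detachment-limited fluvial incision law that the compared models solve.

```python
import numpy as np

def run_sppe_1d(n_nodes=50, dx=100.0, U=1e-3, K=1e-5, m=0.5, n=1.0,
                dt=100.0, n_steps=20000):
    """Evolve a 1D river profile under dz/dt = U - K * A^m * S^n.

    Node 0 is a fixed base level (outlet); uplift U and erodibility K
    are spatially uniform. Values are illustrative, not from the study.
    """
    z = np.zeros(n_nodes)              # start from a flat profile (m)
    x = np.arange(n_nodes) * dx        # distance upstream of the outlet (m)
    # Crude drainage-area proxy: area grows linearly downstream.
    area = (x[::-1] + dx) * dx
    for _ in range(n_steps):
        slope = np.zeros(n_nodes)
        # Downstream slope at each interior node (flow is toward node 0).
        slope[1:] = (z[1:] - z[:-1]) / dx
        erosion = K * area**m * np.maximum(slope, 0.0)**n
        z[1:] += (U - erosion[1:]) * dt   # outlet elevation stays fixed
    return x, z

x, z = run_sppe_1d()
```

With uniform U and K, the profile relaxes toward a steady state in which slope balances uplift, S = (U / (K A^m))^(1/n), so channel gradient steepens upstream as drainage area shrinks — the kind of steady-state morphology the benchmark comparisons in the manuscript evaluate.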

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this preprint. The responsibility to include appropriate place names lies with the authors.

Status: final response (author comments only)

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • CC1: 'Comment on esurf-2023-17', John Armitage, 21 Jun 2023
  • RC1: 'Comment on esurf-2023-17', Anonymous Referee #1, 06 Jul 2023
  • RC2: 'Comment on esurf-2023-17', Andrew Wickert, 15 Jul 2023
  • CC2: 'Comment on esurf-2023-17', Risa Madoff, 22 Jul 2023
  • AC1: 'Comment on esurf-2023-17', Nicole Gasparini, 24 Aug 2023


Total article views: 998 (including HTML, PDF, and XML), cumulative since 31 May 2023:
  • HTML: 736
  • PDF: 236
  • XML: 26
  • BibTeX: 18
  • EndNote: 17

Viewed (geographical distribution)

Total article views: 972 (including HTML, PDF, and XML); thereof 972 with geography defined and 0 with unknown origin.
Latest update: 23 May 2024
Short summary
Computational landscape evolution models (LEMs) show how landscapes change through time. There are many LEMs in the scientific community, but there are no standards for testing whether LEMs produce correct solutions or comparing output among LEMs. We present a comparison of three LEMs, illustrating both strengths and weaknesses. We hope our examples will motivate the LEM community to develop methods for inter-model comparison, which could help to avoid current and future modeling pitfalls.