Full 4D Change Analysis of Topographic Point Cloud Time Series using Kalman Filtering
Lukas Winiwarter
Katharina Anders
Daniel Schröder
Bernhard Höfle
Abstract. 4D topographic point cloud data contain information on surface change processes and their spatial and temporal characteristics, such as the duration, location, and extent of mass movements, e.g., rockfalls or debris flows. To automatically extract and analyse change and activity patterns from these data, methods considering the spatial and temporal properties are required. The commonly used M3C2 point cloud distance reduces uncertainty through spatial averaging for bitemporal analysis. To extend this concept into the full 4D domain, we use a Kalman filter for point cloud change analysis. The filter incorporates M3C2 distances together with uncertainties obtained through error propagation as Bayesian priors in a dynamic model. The Kalman filter yields a smoothed estimate of the change time series for each spatial location, again associated with an uncertainty. Through the temporal smoothing, the Kalman filter uncertainty is, in general, lower than the individual bitemporal uncertainties, which therefore allows more change to be detected as significant. In our example time series of bi-hourly terrestrial laser scanning point clouds covering around 6 days (71 epochs) of a rockfall-affected high-mountain slope in Tyrol, Austria, we are able to almost double the number of points where change is deemed significant (from 14.9 % to 28.6 % of the area of interest). Since the Kalman filter allows interpolation and, under certain constraints, also extrapolation of the time series, the estimated change values can be temporally resampled. This can be critical for subsequent analyses that are unable to deal with missing data, as may be caused by, e.g., foggy or rainy weather conditions. We demonstrate two different clustering approaches, transforming the 4D data into 2D map visualisations that can be easily interpreted by analysts. By comparison to two state-of-the-art 4D point cloud change methods, we highlight the main advantage of our method to be the extraction of a smoothed best estimate time series for change at each location. A main disadvantage remains in that spatially overlapping change objects cannot be detected in a single pass. In conclusion, the consideration of combined temporal and spatial data enables a notable reduction in the associated uncertainty of the quantified change value for each point in space and time, in turn allowing the extraction of more information from the 4D point cloud dataset.
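As a rough illustration of why this temporal fusion lowers the detection threshold (a minimal Python sketch with made-up values, not the authors' implementation): combining several bitemporal M3C2 distances by inverse-variance weighting yields an estimate whose uncertainty is smaller than that of any single epoch, which is essentially what the Kalman update does recursively.

```python
import numpy as np

# Hypothetical bitemporal M3C2 distances (m) at one core point and their
# 1-sigma uncertainties from error propagation (illustrative values only).
d = np.array([0.012, 0.018, 0.015, 0.020])
sigma = np.array([0.010, 0.012, 0.009, 0.011])

# Inverse-variance weighted fusion (the static special case of a Kalman update).
w = 1.0 / sigma**2
d_fused = np.sum(w * d) / np.sum(w)
sigma_fused = np.sqrt(1.0 / np.sum(w))

print(f"fused change: {d_fused:.4f} m +- {sigma_fused:.4f} m")
# The fused 1-sigma uncertainty (~0.005 m) is roughly half of the individual
# uncertainties, so smaller changes become significant at the 95 % level.
```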
Status: final response (author comments only)
RC1: 'Comment on esurf-2021-103', Roderik Lindenbergh, 04 Mar 2022
Review comments by Roderik Lindenbergh and Mieke Kuschnerus for
Full 4D Change Analysis of Topographic Point Cloud Time Series using Kalman Filtering
Short summary
This manuscript considers how changes can be extracted from near-continuous time series of laser scan data. To overcome the problem of missing epochs, and to incorporate measurement uncertainty, time series at each of many so-called core points are approximated using a Kalman filter approach. This approach has the advantage that it provides a way to interpolate the time series to moments when no data are available, and that error propagation is well defined. Given the full time series at many core points, different features, like the moment of first significant or maximal change, can be extracted. Time series can now also be clustered by grouping points with similar features. The developed approaches are demonstrated on a time series of ~70 consecutive point clouds of an area affected by rockfall and undergoing earth-moving work.
Major remarks
Our advice is: Major Revision
The presented work contains a lot of interesting ideas and visualizations and is definitely pushing information extraction from time series of 3D data a good step forward. Especially the processing steps of spatial smoothing (M3C2-EP) in combination with temporal smoothing (Kalman filter) to generate regularly sampled, smooth time series are very innovative. These smoothed time series could be used for a variety of applications and in combination with many other methods. Here the authors choose to use feature extraction and clustering to find regions of similar deformation behavior. The explanation of these last two steps lacks focus and should be concentrated on one (at most two) sets of features and one clustering method. A separate section of the results should then deal with the comparison to other methods.
Some major comments:
- The Kalman filter is one way of interpolating a time series. Directly related is Kriging, which aims at assessing and exploiting spatial and/or temporal correlation. Kriging also enables error propagation. Time series could also be approximated using Fourier or B-Spline polynomials. This could be better discussed in the Intro (as it is related work).
- In your Kalman filter implementation you use three parameters: location, velocity and acceleration. First, it should be clearly stated somewhere that you use these parameters to model change in the vertical direction (right?). That said, using velocity and acceleration to model change at a location that changes due to digging is not directly intuitive to me, as such change would be better modeled as a step function, please comment. Or, more generally, how should instantaneous change be incorporated in your setup? And do you really need acceleration as a third parameter, would location and velocity not provide similar results in an easier way? (See the illustrative sketch of the two candidate state models after this list.)
- I find Figure 1 difficult to understand at first sight; it would be good to also include a photo or a point cloud colored by height as a first impression of the site.
- Sect. 2.5, from line 263 onwards, and Sect. 2.6: Looks like you are trying many different methods at once. Why not choose one clear method of (1) pre-processing, (2) uncertainties with M3C2-EP, (3) smoothed time series with Kalman filter, (4) feature extraction, (5) clustering, where you choose one set of (most relevant) features for the clustering. In my understanding, the main goal of this paper is not to compare different features and/or clustering algorithms but to introduce the two previous steps and highlight the improvements that they yield during clustering. Possibly also add the workflow chart as shown in the readme file that comes with your code on GitHub. The comparison with clustering on unfiltered time series and without feature detection can then be part of the discussion.
- It would be good if some or all of the features in Tables 1 to 4 could be illustrated on two to three example time series (e.g. RTS-SE-0.5) of representative locations: one in the excavation area, one in a rockfall gully, and some third one.
- Results section and Figures 4-8: some subsections are needed here to make it more accessible. The first part deals with the results of processing steps 1-4. From line 300 it goes into feature extraction and comparison/visualization of different features. As mentioned above: better focus on one set of features. Then a subsection called ‘comparison’ is needed. Here it should be clear, what is ‘your own presented method’ and what other methods do you compare with. I would suggest to only compare end results and not different steps in between. Make a selection out of Figures 4 to 8 and show the most relevant results.
- P14: Fig. 7 is hardly discussed; discuss it if relevant, or omit the figure.
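To make the state-model question above concrete, a minimal Python sketch (assumed matrices and epoch spacing, not taken from the manuscript) of the two candidate state-transition models, constant velocity versus constant acceleration; in both cases only the change value itself is observed:

```python
import numpy as np

dt = 2.0 / 24.0  # bi-hourly epoch spacing expressed in days (assumption for illustration)

# Constant-velocity model: state x = [change d, rate v]
F_cv = np.array([[1.0, dt],
                 [0.0, 1.0]])

# Constant-acceleration model: state x = [change d, rate v, acceleration a]
F_ca = np.array([[1.0, dt, 0.5 * dt**2],
                 [0.0, 1.0, dt],
                 [0.0, 0.0, 1.0]])

# In both cases only the change d itself is observed (the M3C2 distance along
# the surface normal), so the measurement matrix picks the first component:
H_cv = np.array([[1.0, 0.0]])
H_ca = np.array([[1.0, 0.0, 0.0]])

# A step-like event (e.g. excavation) violates both models: neither a constant
# rate nor a constant acceleration describes an instantaneous jump, which is
# the core of the question raised above.
```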
Minor remarks
- We applied the testing framework you mention in the 3rd paragraph of Ch. 1 to two-epoch TLS data in: Deformation Analysis of a bored tunnel by means of Terrestrial Laser Scanning, Rinske van Gosliga, Roderik Lindenbergh and Norbert Pfeifer, IAPRS Volume XXXVI, Part 5, Dresden, 25-27 September 2006.
- For significant change extraction, also the terrain roughness could be incorporated as a variance value, compare Kraus, K., Karel, W., Briese, C., & Mandlburger, G. (2006). Local accuracy measures for digital terrain models. The Photogrammetric Record, 21(116), 342-354
- In Figure 2, the velocity and acceleration could also be omitted (or shown once, in a separate image), as the graphs contain a lot of details now.
- Line 90: data set from 2020
- Line 97: not clear what kind of comparison is meant here. Comparison of uncertainty estimation, clustering approach or change detection in general?
- Figure 1: caption discusses II and III, but these are not in the figure. No reference to Fig. 1b in the text, suggested to add to section 2.3
- Line 124: ‘methods […] are based on a part of recorded data […]’ -> Methods are applied to the data, tested on the data, or similar.
- Line 133 – 139: part of 2.3? not clear why it is mentioned here before M3C2-EP has been introduced.
- Sect. 2.4: it is not explicitly mentioned in the text what t and x_t are
- p8, r205: what exactly is the "uncertainty in point cloud distance obtained by M3C2-EP"?
- P8r227: no variance of position in null epoch: would it not be more realistic to involve a measurement error?
- Figure 2: where is the example core point located in the area? You could mark the location in Figure 1, so it is visible what kind of change to expect.
- From Section 2.4 it was not clear to me why RTS was discussed; later I found out that this was actually used to obtain (additional) results. This could be better announced.
- Is the last series of features (last paragraph of Section 2.5) necessary for this paper? In my opinion these could be omitted and focus could be on the features in Tables 1 to 4.
- Figure 3b: red points (only bitemporal change) are difficult to see and it is a bit confusing that the borders of the area have the same color.
- Caption Fig. 4: -> "At grey points no significant change could be detected"
- Caption Fig. 5: 'residuals': between what and what?
- P16: ”Fig 8 depicts a bird’s eye view”: this is the same view as all the other figures, and is not focusing on the lower part of the slope: wrong figure?
- P16: there is no ‘II’ in Fig.1.
- P16: I could not find Fig. 8c unfortunately
- P19: “Recovered velocities and accelerations”: I would use the word “estimated”
- P19: what do you mean by "manually extracted features"? I thought all work was automated?
Citation: https://doi.org/10.5194/esurf-2021-103-RC1
RC2: 'Comment on esurf-2021-103', Dimitri Lague, 19 Mar 2022
Winiwarter and co-authors propose a new method to analyze 3D point cloud time series (so-called 4D data) by combining spatial smoothing (the existing M3C2 distance measurement with a specific error model recently published by the authors, M3C2-EP) and temporal smoothing using Kalman filtering. Kalman filtering is typically used to interpolate and smooth the trajectory of moving objects (planes, vehicles, ...), and even to extrapolate their trajectory for short time periods. Specific points of the scene (regular core points) record the complete temporal evolution of the topographic change, which is smoothed and interpolated with Kalman filtering to create a time series of topographic change with regular sampling. Following a string of recent papers describing spatio-temporal clustering of 4D data, the authors use various approaches to cluster the 2D map of features extracted from the time series, to create maps of clusters (e.g., timing of the first event, amplitude of the largest event, ...). They use a real dataset of a cliff monitored over several days with TLS.
Evaluation: while the general idea of smoothing the signal temporally to improve the signal-to-noise ratio and the detectability of potentially smaller events is interesting (but not new in itself, e.g., Kromer et al., 2015), I find that the paper does not clearly demonstrate the benefits of the complex Kalman filtering and its associated error model compared to previous approaches (Kromer et al., 2015) or simpler approaches such as bi-temporal analysis, or simple linear interpolation when regular temporal sampling of the data is needed. The paper also lacks information and discussion on key aspects of the clustering approach, and uses a very complex set of features derived from the time series without clear justification and in-depth analysis of the results.
The introduction is very good, but the result section is not well organized and many figures are not informative, or of limited quality, or not fully described in the text. A simple figure illustrating the principle of the method is also lacking.
I think it is possible for this MS to be published at some point, but it needs very significant work to better present the results (both in terms of figure quality and analysis) and to better demonstrate the advantages of the method compared to simpler approaches, which could be done for instance on synthetic data. Also, focusing the clustering approach on one method with a meaningful set of features that would be easy to interpret would make the paper simpler.
General comments:
- Kalman filtering is used for predicting system states that vary smoothly. I do not see why it would apply to an excavator removing rocks, rockfall or climate-driven surface erosion, given that these events tend to be highly discrete in time, and thus inconsistent with a smooth evolution. Moreover, the use of a backward pass limits the ability of the method to accurately detect the timing of an event. Why can't you simply define local velocities or accelerations from a finite-difference calculation (e.g., v = (P(t2) - P(t1)) / (t2 - t1), where P is your point location)? You'd get a better temporal localization of events, at the expense of a lower detectability of small events. As for the clustering, as you mention in the discussion, a simple linear interpolation would suffice. (See the finite-difference sketch after this list.)
- The part of the paper using features extracted from time series is quite superficial. There is no discussion on which features are actually important in the clustering.
- The choice of the number of clusters is not discussed at all. This is a critical point, as the issues of over- and under-segmentation are central in clustering and are not addressed in the paper.
- Some figures have poor quality, with details that are difficult to see (fig. 7,8)
- Figure 8 seems to be missing one sub-picture that is mentioned in the text but not shown.
- The results section does not have a clear organization, and many figures are not described and exploited to their full extent.
- The discussion does not address the choice of the number of clusters. It also does not discuss the limits of the approach when it comes to the reduced temporal resolution, the need to choose a state variance, and the general complexity of the approach. In particular, it is ill-adapted to detecting precursors, which can be critical in real-time monitoring, because of the smoothing of the signal. In general, I find that there is a tendency in the discussion to assume that because the method is more complex, it is better. However, this is not clearly supported by the evidence shown in the paper. Apart from one figure (fig. 3), the superiority of the new approach compared to a simple bi-temporal approach is not clearly demonstrated (and I actually have doubts on the results of fig. 3). The benefits in terms of a lower level of change detection are not obvious and would benefit from synthetic data simulation to evaluate quantitatively how well each method is able to recover a known change.
- The discussion does not describe the benefit of Kalman filtering compared to the Kromer et al., (2015) approach.
- Also, the fact that the clustering is done in 2D, while the core points are inherently 3D, is not discussed.
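The finite-difference alternative suggested in the general comments above is easy to sketch; a minimal Python illustration with made-up epoch times and change values (not data from the study):

```python
import numpy as np

t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])        # epoch times in hours (example values)
d = np.array([0.00, 0.01, 0.01, 0.06, 0.06])   # change at one core point in m (example values)

# Local velocity from forward differences, assigned to the interval midpoints.
v = np.diff(d) / np.diff(t)          # m per hour
t_mid = 0.5 * (t[1:] + t[:-1])

# A sharp step between t = 4 h and t = 6 h shows up as a single large velocity,
# i.e., the event is localized to one interval rather than smoothed over many.
print(dict(zip(t_mid, np.round(v, 4))))
```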
Detailed comments
The introduction is very good, and states clearly the objectives of the paper with the necessary references to previous work.
L113: please specify the typical point spacing. This is critical information that is missing to understand why you are not able, with bi-temporal analysis, to detect a 5-10 cm change with a sensor with 0.005 m precision!
L127: could you explain why you needed to realign the data if the sensor was on a fixed pillar and, arguably, all scans were acquired in the same reference frame? Or is it specifically related to using M3C2-EP and estimating the alignment uncertainty? In that case, mention it in the text.
L136: why 0.5 m?
L138: are there any corrections for temperature effects on the ranging measurement (these start to be significant over 800 m)? Also, I suspect the 0.005 m ranging accuracy certainly does not hold at 800 m distance! Do you have a better constraint on the actual ranging accuracy at 800 m?
Section 2.1: could you give an estimate of the mean point density of the scans?
Section 2.2 seems like an introduction to the algorithm you present to analyse PC series, with a bit of state of the art in spatio-temporal clustering. The subsequent sections (M3C2-EP, etc.) should then be sub-sections of this one (2.2.1, 2.2.2, ...), otherwise section 2.2 by itself is not part of the method.
L174: while I know M3C2-EP, I suspect it would help less specialist readers to have a bit more explanation on the extra steps needed for the uncertainty calculation in M3C2-EP, and the benefits compared to the standard uncertainty model of M3C2. No need to go into too much detail, but the M3C2-EP paper being a tough one to read, it would help to have a self-consistent paper.
L179: you should specify how k is going to be defined, as it needs to be manually chosen for k-means clustering.
L180: I'm roughly familiar with Kalman filtering owing to airborne LiDAR data processing; however, I suspect many readers won't be, and they may have trouble following this part. Maybe a sketch of the basics of Kalman filtering applied in your specific case would help.
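In the spirit of the sketch requested here, a minimal plain-NumPy illustration of a 1D Kalman forward pass as it might be applied per core point. This is simplified to a two-state location/velocity model (the manuscript reportedly carries acceleration as a third state), and all numbers, including the process noise and the M3C2 distances with their uncertainties, are placeholders:

```python
import numpy as np

dt = 2.0 / 24.0                          # epoch spacing in days (assumption)
F = np.array([[1.0, dt],                 # constant-velocity transition, simplified
              [0.0, 1.0]])               # to two states (change and rate) for brevity
H = np.array([[1.0, 0.0]])               # only the change value itself is observed
q = 0.05                                 # process noise scale (placeholder value)
Q = q**2 * np.array([[dt**4 / 4, dt**3 / 2],
                     [dt**3 / 2, dt**2]])

x = np.array([0.0, 0.0])                 # initial state: no change, no rate
P = np.diag([1e-6, 1.0])                 # tight prior on change at epoch 0, vague prior on rate

# (M3C2 distance in m, 1-sigma uncertainty in m); None marks a missing epoch (e.g. fog)
measurements = [(0.012, 0.010), (0.015, 0.012), (None, None), (0.055, 0.009)]

for z, s in measurements:
    # Predict step: propagate state and uncertainty to the next epoch
    x = F @ x
    P = F @ P @ F.T + Q
    if z is None:
        continue                          # missing epoch: prediction only, uncertainty grows
    # Update step: fuse the new measurement, weighted by the Kalman gain
    R = np.array([[s**2]])
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    print(f"change {x[0]:+.4f} m, 1-sigma {np.sqrt(P[0, 0]):.4f} m")
```

A backward smoothing pass over the stored forward estimates (the backward pass mentioned in the general comments) would then yield the smoothed time series.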
L220: see major comment 1. I really have trouble reconciling the smooth nature of Kalman filtering with the highly discrete nature of erosion events.
L241: this sentence is not clear to me. How do you turn the 4D data into 2D? -> OK, I get it, it's an introduction to the subsequent section. Maybe rephrase to make things clearer.
L254-258: making sense of the attributes in relation to the expected geomorphic processes would be great. For instance, it is not obvious at this stage why the total curvature is important (compared to a more straightforward measure such as cumulative change).
L263: FFT on a signal which does not have a periodic pattern does not really make sense, especially if you're not detrending the signal and using filters to account for the finite dimension of the time series. Maybe there's a reason I don't see, but in that case it seems important to give a little intuition as to why you suggest such features. Wavelet analysis might make more sense as it combines temporal location (when an event happens) and frequency analysis (~ duration of an event), but it's hard to come up with simple integrative features to be used for subsequent clustering.
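As an illustration of the detrending/windowing point, a minimal Python sketch on a synthetic random-walk series with an assumed 2 h sampling interval (not the manuscript's feature set):

```python
import numpy as np
from scipy.signal import detrend, get_window

rng = np.random.default_rng(0)
d = np.cumsum(rng.normal(0.0, 0.002, 72))      # synthetic change series in m (random walk)

d_detr = detrend(d)                             # remove the linear trend mentioned above
w = get_window("hann", d.size)                  # taper to limit leakage from the finite window
spectrum = np.abs(np.fft.rfft(d_detr * w))
freqs = np.fft.rfftfreq(d.size, d=2.0)          # assumed 2 h sampling -> frequencies in 1/h

dominant_period_h = 1.0 / freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC component
```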
L275: I do not see at all how the clustering based on the features, which are potentially very numerous and do not contain any relation to the "physics" or "drivers" of cliff erosion (precipitation, local cliff geometry, ...), can actually lead to a more "physical interpretation" than analyzing the estimated change directly. The authors need to back this statement.
L277: you should mention that the number of clusters needs to be specified, in case non-specialist readers think that unsupervised clustering is just pushing a button and getting a result. And you should explain here how you choose the number of clusters (as you did for the GMM). It's critical.
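For non-specialist readers, one common way to make the choice of k explicit is to scan k and keep the value that maximizes the silhouette score (for a GMM, the BIC plays an analogous role). A minimal scikit-learn sketch on a placeholder feature matrix, not the manuscript's data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))        # placeholder feature matrix: 500 core points, 4 features

scores = {}
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)   # the k with the highest silhouette score
print(f"best k by silhouette: {best_k}")
```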
L293-299 / figure 2: the description of the figure needs to be improved. Your first sentence, stating "appropriately filters daily effects", gives a sense that 0.005 m/day² is initially the best value, while in fact you choose 0.05 m/day². Also, for such an important parameter, your search for the optimum is rather qualitative. I don't think plotting acceleration helps at all. You do not discuss the occurrence of clear oscillations in the signal prior to the change. Are they real signal, or variations of the scanner position (+-2 cm, that's huge)? It seems that another criterion for choosing σ is that it must be large enough not to trigger a detection for these oscillations.
Fig 3: it is hard to tell without having the information on the point cloud spacing, but I'm extremely surprised that a bi-temporal analysis is not able to detect change in the channels where distances of more than 5 cm are measured by the multitemporal approach. It might be that the ICP registration has an issue on the two epochs used for testing for significant change, and this translates into a large registration error increasing the LoD. But 5 cm over a few cm² should be detected easily with a sensor with 1 cm ranging error (an estimate at 800 m) and a 1 cm registration error. It's very odd.
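A rough back-of-the-envelope version of this argument, using the standard bitemporal M3C2 level-of-detection formula (Lague et al., 2013) with assumed values for roughness, point counts per cylinder, and registration error:

```python
import numpy as np

sigma1 = sigma2 = 0.01   # assumed local roughness / ranging spread per epoch (m)
n1 = n2 = 25             # assumed number of points per M3C2 cylinder and epoch
reg = 0.01               # assumed registration error (m)

# LoD95 = 1.96 * (sqrt(sigma1^2/n1 + sigma2^2/n2) + reg), cf. Lague et al. (2013)
lod95 = 1.96 * (np.sqrt(sigma1**2 / n1 + sigma2**2 / n2) + reg)
print(f"LoD95 = {lod95 * 100:.1f} cm")
# ~2.5 cm under these assumptions, i.e. a 5 cm change should indeed be detectable.
```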
Fig 4b: also use a greyed colour for the area with non-significant change to facilitate comparison with 4a. It would be interesting, following fig. 3, to show whether the onset of change detection differs significantly between the bi-temporal and the multi-temporal approach. This would better emphasize the interest of your method.
L318: which "value"? It's not clear.
L321: I fail to grasp the interest of showing fig. 5. What do you learn, and how important is it?
L325: OK, figure 6 tells us you use 50 clusters in one case and 100 in the second case, but this is not even mentioned in the text and you do not justify your choices. It's a critical point to discuss. Also, why can't you simply create a linear interpolation between two epochs to fill in the gaps for your clustering? It would solve your problem of temporal spacing without having to rely on a complex Kalman filtering.
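The simple alternative raised here, linear interpolation onto a regular time grid, is a one-liner in NumPy; a minimal sketch with illustrative values:

```python
import numpy as np

t_obs = np.array([0.0, 2.0, 6.0, 8.0])       # observation times in hours, one epoch missing
d_obs = np.array([0.00, 0.01, 0.05, 0.05])   # change values at one core point in m (illustrative)

t_grid = np.arange(0.0, 8.1, 2.0)            # regular bi-hourly grid
d_grid = np.interp(t_grid, t_obs, d_obs)     # the gap at t = 4 h is filled linearly
```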
Question: is there a correspondence between the large pink area and the area where no change is detected?
L330 & fig 7: this description of figure 7 is insufficient. It is not up to the reader to analyze the results. You must highlight the key results much more, otherwise it means the figure is useless (actually I'm not convinced it is useful, because the quality of the visualization is poor, and we don't know why you use 150 clusters and not a smaller or larger number).
L334: the whole subsequent section is really hard to follow.
L340: how dependent are the so-called "distinct" features on the number of clusters? As you are using a large number of clusters, you are artificially producing many features. But this may simply result from over-segmentation. Here the choice of your number of clusters should be discussed in depth.
Figure 8: the visualization is extremely poor, and this figure is not really usable.
L343: Fig 8c is missing
L345: this last statement seems to contradict previous sentences in the very same paragraph. So, in the end, your method detects the same things as the others. What is really its interest beyond interpolating slightly (which could simply be done with linear interpolation between 2 epochs...)?
L355: smoothing of the time series is a debatable advantage, as it decreases the temporal resolution of event detection. Also, "predicting" future states when it comes to natural environments seems hardly feasible, especially when considering rockfalls or rain-related erosion, which are by nature not really predictable.
L362: velocity and acceleration are meaningful for estimating a plane trajectory, as it is by nature smooth; however, they are not useful, and probably not desirable, for interpolating the occurrence of discrete erosion events.
Citation: https://doi.org/10.5194/esurf-2021-103-RC2
AC1: 'Authors' comment in response to the RCs', Lukas Winiwarter, 25 Apr 2022
Model code and software
lwiniwar/kalman4d: v0.0.2 Lukas Winiwarter https://doi.org/10.5281/zenodo.5788526
Viewed
HTML | PDF | XML | Total | BibTeX | EndNote
---|---|---|---|---|---
899 | 350 | 20 | 1,269 | 19 | 13