1. Introduction
The United Nations Office for Disaster Risk Reduction (UNDRR) Guidelines for the reduction of flood losses state that “The operation of a flood warning and response system is the most effective method for reducing the risk of loss of life and economic losses”. Scientific evidence also demonstrates the cost effectiveness of flood warnings [1]. In flow forecasting for flood early warning, real-time river observations play a key role, as they provide significant skill for short-term flood forecasting. Furthermore, river flow observations are a key requirement to understand, plan and manage water resources, design infrastructure cost-effectively and up to standards, study changes in the world’s water balance, and establish and calibrate scenario models (see e.g. Hirpa et al., 2018).
Yet hydrological monitoring station networks are declining [2]. The Global Runoff Data Centre, the WMO body for distributing global hydrometric data, observes a particularly strong decline in the number of stations in the Global South [3]. Continuous engagement by the authors with entities responsible for managing water resources has confirmed a wide variety of reasons why monitoring remains difficult for them. These include costs, risks of vandalism, lack of human resources with specific technical skills, and lack of spare parts. Moreover, an often-overlooked reason for the declining networks is a lack of clear connection to value-adding use cases. Finally, accessibility of hardware and software is limited, as monitoring equipment is highly specialized while software is often of a proprietary nature.
Non-contact observation methods that rely on relatively simple and widely available camera systems have enormous potential to close this gap. In essence, the principle is to track movements on the water surface, translate these into surface velocities, and integrate these over a measured depth cross-section to obtain a river flow estimate. The first part was coined by Fujita [4] as Large-Scale Particle Image Velocimetry (LSPIV). It uses cross-correlation principles to estimate velocities. Several other methods were developed after this, including Particle Tracking Velocimetry (PTV) [5], Space-Time Image Velocimetry (STIV) [6], and variations on the aforementioned. Ref. [7] provides a more complete overview of these methods.
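The cross-correlation principle behind LSPIV can be sketched in a few lines of Python: two interrogation windows taken from consecutive frames are cross-correlated via the FFT, and the location of the correlation peak gives the pixel displacement, which a known frame interval and ground sampling distance turn into a surface velocity. The window size, ground sampling distance, and frame rate below are illustrative values only, not settings taken from any of the packages discussed.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the pixel shift of win_a relative to win_b using
    FFT-based cross-correlation (the core idea of LSPIV)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.fft.rfft2(a) * np.conj(np.fft.rfft2(b)), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks beyond half the window size into negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, a.shape))

# synthetic textured water surface, advected 3 px down and 2 px right
rng = np.random.default_rng(42)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, 2), axis=(0, 1))

dy, dx = window_displacement(frame_b, frame_a)

gsd = 0.02     # m per pixel after orthorectification (illustrative)
dt = 1 / 12.5  # s between frames (illustrative)
v_x, v_y = dx * gsd / dt, dy * gsd / dt
print(dy, dx, round(v_x, 3), round(v_y, 3))  # displacement [px], velocity [m/s]
```

In practice this is repeated over a grid of interrogation windows to produce a 2D surface velocity field, with sub-pixel peak fitting and quality filtering on top of this basic scheme.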
There are several existing software packages built around these techniques that facilitate a more user-oriented processing of image-based river flow estimates. Examples include FUDAA-LSPIV [8], a free user interface for applying LSPIV, including stabilization, orthorectification, and discharge processing; KLT-IV [9], relying on optical flow methods and including methods for image stabilization; DischargeKeeper [10], relying on an adapted form of LSPIV called Surface Structure Image Velocimetry (SSIV) and including (besides features mentioned for the other packages) optical water level estimates; and Hydro-STIV [11], which applies the mentioned STIV method.
These packages are already in use by many agencies, but the uptake of image-based flow methods in operational use cases remains limited to date, particularly in the Global South. There, image-based methods may be highly advantageous, given the semi-arid nature of many rivers and the associated strong differences between high and low flows, which expose observation stations to difficult circumstances. There may be many reasons why uptake is limited, including limited exposure to the methods, limited experience with application, calibration and validation at local institutes [7], and the limited availability and applicability of, and trust in, software.
The mentioned software packages, although mostly presented with a user interface and usually documented to facilitate handling, face several challenges in their uptake. First and foremost, all mentioned packages are closed-source and therefore focus on a limited set of use cases. The software cannot be further developed for locally specific needs (e.g. use of local surveying practices, locally specific devices and applications for collecting and transmitting raw data, and use of local language). For instance, FUDAA-LSPIV is well capable of providing a discharge estimate in a stream with many tracers and on incidental observations, but with fewer tracers (i.e. requiring longer integration times) and with many observations at the same spot, the software may become too slow in use and processing time for efficient operations, and may require too much disk space to be practical for operational needs. These were choices made in the design of the software, narrowing the possible use cases.
Another issue may be the business case tied to a software package. The DischargeKeeper system provides an excellent interface for operational use cases (e.g. operational discharge monitoring for flow and flood forecasting), through an operational dashboard that allows a user to interrogate multiple sites with very rapid processing. DischargeKeeper is offered under a Service Level Agreement with agreed costs per site and time (SEBA, Photrack, personal communication). While such a business model fits operational agencies with high staff costs and little staff to spare, in settings where staff rates are much lower, such as in the Global South, it would be more advantageous to pursue a model in which local people can fulfill many of the tasks of maintaining sites and even server infrastructure.
Finally, these software stacks are provided as is, without any access to their code base, database or otherwise. This means that changes to the software, and the ability to do research and development with it, are only possible for the core developer. There is also no guarantee that the software remains available in the future. Given the closed data and software model, this may make its use risky for an operational entity, as one has to rely on the assumption that the software and its data will remain in development. For researchers, only “as is” computations can be done.
All three issues mentioned advocate for the freedom of an open-source and modular software ecosystem in which different apps, serving different use cases, may be developed. Use cases we collected from our experience include incidental monitoring of floodplain velocities for assessing the effectiveness of groin fields; monitoring of debris and plastic flow, critical to understanding the journey of plastic into the ocean; dike breach monitoring; and fixed-site operational discharge monitoring, e.g. by water authorities or dam authorities. These use cases may require specific functionalities and user experiences, hence the requirement of modularity. The issue of local languages should also not be underestimated.
We wish to stress that the three issues mentioned may not be a problem for all potential users. Following a closed-source model is the developers’ choice, and for commercial cases this is understandable, given that the costs of development somehow need a return on investment. It may, however, limit uptake due to use case obstructions, language obstructions, limits on the applicability of the business case elsewhere, limits on the ability to perform research, and limits on the financial resources to acquire the software.
With this paper, we focus on what is needed for the world-wide uptake of the OpenRiverCam methods (including the Global South) with limited software development. We propose a framework for software development for image-based surface velocity and discharge measurements that collects methods in an open-source environment and works in the form of building blocks, allowing a user to select the level of entry most fit for local use. The remainder of this paper is organized as follows: In Section 2, we first describe the methods we followed in designing the software ecosystem, its modularity, and the choices made in licensing. We apply several of the Principles for Digital Development in this approach to identify requirements for a successful development of software with the intended goals in mind. Furthermore, we describe the methods by which we scientifically confirm that the software can be used in three different application cases. In Section 3, we describe the results of the software development and the current state of the OpenRiverCam ecosystem. We also describe the results of the scientific validation for the three cases. In Section 4, we discuss required future developments in the ecosystem, based on the current state and recent discussions with potential end users.
4. Discussion
4.1. Software development
In this paper we demonstrated how, by listening to end users and acknowledging requirements from the Digital Principles, software can be designed in such a way that it fits use cases, user context, and available user resources. Based on the last use case, we consider our software to be at Technology Readiness Level (TRL) 7, as the software has been operationally tested in a real-world environment with an end user. However, despite the fact that important developments have been made, only a small number of the components of our roadmap are currently in place. In order to reach a strong user base and guarantee long-term sustainability of the software, we see the following as essential to become successful:
Start implementing use cases as soon as possible, even during low TRL phases. This enables short feedback loops with users within the use cases and ensures we develop what truly fits their needs.
Develop training materials, ideally multi-lingual. Currently we ensure that documentation remains up to date as a first step, but training materials will be essential to get users started. These can take the form of DIY online materials, videos, instructables, or materials for dedicated on-site training, ideally with a real-world site visit and data collection.
Develop and demonstrate data collection methods. Even though this paper focused strongly on the design and first applications of our software framework, its use will strongly depend on the ability of people to perform local data collection. Providing versatility in how this is done is of great importance in order to facilitate as many users as possible; e.g. an operational IP camera with modem, power and internet facilities may in many cases prove complicated to maintain, and subject to vandalism.
Develop a community of practice. Our GitHub issues pages are a good starting point, but a community forum would create much more interaction.
Stay on top of the latest science and continuously improve and implement new methods. This is essential to stay relevant in this field. As new methods and approaches are developed, we will seek to implement these to ensure the latest science is available. This may include new methods to trace velocities, e.g. [14], scientific developments in data assimilation and combination with hydraulic models [15,16], inclusion of satellite proxies or videos, and machine learning for data infilling, segmentation (e.g. for habitat studies), and more.
4.2. Application development
To ensure that (besides the currently available command-line interface) more user-oriented applications become available, it is important to start focusing on scalable web applications. A prerequisite for this is the development of nodeOpenRiverCam, which will make it possible to link one or more dashboard environments to the methods. This will greatly enhance the ability to run many videos, possibly from many different users, within one platform, and will offer local entrepreneurs the opportunity to develop a dedicated environment on top of our software for dedicated use cases and business cases. For operational cases in remote areas, edgeOpenRiverCam will become very important; it should communicate with platforms in the same manner, so that edgeOpenRiverCam and nodeOpenRiverCam can work together on one dashboard platform if needed.
4.3. Potential outcomes
Given the free and open-source model chosen, the technology becomes favorable for use in less resource-rich environments. For instance, data collection with a smartphone, sending the data off through an OpenDataKit form, is a very low-key yet powerful solution requiring relatively little investment on the side of the hosting agency. Such a solution would also have almost no problems with theft or other vandalism. Another possible setup is a fixed installation using openly available hardware such as a Raspberry Pi running edgeOpenRiverCam. For use cases driven more by incidental observations, e.g. during unforeseen floods, or in complex areas with drones, a more dedicated dashboard may be built. Finally, such flexible and simple systems will not stand in the way of setting up more technologically advanced hardware on-site where needed, with the added benefit that, potentially, the software platform can run locally, creating local entrepreneurship within the local economy and therefore also making it more locally affordable.
To make this happen, and to make this software development endeavor sustainable, we will need a strong increase in the number of users, more interaction with developers, and more feature requests. The foreseen user forum may become a catalyst for this growth of the user base. The OpenRiverCam ecosystem sets the stage for a wide range of potential use cases in environmental monitoring, small-scale hydropower, river flow observations, humanitarian community-based monitoring, and early flood warning. These use cases will play a central role in choices for further development in the forthcoming decade. We warmly welcome contributions from the community.
Figure 1.
Situation overview of the camera objective at the Geul River. (a) Overview of site and area of interest. (b) Area of interest in the image frame.
Figure 2.
Roadmap of the OpenRiverCam ecosystem, with its current status.
Figure 3.
Camera configuration workflow.
Figure 4.
Velocimetry workflow.
Figure 5.
Validation with in-situ point measurements. The numbers with each point display the in-situ measured velocity in m/s. The relative difference is computed as r = (v_piv − v_insitu) / v_insitu, where r is the relative difference, v_piv is the velocity [m/s] by pyOpenRiverCam, and v_insitu is the in-situ measured velocity [m/s].
Figure 6.
Interface for selecting ground control points.
Figure 7.
Velocity estimates for 5 April 2023, 12:55 PM local time. Each subplot shows the 2D velocity field in small quivers, the transect-sampled velocities perpendicular to the cross-section in larger quivers, and a scatter plot of cross-section velocities compared against ADCP velocities. Top-left: drone at 30 m altitude, nadir; top-right: drone at 25 m altitude, under 45 degrees from the side; bottom-left: drone at 70 m altitude, nadir; bottom-right: GoPro camera mounted on the bridge.
Figure 7.
Same as Figure 6, but for 5 April 2023, 3 PM local time.
Figure 8.
Correlation of discharge estimates between pyOpenRiverCam and ADCP for different perspectives. ADCP on the x-axis, PIV median of OpenRiverCam on the y-axis. The level of transparency indicates the percentage of the discharge that was resolved from PIV: for instance, 70% is plotted with 0.7 transparency and means that 70% of the flow in m3/s was estimated from PIV while the remaining 30% comes from filled-in missing values. Error bars are very conservatively estimated from the frame-by-frame variability in velocity. (a) Drone videos at 35 m, nadir. (b) Drone videos at 25 m from the side under a 45 degree angle. (c) Drone videos at 70 m, nadir. (d) GoPro videos from the bridge deck.
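The reported “percentage of discharge resolved from PIV” can be illustrated with a minimal sketch of the final integration step: surface velocities sampled along the cross-section are depth-integrated cell by cell, cells without a valid optical velocity are infilled, and the fraction of discharge carried by actually-resolved cells is tracked. The depth-average coefficient of 0.85 and the mean-based infilling are common simplifications for illustration, not the exact pyOpenRiverCam procedure.

```python
import numpy as np

def discharge_from_transect(v_surface, depth, dx, alpha=0.85):
    """Mid-section style integration: Q = sum(alpha * v_surface * depth * dx).
    NaN surface velocities are filled with the mean of the resolved cells;
    the fraction of Q coming from resolved cells is reported alongside Q."""
    v = np.asarray(v_surface, dtype=float)
    resolved = np.isfinite(v)
    v_filled = np.where(resolved, v, np.nanmean(v))  # simple infilling
    q_cells = alpha * v_filled * np.asarray(depth, dtype=float) * dx
    q_total = q_cells.sum()
    frac_resolved = q_cells[resolved].sum() / q_total
    return q_total, frac_resolved

v_surf = [0.8, 1.0, np.nan, 1.2, 0.9]  # surface velocities [m/s], one gap
depth = [0.5, 0.9, 1.2, 0.8, 0.4]      # depths [m] at the same stations
q, frac = discharge_from_transect(v_surf, depth, dx=0.5)
print(round(q, 3), round(frac, 2))  # discharge [m3/s], resolved fraction
```

Plotting each point with a transparency equal to `frac` then reproduces the visual encoding used in the figure above.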
Figure 9.
Scatter plots of water level (x-axis) and discharge (y-axis), each point representing the analysis of one video. (a) All videos (including nighttime). (b) Only videos with at least 25 frames per second that resolve at least 80% of the flow through optical velocimetry.
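Reconstructing a rating curve from such water level–discharge pairs can be sketched as a power-law fit Q = a (h − h0)^b, linearized in log space. The parameter values and the assumption of a known cease-to-flow level h0 are illustrative only; operational rating-curve fitting typically also weighs measurement uncertainty.

```python
import numpy as np

# synthetic (water level, discharge) pairs, one per processed video
h0 = 0.2                       # assumed cease-to-flow level [m]
h = np.linspace(0.4, 2.0, 20)  # water levels [m]
q = 12.0 * (h - h0) ** 1.7     # discharges [m3/s] from a known curve

# linearize: log Q = log a + b * log(h - h0), then least-squares fit
b_fit, log_a = np.polyfit(np.log(h - h0), np.log(q), 1)
a_fit = np.exp(log_a)
print(round(a_fit, 3), round(b_fit, 3))  # recovers a = 12.0, b = 1.7
```

With noisy real-world pairs, the filtering shown in panel (b) of the figure above (frame rate and resolved-fraction thresholds) is exactly what keeps such a fit stable.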
Table 1.
The Digital Principles used throughout design and development, and how they are applied.
Principle | Specific implementation in OpenRiverCam design and development
Design with the user | Through interaction on social media, in particular LinkedIn, and discussions with interested stakeholders, virtually and in person, we gather user feedback and potential use cases, and make design choices that fulfill as wide a set of use cases as possible. We encapsulated the interactions into so-called “user stories” that led to the design of the OpenRiverCam ecosystem.
Design for scale | OpenRiverCam is designed in an entirely modular fashion. Several building blocks are developed that can be used stand-alone or in combination, so that uses at many different levels are assured, such as research, stand-alone application, or scaled application through compute nodes, edge processing, and dashboard interfaces. In particular, the compute nodes and edge processing abilities will make OpenRiverCam scalable.
Build for sustainability | We use only widely accepted and well-maintained libraries and provide the software entirely under a copyleft license, so that anyone can contribute to, or further develop, the software.
Use Open Standards, Open Data, Open Source and Open Innovation | We use the Python language, as it is very broadly known and can therefore be used by many users. We use the NetCDF-CF convention and OGC standards as open standards for data, allowing for easy integration with many other platforms.
Reuse and improve | In order to guarantee that a small group of people can maintain the software indefinitely, we adopt well-maintained data models such as those embedded in the well-known xarray library. REST API development is planned in the well-known and well-maintained Django framework.
Be collaborative | We are in the process of setting up a forum for user and designer/developer interaction and a code of conduct. We use the GitHub framework with continuous integration for unit testing and releases to assure code quality, striving for a minimum of 80% unit test coverage, and we already accept issues from users on GitHub.
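As an illustration of the “Reuse and improve” and open-standards principles above, a velocimetry result can be held in an xarray Dataset with CF-style attributes and written to NetCDF. The variable and coordinate names below are hypothetical, chosen for illustration, and are not the actual pyOpenRiverCam schema.

```python
import numpy as np
import xarray as xr

# hypothetical CF-style container for a 2D surface velocity field;
# variable/coordinate names are illustrative, not the ORC schema
ds = xr.Dataset(
    data_vars={
        "v_x": (
            ("time", "y", "x"),
            np.zeros((2, 4, 5)),
            {"units": "m s-1", "long_name": "surface velocity, x-component"},
        ),
        "v_y": (
            ("time", "y", "x"),
            np.zeros((2, 4, 5)),
            {"units": "m s-1", "long_name": "surface velocity, y-component"},
        ),
    },
    coords={
        "time": np.array(["2023-04-05T12:55", "2023-04-05T15:00"],
                         dtype="datetime64[ns]"),
        "y": ("y", np.arange(4) * 0.03, {"units": "m"}),
        "x": ("x", np.arange(5) * 0.03, {"units": "m"}),
    },
    attrs={"Conventions": "CF-1.8"},
)
# ds.to_netcdf("velocity.nc") would then produce a self-describing file
# that any CF-aware tool (GIS, Panoply, other models) can read directly
print(ds["v_x"].dims, ds.attrs["Conventions"])
```

Keeping results in such a standard, self-describing data model is what allows the small maintainer group to lean on the xarray ecosystem for I/O, plotting, and resampling instead of custom code.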
Table 2.
Overview of the three cases: short description, data and fieldwork used, OpenRiverCam functionality used, and validation means.
Case study | Short description | Data acquisition | Processing with OpenRiverCam | Validation approach
Alpine river (Perks et al., 2020) | Incidental observation with UAS, with partly artificial, partly natural tracers | DJI Phantom FC40 at nadir orientation with GoPro Hero3+ 4K camera; stabilized, resampled to 12.5 Hz, orthorectified to 0.021 m/pixel | Background noise reduction through subtraction of time-averaged frames; PIV; masking of spurious velocities with correlation and outlier filters | Comparison against in-situ propeller observations in m s-1
Tidal channel at “De Waterdunen”, Zeeland, The Netherlands | Several videos from drone perspectives and a mobile camera on a nearby bridge, at different moments in time during incoming tide | DJI Phantom 4, at 4096x2160 resolution | Stabilization; background noise reduction with time differencing and thresholding (minimum intensity >= 5); orthorectification (0.03 m); PIV (15-pixel window size); masking of spurious velocities using correlation, outlier filtering, and minimum valid velocity counting; cross-section derivation and integration to discharge | Surface velocities: comparison with ADCP surface velocities. Discharge: comparison with ADCP discharge estimates.
Geul stream at “Hommerich”, Limburg, The Netherlands | 5 months of videos collected at 15-minute frequency | FOSCAM FI9901EP IP camera with power cycling scheme and FTP server | Automated processing; background noise reduction with time differencing and thresholding negative values to zero; orthorectification (0.02 m); PIV (20-pixel window size); masking of spurious velocities using correlation, outlier filtering, and minimum valid velocity counting; cross-section derivation and integration to discharge | Comparison of water level–discharge pairs collected through hand measurements by the Waterboard Limburg.
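The background noise reduction listed for the Waterdunen and Geul cases (time differencing, clipping negative differences to zero, and minimum-intensity thresholding) can be sketched as follows. The threshold of 5 follows the table above, while the toy frame stack is purely illustrative.

```python
import numpy as np

def reduce_background(frames, min_intensity=5):
    """Suppress static background: difference consecutive frames, clip
    negative differences to zero, and zero out weak residual responses
    below min_intensity (value taken from the Waterdunen settings)."""
    frames = np.asarray(frames, dtype=float)
    diff = np.diff(frames, axis=0)  # time differencing
    diff = np.clip(diff, 0, None)   # negative values to zero
    diff[diff < min_intensity] = 0  # remove weak residual noise
    return diff

# toy stack: uniform background (intensity 50) with one bright tracer
# moving one pixel to the right per frame
frames = np.full((3, 4, 4), 50.0)
frames[0, 1, 1] = 200  # tracer at t0
frames[1, 1, 2] = 200  # tracer at t1
frames[2, 1, 3] = 200  # tracer at t2

out = reduce_background(frames)
print(out[0, 1, 2], out[0, 1, 1])  # tracer arrival kept, old spot zeroed
```

Only the moving tracer survives the filter, which is exactly what makes the subsequent PIV cross-correlation lock onto surface motion rather than static bank or bed features.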
Table 3.
User stories for OpenRiverCam.
As a… | I want… | So that… | Core requirements
remotely operating user | to process videos immediately in the field | I know for sure I have the right results before going back from the field | Processing from a thin client
drone surveyor | to make videos of streams in the field and send them through a smartphone | | Dashboard, operated from a smartphone
drone surveyor | to not have to cross a river to receive accurate velocity results | I can safely collect data from the banks, with only a drone as material | Get a result with only drone-based data collection, ideally no control points
hydrologist | to combine velocities with bathymetry sections | I can measure river flows | Integrate velocity estimates with cross-section surveys (x,y,z and s,z)
hydrologist | to combine several videos of velocity and discharge of single sites | I can reconstruct rating curves | Single-site processing with multiple videos
hydrologist | to combine videos from a smartphone that are more or less from the same position, without redoing control points all the time | I can quickly make videos during an event without a fixed camera | Image co-registration and corrections on known points, or let the user select these
environmental expert/hydrologist/other | to show velocities in a GIS environment | I can combine my data with other data such as land use changes, bathymetric charts, and so on | Exports to known GIS raster or mesh formats
environmental expert | to show habitat suitability of certain species based on velocity results | I can use this for monitoring habitats | Allow for a visualization of classes, and raster export of habitat suitability (API + dashboard)
engineer | to see how structures in the water influence velocity patterns | I can show whether structures do what they need to do, or show locations prone to erosion | Combined visualization of velocity and CAD drawings of structures
engineer | to understand over large multi-km stretches where flow velocities are suitable for hydrokinetics | I can provide guidance on where a local hydrokinetic system can be installed | Multi-video processing with enough geospatial accuracy to enable seamless combination of results
engineer | to see differences between pre- and post-construction velocities for infrastructure or river restoration projects | I can provide monitoring as a product to my clients (engineering firms, environmental agencies, etc.) | Comparability between videos, accurate co-registration, GIS interoperability