Project Description:
Estimating streamflow generally requires the development of a rating
curve, which allows high-frequency stream stage measurements to be
transformed into a 'continuous' series of discharge values. The development of a
rating curve involves acquiring at least thirty in-field measurements
across a wide range of flow levels, which can be costly and impractical
in remote regions with limited seasonal access. Here we showcase an
automated system for collecting rating curve data which allows the
estimation of streamflow multiple times each day, greatly facilitating
the development of rating curves for remote sites.
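For context, rating curves are commonly represented as a power-law relation between stage and discharge, Q = a(h - h0)^b. The short sketch below is an illustration only, written in Python with SciPy and using made-up stage-discharge pairs; it is not necessarily the curve form or software used in this project.

    # Hypothetical sketch: fit a power-law rating curve Q = a * (h - h0)**b
    # to paired stage (h, in m) and discharge (Q, in m^3/s) measurements.
    # The functional form and the example data are illustrative assumptions.
    import numpy as np
    from scipy.optimize import curve_fit

    def rating_curve(h, a, h0, b):
        return a * (h - h0) ** b

    h_obs = np.array([0.32, 0.41, 0.55, 0.70, 0.88, 1.10])   # stage (m)
    q_obs = np.array([0.05, 0.12, 0.30, 0.61, 1.10, 2.00])   # discharge (m^3/s)

    # Bounds keep h0 below the lowest observed stage during the fit.
    params, _ = curve_fit(rating_curve, h_obs, q_obs, p0=[1.0, 0.1, 2.0],
                          bounds=([0.0, 0.0, 0.5], [10.0, 0.3, 5.0]))
    a, h0, b = params

    # Convert a high-frequency stage record into a 'continuous' discharge series.
    stage_series = np.array([0.45, 0.47, 0.52])
    discharge_series = rating_curve(stage_series, a, h0, b)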
The system uses an in-field camera to take videos of the stream surface
and applies particle image velocimetry (PIV) techniques to estimate
surface velocity. This technique was first developed in the 1980s
(Adrian 1984; Pickering and Halliwell 1984; Grant 1997) and has
attracted attention in recent years for its practicality (Bradley
et al. 2002; Creutin et al. 2003; Tauro et al. 2016). Videos used for
PIV analysis are typically captured manually; however, automated camera
systems have been attempted for large rivers > 100 m
wide (e.g. Bechle et al. 2012). Hydrological imaging has been
attempted on smaller, more turbulent streams (Leduc et al. 2018)
primarily to assess stream height and width. No documented cases were
found of stream velocity estimation from video analysis on small streams
(i.e. < 5 m wide) where bank roughness takes up a large
percentage of the cross-stream flow field. Here we deploy inexpensive
automated camera systems on two small streams, Charles Creek and White
Partridge Creek in south-central Ontario, Canada, to test the
feasibility of PIV for discharge estimation.
The automated camera system followed a simple process: it measured
stream stage, assessed whether data were required at that stage, and,
if so, selectively triggered particle distribution and video capture. Stream stage
was measured every hour during periods of maximum sunlight – 10:00
until 15:00 – and a video was recorded if stage was observed to change
by more than 5 cm. Polarizing filters were used to limit glare, as
recommended by previous stream video velocimetry studies (Bradley et al.
2002; Bechle et al. 2012; Tauro et al. 2017). PIVlab was used in
conjunction with the Rectification of Image Velocity Results (RIVeR)
software to calculate velocity vectors from stream videos (Patalano et
al. 2017). PIVlab analyzed the videos frame by frame, tracking the
displacement of objects and flow-structure features on the moving water
surface to generate vector grids. RIVeR then corrected the geometry of
these vectors with respect to specified ground control points and used
the time between frames to calculate surface velocity vectors within the
control point footprint. Streamflow was then estimated using a
surface-to-average velocity transformation and the cross-sectional area
calculated from stage and stream bathymetry data. PIV velocity and
discharge estimates were calibrated and validated with field-based
measurements taken using a FlowTracker (SonTek 2009).
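The core PIV step can be illustrated with a greatly simplified sketch. This is not the PIVlab/RIVeR implementation (which adds multi-pass processing, sub-pixel peak fitting, and rectification against ground control points); it is a minimal Python example, assuming two grayscale frames as NumPy arrays, a frame interval dt, and a fixed ground sampling distance m_per_px.

    # Simplified single-window, single-pass PIV: estimate the displacement of
    # surface features between two grayscale frames by cross-correlation.
    import numpy as np
    from scipy.signal import correlate2d

    def window_velocity(frame_a, frame_b, row, col, win=64,
                        dt=1 / 30.0, m_per_px=0.005):
        """Surface velocity (vy, vx) in m/s for one interrogation window."""
        a = frame_a[row:row + win, col:col + win].astype(float)
        b = frame_b[row:row + win, col:col + win].astype(float)
        a -= a.mean()   # remove background intensity
        b -= b.mean()
        # The peak of the cross-correlation gives the pixel displacement of
        # the surface pattern from frame A to frame B.
        corr = correlate2d(b, a, mode="full")
        peak_row, peak_col = np.unravel_index(np.argmax(corr), corr.shape)
        dy = peak_row - (win - 1)
        dx = peak_col - (win - 1)
        # Convert pixels per frame interval to metres per second.
        return dy * m_per_px / dt, dx * m_per_px / dt

In the actual workflow, RIVeR performs the geometric rectification using surveyed ground control points rather than assuming a single fixed scale across the image.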
PIV streamflow estimates at the Charles Creek site were similar to
in-field measurements taken with a FlowTracker. The PIV estimates
doubled the range of flows on the rating curve relative to the in-field
measurements, which are difficult to acquire during peak flows due to
access issues and safety concerns, and allowed for the development of a
rating curve encompassing the entire range of flows in 2018. Streamflow
was generally underestimated, which is common with PIV
analysis (Tauro et al. 2017); however, this was overcome by calibrating
the adjustment factor (α), which relates surface velocity to average velocity.
After calibrating the adjustment factor, PIV estimates were similar to
FlowTracker measurements for both sites. Particle distribution was not
necessary at the Charles Creek site as ideal lighting and stream
morphology created visible surface texture, making the site well suited
to tracking water movement.
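To make the discharge calculation explicit, the mean PIV surface velocity is scaled by the adjustment factor α to approximate the depth-averaged velocity and multiplied by the cross-sectional area from stage and bathymetry, Q ≈ α · v_surface · A(h). The sketch below is a Python illustration with hypothetical names and example values; the least-squares calibration of α against FlowTracker discharges is one possible approach, not necessarily the exact method used here.

    # Illustrative discharge estimation and alpha calibration (assumed names).
    import numpy as np

    def discharge(v_surface_mean, area, alpha=0.85):
        """Q = alpha * mean surface velocity * cross-sectional area.
        alpha = 0.85 is a commonly cited default; here it is calibrated."""
        return alpha * v_surface_mean * area

    def calibrate_alpha(v_surface_mean, area, q_reference):
        """Least-squares alpha from paired PIV and FlowTracker observations."""
        q_raw = np.asarray(v_surface_mean) * np.asarray(area)   # alpha = 1
        q_ref = np.asarray(q_reference)
        return float(np.sum(q_ref * q_raw) / np.sum(q_raw ** 2))

    # Example with three hypothetical paired observations.
    v_s = [0.42, 0.61, 0.88]      # mean surface velocity (m/s)
    a_x = [0.9, 1.3, 1.8]         # cross-sectional area (m^2)
    q_ft = [0.31, 0.66, 1.30]     # FlowTracker discharge (m^3/s)
    alpha = calibrate_alpha(v_s, a_x, q_ft)
    q_piv = discharge(0.55, 1.1, alpha)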
Conversely, the White Partridge Creek site required that particles be
distributed into the stream, as visible surface texture was limited by
a lack of water surface roughness and turbulent boils. Further, the
polarizing filter removed any differential sunlight reflection caused by
what little surface roughness was present within the reach. Thus, during low
flow periods, particles were distributed into the stream via an
automated feeding system to increase movement detection. Commonly used
tracer particles are typically expensive and synthetic (Melling 1997;
Grant 1997), making them unsuitable for this project. Tauro (2016) developed
inexpensive biodegradable particles made from beeswax, but the scent
from these particles had the potential to attract wildlife to the site.
Fine woody particles were chosen as the seeding material, as they were
buoyant, biodegradable, and did not have a scent that would attract
wildlife. While we recorded and processed 1.25 minutes of video for
non-seeded cases, experimentation with the video length required for
optimal PIV processing revealed that thirty-second videos provided the
most accurate streamflow estimates when seeding was used in this
narrow-stream case. This was because particles near the stream banks
slowed to a stop as rafting occurred and the near-bank region became
progressively saturated with particles. Further, gaps in particle
coverage opened in the center of the channel after 30 seconds, and these
swirling seed-free areas increased in size over time. Thus, surface
velocity estimates were less accurate and generally lower when averaged
over longer videos. Selecting an automated camera site with visible
surface texture over the full range of flows is the best strategy for
avoiding seeding issues.
The camera system detailed here allows for inexpensive stream gauging,
which can be valuable for measuring streamflow in small rivers in remote
areas. The system had a battery life of approximately one month,
capturing an average of two videos each day, and this could likely be
extended by using a more power-efficient Raspberry Pi model (e.g. the
Zero), incorporating a more sophisticated sleep cycle, reducing the
frequency of video capture, and increasing solar power.
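For reference, the hourly stage-triggered capture cycle described earlier can be outlined as a simple loop on the Raspberry Pi. This is a hypothetical Python sketch: the stage sensor, camera, and particle feeder interfaces are placeholder functions passed in as arguments, and only the 5 cm threshold and the 10:00 to 15:00 window come from the description above.

    # Hypothetical outline of the stage-triggered capture cycle.
    import time
    from datetime import datetime

    STAGE_THRESHOLD_M = 0.05        # trigger a video if stage changes by > 5 cm
    CAPTURE_HOURS = range(10, 16)   # hourly checks from 10:00 until 15:00

    def run_capture_cycle(read_stage, record_video, dispense_particles,
                          needs_seeding=lambda stage: False):
        """read_stage, record_video, and dispense_particles are placeholder
        callables for the stage sensor, camera, and automated seeder."""
        last_recorded_stage = None
        while True:
            if datetime.now().hour in CAPTURE_HOURS:
                stage = read_stage()
                if (last_recorded_stage is None
                        or abs(stage - last_recorded_stage) > STAGE_THRESHOLD_M):
                    if needs_seeding(stage):   # e.g. low flow, little texture
                        dispense_particles()
                    record_video()
                    last_recorded_stage = stage
            time.sleep(3600)                   # wake once per hour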