Figure 2. The content of the bounding boxes calculated using
the YOLO algorithm is shown in (a)-(c) for three different cases.
Color threshold segmentation has been used, which makes it easier to
locate the paint damage in the bounding boxes. Note that a mask has been
added to the image of the TP to conceal the identity of the owner.
The AI model sometimes falsely detects paint damage on other surfaces,
such as water or other TPs. An example of this is shown in Figure 3 (a) below. Here
the small bounding box is placed on the TP of interest while the larger
bounding box is placed on a distant TP. Only the content of the bounding
box on the center TP should be mapped to the 3D model of the tower.
Masking the images can solve this problem if the TPs are not placed too
close to each other, a condition that is always met when the TPs are
installed in the wind farm. The images with paint damage have been masked
in three steps. First, color threshold segmentation is applied to the
image. The largest coherent area is then found in the resulting binary
mask. Finally, this binary mask is applied to the original image.
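As a minimal sketch of this three-step masking procedure, the snippet below assumes an OpenCV-based pipeline; the HSV threshold values, the function name, and the file name are illustrative placeholders rather than the implementation used in this work.

```python
# Hedged sketch of the three-step masking procedure described above.
# The HSV thresholds are placeholders and would have to be tuned to the
# actual paint color of the TP.
import cv2
import numpy as np

def mask_largest_colored_region(image_bgr, hsv_lower, hsv_upper):
    """Black out everything outside the largest color-matched region."""
    # Step 1: color threshold segmentation in HSV space.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    binary = cv2.inRange(hsv, hsv_lower, hsv_upper)

    # Step 2: keep only the largest coherent (connected) area.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if num_labels < 2:  # no matching region found
        return np.zeros_like(image_bgr)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    mask = np.uint8(labels == largest) * 255

    # Step 3: apply the binary mask to the original image.
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)

# Example usage with placeholder threshold values (yellow-ish paint):
# image = cv2.imread("drone_image.jpg")
# masked = mask_largest_colored_region(image,
#                                      np.array([15, 60, 60]),
#                                      np.array([35, 255, 255]))
```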
The physical asset, here the TP, will always be the largest item in the
images; the photogrammetry technique requires this to be the case, and
images where it is not are discarded. If the asset has several different
colors, the color threshold approach cannot be used. A segmentation
candidate in this case is a graph-based technique such as lazy-snapping,
which makes it possible to segment an image into foreground and
background regions; the foreground here is the transition piece. The
segmentation technique should therefore be selected based on the
specific circumstances. However, color threshold segmentation is very
well suited for paint damage detection, because only objects with a
specific color are of interest, and it therefore does not matter that
objects with other colors are removed by the thresholding.
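Lazy-snapping itself is not available in common open-source vision libraries, so the hedged sketch below illustrates a graph-based foreground/background split with OpenCV's GrabCut, a related graph-cut technique rather than lazy-snapping proper. The initialization rectangle, the function name, and the iteration count are assumptions made only for illustration.

```python
# Illustrative graph-based foreground/background segmentation using
# GrabCut (a graph-cut relative of lazy-snapping). The initial rectangle
# is a placeholder; in practice it would be derived from the expected
# position of the TP in the frame.
import cv2
import numpy as np

def segment_foreground_grabcut(image_bgr, rect):
    """Split the image into foreground (the TP) and background, starting
    from an initialization rectangle (x, y, w, h)."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                iterCount=5, mode=cv2.GC_INIT_WITH_RECT)
    # Pixels labelled (probable) foreground form the binary mask.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    return cv2.bitwise_and(image_bgr, image_bgr, mask=fg.astype(np.uint8))

# Example usage with a placeholder rectangle around the image centre:
# image = cv2.imread("drone_image.jpg")
# h, w = image.shape[:2]
# segmented = segment_foreground_grabcut(image, (w // 4, h // 4, w // 2, h // 2))
```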
The original image with two YOLO bounding boxes is shown in Figure 3 (a)
while the masked image is shown in Figure 3 (b). The first bounding box
is placed in the dark area of the masked image, and the corresponding
pixels will therefore not be mapped to the TP. Only the correctly placed bounding
box pixels will be mapped to the TP. This is done using an approach
discussed in the following section. All images with paint damage should
be masked using this approach. Depending on the light conditions during
the capture of the drone images, it can be necessary to calculate and
apply a few different binary masks to the original images.
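To make the filtering of the bounding boxes concrete, the following sketch keeps only the boxes that lie on the masked foreground, so that only their pixels are passed on to the mapping step. The box format (pixel coordinates x, y, w, h) and the 50% overlap threshold are assumptions for illustration, not values taken from this work.

```python
# Sketch of discarding bounding boxes that fall outside the TP mask.
# Boxes are assumed to be pixel coordinates (x, y, w, h); the overlap
# fraction of 0.5 is an arbitrary illustrative threshold.
import numpy as np

def boxes_on_foreground(boxes, binary_mask, min_overlap=0.5):
    """Keep only boxes whose area overlaps the foreground mask enough."""
    kept = []
    for (x, y, w, h) in boxes:
        region = binary_mask[y:y + h, x:x + w]
        if region.size == 0:
            continue
        overlap = np.count_nonzero(region) / region.size
        if overlap >= min_overlap:
            kept.append((x, y, w, h))
    return kept

# Example: only boxes lying on the masked TP are kept for mapping.
# valid_boxes = boxes_on_foreground(yolo_boxes, mask)
```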