Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment

Aerial imaging from UAVs (Unmanned Aerial Vehicles) permits highly detailed site characterization with minimal ground support, particularly in the aftermath of extreme events, to document the current conditions of a region of interest. However, aerial imaging produces a massive amount of data in the form of two-dimensional (2D) orthomosaic images and three-dimensional (3D) point clouds. Both types of datasets require effective and efficient data processing workflows to identify the various damage states of structures.

This study introduces two deep learning models, based on 2D and 3D convolutional neural networks, to process the orthomosaic images and point clouds for post-windstorm classification. Specifically, the 2D CNNs (2D Convolutional Neural Networks) are developed through transfer learning from two well-known networks: AlexNet and VGGNet.
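As a rough illustration of this transfer-learning idea, the sketch below loads an ImageNet-pretrained AlexNet or VGG-16 and replaces its classification head with one sized for the damage classes. The PyTorch/torchvision framework, the frozen feature extractor, and the number of classes are assumptions for illustration; the study's exact configuration is not given here.

```python
# Minimal transfer-learning sketch, assuming a PyTorch/torchvision workflow.
# NUM_CLASSES is a hypothetical number of damage-state classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # placeholder: actual class count depends on the dataset labels

def build_transfer_model(backbone: str = "vgg16") -> nn.Module:
    """Load an ImageNet-pretrained backbone and swap in a new classifier head."""
    if backbone == "alexnet":
        model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    else:
        model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

    # Freeze the convolutional feature extractor; fine-tune only the new head.
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer with one sized for the damage classes.
    in_features = model.classifier[-1].in_features
    model.classifier[-1] = nn.Linear(in_features, NUM_CLASSES)
    return model

model = build_transfer_model("alexnet")
dummy_patch = torch.randn(1, 3, 224, 224)  # one 224x224 RGB orthomosaic patch
print(model(dummy_patch).shape)            # -> torch.Size([1, 4])
```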

In contrast, a 3D FCN (3D Fully Convolutional Network) with skip connections was developed and trained on the available point cloud data. The datasets for this study were created from data collected in the aftermath of Hurricanes Harvey (Texas) and Maria (Puerto Rico). The developed 2D CNN and 3D FCN models were compared quantitatively using standard performance measures, and the 3D FCN proved more robust in detecting the various classes.
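The sketch below shows what a 3D fully convolutional network with skip connections can look like when the point clouds are first voxelized into occupancy grids. The voxelization step, layer widths, and grid resolution are illustrative assumptions, not the architecture reported in the study.

```python
# Minimal 3D FCN sketch with encoder-decoder skip connections, assuming
# voxelized point-cloud input; sizes are illustrative placeholders.
import torch
import torch.nn as nn

class FCN3D(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Encoder: progressively downsample the voxel grid.
        self.enc1 = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc3 = nn.Sequential(nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: upsample and fuse encoder features via skip connections.
        self.up2 = nn.ConvTranspose3d(64, 32, 2, stride=2)
        self.dec2 = nn.Sequential(nn.Conv3d(64, 32, 3, padding=1), nn.ReLU())
        self.up1 = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv3d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv3d(16, num_classes, 1)  # per-voxel class scores

    def forward(self, x):
        e1 = self.enc1(x)   # full resolution
        e2 = self.enc2(e1)  # 1/2 resolution
        e3 = self.enc3(e2)  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip from enc2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip from enc1
        return self.head(d1)

voxels = torch.randn(1, 1, 32, 32, 32)  # one voxelized point-cloud patch
print(FCN3D()(voxels).shape)            # -> torch.Size([1, 4, 32, 32, 32])
```

The skip connections concatenate high-resolution encoder features with upsampled decoder features, which helps preserve the fine geometric (depth) detail that the abstract credits for the 3D model's robustness.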

This demonstrates the value and importance of 3D datasets, and particularly the depth information they contain, in distinguishing between instances that represent different structural damage states.

