SuperMap GIS GeoAI Application - Graph Analysis
GeoAI
GeoAI applies the fundamental theories and algorithms of artificial intelligence, such as statistics, machine learning, and deep learning, to geospatial problems. SuperMap has implemented a series of AI GIS functions that serve GIS spatial data processing, analysis, mining, and integrated modeling. Building on rich spatial statistics capabilities, SuperMap GIS products deepen and enrich GeoAI in two areas, spatial machine learning and spatial deep learning, to support artificial intelligence GIS applications.
1. Graph Analysis
1.1 Spatial Deep Learning
SuperMap GIS provides deep learning algorithms for detection, classification, and extraction on image data, including object detection, binary classification, land use/cover classification, and scene classification. These can be used for building and road extraction, land use classification, and local climate zone mapping, and are widely applicable in urban planning, meteorological modeling, and other fields.
Figure. Spatial Deep Learning Algorithm
1.2 Object Detection
Similar to object detection in remote sensing images, graph object detection uses deep learning algorithms such as Faster R-CNN and YOLOv3 to automatically recognize the category and location of one or more targets in an image and mark each of them with a bounding box.
The whole workflow only requires labeling the objects in sample pictures in batches, training a model on those labels, and then running the model on the pictures to be identified; information about the targets of interest can be obtained quickly.
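As a hedged illustration of this label-train-infer workflow (not SuperMap's built-in tooling), the sketch below runs a pretrained Faster R-CNN from torchvision on a single picture and prints the surviving target boxes; the model weights, the file name, and the score threshold are assumptions for illustration.

```python
# Minimal object detection sketch: pretrained Faster R-CNN stands in for
# a project-trained model; "scene.jpg" and the 0.5 threshold are assumed.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("scene.jpg").convert("RGB")   # picture to be identified
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep only confident detections and report class id, score, and target box.
for box, label, score in zip(prediction["boxes"],
                             prediction["labels"],
                             prediction["scores"]):
    if score >= 0.5:
        print(int(label), float(score), [round(v, 1) for v in box.tolist()])
```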
Figure. Graph Detection
Graph object detection can also be applied to video to quickly identify targets frame by frame. This capability is widely used in transportation: vehicles and pedestrians in a video can be identified and extracted quickly, cross-camera tracking can be realized, and the spatial location of a target can be determined rapidly.
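A minimal sketch of the video case, assuming OpenCV for frame reading; the detector object and its call are hypothetical placeholders for the trained model, so only the frame-sampling loop is concrete here.

```python
# Read a video, sample frames, and hand each sampled frame to a detector.
import cv2

capture = cv2.VideoCapture("traffic.mp4")        # hypothetical input video
frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    if frame_index % 10 == 0:                    # sample every 10th frame
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # detections = detector.detect(rgb)      # hypothetical model call
        # ... draw boxes or record per-frame object positions here ...
    frame_index += 1
capture.release()
```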
Figure. Identify vehicles
1.3 Graph Classification
To lower the barrier to machine learning and broaden the applicable scenarios, SuperMap uses the deep network model EfficientNet to extract high-level semantic information from images and classify them by attribute. With sufficient training, graph classification achieves good results.
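As a rough sketch of this kind of attribute classification, the example below uses torchvision's pretrained EfficientNet-B0 and its ImageNet labels; a real project would substitute its own trained weights and class list, and the file name is assumed.

```python
# Single-image classification with EfficientNet-B0 (pretrained weights).
import torch
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights
from PIL import Image

weights = EfficientNet_B0_Weights.DEFAULT
model = efficientnet_b0(weights=weights)
model.eval()
preprocess = weights.transforms()                # resize / crop / normalize

image = Image.open("meter.jpg").convert("RGB")   # hypothetical sample picture
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
    class_id = int(logits.argmax(dim=1))

print(weights.meta["categories"][class_id])      # predicted attribute label
```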
To make the trained model easier to use, a model generated on the desktop can be converted into a model that runs on mobile devices, which better supports field work. This capability is widely used for mobile-assisted inspection and intelligent law enforcement: pictures acquired by a mobile terminal are automatically identified, classified, and archived with only a few simple operations on the device, without manual records or traditional inspection methods.
Figure. Electricity meter classification
2. Graph Analysis Workflow
Figure. Graph analysis workflow
Figure. Graph analysis tools
2.1 Sample Management
Sample management consists of two parts: data preparation and sample production. Data preparation brings in images from external sources or from images already stored in various environments; once stored, the data enters sample production, which generates training data for model training according to the type of image data prepared. Taking graph classification as an example, the prepared dataset consists of graph data: a sample library is created, the sample graphs are imported and classified in batches, and classification labels are assigned by selecting a category for each sample; finally, training data in a fixed format is generated by exporting the sample library. For object detection, after the pictures are imported in batches, bounding boxes around the objects to be recognized are drawn with the sample labeling tools; batch drawing allows quick box selection, and sample production is finished by exporting the library.
Input graphs support common file formats: *.bmp, *.jpg, *.png, *.tif, etc.
a. Graph classification
b. Graph detection
Figure. Sample library
When the sample library is exported, a *.sda configuration file is generated so the library can be reused. To reuse it, there is no need to repeat the classification: simply read the configuration file with the import sample library tool and the classified sample library is restored.
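For illustration only, the sketch below shows one common way classification samples are organized on disk, one folder per class, and how a script might enumerate them. The fixed format actually exported by the sample library and the *.sda configuration file are product-specific and are not reproduced here; the directory name is hypothetical.

```python
# Enumerate a "one folder per class" sample layout into (path, label) pairs.
from pathlib import Path

SAMPLE_ROOT = Path("samples/meter_types")        # hypothetical sample directory
VALID_SUFFIXES = {".bmp", ".jpg", ".png", ".tif"}

def index_samples(root: Path) -> list[tuple[str, str]]:
    """Return (image_path, class_label) pairs, one label per subfolder."""
    pairs = []
    for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for image_path in sorted(class_dir.iterdir()):
            if image_path.suffix.lower() in VALID_SUFFIXES:
                pairs.append((str(image_path), class_dir.name))
    return pairs

for path, label in index_samples(SAMPLE_ROOT)[:5]:
    print(label, path)
```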
2.2 Model Training
Model training fits a neural network to the training samples generated in the preceding sample management step, while the model is repeatedly evaluated on a validation set until it reaches the accuracy required by the application. For example, SuperMap provides image classification based on the EfficientNet model, and a more suitable deep learning network can be selected for specific application scenarios. Because model training involves intensive numerical computation, a server environment with GPU support is recommended.
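A minimal sketch of such a train-and-validate loop, assuming PyTorch with folder-organized samples, EfficientNet-B0 as the backbone, and illustrative paths and hyperparameters; it is not the SuperMap training engine itself.

```python
# Fine-tune EfficientNet-B0 on folder-organized samples, validating each epoch.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"   # GPU recommended
weights = EfficientNet_B0_Weights.DEFAULT
tf = weights.transforms()

train_set = datasets.ImageFolder("samples/train", transform=tf)   # assumed paths
val_set = datasets.ImageFolder("samples/val", transform=tf)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

model = efficientnet_b0(weights=weights)
model.classifier[1] = nn.Linear(model.classifier[1].in_features,
                                len(train_set.classes))
model.to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

    model.eval()                      # iterate evaluation on the validation set
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            images, labels = images.to(device), labels.to(device)
            correct += (model(images).argmax(1) == labels).sum().item()
            total += labels.numel()
    print(f"epoch {epoch}: val accuracy {correct / total:.3f}")
```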
Figure. Model training
2.3 Model Inference
The model application engine provided by SuperMap supports both CPU and GPU computing modes, with GPU mode recommended. It accepts models produced by the native training engine as well as models from third-party frameworks, which improves flexibility in real projects. Based on the training results, SuperMap AI GIS provides an image classification function: given the image set to be classified and the trained model, it outputs the classification results.
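To illustrate the CPU/GPU choice and third-party-model support in a hedged way, the sketch below runs batch classification on an exported ONNX model with ONNX Runtime; the model file, input size, and image names are assumptions, and the SuperMap model application engine itself is not shown.

```python
# Batch classification with an exported ONNX model; prefer GPU when available.
import numpy as np
import onnxruntime as ort
from PIL import Image

available = ort.get_available_providers()
providers = [p for p in ["CUDAExecutionProvider", "CPUExecutionProvider"]
             if p in available]                       # GPU first, CPU fallback
session = ort.InferenceSession("classifier.onnx", providers=providers)
input_name = session.get_inputs()[0].name

def preprocess(path: str) -> np.ndarray:
    image = Image.open(path).convert("RGB").resize((224, 224))
    array = np.asarray(image, dtype=np.float32) / 255.0
    return array.transpose(2, 0, 1)[None]             # NCHW batch of one

for path in ["meter_001.jpg", "meter_002.jpg"]:        # images to be classified
    logits = session.run(None, {input_name: preprocess(path)})[0]
    print(path, int(np.argmax(logits)))
```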
Figure. Graph classification tool and classification result
2.4 Model Conversion
Mobile devices extend the practical application scenarios of machine learning graph classification. To make the trained model more portable and easier to use, the desktop model is converted into a model that the mobile terminal can use for subsequent inference. For example, in mobile law enforcement, photos can be taken and classified directly on a mobile device to collect and quickly locate issues such as garbage piled in public places or illegal advertising; the camera can also be used to quickly capture evidence and recognize license plates, extending graph classification to a wider range of applications.
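One illustrative route for this kind of desktop-to-mobile conversion is TorchScript tracing plus PyTorch's mobile optimizer, sketched below with an untrained EfficientNet-B0 as a placeholder; this is an assumption for illustration, not the implementation of the model convert tool shown in the figure below.

```python
# Trace a desktop model and package it for mobile inference.
import torch
from torchvision.models import efficientnet_b0
from torch.utils.mobile_optimizer import optimize_for_mobile

model = efficientnet_b0(weights=None)        # in practice, load the trained weights
model.eval()

example_input = torch.rand(1, 3, 224, 224)   # dummy input used for tracing
traced = torch.jit.trace(model, example_input)
mobile_model = optimize_for_mobile(traced)
mobile_model._save_for_lite_interpreter("classifier_mobile.ptl")
```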
Figure. Model convert tool
We are looking for distributors, resellers, and partners all over the world. For any inquiry, please contact us at: [email protected]