AI Model Evaluation in iDesktopX

Background

Artificial Intelligence (AI) has been a trending topic for many years, and its development shows no sign of reaching a finish line. One of its branches, Deep Learning (DL), has brought breakthroughs to many aspects of GIS, such as tree counting and land cover classification. The DL workflow basically includes:


Figure 1. Machine Learning Workflow (source: https://ml-ops.org)

  1. Data preparation: collecting, labelling, and wrangling data
  2. Model building: choosing a model, then training and testing it
  3. Model validation: validating the model on data that was not used for training (see the sketch after this list)
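
As a minimal sketch of step 3, the snippet below holds out part of a labelled dataset before training so that the model can later be validated on data it has never seen. The tile names and the 80/20 split are placeholders, not values prescribed by iDesktopX.

```python
import random

# Hypothetical list of labelled image tiles (names are placeholders).
tiles = [f"tile_{i:04d}.tif" for i in range(100)]

# Shuffle reproducibly, then hold out 20% for validation only.
random.seed(42)
random.shuffle(tiles)

split = int(len(tiles) * 0.8)
train_tiles = tiles[:split]        # used to train the model
validation_tiles = tiles[split:]   # never shown to the model during training

print(f"{len(train_tiles)} training tiles, {len(validation_tiles)} validation tiles")
```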

Model validation is a step that is easily overlooked by people who use mainstream software and have no prior experience in deep learning, because such tools are rarely found in GIS software. Given its importance in developing ML/DL models, SuperMap has added a new model evaluation feature to iDesktopX, and in this post we will discuss how to use it.

To evaluate model, we need:

  1. Inference results on non-training data
  2. Ground truth labels for that non-training data

It is important to use non-training data so that we understand how the model behaves on data it has never seen. In this post we will focus on one specific task: counting palm trees.
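
Behind the scenes, object detection evaluation works by pairing each predicted bounding box with a ground truth box, usually via Intersection over Union (IoU). The sketch below only illustrates that matching idea in plain Python; it is not iDesktopX's internal code, the 0.5 IoU threshold is just a common convention, and the box coordinates are made up.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(predictions, ground_truth, iou_threshold=0.5):
    """Greedily match predictions to ground truth; return TP, FP, FN counts."""
    unmatched_gt = list(ground_truth)
    tp = fp = 0
    for pred in predictions:
        # Find the ground truth box with the highest overlap.
        best = max(unmatched_gt, key=lambda gt: iou(pred, gt), default=None)
        if best is not None and iou(pred, best) >= iou_threshold:
            tp += 1
            unmatched_gt.remove(best)   # each label can only be matched once
        else:
            fp += 1
    fn = len(unmatched_gt)              # labelled trees the model missed
    return tp, fp, fn

# Hypothetical palm tree detections vs. ground truth labels.
preds = [(10, 10, 30, 30), (50, 50, 70, 70), (200, 200, 220, 220)]
labels = [(12, 11, 31, 29), (52, 48, 71, 69), (120, 120, 140, 140)]
print(match_detections(preds, labels))   # -> (2, 1, 1)
```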

So, the model evaluation steps for tree counting are:

  1. Import the two datasets mentioned above
  2. Open the Model Evaluation toolbox


3. Fill in the inputs:

a. Inference Result: the inference result produced on the non-training data

b. Real Label: the ground truth labels of the inferenced data

c. Parameter Settings: choose Object Detection, since tree counting is an object detection model

d. Result Data: the location where the evaluation table will be saved


4. Run the tool by pressing the play button and wait for the process to finish.

5. Open the evaluation table


6. Check the result


As you can see in the image above, the table has four columns (a short computation sketch follows this list):

a. Precision: measures how many of the model's positive predictions are actually true positives

b. Recall: measures how many of the ground truth objects (all actual positives) the model manages to find

c. mAP: mean Average Precision, a metric commonly used for object detection models; its best value is 1 and its worst is 0

d. F1: the harmonic mean of precision and recall, where the F1 score reaches its best value at 1 and its worst at 0; the relative contributions of precision and recall to the F1 score are equal
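
For reference, precision, recall, and F1 follow directly from the true positive (TP), false positive (FP), and false negative (FN) counts, for example the counts produced by the matching sketch earlier. The snippet below is only illustrative and uses made-up counts; mAP additionally averages precision over recall levels (and classes), which is why it is best left to the tool.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 computed from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)
    return precision, recall, f1

# Hypothetical counts from a palm tree validation scene.
tp, fp, fn = 90, 10, 20
p, r, f1 = precision_recall_f1(tp, fp, fn)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
# precision=0.90 recall=0.82 f1=0.86
```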

Those are the steps to evaluate an object detection model in iDesktopX and obtain evaluation metrics such as F1 and mAP. This is only one of many techniques for evaluating a model. For instance, to understand the model further we can use the training log and keep track of its accuracy and loss in TensorBoard, which is already included in iDesktopX (a minimal sketch of this idea is shown below).
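
As a rough illustration of that second approach, a training script typically writes loss and accuracy to a TensorBoard log directory; the sketch below uses PyTorch's SummaryWriter to show the general pattern. The log path, metric names, and values are placeholders and are not tied to how iDesktopX writes its own training logs.

```python
from torch.utils.tensorboard import SummaryWriter

# Hypothetical log directory; inspect it with:
#   tensorboard --logdir runs/palm_tree_detector
writer = SummaryWriter(log_dir="runs/palm_tree_detector")

# Placeholder loop: in practice the loss and accuracy come from the model.
for epoch in range(10):
    fake_loss = 1.0 / (epoch + 1)
    fake_accuracy = 1.0 - fake_loss / 2
    writer.add_scalar("train/loss", fake_loss, epoch)
    writer.add_scalar("train/accuracy", fake_accuracy, epoch)

writer.close()
```

See you in another post of this series on AI in SuperMap.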


Note: We are looking for distributors, resellers, and partners all over the world. For any inquiry, please contact us at: [email protected].
