AI for Corn: Maize Seedling and Weed Differentiation Using Drones


The accurate identification of maize seedlings in agricultural fields is essential for effective crop management, early seedling replenishment, and weed control.

However, distinguishing maize seedlings from weeds remains a significant challenge due to their similar morphological characteristics, particularly in the early growth stages.

Traditional image processing methods, such as threshold-based segmentation and shape analysis, have proven ineffective in complex field environments due to variations in lighting, soil conditions, and weed density.

Recent advances in deep learning, particularly convolutional neural networks (CNNs), have improved object detection in agriculture. However, existing deep learning models still struggle with small target recognition, feature extraction limitations, and weed occlusion. UAV-based remote sensing provides a potential solution, but current methods predominantly rely on RGB imagery, which lacks the spectral detail necessary for effective weed differentiation.

Multispectral imaging offers a broader range of spectral information, including near-infrared (NIR) and red-edge (RE) bands, which enhance vegetation classification.
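To illustrate why the NIR band matters, here is a minimal sketch (not from the paper) of the classic Normalized Difference Vegetation Index, which exploits the strong NIR reflectance of healthy vegetation; the reflectance values below are hypothetical toy numbers.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards division by zero

# Toy reflectance patches (hypothetical): vegetation reflects strongly in NIR.
nir = np.array([[0.80, 0.70], [0.60, 0.20]])
red = np.array([[0.10, 0.20], [0.30, 0.20]])
print(ndvi(nir, red))  # vegetation pixels approach 1, bare soil stays near 0
```

Indices like this are a building block; the study goes further by feeding the full multispectral signal into a deep detector rather than relying on a single hand-crafted index.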

However, the integration of multispectral data with deep learning models for maize seedling recognition under varying weed coverage levels remains an open research challenge. Furthermore, existing object detection models, such as YOLOv3, YOLOv5, and YOLOv6, exhibit precision limitations under heavy weed interference.

Therefore, this study aims to develop an improved deep learning model (CGS-YOLO) that leverages UAV-based multispectral imagery for maize seedling recognition under different levels of weed disturbance.         

The research focuses on:

  1. Enhancing feature extraction by incorporating Principal Component Analysis (PCA) for multispectral image processing.
  2. Improving object detection performance using an upgraded YOLO architecture with CARAFE up-sampling, Global Attention Mechanism (GAM), and a Small Target Detection Layer (SLAY).
  3. Evaluating model performance under various weed coverage conditions to determine its effectiveness in real-world agricultural environments.



Methodology (Key Points)

The study was conducted at the Xiaotangshan National Precision Agriculture Research Base in Beijing, China, where UAV-based multispectral and RGB images were collected to distinguish maize seedlings from weeds.


The location of the study area and the design of the experimental site treatments. Source: Tang et al., 2025

In the figure above: (a) Geographical location of the study area. (b) UAV multispectral image captured at 15 m altitude.


Mosaic data augmentation recombines every four images into one: a - RGB images. b - Multispectral PCA images. Source: Tang et al., 2025

The dataset included 816 images, which were annotated and divided into training (70%), validation (20%), and testing (10%) sets. To enhance feature extraction, Principal Component Analysis (PCA) was applied to multispectral images, reducing redundancy and highlighting key spectral differences. Additionally, Mosaic Data Augmentation was used to improve model generalization and prevent overfitting.
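The PCA step can be sketched as follows. This is a minimal numpy-only illustration of projecting a multispectral cube onto its leading principal components to form a compact pseudo-colour image, not the authors' exact preprocessing pipeline; the band count and random data are assumptions for the demo.

```python
import numpy as np

def multispectral_pca(cube: np.ndarray, n_components: int = 3) -> np.ndarray:
    """Project an (H, W, B) multispectral cube onto its first principal
    components, yielding an (H, W, n_components) image for a detector."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    x -= x.mean(axis=0)                        # centre each band
    cov = np.cov(x, rowvar=False)              # (B, B) band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]   # keep the top components
    reduced = x @ top
    # Rescale each component to 0-255 so it can be saved as an image.
    lo, hi = reduced.min(axis=0), reduced.max(axis=0)
    scaled = (reduced - lo) / (hi - lo + 1e-9) * 255.0
    return scaled.reshape(h, w, n_components).astype(np.uint8)

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 5))       # 5 bands, e.g. B, G, R, RE, NIR
pca_img = multispectral_pca(cube)
print(pca_img.shape, pca_img.dtype)  # (64, 64, 3) uint8
```

Reducing the band stack to three components lets the PCA image slot into detector architectures that expect three-channel input, while concentrating the spectral variance that separates crop from weed.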


Flow chart of data preprocessing, model testing, model improvement and result analysis. Source: Tang et al., 2025

The deep learning model CGS-YOLO was developed as an enhancement of YOLOv8, incorporating three key modifications. CARAFE Up-sampling Operator was introduced to improve feature retention, particularly for small and occluded seedlings. The Global Attention Mechanism (GAM) combined spatial and channel attention to strengthen feature extraction across different image regions. Furthermore, a Small Target Detection Layer (SLAY) was added to enhance the recognition of small-scale maize seedlings, which are often missed due to weed occlusion. These improvements aimed to address the limitations of existing models in distinguishing maize seedlings from weeds in complex field conditions.
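The attention idea can be sketched in a few lines. This is a heavily simplified, numpy-only stand-in for GAM: channel attention is a two-layer MLP gate over each pixel's channel vector, and spatial attention is approximated by per-pixel mean pooling, whereas the real GAM uses a permuted MLP and 7x7 convolutions. The weights `w1` and `w2` are hypothetical random matrices for the demo.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gam_attention(feat, w1, w2):
    """Gate a (C, H, W) feature map with channel then spatial attention.
    Simplified sketch of the Global Attention Mechanism, not the real module."""
    c, h, w = feat.shape
    x = feat.reshape(c, -1).T                      # (H*W, C) pixel vectors
    # Channel attention: MLP gate in (0, 1) applied per pixel.
    ca = sigmoid(np.maximum(x @ w1, 0.0) @ w2)     # ReLU then sigmoid
    x = x * ca
    # Spatial attention (stand-in): gate each pixel by its mean activation.
    sa = sigmoid(x.mean(axis=1, keepdims=True))
    x = x * sa
    return x.T.reshape(c, h, w)

rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 4, 4))    # toy feature map, 8 channels
w1 = rng.standard_normal((8, 2)) * 0.1   # hypothetical MLP weights
w2 = rng.standard_normal((2, 8)) * 0.1
out = gam_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because both gates lie in (0, 1), the module re-weights rather than amplifies features, letting the network suppress weed-dominated regions while preserving seedling responses.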


a - CARAFE up-sampling operator up-sampling prediction module. b - CARAFE up-sampling operator feature recombination module. Source: Tang et al., 2025


a - RGB image. b - Multispectral image. c - Multispectral PCA image. Source: Tang et al., 2025

The model was trained using Pytorch, CUDA acceleration, and an NVIDIA RTX 3070 GPU, with hyperparameters fine-tuned for learning rate, confidence threshold, and batch size. Training was conducted for 120 epochs, monitoring performance using Precision-Recall curves. To evaluate effectiveness, CGS-YOLO was compared against YOLOv3, YOLOv5, YOLOv6, and YOLOv8 under four levels of weed coverage (ranging from low to high). The performance was assessed using Precision, Recall, Mean Average Precision (mAP), and FPS (Inference Speed) to determine the model's accuracy and efficiency in real-world agricultural scenarios.
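For readers unfamiliar with mAP, the per-class average precision underlying it can be computed as below. This is a generic 101-point interpolated AP sketch (COCO-style), not the paper's evaluation script; the detection flags in the toy example are made up.

```python
import numpy as np

def average_precision(tp_flags, n_gt):
    """AP for one class from detections sorted by descending confidence.
    tp_flags[i] is 1 if detection i matched an unmatched ground-truth box
    (IoU above threshold), else 0; n_gt is the ground-truth box count."""
    tp_flags = np.asarray(tp_flags, dtype=np.float64)
    tp = np.cumsum(tp_flags)
    fp = np.cumsum(1.0 - tp_flags)
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # 101-point interpolated AP; mAP then averages AP over all classes.
    points = np.linspace(0.0, 1.0, 101)
    ap = 0.0
    for r in points:
        mask = recall >= r
        ap += precision[mask].max() if mask.any() else 0.0
    return ap / len(points)

# Hypothetical run: 4 detections scored against 3 ground-truth seedlings.
print(average_precision([1, 1, 0, 1], n_gt=3))  # ≈ 0.9158
```

Precision and recall answer different questions (how many detections are real, and how many seedlings were found); AP summarizes the whole precision-recall trade-off in one number, which is why the comparison across YOLO variants leans on mAP.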


a - RGB image test set sample. b - Multispectral PCA image test set samples. Source: Tang et al., 2025



Top Findings

  1. Multispectral PCA images outperform RGB images in maize seedling recognition, improving feature differentiation and reducing weed interference, leading to a 3.1 percentage point increase in mAP.
  2. CGS-YOLO significantly outperforms YOLOv3, YOLOv5, YOLOv6, and YOLOv8, achieving higher precision, recall, and robustness, especially under high weed coverage where it maintains an mAP of 72%.
  3. The combination of CARAFE up-sampling, Global Attention Mechanism (GAM), and Small Target Detection Layer (SLAY) enhances small-seedling detection, reducing omission errors and improving recognition accuracy in complex field conditions.


The visualization results of maize seedling recognition of different models under four levels of weed disturbance in RGB images. Source: Tang et al., 2025



The recognition effect of maize seedling details under high weed coverage. a - RGB images. b - Multispectral PCA images. Source: Tang et al., 2025



Reference


Case Study: How Can AI Practically Help with Assessment of Biochar Applications on Plants?

Based on a collaboration between Petiole Pro and Earth Biochar, we would like to briefly introduce a comparison of cucumber leaf analysis. It was based on video recordings from cucumber trials with control plants and plants treated with different amounts of biochar products.

A personal thank you to Nadav Ziv, an expert in biochar applications, for the collaboration. We will soon publish this case study in more detail.

The screenshot of the video input for analysis. Source: Earth Biochar
Page 1 of the Cucumber Leaf Analysis Comparison Report. Control Regular Treatment is WA0000. Source: Petiole Pro


One of four datasets with extraction of the cucumber foliage for analysis. Source: Petiole Pro

What's On?

If you'd like to receive the regular 'AI in Agriculture' newsletter in your inbox, simply add your email to my mailing list.

Join over 10,730 readers who enjoy weekly updates on AI advancements in agriculture!


Petiole Pro - Free Web-Tools for Plant Phenotyping

Check our own discoveries at Petiole Pro.

Leaf area measurement & LAI. The Petiole Pro poster was presented at the AI + Environment Summit 2024 in October last year.

You can find this poster in better resolution at our ResearchGate profile.

Petiole Pro: Leaf Area Measurement, a tool cited in 100+ research papers.
To access the Petiole Pro leaf area web tool, go to https://leaf-area.petiole.pro/


Petiole Pro uses Dark Green Colour Index for leaf greenness measurement
Petiole Pro: Leaf Greenness Measurement (DGCI) is available at https://leaf-dgci.petiole.pro/


Place the soybean seeds on or next to the calibration plate to obtain an accurate seed count, average seed area, and standard deviation.
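The seed statistics above can be illustrated with a small connected-component sketch. This is not Petiole Pro's actual implementation: it assumes a binary seed mask has already been segmented and that the calibration plate has yielded a known `mm_per_px` scale; the toy mask below is made up.

```python
import numpy as np
from scipy import ndimage

def seed_stats(mask: np.ndarray, mm_per_px: float):
    """Count seeds and measure areas from a binary segmentation mask.
    mm_per_px is the image scale recovered from the calibration plate."""
    labels, n_seeds = ndimage.label(mask)        # connected components
    areas_px = np.bincount(labels.ravel())[1:]   # drop the background bin
    areas_mm2 = areas_px * mm_per_px ** 2
    return n_seeds, areas_mm2.mean(), areas_mm2.std()

# Toy mask with two separated "seeds" of 4 px and 9 px.
mask = np.zeros((10, 10), dtype=bool)
mask[1:3, 1:3] = True
mask[6:9, 6:9] = True
count, mean_area, std_area = seed_stats(mask, mm_per_px=0.5)
print(count, mean_area, std_area)  # 2 seeds, areas in mm^2
```

The calibration plate fixes the pixel-to-millimetre scale, which is what turns raw pixel counts into physical seed areas.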


