AI for Lettuce: Accurate Weight Estimation in CEA

The global demand for food is increasing due to population growth, urbanization, and climate change, compounded by challenges such as reduced arable land and geopolitical conflicts. Controlled-environment agriculture (CEA), like plant factories, has emerged as a promising solution, offering year-round cultivation with minimal resource inputs.

However, existing weight measurement techniques in such environments are labour-intensive, invasive, and unsuitable for large-scale industrial settings. Most conventional methods require crops to be harvested or physically moved, which can inhibit growth and increase costs.

This study aims to overcome these limitations by developing a machine vision-based system that operates in confined, high-humidity, and multi-layered industrial plant factories.


Narrow passages and tall moving racks used for cultivation work in the industrial plant factory. Source: Kim et al., 2024


Source: Kim et al., 2024

In the figure above: (A) A linear motion guide mounted on a cultivating rack, with a camera rail carrying five cameras (two RGB, two infrared, and one depth camera) connected to the linear motion guide; (B) the location of the rack fitted with the linear motion guide in the industrial plant factory; (C) the setup of the camera rail with its five cameras connected to the linear motion guide.


Methodology

The researchers implemented a linear motion guide system with cameras mounted above the cultivation beds to automatically capture top-view images of butterhead lettuce.

The system used Raspberry Pi cameras for RGB and infrared imaging, controlled by a Raspberry Pi 4 single-board computer.

The data acquisition system was designed to withstand the harsh conditions of plant factories, such as high humidity and confined spaces, ensuring consistent image quality. Images were captured hourly, and the ground-truth weights of the crops were manually measured at regular intervals to build a dataset of 376 annotated images.
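The paper does not publish its acquisition code, but the hourly capture cycle can be sketched as a simple scheduler. The helper below and the commented camera call are hypothetical stand-ins, not the authors' implementation:

```python
from datetime import datetime, timedelta

def seconds_until_next_hour(now: datetime) -> float:
    """Seconds remaining until the top of the next hour."""
    next_hour = now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    return (next_hour - now).total_seconds()

# Hypothetical capture loop (camera.capture() is a placeholder):
# while True:
#     time.sleep(seconds_until_next_hour(datetime.now()))
#     camera.capture(f"rgb_{datetime.now():%Y%m%d_%H}.jpg")
```

Sleeping until the top of the hour (rather than a fixed `sleep(3600)`) keeps the capture timestamps aligned even if a single capture takes variable time.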

Two modelling approaches were explored:

  1. manual feature extraction and
  2. automatic feature extraction.

For manual extraction, image preprocessing techniques like GrabCut segmentation were applied to derive features such as area, perimeter, and axes lengths. These features were then used in various regression models, including linear and polynomial regressions.

For automatic feature extraction, deep learning models, such as multilayer perceptrons (MLP) and convolutional neural networks (CNN), were employed. The CNN model, based on ResNet18 architecture, was optimized to process unstructured image data and automatically learn relevant features for weight prediction.


Sketch (unit: mm) of a confined cultivating bed used in the industrial plant factory. Source: Kim et al., 2024

In the figure above: (A) A three-dimensional front view; (B) A side view showing the location of a camera rail, cameras, and LEDs.

In the figure below: (A) A linear motion guide with two motors; (B) Front side of a driving part connected to a rail frame; (C) Rear side of a driving part connected to a rail frame; (D) Inside of the housing case attached to the outside surface of a driving part.


Overall framework and components of the Automatic Image Acquisition System. Source: Kim et al., 2024


The number of manually measured harvests and the number of corresponding images by date. Source: Kim et al., 2024


Two different images of the same crop from different locations of cameras with two examples. Source: Kim et al., 2024

In the figure above: (A) a plant with a fresh weight of 34 g; (B) a plant with a fresh weight of 2.4 g.


Image preprocessing workflow: resize, image segmentation, median filtering, and feature extraction. Source: Kim et al., 2024

Architectures of deep learning models. Source: Kim et al., 2024

Results and Findings

The CNN-based model achieved the best performance with an R² value of 0.95 and an RMSE of 8.06 g, indicating high predictive accuracy.

However, for practical on-site use, the MLP_2 model was favoured due to its faster inference time of 0.003 milliseconds per image, compared with 0.026 milliseconds for the CNN model, while maintaining a competitive R² of 0.93 and an RMSE of 9.35 g. In comparison, the manual feature-based regression models performed less effectively, with a best R² of 0.90 and an RMSE of 10.63 g for a third-degree polynomial regression model.
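The two metrics used throughout the comparison are standard and easy to compute; a minimal NumPy sketch (the data here are illustrative, not the paper's):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error in the units of the target (grams here)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

RMSE is reported in grams, so it reads directly as the typical weight-prediction error per plant, while R² measures the fraction of weight variance the model explains.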


Source: Kim et al., 2024

In the figure above: (A) Pair plot between variables (area, perimeter, major axis length, minor axis length, and weight); (B) Heatmap of Pearson’s correlation coefficients between variables.
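The pair plot and heatmap boil down to Pearson correlation coefficients between the morphological features and weight. A minimal sketch with synthetic data (the linear relationship and coefficients below are illustrative, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
area = rng.uniform(10, 400, 100)              # synthetic projected leaf areas
weight = 0.2 * area + rng.normal(0, 2, 100)   # weight roughly linear in area

# Pearson correlation matrix between the two variables;
# off-diagonal entries are the pairwise coefficients.
corr = np.corrcoef(np.vstack([area, weight]))
```

A strongly positive off-diagonal coefficient is exactly what motivates using area-like features as predictors in the manual regression models.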

The study highlighted the advantages of automatic feature extraction using unstructured image data over manual feature-based methods. The use of a CNN allowed for a deeper understanding of complex visual patterns, significantly improving model performance. The findings also underscored the need for lightweight models that can run efficiently on low-power devices in confined industrial settings.        

Conclusion and Implications

The innovative use of machine vision and deep learning provides a scalable solution for real-time crop monitoring, enabling growers to optimize harvest schedules and enhance productivity.

The system's adaptability allows it to be deployed in various agricultural contexts, from large-scale industrial operations to small-scale or home-based cultivation systems.

By reducing labour costs and minimizing crop stress, this technology represents a significant step forward in the automation and sustainability of modern agriculture. The study's methodology and findings lay the groundwork for future innovations in precision farming and controlled-environment agriculture.


Positions of plant pots in the cultivating rack and grown crops. Source: Kim et al., 2024

In the figure above: (A) zigzag positions of plants and vacant pots; (B) well-grown lettuce without overlapping problems.


Reference

Kim J-SG, Moon S, Park J, Kim T and Chung S (2024) Development of a machine vision-based weight prediction system of butterhead lettuce (Lactuca sativa L.) using deep learning models for industrial plant factory. Front. Plant Sci. 15:1365266. doi: 10.3389/fpls.2024.1365266


What's on

If you'd like to receive the regular 'AI in Agriculture' newsletter in your inbox, simply add your email to my mailing list.

Join almost 9,600 readers who enjoy weekly updates on AI advancements in agriculture!


Get the 'AI in Agriculture' newsletter delivered straight to your inbox! Simply click the image above and enter your email to join my mailing list

AI for Lettuce Phenotyping and Quality Assurance

The free mobile application Petiole Pro brings AI to leafy-greens phenotyping and quality assurance of food produce.
To get more information about tomato phenotyping capabilities with mobile, ask Petiole Pro.