SegPPD-FS: Segmenting plant pests and diseases in the wild using few-shot learning.
Pub Date: 2025-09-30 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100121
Zihan Ge, Xijian Fan, Jingcheng Zhang, Shichao Jin
Accurate segmentation of areas affected by pests and diseases is essential for precisely assessing the severity and spread of infections, thereby facilitating the development of effective management and intervention strategies. However, obtaining high-quality pixel-level annotations for training deep learning models in agricultural environments poses considerable challenges. To overcome this limitation, the present work introduces a novel semantic segmentation approach (SegPPD-FS) that employs few-shot learning to reduce annotation demands while effectively segmenting plant pests and diseases. SegPPD-FS comprises two key components: the similarity feature enhancement module (SFEM) and the hierarchical prior knowledge injection module (HPKIM). The SFEM refines foreground targets with a lightweight attention mechanism that mitigates irrelevant background interference in natural images and further enhances the discriminative capability of the query features. The HPKIM addresses the difficulty of identifying pests and diseases that vary widely in shape and size within field images by hierarchically integrating multiscale contextual information into the query feature representations. In addition, this study constructed and publicly released a high-quality few-shot semantic segmentation (FSS) dataset that includes 101 distinct categories of plant pests and diseases, supporting further research on the precise monitoring of plant health. Experimental results show that the proposed method achieves mIoU values of 71.19 % and 71.58 % under the 1-shot and 2-shot settings, respectively, on the released dataset, surpassing other FSS techniques such as SegGPT and PerSAM and providing a promising, label-efficient solution for pest and disease monitoring. The dataset has been publicly released at https://doi.org/10.5281/zenodo.15114159 as a resource for evaluating FSS techniques.
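The abstract names SFEM and HPKIM but does not give their internals, so the sketch below only illustrates the prototype-matching core that few-shot segmentation methods of this kind share: pool a class prototype from the annotated support image, then score each query location by cosine similarity. All tensor shapes and the decode threshold are hypothetical.

```python
# Minimal sketch of the prototype-matching core shared by few-shot
# segmentation methods; SFEM/HPKIM internals are not specified in the
# abstract, so this only illustrates the general idea.
import torch
import torch.nn.functional as F

def masked_average_prototype(support_feat, support_mask):
    """Pool a class prototype from support features under the support mask.

    support_feat: (C, H, W) backbone features of the support image
    support_mask: (H, W) binary foreground mask
    """
    mask = support_mask.float().unsqueeze(0)          # (1, H, W)
    fg = (support_feat * mask).sum(dim=(1, 2))        # (C,)
    return fg / mask.sum().clamp(min=1.0)

def query_similarity_map(query_feat, prototype):
    """Cosine similarity between each query location and the prototype."""
    q = F.normalize(query_feat, dim=0)                # (C, H, W)
    p = F.normalize(prototype, dim=0)                 # (C,)
    return torch.einsum("chw,c->hw", q, p)            # (H, W) in [-1, 1]

# Toy usage with random tensors standing in for backbone features.
support_feat = torch.randn(256, 64, 64)
support_mask = torch.zeros(64, 64)
support_mask[20:40, 20:40] = 1
query_feat = torch.randn(256, 64, 64)
proto = masked_average_prototype(support_feat, support_mask)
sim = query_similarity_map(query_feat, proto)         # coarse foreground prior
pred = (sim > 0.5).float()                            # naive threshold decode
```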
{"title":"SegPPD-FS: Segmenting plant pests and diseases in the wild using few-shot learning.","authors":"Zihan Ge, Xijian Fan, Jingcheng Zhang, Shichao Jin","doi":"10.1016/j.plaphe.2025.100121","DOIUrl":"https://doi.org/10.1016/j.plaphe.2025.100121","url":null,"abstract":"<p><p>Accurate segmentation of areas affected by pests and diseases is essential for precisely assessing the severity and spread of infections, thereby facilitating the development of effective management and intervention strategies. Obtaining high-quality pixel-level annotations for training deep learning models in agricultural environments poses considerable challenges. To overcome this limitation, the present work introduces a novel semantic segmentation approach (SegPPD-FS) that employs few-shot learning techniques to reduce annotation demands while effectively segmenting plant pests and diseases. The proposed SegPPD-FS comprises two key components: the similarity feature enhancement module (SFEM) and the hierarchical prior knowledge injection module (HPKIM). The SFEM refines foreground targets by employing a lightweight attention mechanism to mitigate irrelevant background interference in natural images and further enhances the discriminative capability of query features. The HPKIM is designed to address the difficulties associated with identifying pests and diseases that vary widely in terms of shape and size within field images, which is achieved through a hierarchical integration of multiscale contextual data into the query feature representations. In addition, this study constructed and publicly released a high-quality few-shot semantic segmentation (FSS) dataset that included 101 distinct categories of plant pests and diseases, which supports further research on the precise monitoring of plant health issues. The experimental results demonstrate that the proposed method achieves mIoU values of 71.19 % and 71.58 % with the 1-shot and 2-shot settings, respectively, on the released dataset. This performance surpasses that of other FSS techniques, such as SegGPT and PerSAM, providing a promising and label-efficient solution for pest and disease monitoring. The collected dataset, which focuses on plant pests and diseases, has been publicly released at https://doi.org/10.5281/zenodo.15114159, providing a valuable resource for evaluating various FSS techniques.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 4","pages":"100121"},"PeriodicalIF":6.4,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13109344/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147779270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Three-dimensional reconstruction of densely planted rice seedlings based on MultiView images.
Pub Date: 2025-09-30 | DOI: 10.1016/j.plaphe.2025.100122
Zhigang Zhang, Liwei Wang, Weiqi Ren, Shoutian Dong, Shaowen Liu, Haoran Xu, Yubo Yang, Rui Gao, Zhongbin Su
Three-dimensional (3D) seedling reconstruction can provide critical technical support for monitoring plant growth, high-throughput plant phenotyping, and precision agriculture. However, multiview image-based reconstruction methods, which rely on image registration and feature matching, are susceptible to similar textures and viewpoint differences, leading to matching errors and the loss of key structural information; this can cause local deficiencies and reduced accuracy in the reconstructed models. To attain improved reconstruction accuracy under low-cost constraints, this study employs deep learning-based feature extraction and matching: the SuperPoint network increases the robustness of feature point detection and description, and the LightGlue algorithm improves the accuracy and stability of matching. Additionally, to reduce the impact of camera and platform jitter on image quality, a dedicated plant 3D reconstruction platform was designed and constructed, and a dataset of densely planted rice seedlings under light stress was collected, comprising three factors (light quality, light quantity, and photoperiod) × three levels, totaling nine groups. Experimental results show that the proposed method achieves the best performance in point cloud completeness and reprojection error. Phenotypic parameters (e.g., plant height) extracted from the reconstructions are strongly correlated with actual measurements (R² = 0.989, RMSE = 4.54 mm), validating the method's potential for simulating plant growth processes, analyzing the effects of environmental factors (e.g., light), and optimizing crop cultivation schemes.
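SuperPoint and LightGlue are both available in the open-source lightglue package (https://github.com/cvg/LightGlue), whose public API the sketch below follows; the paper's exact integration, image names, and parameters are not given here and are assumptions.

```python
# Feature extraction and matching with SuperPoint + LightGlue, using the
# open-source `lightglue` package; an illustrative setup, not the
# paper's pipeline.
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = "cuda" if torch.cuda.is_available() else "cpu"
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)
matcher = LightGlue(features="superpoint").eval().to(device)

image0 = load_image("view_000.jpg").to(device)  # hypothetical file names
image1 = load_image("view_001.jpg").to(device)

feats0 = extractor.extract(image0)
feats1 = extractor.extract(image1)
matches01 = matcher({"image0": feats0, "image1": feats1})
feats0, feats1, matches01 = rbd(feats0), rbd(feats1), rbd(matches01)

matches = matches01["matches"]                # (K, 2) indices into keypoints
pts0 = feats0["keypoints"][matches[:, 0]]     # matched 2D points in image 0
pts1 = feats1["keypoints"][matches[:, 1]]     # matched 2D points in image 1
# pts0/pts1 can then feed a COLMAP-style SfM/MVS stage for 3D reconstruction.
```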
{"title":"Three-dimensional reconstruction of densely planted rice seedlings based on MultiView images.","authors":"Zhigang Zhang, Liwei Wang, Weiqi Ren, Shoutian Dong, Shaowen Liu, Haoran Xu, Yubo Yang, Rui Gao, Zhongbin Su","doi":"10.1016/j.plaphe.2025.100122","DOIUrl":"https://doi.org/10.1016/j.plaphe.2025.100122","url":null,"abstract":"<p><p>Three-dimensional(3D) seedling reconstruction technology can provide critical technical support for monitoring plant growth, phenotyping high-throughput plants, and conducting precision agriculture. However, multiview image-based reconstruction methods, which rely on image registration and feature matching, are susceptible to issues such as similar textures and viewpoint differences, leading to matching errors and the loss of key structural information. This can result in local deficiencies and reduced accuracy in the reconstructed models. Therefore, to attain improved reconstruction accuracy under low-cost constraints, deep learning-based feature extraction and matching methods are employed in this study, the SuperPoint network is utilized to increase the robustness of the feature point detection and description processes, and the LightGlue algorithm is introduced to improve the accuracy and stability of matching. Additionally, to reduce the impact of shooting and platform jitter on image quality, a dedicated plant 3D reconstruction platform is designed and constructed, and a dataset of densely planted rice seedlings under light stress conditions is collected, comprising three factors (light quality, light quantity, and the photoperiod) × three levels, totaling nine groups. Experimental results show that the proposed method achieves optimal performance in terms of its point cloud completeness and reprojection error. The phenotypic parameters (e.g., plant height) extracted from the reconstruction data are strongly correlated with the actual measurements (R<sup>2</sup> = 0.989, RMSE = 4.54 mm), validating the potential of the proposed method for applications related to simulating plant growth processes, analyzing the effects of environmental factors (e.g., light), and optimizing crop cultivation schemes.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 4","pages":"100122"},"PeriodicalIF":6.4,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13109332/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147779259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-throughput plant phenotyping identifies and discriminates biotic and abiotic stresses in tomato.
Pub Date: 2025-09-30 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100124
Maria Isabella Prigigallo, Giovanni Bubici, Giorgia Batelli, Antonello Costa, Monica De Palma, Maria Teresa Melillo, Angelo Petrozza, Alessandra Ruggiero, Giorgia Sportelli, Stephan Summerer, Pasqua Veronico, Francesco Cellini, Marina Tucci, Livia Stavolone, Stefania Grillo, Fabrizio Cillo
In the context of precision agriculture, high-throughput phenotyping (HTP) aims to rapidly and effectively identify factors that affect crop yield, enabling timely and appropriate interventions. However, interpreting HTP data remains challenging. We performed proximal red-green-blue (RGB)-based HTP on several tomato genotypes exposed to abiotic stress (drought) or biotic stress induced by tomato spotted wilt virus (TSWV), Pseudopyrenochaeta lycopersici (corky root rot; CRR), or Meloidogyne incognita (root-knot nematode; RKN). We aimed to determine whether RGB-based HTP can effectively (a) distinguish the effects of biotic from abiotic stress and (b) differentiate resistant/tolerant from susceptible genotypes. Our HTP data analysis produced 12 morphometric and 8 colorimetric indices. Principal component analysis (PCA; P < 0.0001; 83 % of the variation explained by three PCs) showed that factors such as shoot area solidity and certain color-based indices, including the senescence index and green area, effectively differentiated biotic from abiotic stress. Morphometric parameters, including plant height, projected shoot area, and convex hull area, proved applicable for identifying stress status regardless of the type of stress. HTP effectively distinguished the genotype resistant to TSWV from the susceptible ones; this task was more challenging for below-ground stresses such as CRR and RKN. Different profiles of HTP indices were observed among the genotypes assayed for drought tolerance, indicating variability in their ability to withstand drought conditions. In conclusion, our findings highlight the value of RGB-based HTP as a tool for precision farming of tomatoes, enabling the identification of both biotic and abiotic stressors.
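The PCA step described above is straightforward to reproduce with scikit-learn; the sketch below mirrors the 12 + 8 index layout from the abstract but uses random placeholder data, so the variance figure it prints is illustrative only.

```python
# Sketch of the PCA step on HTP indices with scikit-learn; the 20-column
# layout (12 morphometric + 8 colorimetric indices) mirrors the abstract,
# but the data here are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))              # 120 plants x 20 indices (placeholder)

X_std = StandardScaler().fit_transform(X)   # indices have mixed units
pca = PCA(n_components=3)                   # abstract: 3 PCs, ~83 % variance
scores = pca.fit_transform(X_std)           # (120, 3) plant coordinates

print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
# Loadings show which indices (e.g., shoot solidity, senescence index)
# drive each component:
loadings = pca.components_                  # (3, 20)
```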
{"title":"High-throughput plant phenotyping identifies and discriminates biotic and abiotic stresses in tomato.","authors":"Maria Isabella Prigigallo, Giovanni Bubici, Giorgia Batelli, Antonello Costa, Monica De Palma, Maria Teresa Melillo, Angelo Petrozza, Alessandra Ruggiero, Giorgia Sportelli, Stephan Summerer, Pasqua Veronico, Francesco Cellini, Marina Tucci, Livia Stavolone, Stefania Grillo, Fabrizio Cillo","doi":"10.1016/j.plaphe.2025.100124","DOIUrl":"https://doi.org/10.1016/j.plaphe.2025.100124","url":null,"abstract":"<p><p>In the context of precision agriculture, high-throughput phenotyping (HTP) aims to rapidly and effectively identify factors that affect crop yield, enabling timely and appropriate interventions. However, interpreting data from HTP remains challenging. We performed a proximal red-green-blue (RGB)-based HTP on several tomato genotypes exposed to abiotic stress (drought) or biotic stress induced by tomato spotted wilt virus (TSWV), <i>Pseudopyrenochaeta lycopersici</i> (corky root rot; CRR), or <i>Meloidogyne</i> incognita (root-knot nematode; RKN). We aimed to determine if RGB-based HTP is effectively able to: a) distinguish the effects of biotic from abiotic stress; b) differentiate resistant/tolerant from susceptible genotypes. Our HTP data analysis produced 12 morphometric and eight colorimetric indices. Principal Component Analysis (PCA; <i>P</i> < 0.0001; 83 % variation explained by three PCs) showed that factors such as shoot area solidity and certain color-based indices, including the senescence index and green area, effectively differentiated biotic from abiotic stress. Morphometric parameters, including plant height, projected shoot area, and convex hull area, proved to be applicable for identifying the stress status regardless of the type of stress. HTP effectively distinguished the genotype resistant to TSWV from the susceptible ones. This task was more challenging for below-ground stresses like CRR and RKN. Different profiles of HTP indices were observed among the genotypes assayed for drought tolerance, indicating variability in their ability to withstand drought conditions. In conclusion, our findings highlight the value of RGB-based HTP as a tool for precision farming of tomatoes, enabling the identification of both biotic and abiotic stressors.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 4","pages":"100124"},"PeriodicalIF":6.4,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13109330/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147779248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of a binocular multispectral stereo imaging system and its application in plant phenotyping.
Pub Date: 2025-09-20 | DOI: 10.1016/j.plaphe.2025.100105
Wenxiu Wan, Zhiyuan Liu, Ziru Yu, Jiahui Wang, Xiangyang Yu
With continuous progress in micro-optical technology, miniature spectral imaging devices have developed rapidly; at the same time, three-dimensional (3D) imaging measurement technology has become increasingly mature and widely used. The evolution of these technologies has established a robust foundation for integrating 3D imaging with spectral information. To achieve accurate alignment between 3D data and spectral information, and thereby obtain a more comprehensive spectral representation of objects in 3D space, we developed a binocular multispectral stereo imaging (BMSI) system. The system acquires images synchronously with a binocular multispectral imager, ensuring pixel-level alignment between 3D and spectral data and facilitating the construction of a four-dimensional (4D) dataset. Leaf regions were segmented from shadowed backgrounds in two distinct plant species through optimal band fusion and hue-saturation-value (HSV) color space transformation, significantly improving segmentation accuracy, processing efficiency, and robustness across species. A systematic evaluation quantified the reconstruction precision and system stability at different measurement distances. The system acquired 4D image-spectral data with plants as the test objects, and the distribution of chlorophyll (Chl) on the 3D plant surface was obtained from first-order derivatives of the spectral data and the normalized difference red edge (NDRE) index. This technique provides a new means for plant phenotyping research and a more effective technical approach for the digitalization and precision monitoring of agriculture.
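A minimal version of the band-fusion + HSV segmentation step can be written with OpenCV; the fused band choice and HSV bounds below are illustrative assumptions, not the paper's calibrated values.

```python
# Minimal HSV-threshold leaf segmentation in the spirit of the
# band-fusion + HSV step described; band weights and HSV bounds assumed.
import cv2
import numpy as np

def segment_leaves(band_a, band_b, band_c):
    """band_*: single-channel uint8 images from the multispectral imager."""
    # Fuse three selected bands into a pseudo-RGB image (selection assumed).
    fused = cv2.merge([band_a, band_b, band_c])
    hsv = cv2.cvtColor(fused, cv2.COLOR_BGR2HSV)
    # Keep green-ish hues at moderate saturation/value to reject shadow.
    lower = np.array([35, 40, 40], dtype=np.uint8)
    upper = np.array([85, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening removes speckle from the shadow background.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```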
{"title":"Design of a binocular multispectral stereo imaging system and its application in plant phenotyping.","authors":"Wenxiu Wan, Zhiyuan Liu, Ziru Yu, Jiahui Wang, Xiangyang Yu","doi":"10.1016/j.plaphe.2025.100105","DOIUrl":"https://doi.org/10.1016/j.plaphe.2025.100105","url":null,"abstract":"<p><p>With the continuous progress in micro-optical machine technology, miniature spectral imaging devices have been rapidly developed; however, three-dimensional (3D) imaging measurement technology has become increasingly mature and widely used. The evolution of these technologies has established a robust foundation for the integration of three-dimensional imaging and spectral information. To achieve accurate alignment between 3D data and spectral information to obtain a more comprehensive spectral representation of objects in 3D space, we developed a binocular multispectral stereo imaging (BMSI) system. This system acquires images in synchrony with a binocular multispectral imager, thereby ensuring accurate alignment between 3D data and spectral data at the pixel level and facilitating the construction of a four-dimensional (4D) dataset. The segmentation of leaf regions from shadow backgrounds in two distinct plant species was achieved through optimal band fusion and hue-saturation value (HSV) color space transformation, significantly improving the segmentation accuracy, processing efficiency, and robustness across different plant species. A systematic evaluation was conducted to quantify the reconstruction precision and system stability at different measurement distances. The designed system acquired 4D image spectral data with plants as the objects to be tested. The distribution characteristics of chlorophyll (Chl) on the 3D surface of plants were obtained by first-order derivatives of the spectral data and the normalized difference red edge (NDRE) index. This technique provides a new means for plant phenotyping research and a more effective technical approach for the digitalization and precision monitoring of the agricultural industry.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 4","pages":"100105"},"PeriodicalIF":6.4,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13109317/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147779065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LeafGen: Structure-aware Leaf Image Generation for Annotation-free Leaf Instance Segmentation.
Pub Date: 2025-09-20 | DOI: 10.1016/j.plaphe.2025.100092
Naoki Asada, Xinpeng Liu, Kanyu Xu, Ryohei Miyakawa, Yang Yang, Hiroaki Santo, Yosuke Toda, Fumio Okura
Instance segmentation of plant leaves plays a crucial role in plant phenotyping, leveraging the rapid advancements in neural network research. A significant challenge in leaf instance segmentation lies in preparing training datasets, which typically require manual annotation of numerous pairs of ground-truth masks and corresponding plant photographs. Recently, segmentation models pre-trained on large-scale datasets, e.g., Segment Anything, have made training-free (i.e., zero-shot) instance segmentation publicly accessible. However, applying these models to leaf segmentation often yields unsatisfactory results, as the training datasets for these foundation models may lack sufficient plant imagery to accurately segment leaves exhibiting heavy occlusion and similar textures. To address this issue, we propose a fully automatic method for generating training datasets for leaf instance segmentation, combining an off-the-shelf zero-shot model with structure-aware image generation. Specifically, given a set of plant images and an L-system growth rule representing the structural pattern of the target plant, the proposed method automatically produces an arbitrary number of instance-mask and photorealistic plant-image pairs, eliminating the need for manual annotation. To maximize usability, we also provide a GUI front-end that integrates the entire pipeline. Experiments on Arabidopsis, Komatsuna, and Rhaphiloepsis plants demonstrate that our method achieves more accurate segmentation than state-of-the-art zero-shot models, attaining AP@50 scores of 74.8, 76.0, and 88.2, respectively, without any manual annotation.
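An L-system growth rule of the kind LeafGen consumes is, at its core, a string-rewriting system; the toy rule below is a made-up rosette-like example, not the rule set shipped with the tool.

```python
# A tiny L-system rewriter; the rosette-like rule is a made-up
# illustration of the structural encoding, not LeafGen's rule set.
def expand(axiom: str, rules: dict, iterations: int) -> str:
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# "A" = growing apex, "L" = leaf, "[" / "]" = push/pop a branch,
# "+" = rotate by a fixed divergence angle (interpreted by the renderer).
rules = {"A": "[+L]A"}
print(expand("A", rules, 4))   # -> "[+L][+L][+L][+L]A": four spiralling leaves
```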
{"title":"LeafGen: Structure-aware Leaf Image Generation for Annotation-free Leaf Instance Segmentation.","authors":"Naoki Asada, Xinpeng Liu, Kanyu Xu, Ryohei Miyakawa, Yang Yang, Hiroaki Santo, Yosuke Toda, Fumio Okura","doi":"10.1016/j.plaphe.2025.100092","DOIUrl":"https://doi.org/10.1016/j.plaphe.2025.100092","url":null,"abstract":"<p><p>Instance segmentation of plant leaves plays a crucial role in plant phenotyping, leveraging the rapid advancements in neural network research. A significant challenge in leaf instance segmentation lies in the preparation of training datasets, which typically require manual annotations comprising numerous pairs of ground-truth masks and corresponding plant photographs. Recently, segmentation models pre-trained on large-scale datasets, <i>e.g.</i>, Segment Anything, have enabled training-free (<i>i.e.</i>, zero-shot) instance segmentation accessible to the public. However, applying these models to leaf segmentation often yields unsatisfactory results, as the training datasets for these foundation models may lack sufficient plant imagery to accurately segment leaves exhibiting heavy occlusions and similar textures. To address this issue, we propose a fully automatic method for generating training datasets for leaf instance segmentation, combining an off-the-shelf zero-shot model with structure-aware image generation. Specifically, given a set of plant images and an L-system growth rule representing the structural pattern of the target plant, the proposed method automatically produces an arbitrary number of instance mask and photorealistic plant image pairs, eliminating the need for manual annotation. To maximize usability, we also provide a GUI front-end that integrates the entire pipeline of our method. Experiments on <i>Arabidopsis</i>, Komatsuna, and <i>Rhaphiloepsis</i> plants demonstrate that our method achieves more accurate segmentation compared to state-of-the-art zero-shot models, attaining AP@50 scores of 74.8, 76.0, and 88.2 for leaf instance segmentation of <i>Arabidopsis</i>, Komatsuna, and <i>Rhaphiloepsis</i>, respectively-without any manual annotation.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 4","pages":"100092"},"PeriodicalIF":6.4,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13109295/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147779211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning for three-dimensional (3D) plant phenomics.
Pub Date: 2025-09-18 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100107
Shichao Jin, Dawei Li, Ting Yun, Jianling Tang, Ke Wang, Shaochen Li, Hongyi Yang, Si Yang, Shan Xu, Lin Cao, Haifeng Xia, Qinghua Guo, Yu Zhang, Dong Jiang, Yanfeng Ding
Plant phenomics, the comprehensive study of plant phenotypes, has gained prominence as a vital tool for understanding the intricate relationships between genotypes and the environment. Image-based plant phenomics has progressed rapidly, and three-dimensional (3D) phenotyping is a valuable extension of traditional 2D phenomics. However, the increased data dimensionality poses challenges for feature extraction and phenotyping. In recent decades, deep learning has driven remarkable progress in 3D phenotyping. This review therefore highlights the importance of deep learning in 3D plant phenomics. It systematically surveys the capabilities of deep learning for 3D computer vision, covering 3D representation, classification, detection and tracking, semantic segmentation, instance segmentation, and generation. Additionally, deep learning techniques for 3D point preprocessing (e.g., annotation, downsampling, and dataset organization) and various plant phenotyping tasks are discussed. Finally, the challenges and perspectives associated with deep learning in 3D plant phenomics are summarized, including (1) benchmark dataset construction using synthetic datasets and methods such as generative artificial intelligence and unsupervised or weakly supervised learning; (2) accurate and efficient 3D point cloud analysis leveraging multitask learning, lightweight models, and self-supervised learning; and (3) deep learning for 3D plant phenomics that explores interpretability, extensibility, and multimodal data utilization. The exploration of deep learning in 3D plant phenomics is poised to spur breakthroughs in a new dimension of plant science.
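As an example of the point-preprocessing steps the review covers, voxel-grid downsampling and statistical outlier removal take only a few lines with Open3D; the file names and parameter values below are hypothetical.

```python
# Voxel-grid downsampling + outlier removal of a plant point cloud with
# Open3D; file names and parameters are illustrative only.
import open3d as o3d

pcd = o3d.io.read_point_cloud("maize_plot.ply")      # raw scan (hypothetical)
down = pcd.voxel_down_sample(voxel_size=0.005)       # 5 mm voxels
down, _ = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
print(len(pcd.points), "->", len(down.points))       # points before/after
o3d.io.write_point_cloud("maize_plot_down.ply", down)
```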
{"title":"Deep learning for three-dimensional (3D) plant phenomics.","authors":"Shichao Jin, Dawei Li, Ting Yun, Jianling Tang, Ke Wang, Shaochen Li, Hongyi Yang, Si Yang, Shan Xu, Lin Cao, Haifeng Xia, Qinghua Guo, Yu Zhang, Dong Jiang, Yanfeng Ding","doi":"10.1016/j.plaphe.2025.100107","DOIUrl":"https://doi.org/10.1016/j.plaphe.2025.100107","url":null,"abstract":"<p><p>Plant phenomics, the comprehensive study of plant phenotypes, has gained prominence as a vital tool for understanding the intricate relationships between genotypes and the environment. Image-based plant phenomics has progressed rapidly, and three-dimensional (3D) phenotyping is a valuable extension of traditional 2D phenomics. However, the increased data dimensionality poses challenges to feature extraction and phenotyping. In recent decades, deep learning has led to remarkable progress in revolutionizing 3D phenotyping. Therefore, this review highlights the importance of using deep learning in 3D plant phenomics. It systematically overviews the capabilities of deep learning for 3D computer vision, covering 3D representation, classification, detection and tracking, semantic segmentation, instance segmentation, and generation. Additionally, deep learning techniques for 3D point preprocessing (e.g., annotation, downsampling, and dataset organization) and various plant phenotyping tasks are discussed. Finally, the challenges and perspectives associated with deep learning in 3D plant phenomics are summarized, including (1) benchmark dataset construction by using synthetic datasets and methods such as generative artificial intelligence and unsupervised or weakly supervised learning; (2) accurate and efficient 3D point cloud analysis by leveraging multitask learning, lightweight models, and self-supervised learning; and (3) deep learning for 3D plant phenomics by exploring interpretability, extensibility, and multimodal data utilization. The exploration of deep learning in 3D plant phenomics is poised to spur breakthroughs in a new dimension of plant science.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 4","pages":"100107"},"PeriodicalIF":6.4,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13109328/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147779098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Establishment of a high-throughput field defoliation data survey strategy combined with genome-wide association studies to reveal the genetic basis of defoliation in cotton.
Pub Date: 2025-09-17 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100109
Bowei Xu, Le Liu, Rumeng Zhao, Jiajie Yang, Bin Wu, Lili Lu, Xiantao Ai, Jingshan Tian, Fuguang Li, Kai Zheng, Liqiang Fan, Zuoren Yang
Pre-harvest defoliation of cotton is a key agricultural measure for improving mechanical harvesting efficiency and raw cotton purity. Collecting data on cotton defoliation traits for genetic localization, and thus breeding defoliation-prone varieties, is an essential alternative to traditional defoliant spraying; nevertheless, it is hampered by the low throughput and human error of manual field surveys. In this study, a framework for collecting high-throughput defoliation data in large fields was established. Three spectral indices (MTCI, VDVI, and CI) and the leaf area index (LAI) were first screened as core predictors through hierarchical segmentation analysis at three levels: leaf number (LN), leaf number difference (LND), and defoliation rate (DR). Four deep learning architectures (CNN, BiGRU, CNN-BiGRU, and CNN-BiGRU-Attention) were developed, and the CNN-BiGRU-Attention hybrid model demonstrated superior performance at all three levels, with R² values exceeding 0.85. Importantly, the inversion accuracy of this model at the LN and LND levels was superior to that at the DR level, which was also confirmed by the results of a genome-wide association study (GWAS). We combined GWAS and transcriptome results to identify a new gene, GhDR_UAV1, associated with defoliation traits. Overexpression of GhDR_UAV1 significantly promoted wilting of cotton leaves, indicating that GhDR_UAV1 plays a positive regulatory role in cotton defoliation. This study proposed a strategy for inverting cotton defoliation data at three levels by fusing UAV remote sensing data with LAI data through deep learning, and confirmed that LND can provide accurate phenotypic data for GWAS analysis. By integrating high-throughput defoliation phenomics and genomics, it provides a new theoretical basis for the regulation and genetic improvement of cotton defoliation.
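The three spectral indices can be computed per pixel with NumPy; the formulations below are the standard ones (MTCI, VDVI, and the red-edge chlorophyll index), and mapping them onto the paper's specific UAV sensor bands is an assumption.

```python
# Standard formulations of the three screened spectral indices, computed
# per pixel; which sensor bands map to NIR/red-edge/red is assumed here.
import numpy as np

def mtci(nir, red_edge, red):
    """MERIS Terrestrial Chlorophyll Index: (NIR - RE) / (RE - Red)."""
    return (nir - red_edge) / (red_edge - red + 1e-8)

def vdvi(r, g, b):
    """Visible-band Difference Vegetation Index from an RGB mosaic."""
    return (2 * g - r - b) / (2 * g + r + b + 1e-8)

def ci_red_edge(nir, red_edge):
    """Red-edge Chlorophyll Index: NIR / RE - 1."""
    return nir / (red_edge + 1e-8) - 1.0

# Per-plot means of these maps, together with LAI, would form the
# feature sequence fed to the CNN-BiGRU-Attention model.
```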
{"title":"Establishment of a high-throughput field defoliation data survey strategy combined with genome-wide association studies to reveal the genetic basis of defoliation in cotton.","authors":"Bowei Xu, Le Liu, Rumeng Zhao, Jiajie Yang, Bin Wu, Lili Lu, Xiantao Ai, Jingshan Tian, Fuguang Li, Kai Zheng, Liqiang Fan, Zuoren Yang","doi":"10.1016/j.plaphe.2025.100109","DOIUrl":"https://doi.org/10.1016/j.plaphe.2025.100109","url":null,"abstract":"<p><p>Pre-harvest defoliation of cotton is a key agricultural measure to improve mechanical harvesting efficiency and raw cotton purity. Collecting data on cotton defoliation traits for genetic localization and thus breeding defoliation-prone varieties is an essential alternative to traditional defoliant spraying. Nevertheless, it is hampered by low throughput and artificial error in manual field surveys. In this study, a framework for collecting high-throughput defoliation data in large fields was established. Three spectral indices (MTCI, VDVI, CI) and leaf area index (LAI) were first screened as core predictors through hierarchical segmentation analysis in three levels: leaf number (LN), leaf number difference (LND), and defoliation rate (DR). Four deep learning architectures (CNN, BiGRU, CNN-BiGRU, and CNN-BiGRU-Attention) were developed, and the CNN-BiGRU-Attention hybrid model demonstrated superior performance at all three levels, with R<sup>2</sup> values exceeding 0.85. Importantly, the inversion accuracy of this model at the LN and LND levels was superior to that at the DR level, which was also confirmed by the results of the genome-wide association study (GWAS). We combined GWAS and transcriptome results to identify a new gene, <i>GhDR_UAV1</i>, associated with defoliation traits. The overexpression of <i>GhDR_UAV1</i> significantly promoted the wilting of cotton leaves, indicating that <i>GhDR_UAV1</i> plays a positive regulatory role in cotton defoliation. This study proposed a strategy to invert cotton defoliation data at three levels using deep learning fusion of UAV remote sensing data and LAI data and confirmed that LND can provide accurate phenotypic data for GWAS analysis. This study provides a new theoretical basis for cotton defoliation regulation and genetic improvement by integrating cotton high-throughput defoliation phenomics and genomics from an innovative perspective.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 4","pages":"100109"},"PeriodicalIF":6.4,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13109323/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147779120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IPENS: Interactive unsupervised framework for rapid plant phenotyping extraction via NeRF-SAM2 fusion.
Pub Date: 2025-09-15 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100106
Wentao Song, He Huang, Fang Qu, Jiaqi Zhang, Longhui Fang, Yuwei Hao, Chenyang Peng, Youqiang Sun
Advanced plant phenotyping technologies are vital for trait improvement and for accelerating intelligent breeding. Owing to the species diversity of plants, existing methods rely heavily on large-scale, high-precision, manually annotated data, and for self-occluded objects at the grain level, unsupervised methods often prove ineffective. This study proposes IPENS, an interactive unsupervised multi-target point cloud extraction method. It uses radiance field information to lift 2D masks, segmented by SAM2 (Segment Anything Model 2), into 3D space for target point cloud extraction, and a multi-target collaborative optimization strategy addresses the challenge of segmenting multiple targets from a single interaction. On a rice dataset, IPENS achieves a grain-level segmentation mean intersection over union (mIoU) of 63.72 %. For phenotypic trait estimation, it achieves a grain voxel volume coefficient of determination R² = 0.7697 (root mean square error, RMSE = 0.0025), leaf surface area R² = 0.84 (RMSE = 18.93), and leaf length and width prediction accuracies of R² = 0.97 and R² = 0.87 (RMSE = 1.49 and 0.21). On a wheat dataset, IPENS further improves segmentation performance to an mIoU of 89.68 %, with excellent phenotypic estimates: panicle voxel volume R² = 0.9956 (RMSE = 0.0055), leaf surface area R² = 1.00 (RMSE = 0.67), and leaf length and width predictions reaching R² = 0.99 and R² = 0.92 (RMSE = 0.23 and 0.15). Without requiring annotated data, IPENS rapidly extracts grain-level point clouds for multiple targets within 3 min using single-round image interactions. These features make IPENS a high-quality, non-invasive phenotypic extraction solution for rice and wheat, with significant potential to advance intelligent breeding.
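The geometric core of lifting a 2D mask into 3D can be sketched as a pinhole projection test: keep the 3D points whose projections land on mask pixels. IPENS itself works through radiance-field rendering rather than an explicit pinhole model, so the NumPy sketch below only conveys the underlying idea.

```python
# Keep the 3D points whose pinhole projections fall inside a 2D mask;
# a simplified stand-in for mask lifting, not IPENS's implementation.
import numpy as np

def points_in_mask(points, K, R, t, mask):
    """points: (N, 3) world coords; K: (3, 3) intrinsics;
    R, t: world-to-camera rotation/translation; mask: (H, W) bool."""
    cam = points @ R.T + t                       # world -> camera frame
    in_front = cam[:, 2] > 0                     # discard points behind camera
    uv = cam @ K.T                               # homogeneous pixel coords
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = mask.shape
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(points), dtype=bool)
    keep[valid] = mask[v[valid], u[valid]]       # test against SAM2 mask
    return points[keep]                          # the target's point cloud
```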
{"title":"IPENS: Interactive unsupervised framework for rapid plant phenotyping extraction via NeRF-SAM2 fusion.","authors":"Wentao Song, He Huang, Fang Qu, Jiaqi Zhang, Longhui Fang, Yuwei Hao, Chenyang Peng, Youqiang Sun","doi":"10.1016/j.plaphe.2025.100106","DOIUrl":"https://doi.org/10.1016/j.plaphe.2025.100106","url":null,"abstract":"<p><p>Advanced plant phenotyping technologies are vital for trait improvement and accelerating intelligent breeding. Due to the species diversity of plants, existing methods heavily rely on large-scale high-precision manually annotated data. For self-occluded objects at the grain level, unsupervised methods often prove ineffective. This study proposes IPENS, an interactive unsupervised multi-target point cloud extraction method. It utilizes radiance field information to lift 2D masks, segmented by SAM2 (Segment Anything Model 2), into 3D space for target point cloud extraction. A multi-target collaborative optimization strategy addresses the challenge of segmenting multiple targets from a single interaction. On a rice dataset, IPENS achieves a grain-level segmentation mean Intersection over Union (mIoU) of 63.72 %. For phenotypic trait estimation, it achieves a grain voxel volume coefficient of determination <i>R</i> <sup>2</sup> = 0.7697 (Root Mean Square Error, RMSE = 0.0025), leaf surface area <i>R</i> <sup>2</sup> = 0.84 (RMSE = 18.93), and leaf length and width prediction accuracies of <i>R</i> <sup>2</sup> = 0.97 and <i>R</i> <sup>2</sup> = 0.87 (RMSE = 1.49 and 0.21). On a wheat dataset, IPENS further improves segmentation performance to a mIoU of 89.68 %, with exceptional phenotypic estimation results: panicle voxel volume <i>R</i> <sup>2</sup> = 0.9956 (RMSE = 0.0055), leaf surface area <i>R</i> <sup>2</sup> = 1.00 (RMSE = 0.67), and leaf length and width predictions reaching <i>R</i> <sup>2</sup> = 0.99 and <i>R</i> <sup>2</sup> = 0.92 (RMSE = 0.23 and 0.15). Without requiring annotated data, IPENS rapidly extracts grain-level point clouds for multiple targets within 3 min using single-round image interactions. These features make IPENS a high-quality, non-invasive phenotypic extraction solution for rice and wheat, offering significant potential to enhance intelligent breeding.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 4","pages":"100106"},"PeriodicalIF":6.4,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13109306/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147779234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From leaf to canopy: Inversion of lettuce pigment distribution using hyperspectral imaging technology combined with deep learning algorithms.
Pub Date: 2025-09-12 | DOI: 10.1016/j.plaphe.2025.100104
Yue Zhao, Jiangchuan Fan, Xianju Lu, Ying Zhang, Weiliang Wen, Guanmin Huang, Yinglun Li, Xinyu Guo, Liping Chen
Plant pigment content is a crucial indicator for assessing photosynthetic efficiency, nutritional status, and physiological health, and its spatial distribution is significantly influenced by variety, location, and environmental factors. However, existing methods for measuring pigment content are often destructive, inefficient, and costly, making them unsuitable for modern precision agriculture. This study proposes a cross-scale, non-destructive detection method for lettuce pigments that integrates hyperspectral imaging (HSI) with deep learning, addressing the limitations of existing techniques in high-throughput and spatially resolved analysis. We built a multidimensional dataset from eight types of lettuce and developed a deep learning model named LPCNet to predict the contents of chlorophyll a (Chl a), chlorophyll b (Chl b), carotenoids (Car), and total pigment content (TPC). LPCNet integrates convolutional neural networks (CNN), bidirectional long short-term memory networks (BiLSTM), and multi-head self-attention (MHSA), enabling automatic extraction of pigment-related key features and simplifying the complex preprocessing and feature selection required by traditional machine learning. Compared with multivariate machine learning methods, LPCNet demonstrated superior predictive accuracy, with prediction-set coefficients of determination (R_P²) of 0.9449, 0.8613, 0.9121, and 0.8476 for Chl a, Chl b, Car, and TPC, respectively. Additionally, by combining the hyperspectral reflectance of lettuce canopies with the leaf-level inversion model, we visualized the spatial distribution of pigment content across the canopy, achieving cross-scale analysis from leaf to canopy. This study provides an innovative approach for the rapid and accurate assessment of lettuce pigment content and an effective visualization tool for revealing the physiological processes and growth development of lettuce.
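A compact stand-in for the CNN → BiLSTM → MHSA layout named in the abstract can be assembled in PyTorch; channel sizes, depths, and the band count below are assumptions, since LPCNet's hyperparameters are not given here.

```python
# Sketch of an LPCNet-like layout (CNN -> BiLSTM -> multi-head
# self-attention -> regressor); all hyperparameters are assumptions.
import torch
import torch.nn as nn

class PigmentNet(nn.Module):
    def __init__(self, n_bands: int, n_targets: int = 4):
        super().__init__()
        # 1D conv over the spectral axis extracts local band patterns.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(128, num_heads=4, batch_first=True)
        self.head = nn.Linear(128, n_targets)   # Chl a, Chl b, Car, TPC

    def forward(self, spectra):                 # spectra: (B, n_bands)
        x = self.cnn(spectra.unsqueeze(1))      # (B, 64, n_bands // 2)
        x, _ = self.bilstm(x.transpose(1, 2))   # (B, L, 128)
        x, _ = self.attn(x, x, x)               # self-attention over bands
        return self.head(x.mean(dim=1))         # pool, then regress

model = PigmentNet(n_bands=204)                 # band count is hypothetical
out = model(torch.randn(8, 204))                # (8, 4) pigment predictions
```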
{"title":"From leaf to canopy: Inversion of lettuce pigment distribution using hyperspectral imaging technology combined with deep learning algorithms.","authors":"Yue Zhao, Jiangchuan Fan, Xianju Lu, Ying Zhang, Weiliang Wen, Guanmin Huang, Yinglun Li, Xinyu Guo, Liping Chen","doi":"10.1016/j.plaphe.2025.100104","DOIUrl":"https://doi.org/10.1016/j.plaphe.2025.100104","url":null,"abstract":"<p><p>Plant pigment content is a crucial indicator for assessing photosynthetic efficiency, nutritional status, and physiological health. Its spatial distribution is significantly influenced by variety, location, and environmental factors. However, existing methods for measuring pigment content are often destructive, inefficient, and costly, making them unsuitable for the demands of modern precision agriculture. This study proposes a cross-scale, non-destructive detection method for lettuce pigments by integrating hyperspectral imaging (HSI) technology with deep learning algorithms, addressing the limitations of existing techniques in high-throughput and spatial resolution analysis. In this study, we built a multidimensional dataset based on eight different types of lettuce and developed a deep learning model named LPCNet to predict the contents of chlorophyll <i>a</i> (Chl a), chlorophyll <i>b</i> (Chl b), carotenoids (Car), and total pigment content (TPC) in lettuce. The LPCNet model integrates convolutional neural networks (CNN), bidirectional long short-term memory networks (BiLSTM), and multi-head self-attention (MHSA) mechanisms, enabling automatic extraction of pigment-related key features and simplifying the complex preprocessing and feature selection procedures required in traditional machine learning. Compared to multivariate analysis methods in machine learning, LPCNet demonstrated superior predictive accuracy, with coefficients of determination ( <math> <mrow><msubsup><mi>R</mi> <mi>P</mi> <mn>2</mn></msubsup> </mrow> </math> ) of 0.9449, 0.8613, 0.9121, and 0.8476 for Chl a, Chl b, Car, and TPC, respectively. Additionally, by combining the hyperspectral reflectance of lettuce canopies with the leaf-level inversion model, we visualized the spatial distribution of pigment content on the canopy of lettuce, achieving cross-scale analysis from leaf to canopy. This study provides an innovative approach for the rapid and accurate assessment of lettuce pigment content and offers an effective visualization tool for revealing the physiological processes and growth development of lettuce.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 4","pages":"100104"},"PeriodicalIF":6.4,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13109311/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147779160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LenRuler: a rice-centric method for automated radicle length measurement with multicrop validation.
Pub Date: 2025-09-08 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100103
Jinfeng Zhao, Zeyu Hou, Hua Hua, Qianlong Nie, Yuqian Pang, Yan Ma, Xuehui Huang
Radicle length is a critical indicator of seed vigor, germination capacity, and seedling growth potential. However, existing measurement methods face challenges in automation, efficiency, and generalizability, often requiring manual intervention or re-annotation for different seed types. To address these limitations, this paper proposes an automated method, LenRuler, with a primary focus on rice seeds and validation in multiple crops. The method leverages the Segment Anything Model (SAM) as the foundational segmentation model and employs a coarse-to-fine segmentation strategy combined with Gaussian-based classification to automatically generate bounding boxes and centroids, which are then fed into SAM for precise segmentation of the seed coat and radicle. The radicle length is subsequently computed by converting the geodesic distance between the radicle skeleton's farthest endpoint and its nearest intersection with the seed coat skeleton into the true length. Experiments on the Riceseed1 dataset show that the proposed method achieves a Dice coefficient of 0.955 and a pixel accuracy of 0.944, demonstrating excellent segmentation performance. Radicle length measurements on the Riceseed2 test set yield a mean absolute error (MAE) of 0.273 mm and a coefficient of determination (R²) of 0.982, confirming the method's high precision for rice. On the Otherseed dataset, the predicted radicle lengths for maize (Zea mays), pearl millet (Pennisetum glaucum), and rye (Secale cereale) are consistent with the observed radicle length distributions, demonstrating strong cross-species performance. These results establish LenRuler as an accurate and automated solution for radicle length measurement in rice, with validated applicability to other crop species.
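The geodesic-length step can be sketched as a shortest-path search over the binary skeleton, with diagonal moves costing sqrt(2) and a final pixel-to-millimetre conversion; the function below is a plain Dijkstra over skeleton pixels and is not LenRuler's implementation.

```python
# Geodesic distance along a binary skeleton between two pixels, as a
# Dijkstra over 8-connected skeleton pixels; a sketch of the length step.
import heapq
import numpy as np
from skimage.morphology import skeletonize

def geodesic_length_px(mask, start, end):
    """Shortest path length (in pixels) along skeletonize(mask) between
    two (row, col) skeleton pixels; diagonal steps cost sqrt(2)."""
    skel = skeletonize(mask.astype(bool))
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            return d
        if d > dist.get((r, c), np.inf):
            continue                       # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < skel.shape[0] \
                        and 0 <= nc < skel.shape[1] and skel[nr, nc]:
                    nd = d + np.hypot(dr, dc)
                    if nd < dist.get((nr, nc), np.inf):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
    return np.inf                          # end not reachable on skeleton

# radicle_mm = geodesic_length_px(radicle_mask, tip_px, junction_px) * mm_per_px
```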
{"title":"LenRuler: a rice-centric method for automated radicle length measurement with multicrop validation.","authors":"Jinfeng Zhao, Zeyu Hou, Hua Hua, Qianlong Nie, Yuqian Pang, Yan Ma, Xuehui Huang","doi":"10.1016/j.plaphe.2025.100103","DOIUrl":"10.1016/j.plaphe.2025.100103","url":null,"abstract":"<p><p>Radicle length is a critical indicator of seed vigor, germination capacity, and seedling growth potential. However, existing measurement methods face challenges in automation, efficiency, and generalizability, often requiring manual intervention or re-annotation for different seed types. To address these limitations, this paper proposes an automated method, LenRuler, with a primary focus on rice seeds and validation in multiple crops. The method leverages the Segment Anything Model (SAM) as the foundational segmentation model and employs a coarse-to-fine segmentation strategy combined with Gaussian-based classification to automatically generate bounding boxes and centroids, which are then fed into SAM for precise segmentation of the seed coat and radicle. The radicle length is subsequently computed by converting the geodesic distance between the radicle skeleton's farthest endpoint and its nearest intersection with the seed coat skeleton into the true length. Experiments on the Riceseed1 dataset show that the proposed method achieves a Dice coefficient of 0.955 and a Pixel Accuracy of 0.944, demonstrating excellent segmentation performance. Radicle length measurement experiments on the Riceseed2 test set show that the Mean Absolute Error (MAE) was 0.273 mm and the coefficient of determination (R<sup>2</sup>) was 0.982, confirming the method's high precision for rice. On the Otherseed dataset, the predicted radicle lengths for maize (<i>Zea mays</i>), pearl millet (<i>Pennisetum glaucum</i>), and rye (<i>Secale cereale</i>) are consistent with the observed radicle length distributions, demonstrating strong cross-species performance. These results establish LenRuler as an accurate and automated solution for radicle length measurement in rice, with validated applicability to other crop species.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"7 3","pages":"100103"},"PeriodicalIF":6.4,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12710053/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145782257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}