
Latest Articles in Plant Phenomics

SegPPD-FS: Segmenting plant pests and diseases in the wild using few-shot learning.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-09-30 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100121
Zihan Ge, Xijian Fan, Jingcheng Zhang, Shichao Jin

Accurate segmentation of areas affected by pests and diseases is essential for precisely assessing the severity and spread of infections, thereby facilitating the development of effective management and intervention strategies. Obtaining high-quality pixel-level annotations for training deep learning models in agricultural environments poses considerable challenges. To overcome this limitation, the present work introduces a novel semantic segmentation approach (SegPPD-FS) that employs few-shot learning techniques to reduce annotation demands while effectively segmenting plant pests and diseases. The proposed SegPPD-FS comprises two key components: the similarity feature enhancement module (SFEM) and the hierarchical prior knowledge injection module (HPKIM). The SFEM refines foreground targets by employing a lightweight attention mechanism to mitigate irrelevant background interference in natural images and further enhances the discriminative capability of query features. The HPKIM is designed to address the difficulties associated with identifying pests and diseases that vary widely in terms of shape and size within field images, which is achieved through a hierarchical integration of multiscale contextual data into the query feature representations. In addition, this study constructed and publicly released a high-quality few-shot semantic segmentation (FSS) dataset that included 101 distinct categories of plant pests and diseases, which supports further research on the precise monitoring of plant health issues. The experimental results demonstrate that the proposed method achieves mIoU values of 71.19% and 71.58% with the 1-shot and 2-shot settings, respectively, on the released dataset. This performance surpasses that of other FSS techniques, such as SegGPT and PerSAM, providing a promising and label-efficient solution for pest and disease monitoring. The collected dataset, which focuses on plant pests and diseases, has been publicly released at https://doi.org/10.5281/zenodo.15114159, providing a valuable resource for evaluating various FSS techniques.
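
The headline metric above is mean intersection over union (mIoU). A minimal sketch of how such a score can be computed over binary segmentation masks, using toy NumPy arrays rather than the SegPPD-FS code:

import numpy as np

def binary_iou(pred, gt):
    """IoU between two boolean masks of identical shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def mean_iou(pairs):
    """Mean IoU over a list of (prediction, ground-truth) mask pairs."""
    return float(np.mean([binary_iou(p, g) for p, g in pairs]))

rng = np.random.default_rng(0)
pairs = [(rng.random((64, 64)) > 0.5, rng.random((64, 64)) > 0.5) for _ in range(4)]
print(f"mIoU = {mean_iou(pairs):.4f}")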

Citations: 0
Three-dimensional reconstruction of densely planted rice seedlings based on MultiView images.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-09-30 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100122
Zhigang Zhang, Liwei Wang, Weiqi Ren, Shoutian Dong, Shaowen Liu, Haoran Xu, Yubo Yang, Rui Gao, Zhongbin Su

Three-dimensional (3D) seedling reconstruction technology can provide critical technical support for monitoring plant growth, phenotyping high-throughput plants, and conducting precision agriculture. However, multiview image-based reconstruction methods, which rely on image registration and feature matching, are susceptible to issues such as similar textures and viewpoint differences, leading to matching errors and the loss of key structural information. This can result in local deficiencies and reduced accuracy in the reconstructed models. Therefore, to attain improved reconstruction accuracy under low-cost constraints, deep learning-based feature extraction and matching methods are employed in this study: the SuperPoint network is utilized to increase the robustness of the feature point detection and description processes, and the LightGlue algorithm is introduced to improve the accuracy and stability of matching. Additionally, to reduce the impact of shooting and platform jitter on image quality, a dedicated plant 3D reconstruction platform is designed and constructed, and a dataset of densely planted rice seedlings under light stress conditions is collected, comprising three factors (light quality, light quantity, and the photoperiod) × three levels, totaling nine groups. Experimental results show that the proposed method achieves optimal performance in terms of its point cloud completeness and reprojection error. The phenotypic parameters (e.g., plant height) extracted from the reconstruction data are strongly correlated with the actual measurements (R2 = 0.989, RMSE = 4.54 mm), validating the potential of the proposed method for applications related to simulating plant growth processes, analyzing the effects of environmental factors (e.g., light), and optimizing crop cultivation schemes.
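
The agreement statistics reported above (R2 and RMSE between reconstructed and manually measured traits) can be computed as in the following sketch; the height values are invented placeholders, not data from the paper:

import numpy as np

def r2_and_rmse(measured, predicted):
    """Coefficient of determination and root-mean-square error."""
    residuals = measured - predicted
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((measured - measured.mean()) ** 2))
    return 1.0 - ss_res / ss_tot, rmse

measured_mm = np.array([112.0, 98.5, 120.3, 105.7, 99.1])   # hypothetical manual heights
predicted_mm = np.array([110.2, 99.0, 118.9, 107.1, 98.0])  # heights from the point cloud
r2, rmse = r2_and_rmse(measured_mm, predicted_mm)
print(f"R2 = {r2:.3f}, RMSE = {rmse:.2f} mm")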

Citations: 0
High-throughput plant phenotyping identifies and discriminates biotic and abiotic stresses in tomato.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-09-30 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100124
Maria Isabella Prigigallo, Giovanni Bubici, Giorgia Batelli, Antonello Costa, Monica De Palma, Maria Teresa Melillo, Angelo Petrozza, Alessandra Ruggiero, Giorgia Sportelli, Stephan Summerer, Pasqua Veronico, Francesco Cellini, Marina Tucci, Livia Stavolone, Stefania Grillo, Fabrizio Cillo

In the context of precision agriculture, high-throughput phenotyping (HTP) aims to rapidly and effectively identify factors that affect crop yield, enabling timely and appropriate interventions. However, interpreting data from HTP remains challenging. We performed a proximal red-green-blue (RGB)-based HTP on several tomato genotypes exposed to abiotic stress (drought) or biotic stress induced by tomato spotted wilt virus (TSWV), Pseudopyrenochaeta lycopersici (corky root rot; CRR), or Meloidogyne incognita (root-knot nematode; RKN). We aimed to determine if RGB-based HTP is effectively able to: a) distinguish the effects of biotic from abiotic stress; b) differentiate resistant/tolerant from susceptible genotypes. Our HTP data analysis produced 12 morphometric and eight colorimetric indices. Principal Component Analysis (PCA; P < 0.0001; 83% variation explained by three PCs) showed that factors such as shoot area solidity and certain color-based indices, including the senescence index and green area, effectively differentiated biotic from abiotic stress. Morphometric parameters, including plant height, projected shoot area, and convex hull area, proved to be applicable for identifying the stress status regardless of the type of stress. HTP effectively distinguished the genotype resistant to TSWV from the susceptible ones. This task was more challenging for below-ground stresses like CRR and RKN. Different profiles of HTP indices were observed among the genotypes assayed for drought tolerance, indicating variability in their ability to withstand drought conditions. In conclusion, our findings highlight the value of RGB-based HTP as a tool for precision farming of tomatoes, enabling the identification of both biotic and abiotic stressors.
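
A minimal sketch of the kind of PCA step described above, applied to a per-plant matrix of HTP indices (random placeholder data, standardized before decomposition; not the authors' pipeline):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))              # 60 plants x 20 HTP indices (hypothetical)
X_std = StandardScaler().fit_transform(X)  # indices have different units, so standardize first

pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)          # per-plant coordinates on PC1-PC3
print("explained variance ratio:", pca.explained_variance_ratio_)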

Citations: 0
Design of a binocular multispectral stereo imaging system and its application in plant phenotyping.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-09-20 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100105
Wenxiu Wan, Zhiyuan Liu, Ziru Yu, Jiahui Wang, Xiangyang Yu

With the continuous progress in micro-optical machine technology, miniature spectral imaging devices have been rapidly developed; however, three-dimensional (3D) imaging measurement technology has become increasingly mature and widely used. The evolution of these technologies has established a robust foundation for the integration of three-dimensional imaging and spectral information. To achieve accurate alignment between 3D data and spectral information to obtain a more comprehensive spectral representation of objects in 3D space, we developed a binocular multispectral stereo imaging (BMSI) system. This system acquires images in synchrony with a binocular multispectral imager, thereby ensuring accurate alignment between 3D data and spectral data at the pixel level and facilitating the construction of a four-dimensional (4D) dataset. The segmentation of leaf regions from shadow backgrounds in two distinct plant species was achieved through optimal band fusion and hue-saturation value (HSV) color space transformation, significantly improving the segmentation accuracy, processing efficiency, and robustness across different plant species. A systematic evaluation was conducted to quantify the reconstruction precision and system stability at different measurement distances. The designed system acquired 4D image spectral data with plants as the objects to be tested. The distribution characteristics of chlorophyll (Chl) on the 3D surface of plants were obtained by first-order derivatives of the spectral data and the normalized difference red edge (NDRE) index. This technique provides a new means for plant phenotyping research and a more effective technical approach for the digitalization and precision monitoring of the agricultural industry.
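
Two of the quantities mentioned above, the NDRE index and a first-order derivative of a reflectance spectrum, can be sketched as follows; the band values and the synthetic spectrum are assumptions for illustration only:

import numpy as np

def ndre(nir, red_edge):
    """Normalized difference red edge index, computed per pixel."""
    return (nir - red_edge) / (nir + red_edge + 1e-9)

wavelengths = np.linspace(450, 900, 200)                      # nm, hypothetical sampling
reflectance = 0.4 / (1 + np.exp(-(wavelengths - 710) / 15))   # synthetic red-edge curve
first_derivative = np.gradient(reflectance, wavelengths)      # dR/dlambda

nir_band = np.full((4, 4), 0.45)        # toy 4x4 band images
red_edge_band = np.full((4, 4), 0.30)
print(f"peak dR/dlambda = {first_derivative.max():.4f} per nm")
print(f"NDRE (toy bands) = {ndre(nir_band, red_edge_band)[0, 0]:.3f}")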

Citations: 0
LeafGen: Structure-aware Leaf Image Generation for Annotation-free Leaf Instance Segmentation. LeafGen:结构感知的叶片图像生成,用于无注释的叶片实例分割。
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-09-20 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100092
Naoki Asada, Xinpeng Liu, Kanyu Xu, Ryohei Miyakawa, Yang Yang, Hiroaki Santo, Yosuke Toda, Fumio Okura

Instance segmentation of plant leaves plays a crucial role in plant phenotyping, leveraging the rapid advancements in neural network research. A significant challenge in leaf instance segmentation lies in the preparation of training datasets, which typically require manual annotations comprising numerous pairs of ground-truth masks and corresponding plant photographs. Recently, segmentation models pre-trained on large-scale datasets, e.g., Segment Anything, have enabled training-free (i.e., zero-shot) instance segmentation accessible to the public. However, applying these models to leaf segmentation often yields unsatisfactory results, as the training datasets for these foundation models may lack sufficient plant imagery to accurately segment leaves exhibiting heavy occlusions and similar textures. To address this issue, we propose a fully automatic method for generating training datasets for leaf instance segmentation, combining an off-the-shelf zero-shot model with structure-aware image generation. Specifically, given a set of plant images and an L-system growth rule representing the structural pattern of the target plant, the proposed method automatically produces an arbitrary number of instance mask and photorealistic plant image pairs, eliminating the need for manual annotation. To maximize usability, we also provide a GUI front-end that integrates the entire pipeline of our method. Experiments on Arabidopsis, Komatsuna, and Rhaphiloepsis plants demonstrate that our method achieves more accurate segmentation compared to state-of-the-art zero-shot models, attaining AP@50 scores of 74.8, 76.0, and 88.2 for leaf instance segmentation of Arabidopsis, Komatsuna, and Rhaphiloepsis, respectively, without any manual annotation.
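
A toy L-system expansion of the kind the method takes as input (a bracketed string-rewriting growth rule); the axiom and rules below are invented for illustration and are not those used by LeafGen:

def expand(axiom, rules, iterations):
    """Repeatedly rewrite each symbol of the string according to the rule table."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "A[L]A"}           # 'A': growing axis, 'L': a leaf, '[' and ']': a branch
print(expand("A", rules, 3))     # nested bracketed string encoding a toy rosette structure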

Citations: 0
Deep learning for three-dimensional (3D) plant phenomics.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-09-18 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100107
Shichao Jin, Dawei Li, Ting Yun, Jianling Tang, Ke Wang, Shaochen Li, Hongyi Yang, Si Yang, Shan Xu, Lin Cao, Haifeng Xia, Qinghua Guo, Yu Zhang, Dong Jiang, Yanfeng Ding

Plant phenomics, the comprehensive study of plant phenotypes, has gained prominence as a vital tool for understanding the intricate relationships between genotypes and the environment. Image-based plant phenomics has progressed rapidly, and three-dimensional (3D) phenotyping is a valuable extension of traditional 2D phenomics. However, the increased data dimensionality poses challenges to feature extraction and phenotyping. In recent decades, deep learning has led to remarkable progress in revolutionizing 3D phenotyping. Therefore, this review highlights the importance of using deep learning in 3D plant phenomics. It systematically overviews the capabilities of deep learning for 3D computer vision, covering 3D representation, classification, detection and tracking, semantic segmentation, instance segmentation, and generation. Additionally, deep learning techniques for 3D point preprocessing (e.g., annotation, downsampling, and dataset organization) and various plant phenotyping tasks are discussed. Finally, the challenges and perspectives associated with deep learning in 3D plant phenomics are summarized, including (1) benchmark dataset construction by using synthetic datasets and methods such as generative artificial intelligence and unsupervised or weakly supervised learning; (2) accurate and efficient 3D point cloud analysis by leveraging multitask learning, lightweight models, and self-supervised learning; and (3) deep learning for 3D plant phenomics by exploring interpretability, extensibility, and multimodal data utilization. The exploration of deep learning in 3D plant phenomics is poised to spur breakthroughs in a new dimension of plant science.
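
One preprocessing step the review covers, downsampling of a 3D point cloud, can be sketched as a voxel-grid average in plain Python (real pipelines typically rely on dedicated libraries such as Open3D; this is only a minimal illustration):

import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one centroid per occupied voxel of an (N, 3) point cloud."""
    buckets = {}
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])

cloud = np.random.default_rng(2).uniform(0, 1, size=(10000, 3))
print(voxel_downsample(cloud, 0.1).shape)   # far fewer points, one per occupied voxel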

Citations: 0
Establishment of a high-throughput field defoliation data survey strategy combined with genome-wide association studies to reveal the genetic basis of defoliation in cotton.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-09-17 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100109
Bowei Xu, Le Liu, Rumeng Zhao, Jiajie Yang, Bin Wu, Lili Lu, Xiantao Ai, Jingshan Tian, Fuguang Li, Kai Zheng, Liqiang Fan, Zuoren Yang

Pre-harvest defoliation of cotton is a key agricultural measure to improve mechanical harvesting efficiency and raw cotton purity. Collecting data on cotton defoliation traits for genetic localization and thus breeding defoliation-prone varieties is an essential alternative to traditional defoliant spraying. Nevertheless, it is hampered by low throughput and artificial error in manual field surveys. In this study, a framework for collecting high-throughput defoliation data in large fields was established. Three spectral indices (MTCI, VDVI, CI) and leaf area index (LAI) were first screened as core predictors through hierarchical segmentation analysis in three levels: leaf number (LN), leaf number difference (LND), and defoliation rate (DR). Four deep learning architectures (CNN, BiGRU, CNN-BiGRU, and CNN-BiGRU-Attention) were developed, and the CNN-BiGRU-Attention hybrid model demonstrated superior performance at all three levels, with R2 values exceeding 0.85. Importantly, the inversion accuracy of this model at the LN and LND levels was superior to that at the DR level, which was also confirmed by the results of the genome-wide association study (GWAS). We combined GWAS and transcriptome results to identify a new gene, GhDR_UAV1, associated with defoliation traits. The overexpression of GhDR_UAV1 significantly promoted the wilting of cotton leaves, indicating that GhDR_UAV1 plays a positive regulatory role in cotton defoliation. This study proposed a strategy to invert cotton defoliation data at three levels using deep learning fusion of UAV remote sensing data and LAI data and confirmed that LND can provide accurate phenotypic data for GWAS analysis. This study provides a new theoretical basis for cotton defoliation regulation and genetic improvement by integrating cotton high-throughput defoliation phenomics and genomics from an innovative perspective.
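
The three survey levels named above (LN, LND, DR) reduce to simple arithmetic, and the named spectral indices have widely used closed forms; the formulas and numbers below follow common definitions and are illustrative only, not taken from the paper:

import numpy as np

def defoliation_rate(ln_before, ln_after):
    """DR as the fraction of leaves shed between two survey dates."""
    return (ln_before - ln_after) / ln_before

def mtci(nir, red_edge, red):
    """MERIS terrestrial chlorophyll index (common form)."""
    return (nir - red_edge) / (red_edge - red)

def vdvi(r, g, b):
    """Visible-band difference vegetation index (common form)."""
    return (2 * g - r - b) / (2 * g + r + b)

ln_before, ln_after = 42, 9   # hypothetical leaf counts for one plant
print(f"LND = {ln_before - ln_after}, DR = {defoliation_rate(ln_before, ln_after):.2f}")
print(f"MTCI = {mtci(0.52, 0.30, 0.08):.2f}, VDVI = {vdvi(0.21, 0.35, 0.18):.2f}")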

Citations: 0
IPENS: Interactive unsupervised framework for rapid plant phenotyping extraction via NeRF-SAM2 fusion.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-09-15 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100106
Wentao Song, He Huang, Fang Qu, Jiaqi Zhang, Longhui Fang, Yuwei Hao, Chenyang Peng, Youqiang Sun

Advanced plant phenotyping technologies are vital for trait improvement and accelerating intelligent breeding. Due to the species diversity of plants, existing methods heavily rely on large-scale high-precision manually annotated data. For self-occluded objects at the grain level, unsupervised methods often prove ineffective. This study proposes IPENS, an interactive unsupervised multi-target point cloud extraction method. It utilizes radiance field information to lift 2D masks, segmented by SAM2 (Segment Anything Model 2), into 3D space for target point cloud extraction. A multi-target collaborative optimization strategy addresses the challenge of segmenting multiple targets from a single interaction. On a rice dataset, IPENS achieves a grain-level segmentation mean Intersection over Union (mIoU) of 63.72%. For phenotypic trait estimation, it achieves a grain voxel volume coefficient of determination R2 = 0.7697 (Root Mean Square Error, RMSE = 0.0025), leaf surface area R2 = 0.84 (RMSE = 18.93), and leaf length and width prediction accuracies of R2 = 0.97 and R2 = 0.87 (RMSE = 1.49 and 0.21). On a wheat dataset, IPENS further improves segmentation performance to a mIoU of 89.68%, with exceptional phenotypic estimation results: panicle voxel volume R2 = 0.9956 (RMSE = 0.0055), leaf surface area R2 = 1.00 (RMSE = 0.67), and leaf length and width predictions reaching R2 = 0.99 and R2 = 0.92 (RMSE = 0.23 and 0.15). Without requiring annotated data, IPENS rapidly extracts grain-level point clouds for multiple targets within 3 min using single-round image interactions. These features make IPENS a high-quality, non-invasive phenotypic extraction solution for rice and wheat, offering significant potential to enhance intelligent breeding.
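
A voxel-occupancy estimate of organ volume, one of the traits reported above, can be sketched as follows (a generic approach with toy points; not the IPENS implementation):

import numpy as np

def voxel_volume(points, voxel_size):
    """Approximate organ volume as (#occupied voxels) * volume of one voxel."""
    keys = np.unique(np.floor(points / voxel_size).astype(np.int64), axis=0)
    return keys.shape[0] * voxel_size ** 3

grain = np.random.default_rng(3).normal(scale=2.0, size=(5000, 3))  # toy segmented point cloud, mm
print(f"approx. volume = {voxel_volume(grain, 0.5):.1f} mm^3")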

Citations: 0
From leaf to canopy: Inversion of lettuce pigment distribution using hyperspectral imaging technology combined with deep learning algorithms.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-09-12 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.plaphe.2025.100104
Yue Zhao, Jiangchuan Fan, Xianju Lu, Ying Zhang, Weiliang Wen, Guanmin Huang, Yinglun Li, Xinyu Guo, Liping Chen

Plant pigment content is a crucial indicator for assessing photosynthetic efficiency, nutritional status, and physiological health. Its spatial distribution is significantly influenced by variety, location, and environmental factors. However, existing methods for measuring pigment content are often destructive, inefficient, and costly, making them unsuitable for the demands of modern precision agriculture. This study proposes a cross-scale, non-destructive detection method for lettuce pigments by integrating hyperspectral imaging (HSI) technology with deep learning algorithms, addressing the limitations of existing techniques in high-throughput and spatial resolution analysis. In this study, we built a multidimensional dataset based on eight different types of lettuce and developed a deep learning model named LPCNet to predict the contents of chlorophyll a (Chl a), chlorophyll b (Chl b), carotenoids (Car), and total pigment content (TPC) in lettuce. The LPCNet model integrates convolutional neural networks (CNN), bidirectional long short-term memory networks (BiLSTM), and multi-head self-attention (MHSA) mechanisms, enabling automatic extraction of pigment-related key features and simplifying the complex preprocessing and feature selection procedures required in traditional machine learning. Compared to multivariate analysis methods in machine learning, LPCNet demonstrated superior predictive accuracy, with coefficients of determination (RP2) of 0.9449, 0.8613, 0.9121, and 0.8476 for Chl a, Chl b, Car, and TPC, respectively. Additionally, by combining the hyperspectral reflectance of lettuce canopies with the leaf-level inversion model, we visualized the spatial distribution of pigment content on the canopy of lettuce, achieving cross-scale analysis from leaf to canopy. This study provides an innovative approach for the rapid and accurate assessment of lettuce pigment content and offers an effective visualization tool for revealing the physiological processes and growth development of lettuce.
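
A schematic PyTorch sketch of the architecture family named above (a 1D CNN feeding a BiLSTM and multi-head self-attention, with a four-output regression head); the layer sizes and band count are placeholders, and this is not the published LPCNet:

import torch
import torch.nn as nn

class PigmentNetSketch(nn.Module):
    def __init__(self, n_bands=204, n_targets=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(input_size=16, hidden_size=32,
                              batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
        self.head = nn.Linear(64, n_targets)

    def forward(self, spectra):                 # spectra: (batch, n_bands)
        x = self.conv(spectra.unsqueeze(1))     # (batch, 16, n_bands // 2)
        x = x.transpose(1, 2)                   # (batch, seq, 16)
        x, _ = self.bilstm(x)                   # (batch, seq, 64)
        x, _ = self.attn(x, x, x)               # self-attention over the band sequence
        return self.head(x.mean(dim=1))         # (batch, 4): Chl a, Chl b, Car, TPC

model = PigmentNetSketch()
print(model(torch.randn(8, 204)).shape)         # torch.Size([8, 4])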

Citations: 0
LenRuler: a rice-centric method for automated radicle length measurement with multicrop validation.
IF 6.4 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRONOMY | Pub Date: 2025-09-08 | eCollection Date: 2025-09-01 | DOI: 10.1016/j.plaphe.2025.100103
Jinfeng Zhao, Zeyu Hou, Hua Hua, Qianlong Nie, Yuqian Pang, Yan Ma, Xuehui Huang

Radicle length is a critical indicator of seed vigor, germination capacity, and seedling growth potential. However, existing measurement methods face challenges in automation, efficiency, and generalizability, often requiring manual intervention or re-annotation for different seed types. To address these limitations, this paper proposes an automated method, LenRuler, with a primary focus on rice seeds and validation in multiple crops. The method leverages the Segment Anything Model (SAM) as the foundational segmentation model and employs a coarse-to-fine segmentation strategy combined with Gaussian-based classification to automatically generate bounding boxes and centroids, which are then fed into SAM for precise segmentation of the seed coat and radicle. The radicle length is subsequently computed by converting the geodesic distance between the radicle skeleton's farthest endpoint and its nearest intersection with the seed coat skeleton into the true length. Experiments on the Riceseed1 dataset show that the proposed method achieves a Dice coefficient of 0.955 and a Pixel Accuracy of 0.944, demonstrating excellent segmentation performance. Radicle length measurement experiments on the Riceseed2 test set show that the Mean Absolute Error (MAE) was 0.273 mm and the coefficient of determination (R2) was 0.982, confirming the method's high precision for rice. On the Otherseed dataset, the predicted radicle lengths for maize (Zea mays), pearl millet (Pennisetum glaucum), and rye (Secale cereale) are consistent with the observed radicle length distributions, demonstrating strong cross-species performance. These results establish LenRuler as an accurate and automated solution for radicle length measurement in rice, with validated applicability to other crop species.
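
The final conversion step described above, from a path along the radicle skeleton in pixels to a physical length, can be sketched as follows; the path coordinates and scale are invented, and skeleton extraction itself is omitted:

import numpy as np

def path_length_mm(path_rc, mm_per_pixel):
    """Sum Euclidean steps between consecutive skeleton pixels, then rescale to mm."""
    steps = np.diff(path_rc.astype(float), axis=0)
    return float(np.linalg.norm(steps, axis=1).sum() * mm_per_pixel)

# Hypothetical ordered (row, col) skeleton pixels from the seed-coat junction to the tip.
path = np.array([[10, 10], [11, 11], [12, 12], [13, 12], [14, 13], [15, 14]])
print(f"radicle length = {path_length_mm(path, mm_per_pixel=0.05):.3f} mm")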

Citations: 0