A Review of Computer Vision Technologies in Precision Agriculture

Research Article
Open access

Wenqi Wang 1, Ye Kang 2*
  • 1 Institute of Engineering, Heilongjiang Bayi Agricultural University, Daqing, China
  • 2 Institute of Engineering, Heilongjiang Bayi Agricultural University, Daqing, China
  • * Corresponding author: kangyebynd@126.com
TNS Vol. 101
ISSN (Print): 2753-8818
ISSN (Online): 2753-8826
ISBN (Print): 978-1-80590-017-7
ISBN (Online): 978-1-80590-018-4

Abstract

Precision agriculture offers a promising solution to enhance crop productivity and sustainability amidst global agricultural challenges. This paper reviews the development and application of computer vision technologies in modern farming, with a focus on deep learning techniques such as Convolutional Neural Networks (CNNs), including Residual Network (ResNet), You Only Look Once (YOLO), and Segmentation Network (SegNet), applied to disease detection, weed classification, and crop health monitoring. The integration of Unmanned Aerial Vehicles (UAVs), robotics, and the Internet of Things (IoT) has significantly advanced agricultural efficiency. However, challenges such as data scarcity, computational limitations, and environmental variability continue to impede large-scale adoption. Emerging solutions, such as lightweight AI models, edge computing, and multi-source data fusion, offer potential pathways to overcome these hurdles. These innovations are critical for scaling, adapting, and sustaining precision agriculture technologies. This paper provides an overview of the current state of computer vision in precision agriculture, identifies key challenges, and outlines future research directions aimed at advancing the field.

Keywords:

computer vision, precision agriculture, deep learning, smart farming

Wang, W.; Kang, Y. (2025). A Review of Computer Vision Technologies in Precision Agriculture. Theoretical and Natural Science, 101, 35-40.

1. Introduction

As global agriculture faces growing challenges due to population growth, climate change, and resource constraints, precision agriculture has emerged as a promising solution [1]. It integrates computer vision, IoT, and machine learning into modern farming practices. Deep learning-based computer vision has significantly improved crop health monitoring, disease detection, and automated farm management, offering greater accuracy and adaptability compared to traditional feature-based methods [2].

Recent advancements have introduced CNN architectures such as ResNet, the YOLO object detection algorithm, and SegNet, which, when combined with UAV imagery, robotics, and hyperspectral imaging, enhance weed detection and disease classification. However, challenges remain, such as limited access to diverse agricultural datasets, high computational demands for real-time processing, and environmental variability affecting model performance.

This paper explores the evolution, applications, and challenges of computer vision in agriculture, focusing on directions such as lightweight artificial intelligence (AI) models, multimodal data fusion, and edge computing. Developing these aspects is crucial for scaling precision agriculture, ensuring efficient, adaptive, and sustainable smart farming systems.

2. The Evolution of Computer Vision in Agriculture

Traditional computer vision relied on handcrafted feature extraction methods such as Scale-Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG), and Oriented FAST and Rotated BRIEF (ORB), which used predefined rules for image analysis. However, these approaches struggled with lighting, scale, and occlusion variations, limiting their adaptability to complex environments.
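As an illustration of what such handcrafted pipelines computed, the sketch below builds a single gradient-orientation histogram of the kind HOG aggregates over image cells. It is a toy simplification (one global histogram, no cells or blocks), not the full HOG algorithm:

```python
import numpy as np

def hog_like_descriptor(img, n_bins=9):
    """Toy HOG-style descriptor: one histogram over unsigned gradient
    orientations, weighted by gradient magnitude and L1-normalised."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central differences in x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # central differences in y
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Because the rule is fixed in advance, any change in lighting or scale changes the descriptor itself, which is exactly the brittleness that motivated the move to learned features.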

The rise of deep learning has shifted the focus to data-driven feature learning, reducing dependence on manually designed features. In object detection, Carion et al. introduced Detection Transformer (DETR), which eliminated anchors and non-maximum suppression, enabling end-to-end detection [3]. In feature matching, Sarlin et al. proposed SuperGlue, which integrates graph neural networks and self-attention mechanisms to improve robustness and spatial awareness [4]. These advances show that computer vision is transitioning from rigid, manually designed features to flexible, data-driven architectures, leveraging deep learning to establish a foundation for more complex visual tasks.

This trend has also extended to agriculture, particularly in weed detection, disease identification, and precision farming management, where deep learning has become the dominant technology. Early computer vision applications in agriculture relied on traditional image processing techniques, such as color analysis, texture extraction, and shape recognition. These methods performed well in controlled environments but struggled with lighting variations, occlusion, and complex field conditions. Weed detection, in particular, initially depended on handcrafted feature extraction and classical machine learning models, which lacked adaptability to diverse agricultural settings.

With advances in deep learning, agricultural vision systems have shifted toward data-driven intelligent models, improving weed classification, disease detection, and crop stress assessment. Murad et al. highlighted that CNN-based models, including ResNet, YOLO, and SegNet, now dominate weed detection research, significantly surpassing earlier handcrafted techniques. These models achieve higher accuracy, real-time adaptability, and better handling of complex field conditions. Among the CNN architectures surveyed, Visual Geometry Group Network (VGGNet) demonstrated the lowest accuracy (84%), while ResNet-based architectures reached up to 99% [5].
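The structural idea behind the ResNet family cited above is the identity shortcut. The fully-connected sketch below is a toy (arbitrary weights, no convolutions or batch normalisation), but it shows the form y = ReLU(x + F(x)) that distinguishes residual architectures:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: y = relu(x + W2 @ relu(W1 @ x)).
    The bare `x` term is the identity shortcut: signal and gradients
    can bypass the learned transform, which is what makes very deep
    networks such as ResNet trainable."""
    return relu(x + w2 @ relu(w1 @ x))
```

With the transform weights at zero the block reduces to the identity, so stacking more blocks can never make the network worse at the start of training, which is one intuition for why ResNet-style models dominate the weed-detection results surveyed in [5].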

Beyond weed detection, computer vision is now integrated with IoT, UAVs, and agricultural robotics, advancing precision agriculture and automation. UAVs with hyperspectral imaging support large-scale crop monitoring, while vision-guided robotic sprayers optimize pesticide application, reducing chemical use and improving environmental sustainability [6]. Deep learning also powers autonomous weeding, allowing robots to differentiate crops from weeds and perform targeted herbicide application.

The transition from handcrafted feature-based vision systems to AI-driven smart farming continues to reshape precision agriculture, improving efficiency, sustainability, and resource management.

3. Key Applications of Computer Vision in Agriculture

With the continuous optimization of deep learning algorithms and rapid advancements in computer hardware, computer vision has not only significantly improved agricultural productivity but also accelerated the process of intelligent agricultural management. Its applications have gradually expanded from industrial sectors to agriculture, demonstrating tremendous potential in weed detection, disease identification, and precision agricultural management.

Computer vision has become a key tool in crop health monitoring and disease detection, particularly for automatic identification and classification. For instance, Rajamohanan and Latha effectively applied the YOLOv5 model for real-time tomato leaf disease detection, achieving 93% mAP, with excellent performance in terms of both speed and accuracy, making it suitable for practical agricultural applications [7]. However, dataset diversity and scale limitations can restrict the model's effectiveness. To address these limitations, Chen et al. proposed a combination of transfer learning and data augmentation, which significantly enhanced performance in complex environments [8].
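Detectors in the YOLO family emit many overlapping candidate boxes per leaf; the final detections come from confidence filtering plus intersection-over-union (IoU) based non-maximum suppression. A minimal sketch of that post-processing step follows (plain Python for illustration, not the actual YOLOv5 implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop remaining boxes that overlap it above iou_thr, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thr]
    return keep
```

This step is also where speed and accuracy trade off in the field: a looser threshold risks duplicate detections of the same lesion, a tighter one risks merging adjacent diseased leaves.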

In addition to disease detection, precision fertilization is also critical for optimizing agricultural resource usage. Zhu et al. developed a computer vision-based dual-face precision spraying system using the SN-YOLOX Nano-ECA model to identify leaf area and plant height with 97% and 96% accuracy, enabling variable-rate foliar fertilization and reducing spray deviation to 0.46 mL [9]. However, it relies only on morphological features, lacking direct nutrient assessment. Yi et al. applied computer vision with DenseNet-161 for RGB-based nutrient diagnosis, achieving 98.4% accuracy in detecting nitrogen, phosphorus, and potassium deficiencies [10]. This enhances fertilization precision but faces challenges in adapting to different growth stages. Integrating morphological analysis with nutrient diagnostics could therefore make computer vision-driven fertilization more accurate and adaptive in complex, dynamic field conditions.

Further advancements in computer vision have been made in crop yield prediction and farm management. Bhadra et al. developed a 3D CNN model using UAV imagery, achieving R² = 0.69 in soybean yield prediction, offering a non-destructive way to estimate crop production on a large scale [11]. However, its accuracy declined in low-yield areas due to spatial variability. Apolo-Apolo et al. applied Faster R-CNN to map orchard yield, improving fruit detection and aiding farm planning [12]. While this method reduced spatial inconsistencies, tree canopy obstruction remained a challenge. Combining temporal analysis from 3D CNN with object detection could refine yield prediction, helping farmers make more precise management decisions.

As technology evolves, the application of computer vision in agriculture is shifting towards automation and intelligence. Mekhalfi et al. developed a vision-based kiwifruit yield estimation system, integrating optical sensors and image processing to automate fruit counting and improve preharvest planning [13]. This reduced labor costs and enhanced efficiency but struggled with occlusions and lighting variations, leading to detection errors. Alaaudeen et al. advanced this by integrating computer vision with robotic harvesting, enabling autonomous fruit grasping and reducing reliance on manual labor [14].

While these advancements have significantly improved agricultural automation and precision farming, several challenges still hinder the large-scale deployment of computer vision in real-world agricultural environments. Issues such as data availability, computational efficiency, environmental adaptability, and system integration remain key areas for future research, as discussed in the following section.

4. Challenges in Applying Computer Vision to Agriculture

Despite the progress made in applying computer vision to agriculture, the large-scale implementation of these technologies is still hindered by several challenges. These challenges, which stem from issues related to data, computational power, environmental variability, and system integration, collectively affect the effectiveness and scalability of computer vision-based solutions in agriculture.

One of the biggest hurdles is the fragmented nature of agricultural datasets, which limits the ability of models to generalize across different environments. Crop species, soil types, and climate conditions vary widely from region to region, creating data silos that hinder the adaptability of models. For example, a model trained to detect maize diseases in temperate regions may not work as effectively when applied to tropical rice fields. This fragmentation not only drives up the cost of data collection but also forces deep learning models to depend heavily on time-consuming, expert-driven annotations. To tackle this issue, future research should focus on self-supervised learning and cross-domain adaptation. These approaches can help leverage unlabeled data to identify useful features and allow models to learn from different types of crops, reducing the need for extensive labeling.
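Self-supervised methods typically manufacture their training signal from label-preserving transformations of unlabeled imagery. The sketch below shows two such transformations (the parameter ranges are hypothetical, and images are assumed to be float arrays scaled to [0, 1]):

```python
import numpy as np

def augment(img, rng):
    """Cheap label-preserving augmentations: a random horizontal flip
    plus a brightness jitter. Pairs of such views of one unlabeled image
    are the raw material for contrastive self-supervised pretraining."""
    if rng.random() < 0.5:
        img = img[:, ::-1]               # horizontal flip
    gain = rng.uniform(0.8, 1.2)         # simulated illumination change
    return np.clip(img * gain, 0.0, 1.0)
```

Because the crop identity is unchanged by a flip or a lighting shift, a model trained to match augmented views learns features that transfer across regions without any manual annotation.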

Another major challenge is the computational demands of deep learning-based computer vision models. These models often require powerful computing resources, which can be difficult to achieve on edge devices such as UAVs and autonomous farming robots. For instance, if a real-time weed detection system experiences delays due to computational limitations, it can affect the timing of herbicide application and, ultimately, crop yields. To address this, strategies like network pruning, quantization, and knowledge distillation can help optimize large, complex models and create smaller, more efficient versions that maintain accuracy. At the same time, edge computing frameworks can be used to distribute computational tasks more effectively, allowing for quicker responses even when resources are limited.
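Of the compression strategies just listed, post-training quantization is the simplest to sketch. The toy below uses symmetric per-tensor int8 mapping (real toolchains add calibration data and per-channel scales); it also shows why the accuracy cost can be small, since the reconstruction error is bounded by half a quantization step:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: one float scale maps the weight
    tensor onto int8, cutting memory roughly 4x versus float32."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on the edge device."""
    return q.astype(np.float32) * scale
```

The same idea, applied layer by layer, is what lets a weed-detection model that was trained in the cloud run within the memory and latency budget of a UAV or field robot.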

In addition, agricultural environments present their own set of challenges. Unlike controlled settings, agricultural fields are subject to factors like changing light conditions, occlusions caused by dense crop canopies, and the natural growth cycle of crops, all of which can cause significant variability in the visual data. For example, dew-covered leaves in the morning can be mistaken for disease symptoms by a computer vision system. To improve reliability under these conditions, combining data from different types of sensors, such as hyperspectral imaging, thermal cameras, and Light Detection and Ranging (LiDAR), can help provide a fuller picture of the crop’s state. By fusing this multimodal data, models can be made more resilient to environmental disruptions.
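A minimal illustration of the feature-level fusion idea follows (the feature vectors are hypothetical; real systems fuse learned embeddings). Each modality is standardized before concatenation so that a sensor with a large numeric range, such as raw thermal counts, does not drown out the others:

```python
import numpy as np

def fuse_features(modalities):
    """Feature-level fusion: z-score each modality's feature vector,
    then concatenate into one descriptor for a downstream classifier."""
    parts = []
    for f in modalities:
        f = np.asarray(f, dtype=float)
        std = f.std()
        parts.append((f - f.mean()) / std if std > 0 else f - f.mean())
    return np.concatenate(parts)
```

With the modalities on a common scale, a disturbance that corrupts one sensor (morning dew in RGB, say) leaves the thermal and LiDAR components of the descriptor intact, which is the resilience the fusion argument above relies on.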

These environmental and data-related challenges also contribute to problems with generalizing models across different crop types and growth stages. For example, a weed-detection model trained on mature wheat fields may struggle to identify weeds among early-stage seedlings. Techniques like temporal pattern analysis using models such as Long Short-Term Memory (LSTM) networks, which capture changes in crop growth over time, and transfer learning, which allows models to adapt quickly to new crops, could help address these issues. Additionally, the development of open-source benchmark datasets that cover a wide range of crops, climates, and growth stages would provide an essential foundation for building more universally applicable models.
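To make the temporal-analysis suggestion concrete, the sketch below implements a single LSTM cell update over one date's crop feature vector. The dimensions and the stacked gate layout are assumptions of this toy, not a trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b, hidden):
    """One LSTM update: x holds this date's crop features, (h, c) the
    state carried over from earlier growth stages. Shapes: W (4H, D),
    U (4H, H), b (4H,); gates stacked as [input, forget, candidate, output]."""
    z = W @ x + U @ h + b
    i, f, g, o = (z[k * hidden:(k + 1) * hidden] for k in range(4))
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # forget old, write new
    h_new = sigmoid(o) * np.tanh(c_new)                # expose filtered state
    return h_new, c_new
```

Run once per observation date, the carried cell state c is what lets the model distinguish a seedling from a mature plant with the same instantaneous appearance.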

Finally, even when promising technical solutions are developed, integrating them into real-world farming operations can be difficult. Issues like incompatibility with existing farming equipment and hardware limitations on smaller farms pose significant challenges. For instance, integrating computer vision systems with autonomous spraying robots requires close coordination between various technologies. To make this integration easier, modular system designs with standardized APIs and plug-and-play hardware components could allow for easier adaptation to different farm sizes. Low-cost, open-source solutions, such as those built on platforms like Robot Operating System for Agriculture (ROS Agri), could also help make these technologies more accessible to smaller, resource-limited farms.

Although these challenges present obstacles to widespread adoption, they also highlight key areas for future research. Advances in self-supervised learning, edge-optimized models, and multimodal sensor fusion, alongside improved system integration and cross-disciplinary collaboration, could bridge the gap between laboratory research and real-world applications. In the end, these innovations hold the potential to create computer vision-based agricultural systems that are both effective and economically viable.

5. Emerging Solutions: IoT, Edge Computing, and Multi-Source Data Fusion

The integration of IoT, edge computing, and multi-source data fusion is transforming computer vision in agriculture, enabling real-time crop monitoring, early disease detection, and precision farming. These technologies address key challenges such as data latency, connectivity issues, and single-source limitations, making smart agriculture more efficient and scalable.

IoT enhances computer vision by integrating sensor networks and deep learning models for plant disease detection. Kasera et al. developed an IoT-enabled smart agriculture system utilizing VGG-16 and ResNet-50 to classify tomato and brinjal leaf diseases, achieving detection rates of 97.81% and 99.03%, respectively [15]. However, traditional cloud-based systems struggle with high data transmission costs and latency, necessitating a shift toward edge computing.

Edge computing mitigates latency and bandwidth constraints by processing sensor and image data locally. Deploying machine learning models on edge devices has significantly improved real-time plant health assessment. Studies show that integrating AI-driven analysis with edge computing can enhance response speed and reduce dependence on cloud computing. Yet, single-source data limitations remain a challenge, requiring multi-source data fusion for a more comprehensive view.

Combining IoT sensor data, UAV imagery, and satellite imaging enhances agricultural decision-making. Ouhami et al. demonstrated that fusing hyperspectral imaging with environmental sensor readings significantly improved disease detection accuracy. Additionally, integrating AI-based sensor fusion techniques has optimized yield prediction, irrigation management, and early disease identification, outperforming single-source methods [16].

Despite their benefits, challenges remain, including data standardization, computational constraints of edge devices, and cybersecurity risks [17]. Future research should focus on adaptive AI models capable of dynamically integrating multi-source data, alongside the development of lightweight deep learning architectures optimized for edge deployment [18].

6. Conclusion

This study highlights the role of computer vision in precision agriculture, focusing on crop monitoring, disease detection, fertilization, and automation. While deep learning improves accuracy, challenges remain in data scarcity, computational demands, and environmental adaptability. Future research should prioritize lightweight AI models for edge deployment, multimodal data fusion, and enhanced automation. Overcoming these barriers will drive efficient, scalable, and sustainable smart farming, optimizing agricultural productivity while reducing resource consumption.


References

[1]. Nimmala, S., Ramchander, M., Mahendar, M., Manasa, P., Kiran, M. A., & Rambabu, B. (2024). A Recent Survey on AI Enabled Practices for Smart Agriculture. 2024 International Conference on Intelligent Systems for Cybersecurity, ISCS 2024.

[2]. Upadhyay, A., Chandel, N.S., Singh, K.P. et al. (2025). Deep Learning and Computer Vision in Plant Disease Detection: A Comprehensive Review of Techniques, Models, And Trends in Precision Agriculture. Artificial Intelligence Review, 58(92).

[3]. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., & Zagoruyko, S. (2020). End-to-End Object Detection with Transformers. Computer Vision – ECCV 2020, Lecture Notes in Computer Science, 12346.

[4]. Sarlin, P. E., DeTone, D., Malisiewicz, T., & Rabinovich, A. (2020). SuperGlue: Learning Feature Matching with Graph Neural Networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4938-4947.

[5]. Murad, N. Y., Mahmood, T., Forkan, A. R. M., Morshed, A., Jayaraman, P. P., & Siddiqui, M. S. (2023). Weed Detection Using Deep Learning: A Systematic Literature Review. Sensors, 23(7), 3670.

[6]. Shin, J., Mahmud, M. S., Rehman, T. U., Ravichandran, P., Heung, B., & Chang, Y. K. (2023). Trends and Prospect of Machine Vision Technology for Stresses and Diseases Detection in Precision Agriculture. AgriEngineering, 5(1), 20-39.

[7]. Rajamohanan, R., & Latha, B. C. (2023). An Optimized YOLO v5 Model for Tomato Leaf Disease Classification with Field Dataset. Engineering, Technology & Applied Science Research, 13(6), 12033–12038.

[8]. Chen, J., Chen, J., Zhang, D., Sun, Y., & Nanehkaran, Y. A. (2020). Using Deep Transfer Learning for Image-based Plant Disease Identification. Computers and Electronics in Agriculture, 173.

[9]. Zhu, C., Hao, S., Liu, C., Wang, Y., Jia, X., Xu, J., Guo, S., Huo, J., & Wang, W. (2024). An Efficient Computer Vision-Based Dual-Face Target Precision Variable Spraying Robotic System for Foliar Fertilisers. Agronomy, 14(12), 2770.

[10]. Yi, J., Krusenbaum, L., Unger, P., Hüging, H., Seidel, S. J., Schaaf, G., & Gall, J. (2020). Deep Learning for Non-Invasive Diagnosis of Nutrient Deficiencies in Sugar Beet Using RGB Images. Sensors, 20(20), 5893.

[11]. Bhadra, S., Sagan, V., Skobalski, J., et al. (2024). End-to-end 3D CNN for Plot-scale Soybean Yield Prediction Using Multitemporal UAV-based RGB Images. Precision Agriculture, 25, 834–864.

[12]. Apolo-Apolo, O. E., Pérez-Ruiz, M., Martínez-Guanter, J., & Valente, J. (2020). A Cloud-based Environment for Generating Yield Estimation Maps from Apple Orchards Using UAV Imagery and A Deep Learning Technique. Frontiers in Plant Science, 11.

[13]. Mekhalfi, M. L., Nicolò, C., Ianniello, I., Calamita, F., Goller, R., Barazzuol, M., & Melgani, F. (2020). Vision System for Automatic On-tree Kiwifruit Counting and Yield Estimation. Sensors, 20(15), 4214.

[14]. Alaaudeen, K. M., Selvarajan, S., Manoharan, H., et al. (2024). Intelligent Robotics Harvesting System Process for Fruits Grasping Prediction. Scientific Reports, 14, 2820.

[15]. Kasera, R. K., Nath, S., Das, B., Kumar, A., & Acharjee, T. (2025). IoT-enabled Smart Agriculture System for Detection and Classification of Tomato and Brinjal Plant Leaves Disease. Scalable Computing: Practice and Experience, 26(1), 96–113.

[16]. Ouhami, M., Hafiane, A., Es-Saady, Y., El Hajji, M., & Canals, R. (2021). Computer Vision, IoT and Data Fusion for Crop Disease Detection Using Machine Learning: A Survey and Ongoing Research. Remote Sensing, 13(13).

[17]. Durga Sai Prasad, G., Vanathi, A., & Kiruthika Devi, B. S. (2023). A Review on IoT Applications in Smart Agriculture. Advances in Transdisciplinary Engineering, 32, 683–688.

[18]. Gawande, A. R., & Sherekar, S. S. (2023). Analysis of crop diseases using IoT and machine learning approaches. In Proceedings of the International Conference on Applications of Machine Intelligence and Data Analytics, ICAMIDA 2022, 78-85.



Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of CONF-MPCS 2025 Symposium: Mastering Optimization: Strategies for Maximum Efficiency

ISBN: 978-1-80590-017-7 (Print) / 978-1-80590-018-4 (Online)
Editor: Marwan Omar
Conference date: 21 March 2025
Series: Theoretical and Natural Science
Volume number: Vol. 101
ISSN: 2753-8818 (Print) / 2753-8826 (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
