
Abstract
A staggering $220 billion worth of crops is lost globally each year to crop disease, and the graph below shows a portion of these economic losses. Much of this cost arises because agriculture has traditionally relied on time-tested, experience-driven methods for crop health monitoring. While rich in historical context, such practices have shown their limitations, leading to inefficiencies and, at times, resource wastage.
Recognizing this gap, our “Agricultural Field Crop Health Monitor Robot” project aims to move from this age-old, experience-based system to a more contemporary and efficient data-driven model. The system is expected to detect, recognize, and collect diseased leaves in the agricultural field, using a convolutional neural network (CNN) together with a depth camera to achieve this goal.
Click the figure above to view the original site.
Figure caption from the original site: “Insects, diseases and weeds are the three main biological factors for losing crop yield and causing economic loss to farmers. Unlike the visible impact of diseases and insects, the impact of weeds goes unnoticed,” said Dr Yogita Gharde, lead author of the paper and scientist at the Directorate of Weed Research, Indian Council of Agricultural Research. “If weed growth is not stopped at a critical time, it results in massive crop loss, sometimes as high as 70%,” said Gharde.
Introduction
Agriculture remains an indispensable facet of human civilization, sustaining an ever-increasing global population. In the face of escalating demands and the imperative to conserve energy and resources, it is critical to devise strategies that minimize agricultural losses. Crop diseases are a formidable impediment to food security, often exacerbated by limited infrastructure. Traditional reliance on human acumen, while invaluable, confronts boundaries, especially given the scale of potential crop devastation. Hence, there is an imperative to amalgamate human expertise with technological advancements to transform how we detect, manage, and thwart crop diseases, propelling us toward a more resilient and productive agricultural framework.

The ‘Farmaid’ project published on Hackster.io introduces an autonomous robot engineered for greenhouse deployment, capable of precise navigation to safeguard plant integrity and soil [1]. Farmaid’s chief role is detecting plant diseases, bolstered by an SMS notification system powered by Twilio for real-time disease severity alerts [1].
In disease detection, traditional methods pale against the backdrop of technological evolution. The review of contemporary advancements in Machine Learning (ML) and Deep Learning (DL) from 2015 to 2022 showcases a leap in the efficacy and accuracy of these technologies for plant disease diagnosis despite challenges such as limited data, imaging constraints, and differentiation between diseased and healthy vegetation [3].
Furthermore, the research by Mohanty et al. capitalizes on the surge in smartphone proliferation coupled with deep learning breakthroughs to usher in a novel paradigm of mobile-assisted disease diagnostics [4]. A deep convolutional neural network, trained on an extensive dataset of plant leaf images, exhibits remarkable proficiency—identifying various crop species and diseases with 99.35% accuracy, suggesting a scalable solution for precision agriculture [4].
The figure above shows example leaf images from the PlantVillage dataset, representing every crop-disease pair used [4]. (Click the figure above to view the original paper.)
Our initiative aims to synthesize the capabilities of robotics and ML algorithms into a mobile rover—a technological sentinel in the agricultural landscape. This rover is designed to autonomously detect and categorize plant ailments via symptomatic analysis on leaves and then precisely navigate to harvest affected samples for a comprehensive examination. Field tests of this integrated system have yielded promising advancements toward automating and refining crop health management.
Brainstorming
The figure below shows the initial mind map for our group, and we finally settled on the agricultural robot project. This mind map outlines various applications and considerations for autonomous vehicles in different contexts, which can be used for further projects. Additionally, each application has specific goals, customer groups, and technological requirements. The mind map also considers the pros and cons, including the benefits of saving time and labor and the drawbacks of research and development costs. Click on the picture to access the original mind map.
Project Goals
This section will outline the project’s objectives from three distinct perspectives. Firstly, we will discuss the expected performance and functional goals, which encompass the desired outcomes and benchmarks of success for our project. Secondly, we will delve into the educational objectives, which align closely with a curated selection of courses offered at Duke University. These courses are strategically chosen not only to provide a foundational platform for initiating the project but also to facilitate its advancement to more sophisticated stages. This integrated approach ensures that practical outcomes and comprehensive academic support underpin the project’s progression.
- High accuracy in disease detection through advanced image processing.
- Reliable navigation and field coverage, thanks to precise GPS integration.
- Efficient execution of physical tasks with a responsive robotic arm.
- Seamless rover maneuverability controlled by robust motor drivers.
- Enhanced overall agricultural productivity with the aid of autonomous technology.
- Navigate autonomously within the agricultural field without human intervention.
- Identify and diagnose crop diseases using onboard cameras and processing units.
- Utilize GPS data for precise positioning and tracking within the farm.
- Operate the robotic arm for physical tasks such as picking or treating plants.
- Control the movement of the rover through motor drivers based on commands from the central unit.
- Improve efficiency and reduce resource wastage in the agricultural process.
- Enhance the safety and productivity of agricultural practices.
Embarking on this project, it is advantageous to have a solid educational foundation in electrical and computer engineering, covering a broad spectrum of topics ranging from the fundamentals to more specialized subjects. The courses below are courses in which this project could serve as the course project; taking them also helps in advancing the system’s design. This comprehensive educational journey equips aspiring engineers with the skills and knowledge necessary to tackle complex projects, combining theoretical understanding with practical application.
Courses list (Duke University):
ECE 110L. Fundamentals of Electrical and Computer Engineering.
ECE 250D. Computer Architecture.
ECE 280L. Introduction to Signals and Systems.
ECE 330L. Fundamentals of Microelectronic Devices.
ECE 331L. Fundamentals of Microelectronic Circuits.
ECE 353. Introduction to Operating Systems.
ECE 356. Computer Network Architecture.
ECE 363L. Electric Vehicle Project.
ECE 383. Introduction to Robotics and Automation.
ECE 449. Sensors and Sensor Interface Design.
ECE 459. Introduction to Embedded Systems.
ECE 483. Introduction to Digital Communication Systems.
ECE 489. Advanced Robot System Design.
The names in bold are highly recommended courses.
Project Decomposition
Our team has designed an autonomous agricultural robot system, which integrates a suite of sophisticated components. At the heart is the Jetson processor, which interprets data from a mounted camera and GPS for real-time navigation and task execution. We’ve equipped the system with motor drivers that translate the Jetson’s commands into precise movements of the rover, providing the agility to traverse farm terrain. Additionally, a robotic arm, controlled through the system, is ready to perform various tasks such as picking or treating plants. Our design prioritizes seamless communication, denoted by blue signal lines for data, red command lines for actions, and green software lines for operational logic, ensuring a harmonious interplay between technology and agriculture.
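To make this data flow more concrete, the sketch below shows, in simplified Python, how a central loop on the Jetson might poll the camera, run the detector, and dispatch commands to the arm and the motor drivers over serial. The serial ports, command strings, and placeholder functions here are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch of the Jetson-side control loop described above.
# Ports, command strings, and the placeholder functions are assumptions.
import time
import serial  # pyserial, assumed to be installed on the Jetson

ARM_PORT = "/dev/ttyACM0"    # assumed port of the Arduino driving the arm
ROVER_PORT = "/dev/ttyACM1"  # assumed port of the motor-driver Arduino


def capture_camera_frame():
    """Placeholder for grabbing an RGB-D frame from the depth camera."""
    return None


def detect_diseased_leaves(frame):
    """Placeholder for the CNN/YOLO detector; returns a list of (x, y, z) targets."""
    return []


def main():
    arm = serial.Serial(ARM_PORT, 9600, timeout=1)
    rover = serial.Serial(ROVER_PORT, 9600, timeout=1)
    while True:
        frame = capture_camera_frame()
        targets = detect_diseased_leaves(frame)
        if targets:
            x, y, z = targets[0]
            # Command line (red in the diagram): send the target to the arm.
            arm.write(f"PICK {x:.3f} {y:.3f} {z:.3f}\n".encode())
        else:
            # No target in view: keep the rover scanning the field.
            rover.write(b"FORWARD\n")
        time.sleep(0.5)


if __name__ == "__main__":
    main()
```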
Project Breakdown
The following sections chronicle the journey of our project, meticulously categorized by level of complexity. We will guide you through a curated progression starting from ‘Beginner’, advancing to ‘Medium’, then to ‘Advanced’, and culminating at the ‘Expert’ level. Each category is thoughtfully designed not only to showcase the evolution and milestones of our project but also to encapsulate the growing sophistication and depth of our work. This tiered approach ensures a comprehensive understanding of our project’s scope and the incremental challenges we’ve surmounted at each stage.
As for the very first stage, we want to start looking at different data analytics methods and how the sensor works to collect data.
Our team has diligently conducted a comprehensive literature review to understand the integration of touch interface controls with a robotic arm via Arduino technology.
Simultaneously, we’ve delved into advanced methodologies in machine learning, exploring its application in pinpointing diseases on plant leaves [3][6]. Furthermore, we’ve expanded our research to include a focused study on leaf-based disease detection in bell pepper plants employing the cutting-edge YOLO v5 algorithm (Figure below). This multifaceted research effort is directed towards synthesizing control technology with AI diagnostics to revolutionize precision agriculture. (Click the figure to direct to the paper.)
We also came up with a first version of our rover robot; our initial hand-drawn design is shown below.
Moving forward, we aim to ensure that each hardware component within our functional decomposition is autonomously operational. The components include the rover, Jetson (our robotic central processing unit), the robot arm, and the camera system.
Initially, we focus on mobilizing the rover equipped with two 12V motors and a dual-belt assembly as shown in the figure below.
Upon successful mobilization, we installed and rigorously tested the Jetson operating system alongside the Robot Operating System (ROS), resolved issues with the CSI camera connection, and employed the Darknet training framework to fine-tune YOLOv3 for our needs. The figure below shows an example of running YOLOv3 in the Jetson environment.
Concurrently, we have achieved the activation of the robotic arm. Utilizing an Arduino, we can now input coordinates and manipulate two servo motors within our pin limitations. Simulations have confirmed the potential to extend this control to six servo motors for individual rotations.
In the final phase of this stage, we concentrate on perfecting the camera-detection algorithm. Our approach utilizes the TensorFlow framework to construct the model, incorporating a Convolutional Neural Network (CNN) to precisely detect diseases in individual leaves, paving the way for more efficient and technologically advanced agricultural practices. The figure below shows the first test trial on tomato diseases, with the precision of the results shown for different types of photos.
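As a rough illustration of this approach, the sketch below builds a small CNN classifier in TensorFlow/Keras. The directory layout, image size, and layer sizes are assumptions for illustration and not the exact architecture we trained.

```python
# A minimal sketch of a leaf-disease classifier in TensorFlow/Keras,
# assuming images are organized into class-labelled folders under data/train
# (e.g. data/train/healthy, data/train/diseased); sizes are illustrative.
import tensorflow as tf

IMG_SIZE = (128, 128)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```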
The next stage is to make sure that, after initial assembly, all the hardware and software work properly when connected to the computer.
We can now command all of the movements through the computer. Below are the new changes and a demonstration of our initial version of the final product.
For the machine learning part, we have a full image set, which we preprocess as shown in the picture below.
After integrating the depth camera we use (RealSense D435i), the result is shown below. The system not only provides depth detection but also diagnoses the condition of the leaves.
Then, we did the simulation for the arm.
After this, we assembled everything and ran the code to check the results. It turns out that using the computer to communicate among the parts works quite well. The video is shown below.
The whole system works properly and detects disease in the real field, and the robotic arm can remove diseased leaves autonomously.
Because of the Jetson Nano problem, described below in the Jetson section, we could not finish this step. Using the Jetson Nano to coordinate all the parts will be the next stage.
Detailed Procedure of Each Robot Part
The rover part of the project involved the modification and enhancement of an existing Rover model. The Rover, initially developed by a previous group, is equipped with dual tracks, two 12-volt motors, and a metal platform with perforations. This model is also available for purchase on Amazon. See Amazon Link.
Modifications:
1. Platform Alteration:
The first modification entailed adding two holes to the Rover’s platform, enabling the attachment of a mechanical arm. This addition enhances the Rover’s functionality and expands its operational capabilities.
2. Motor Controller Upgrade:
The second modification involved replacing the existing motor control board with a Motor Controller Shield compatible with the UNO Board. See Amazon Link for Motor Controller Shield
3. Additional Storage Box:
A storage box was added to the rear section of the Rover to house mechanical components. The chosen box size is 5.9” x 5.9” x 5.1”, which adequately accommodates all circuit boards while protecting them from dirt. See Amazon Link for Storage Box
To accommodate the added height without impeding the Rover’s mobility, a platform was 3D printed and the box was secured using Sticky Back Round Dots.
Control System:
The Rover is controlled using an Arduino, following online tutorials for connectivity with the Controller. See Tutorial.
Robotic Arm
Working Principle:
When the camera captures the target diseased leaf, it transmits the coordinate information to the robotic arm, which, by rotating each of its joints, moves to the position of the target leaf and grabs the sample.
Component:
Metal chassis and connecting arms, 5 Servo Motors.
Modified Section:
We 3D printed an L-shaped bracket platform to carry the fixed camera and removed the original bottom rotation motor to reduce weight and keep the assembly balanced.
Testing and Debugging:
After many rounds of testing and debugging, the robotic arm maintains a home position after power-on, holding a cobra-like stance. This keeps the center of gravity as close to the center of the assembly as possible, keeping it stable and allowing the camera to get a relatively good view. After the robotic arm performs a series of actions such as moving and grasping, we can return it to its initial position with a single click of the home command.
Camera Integration
For the imaging component of our project, we utilized the Intel RealSense D435i Camera, a depth-sensing camera provided by our laboratory. This advanced camera is instrumental in capturing detailed 3D images and depth data, essential for our project’s objectives.
Software Requirements and Setup
To operate the Intel RealSense D435i Camera, specific software packages are required. These packages can be downloaded from the official website of Intel RealSense. It is crucial to note that the operational prerequisite for this camera is the presence of an in-built or external webcam on the computer system. Without a webcam, the functionality of the Intel RealSense D435i Camera cannot be leveraged.
Resource Utilization for 3D Coordinate Mapping
Our team undertook extensive research, utilizing various online resources, to effectively run the camera and extract the necessary 3D coordinate data. The process involved navigating through a plethora of documentation and tutorials, predominantly available in Chinese. Due to the language-specific nature of these documents, direct links have not been included in this report.
The implementation of the Intel RealSense D435i Camera marks a significant milestone in our project, enabling us to capture and utilize high-precision 3D data for further analysis and application development.
Jetson Nano
Overview:
In our project, the implementation of Jetson Nano was both the most utilized and the most challenging aspect. This phase involved setting up the system environment extensively to ensure the compatibility and functionality of the system with the RealSense Camera.
Objective:
Our goal was to operate our system on the Jetson Nano, ensuring it could utilize the RealSense Camera to read and process data. The data obtained from the camera was crucial for controlling the mechanical arm and the Rover. This involved using Python to extract data from the camera and save it into a separate document, which Arduino would then read to operate the mechanical arm and the Rover’s wheels.
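A minimal sketch of that extraction step is shown below, assuming pyrealsense2 is installed. The pixel of interest and the output file name are illustrative placeholders; in practice the pixel would come from the detector's bounding box.

```python
# A minimal sketch: grab one depth frame from the D435i, read the depth at a
# pixel of interest, deproject it to a 3D point, and write the coordinates to
# a text file for the Arduino-side code to consume. Names are illustrative.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    u, v = 320, 240                              # pixel of interest (e.g. a bbox center)
    depth_m = depth_frame.get_distance(u, v)     # depth in meters at that pixel

    # Deproject the pixel to a 3D point in the camera coordinate frame.
    intrinsics = depth_frame.profile.as_video_stream_profile().intrinsics
    x, y, z = rs.rs2_deproject_pixel_to_point(intrinsics, [u, v], depth_m)

    with open("target_coords.txt", "w") as f:     # file name is a placeholder
        f.write(f"{x:.3f} {y:.3f} {z:.3f}\n")
finally:
    pipeline.stop()
```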
Major Challenges:
Jetson Nano Environment Setup:
The most difficult part was configuring the Jetson Nano’s operational environment. Despite significant time investment, our attempts were unsuccessful. The primary issue stemmed from the incompatibility of the Jetson Nano’s software packages with the versions of PyTorch, CUDA, and cuDNN required for our project. Our project needed PyTorch version 1.11 or higher, which in turn required Python 3.7, available only on JetPack 5.0 or higher. However, the Jetson Nano’s hardware supports only the JetPack 4.x line, leading to a compatibility deadlock.
Delayed Realization of Issues:
The realization of these compatibility issues came late because our initial focus was on installing the RealSense Camera’s software, which faced hurdles of its own. The system lacked Python 3.6 or higher, and attempts to install these versions led to further problems due to missing environment packages. After resolving these issues and successfully operating pyrealsense2 (the Python package used to read data from the camera), we encountered the aforementioned PyTorch compatibility issue, costing us four weeks.
Recommendations for Future Projects:
For those who will take over this project or undertake similar projects, we strongly recommend verifying all operational versions of the required software before initiating the environment setup. This preemptive measure is crucial to avoid the challenges we faced and to ensure a smoother implementation process.
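As a small example of such a check, the snippet below (an assumption about how one might do it, not the project's code) prints the Python, PyTorch, and CUDA versions so that incompatibilities surface before any lengthy environment setup.

```python
# Quick environment sanity check; required versions are project-dependent.
import sys

print("Python:", sys.version.split()[0])

try:
    import torch
    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    print("CUDA version:", torch.version.cuda)
except ImportError:
    print("PyTorch is not installed in this environment.")
```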
1. Main Function
The primary function of our system is to process images captured by a depth camera to detect leaves, specifically targeting diseased leaves for robotic harvesting. The system analyzes the images, identifies leaves, and computes the coordinates of the central points of the bounding boxes. These coordinates are crucial as they guide the robot’s manipulators to precisely reach and pick the diseased leaves.
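The short sketch below illustrates the bounding-box-center computation; the detection tuple format and the "diseased" class label are illustrative assumptions rather than our detector's exact output format.

```python
# Minimal sketch of extracting target pixel coordinates from detections,
# assuming each detection is (x_min, y_min, x_max, y_max, class_name).
def bbox_center(box):
    x_min, y_min, x_max, y_max = box[:4]
    return (x_min + x_max) / 2.0, (y_min + y_max) / 2.0


def diseased_leaf_centers(detections):
    """Return pixel centers of boxes labelled as diseased leaves."""
    return [bbox_center(d) for d in detections if d[4] == "diseased"]


# Example: one healthy and one diseased detection.
dets = [(10, 20, 110, 220, "healthy"), (300, 80, 380, 180, "diseased")]
print(diseased_leaf_centers(dets))   # -> [(340.0, 130.0)]
```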
2. Methods
Our methodology involved creating a unique dataset that blends real-world farm imagery with supplemental data from the PlantVillage dataset. We captured original photographs at Duke’s campus farm and integrated these with PlantVillage images to simulate authentic field conditions. This composite dataset served as the training ground for our YOLOv5-based model. We chose YOLOv5 for its robustness and efficiency in object detection tasks, aiming to tailor it specifically to our requirements for accurate leaf detection in diverse and challenging agricultural environments.
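Once trained on this composite dataset, the resulting checkpoint can be loaded for inference along the lines below. This is a sketch following the standard Ultralytics torch.hub usage pattern; the weight file and test image names are placeholders, not our exact scripts.

```python
# Load a custom YOLOv5 checkpoint and run it on a single test image.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")  # placeholder weights
results = model("field_image.jpg")   # placeholder test image
boxes = results.xyxy[0]              # rows: x_min, y_min, x_max, y_max, confidence, class
print(boxes)
```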
3. Results
The performance of our model has been encouraging. It demonstrates a high degree of accuracy in detecting leaves against a variety of noisy backgrounds – a common challenge in real-world farm conditions. This effectiveness is a testament to both the quality of our custom dataset and the suitability of the YOLOv5 model for this application. Our system’s ability to discern and pinpoint leaves is a significant step towards automating the process of identifying and handling diseased plant parts, potentially offering a valuable tool for precision agriculture.
1. Simplified model simulation:
In two-dimensional space, using the actual link lengths of the manipulator, we simulate the forward and inverse kinematics of the link manipulator and visualize the process graphically in Python. The code is then translated into Arduino control code to drive the manipulator’s movements.
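A minimal version of this two-link simulation is sketched below (without the plotting step); the link lengths are illustrative values, not the manipulator's measured dimensions.

```python
# Forward and inverse kinematics of a planar 2-link arm (one IK solution).
import math

L1, L2 = 10.0, 8.0   # link lengths (cm, illustrative)


def forward(theta1, theta2):
    """End-effector (x, y) for joint angles given in radians."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y


def inverse(x, y):
    """Joint angles for a reachable target (x, y); returns one of two solutions."""
    c2 = (x ** 2 + y ** 2 - L1 ** 2 - L2 ** 2) / (2 * L1 * L2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2


t1, t2 = inverse(12.0, 6.0)
print("angles (deg):", math.degrees(t1), math.degrees(t2))
print("check:", forward(t1, t2))   # should recover approximately (12.0, 6.0)
```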
2. Building the DH coordinate system with MATLAB and ROS simulation
The LeArm model is constructed, the Simulink framework is written, and a .urdf file is used to specify the movement angle of each joint. For a predetermined target in space, the forward and inverse kinematics are calculated.
Final Presentation Video
Code for Operating Arm and Rover
Arm
Rover
Milestones:
Understand how to use all of the hardware and software. (10/4 – 10/13)
Finish training the plant-disease detection algorithm. (10/13 – 10/27)
All subsystems are able to function well. (10/13 – 10/27)
Finish the robot and vehicle and assemble them together. (10/27 – 11/3)
Test and debug the whole system and write final reports. (11/3 – 11/17)
Deliverables:
A system that can detect plant diseases. (10/27)
Each part can function separately. (10/27)
A functional robot that can run in the field, detect and locate disease, and operate the robot arm. It may not be perfect, but it works as an assembled robot. (11/3)
A robot that fits the initial goal, and a completed final report. (11/17)
Literature review:
Farmaid: Plant Disease Detection Robot. (2018, October 30). Hackster.io. https://www.hackster.io/teamato/farmaid-plant-disease-detection-robot-55eeb1. Accessed 27 July 2023.
Dev, Medidi. J. V. S. A. S., Ratna, T. V., Tharun, P. S., Harsha, M. S., & Daniya, T. (2023). Plant disease detection and crop recommendation using Deep Learning. 2023 2nd International Conference on Applied Artificial Intelligence and Computing (ICAAIC). https://doi.org/10.1109/icaaic56838.2023.10141294
Kasinathan, T., Singaraju, D., & Uyyala, S. R. (2021). Insect classification and detection in field crops using modern machine learning techniques. Information Processing in Agriculture, 8(3), 446–457. https://doi.org/10.1016/j.inpa.2020.09.006
Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using deep learning for image-based plant disease detection. Frontiers in Plant Science, 7. https://doi.org/10.3389/fpls.2016.01419
Shruthi, U., Nagaveni, V., & Raghavendra, B. K. (2019). A review on machine learning classification techniques for Plant Disease Detection. 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS). https://doi.org/10.1109/icaccs.2019.8728415
Wang, T., Chen, B., Zhang, Z., Li, H., & Zhang, M. (2022). Applications of machine vision in agricultural robot navigation: A review. Computers and Electronics in Agriculture, 198, 107085. https://doi.org/10.1016/j.compag.2022.107085
Baratov, R., & Valixanova, H. (2023). Smart system for early detection of agricultural plant diseases in the vegetation period. E3S Web of Conferences, 386, 01007. https://doi.org/10.1051/e3sconf/202338601007
Miscio, M. Arduino Robotic Arm Controlled by Touch Interface. Instructables. https://www.instructables.com/Arduino-Robotic-Arm-Controlled-by-Touch-Interface/
Kouao, E. (2021, January 10). DIY Arduino Robot Arm – Controlled by Hand Gestures [Video]. YouTube. https://www.youtube.com/watch?v=F0ZvF-FbCr0
Danielgass. (2022, January 8). Robot Arm Automation. Hackster.io. https://www.hackster.io/danielgass/robot-arm-automation-26a97f
Who we are?! (Team Members)
Click the photo to learn more about us!