Multiclass classification of diseased grape leaf identification using deep convolutional neural network (DCNN) classifier


ABSTRACT The cultivation of grapes encounters various challenges, such as the presence of pests and diseases, which have the potential to considerably diminish agricultural productivity.


Plant diseases pose a significant impediment, resulting in diminished agricultural productivity and economic setbacks, thereby affecting the quality of crop yields. Hence, the precise and


timely identification of plant diseases holds significant importance. This study employs a Convolutional neural network (CNN) with and without data augmentation, in addition to a DCNN


Classifier model based on VGG16, to classify grape leaf diseases. A publicly available dataset is utilized for the purpose of investigating diseases affecting grape leaves. The DCNN


Classifier Model successfully utilizes the strengths of the VGG16 model and modifies it by incorporating supplementary layers to enhance its performance and ability to generalize. Systematic


evaluation of metrics, such as accuracy and F1-score, is performed. With training and test accuracy rates of 99.18% and 99.06%, respectively, the DCNN Classifier model outperforms the CNN models used in this investigation. The findings demonstrate that the DCNN Classifier model, built on the VGG16 architecture and incorporating three supplementary CNN layers, exhibits superior performance. Its ability to identify grape diseases quickly and accurately also makes the DCNN Classifier model suitable as a decision support system for farmers, facilitating timely interventions. The results of this study support the reliability of the DCNN classifier model and its potential utility in


the field of agriculture. INTRODUCTION The cultivation of grapes holds a noteworthy position in the agricultural


sector, making a substantial contribution to the economy and serving as a crucial means of subsistence for numerous farmers across the globe. The presence of pests and diseases frequently


impedes the growth and productivity of grape plants. The factors mentioned earlier can cause significant reductions in the quality and quantity of agricultural produce, thereby impacting the


economic viability and environmental sustainability of grape farming. Plant diseases are a significant concern for grape growers, representing one of their primary challenges. Plant


diseases negatively impact grape production, leading to reduced yields and compromised quality that renders them unsuitable for both consumption and commercial purposes. Furthermore, the


precise identification and efficient management of these ailments necessitate prompt and precise diagnosis to execute efficacious interventions and curtail damages. In the past,


knowledgeable agronomical experts had to conduct a visual examination to identify grape leaf ailments. Nonetheless, this methodology frequently involves subjectivity and consumes time,


resulting in delays in executing suitable interventions. Hence, the imperative for the creation of automated and efficient disease diagnosis systems arises, as they are essential in aiding


farmers in making timely and precise determinations. In recent years, rising consumer demand and higher living standards have driven the expansion of grape cultivation. Grapes are a widely


consumed fruit recognized for its nutritional significance, owing to the presence of diverse beneficial constituents. The active constituents found in grape extracts exhibit


antioxidant, antibacterial, anti-inflammatory, and anti-carcinogenic properties, rendering them a valuable resource in the pharmaceutical industry, particularly for the treatment of


hypertension. Gavhale et al. 1 acknowledged the significance of identifying plant leaf diseases in a simple and straightforward manner to facilitate the progress of agriculture. The authors


investigated the processes of image acquisition, preprocessing methods for feature extraction, and classification using neural networks. They also analyzed the benefits and drawbacks


associated with each of these techniques. According to Ampatzidis et al. 2, grape cultivation encounters several obstacles during growth, such as vulnerability to unfavorable weather


patterns, ecological elements, insect infestations, bacteria, and fungi. Various grapevine foliar diseases, including black rot, black measles, leaf blight, and downy mildew, can have


detrimental effects on grape production and quality, resulting in considerable economic losses for grape growers. Zhang et al. 3 employed advanced deep learning models, namely enhanced


GoogLeNet and Cifar10, to achieve accurate identification of leaf diseases. By adjusting the parameters, two enhanced models were devised to facilitate the training and testing of neural


networks on a set of nine distinct categories of maize leaf images. Rathnakumar et al. 4 presented a framework that provides a rapid, accurate, and pragmatic method for detecting and


managing leaf diseases. The methodology employed by the researchers entailed the utilization of a multiclass Support Vector Machine (SVM) grouping algorithm to facilitate the identification


and classification of leaf diseases. In contemporary times, there have been notable developments in deep learning, specifically Convolutional neural networks (CNNs), which have demonstrated


encouraging outcomes in diverse image classification assignments. Convolutional neural networks (CNNs) possess the capacity to acquire and extract significant features from images, thereby


facilitating the discrimination of distinct object categories. In the context of our study, CNNs are employed to differentiate between various types of grape leaf diseases. Furthermore, the


utilization of data augmentation techniques can augment the efficacy of Convolutional neural networks (CNNs) by amplifying the heterogeneity and volume of the training data, thereby


resulting in better generalization abilities5,6. Numerous investigations have been carried out to examine the identification and assessment of plant ailments through the utilization of deep


learning and machine learning methodologies. Ferentinos 7 suggested applying Convolutional neural network (CNN) models to identify and classify plant diseases. The methodology involved


utilizing basic leaf images of healthy and diseased plants. Barbedo 8 proposed a data augmentation technique to enhance the database's variety of images. The approach concentrates on


particular lesions and spots rather than the entire leaf. This methodology has improved the ability to identify distinct illnesses impacting a single leaf. Within the realm of grape leaf


illnesses, Ji et al. 9 established a unified framework using Convolutional neural networks (CNNs) to distinguish between grape leaves affected by common diseases including black rot,


esca, and isariopsis leaf spot and unaffected leaves. Using improved Convolutional neural networks (CNNs), Liu et al. 10 presented a novel method for disease identification in grape leaves.


Their strategy entailed combining photographs from the field with those from open sources. With the use of Convolutional neural networks (CNNs), Hasan et al. 11,12 attempted to make it


easier to identify and classify diseases in grape leaves. K-means clustering for segmentation, VGG16 transfer learning for feature extraction, and CNNs for classification were only some of


the image processing methods used in the authors' research. Mohammed et al. 13 developed a method based on artificial intelligence to identify and categorize illnesses affecting grape


leaves. In order to accurately diagnose diseases in grape leaves, Lu et al. 14 used transformer and ghost-Convolutional networks. Ansari et al. 15 used a unique method including support


vector machines and image processing to identify and classify grape leaf diseases. Researchers used a multi-stage process that included gathering data, cleaning it up with filters,


segmenting with fuzzy C means, extracting features with principal component analysis, and classifying with PSO SVM, BPNN, and random forest. Suo et al. 16 created the GSSL method for


diagnosing and categorizing illnesses. Grape leaf photos have their texture improved using a variety of image processing techniques, such as the Gaussian filter, Sobel smoothing, denoising,


and Laplace operator. For the purpose of diagnosing plant illnesses from images of their leaves, Thakur et al. 17 proposed using VGG-ICNN, a lightweight Convolutional neural network. In order to


speed up the identification of grape leaf diseases, Ashokkumar et al. 18 used a region-based Convolutional neural network (CNN) technique, more precisely the Grape Leaf Disease Detection


Technique (GLDDT), with a Faster Region-based Convolutional neural network (FRCNN). To diagnose illnesses in tomato, apple, and grape plants in real time, Yağ et al. 19 developed a robust


hybrid classification model that combines machine learning and deep learning techniques. The model also includes a swarm optimization-based feature selection procedure. Jeong et al. 20


examined two DNN models for the classification and segmentation of plant diseases. By integrating a multiclass support vector machine (SVM) with image processing, Javidan et al. 21 provided


a different approach to disease detection and classification in grape leaves. Alajas et al. 22 describe how to distinguish healthy grape leaves from fungus-infected ones and how to count


the total number of spots. This was achieved with the use of cutting-edge tools in computer vision, machine learning, and computational intelligence. To diagnose the mildew disease in pearl


millet, Coulibaly et al. 23 developed a transfer learning strategy with feature extraction. Results from the pre-trained VGG16 model, which was trained on limited data, were encouraging.


For the automatic detection and diagnosis of illnesses affecting rice, maize, and other crops, Chen et al. 24 employed the VGGNet and Inception modules. The suggested method significantly


outperforms other cutting-edge approaches in terms of performance; on the open dataset, it obtains a validation accuracy of at least 91.83%. The proposed method's average accuracy for


classifying photos of rice plants reaches 92.00% even under complex background conditions. Abbas et al. 25 proposed a DenseNet121 model for tomato disease diagnosis that uses both synthetic


and real photos as training data. Here, the Conditional Generative Adversarial Network (C-GAN) artificially creates images of tomato plant leaves to complement the data. The suggested


strategy demonstrates its superiority over existing approaches. An Artificial Neural Network (ANN) is a nonlinear statistical model that captures intricate relationships between inputs and outputs in order to discover novel patterns. According to the authors of 26,27,28, ANNs are currently among the most widely used machine learning models, with a wide range of


applications in the fields of agriculture, healthcare, environment, finance, etc. to help with decision-making, estimation, classification, and prediction tasks. This paper describes


research that also employs a Deep Convolutional neural network (DCNN) Classifier Model, a type of ANN, for detecting diseases in grape leaves. The primary goal of this study is to classify


grape leaf images into four categories: Black Rot, ESCA, Leaf Blight, and Healthy. Utilizing the VGG16 architecture with three additional CNN layers achieved the


best performance, which in turn improved the generalizability and prediction accuracy of the DCNN classifier. It is vital that farmers immediately and precisely evaluate any issues with the


leaves of grape plants in order to reduce the negative consequences of diseases on grape cultivation and to maintain the overall health and productivity of grapevines. DATA PREPARATION DATASET


The present investigation sourced its dataset from Kaggle 29, a publicly accessible online platform. The grape images in the dataset are all of 256 × 256 pixels in size. As shown in Table


1, the collection is unbalanced and has a total of 9027 photos from four different classes of grape leaf disease. The dataset comprises both asymptomatic and symptomatic images, encompassing


manifestations such as black rot, ESCA, and leaf blight symptoms, as presented in Fig. 1. In diagnosing grape diseases, the leaf is utilised instead of the flower, fruit, or stem. The


persistent presence of grape leaves, in contrast to the ephemeral appearance of blossoms and fruit, accounts for this phenomenon. Furthermore, the leaf exhibits a higher degree of


sensitivity towards the overall health of plants and generally imparts more comprehensive information in contrast to the stem. The grape's stem may not promptly exhibit symptoms of


illness. DATA AUGMENTATION Data augmentation involves applying transformations to existing images to increase the diversity and quantity of training data. Random rotations, flips, zooms, and


shifts are commonly used to simulate variations in disease patterns, lighting conditions, and camera angles. Data augmentation improves the model's generalization ability, reduces


overfitting risks, and enhances the robustness and accuracy of grape leaf disease image classification (see Fig. 2). The following transformations are applied to grape leaf images to create


augmented images: * Zoom: the image is rescaled by a random factor drawn from the interval [1 − x, 1 + x], with x = 0.1. * Horizontal flip: the pixel columns of the image are reversed, producing a mirror image of the original about a vertical axis. SPLITTING THE DATA A total of 7222 images, or eighty percent of the data, were set aside for training, while 1805 images, or twenty percent, were set aside for testing. A further 20% of the training data, i.e., 1444 images, were used for validation. The "flow_from_directory()" method of ImageDataGenerator is utilized to partition the data into training and testing sets while preserving the class distribution. The training generator is configured to use a portion of the available data by setting the "subset" option to "training"; for the test generator, omitting the subset parameter means the entire directory is used.
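As a concrete illustration of the augmentation and splitting described above, the following is a minimal Keras sketch; the directory path, batch size, and pixel rescaling are illustrative assumptions rather than values reported in the paper.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical dataset location; the actual directory layout is an assumption.
TRAIN_DIR = "grape_leaf_dataset/train"

# Augmentation as described: zoom on [1 - 0.1, 1 + 0.1] and horizontal flips,
# with 20% of the training images held out for validation via validation_split.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # pixel scaling is an assumed preprocessing step
    zoom_range=0.1,
    horizontal_flip=True,
    validation_split=0.2,
)

train_gen = train_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=(256, 256),   # dataset images are 256 x 256 pixels
    batch_size=32,            # assumed batch size
    class_mode="sparse",      # integer labels, matching sparse categorical cross-entropy
    subset="training",
)

val_gen = train_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=(256, 256),
    batch_size=32,
    class_mode="sparse",
    subset="validation",
)
```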


METHODOLOGY In this study, Convolutional neural network (CNN) models with and without augmentation are developed and compared for


their performance. Several techniques, including image preprocessing and data augmentation, are implemented to enhance the classification model's performance. We further


designed a Deep Convolutional neural network (DCNN) classifier model employing the VGG16 architecture with three extra CNN layers and assessed its performance. The goal was to identify the model that performs best on the grape leaf disease classification task. CONVOLUTIONAL NEURAL NETWORK At the outset, certain machine learning methodologies commence by extracting


features, which are subsequently employed to train a classifier with the aim of automating the classification of leaf diseases. Nevertheless, the process of manually engineering features is


a time-consuming endeavor. The rapid progress in deep learning-based models has facilitated researchers' efforts to achieve autonomous representation and feature learning 30. This study


presents the development of a novel convolutional neural network (CNN) architecture that demonstrates the ability to automatically extract features through convolution and pooling. The


feature vectors that have been extracted are then utilized for the purpose of categorization. A Convolutional neural network (CNN) is a specialized deep learning algorithm that has been


specifically developed for the purpose of analyzing visual data, with a particular focus on images. The advent of computer vision has brought about a significant transformation, resulting in


notable achievements in various tasks such as image classification, object detection, and image segmentation. Several published studies have employed Convolutional neural network (CNN)


models for the purpose of object recognition and classification 31,32. The general framework employed in this study is depicted in Fig. 3. The following is a comprehensive elucidation of


each stratum within the proposed conceptual framework. Convolutional layers are the essential components of a Convolutional neural network (CNN); these layers consist of filters (or kernels) that slide across the input image. Feature maps are generated by performing element-wise multiplications and summations. Each filter is optimized for identifying a particular class of visual


characteristics, such as edges, corners, or textures. The CNN model incorporates three convolutional layers that utilize 3 × 3 filters. Pooling layers are commonly used after convolutional


layers to decrease the spatial dimensions of the feature maps. A frequently employed pooling technique in the field is known as maxpooling. This technique involves downsampling the feature


maps by selecting the maximum value within each pooling region. This procedure facilitates the extraction of essential characteristics while simultaneously mitigating computational


complexity. This study employs three maxpooling layers. The role of activation functions in the architecture of Convolutional neural networks (CNNs) is to introduce non-linearities.


Non-linear activation functions, such as the Rectified Linear Unit (ReLU), are frequently employed after each convolutional and fully connected layer. The Rectified Linear Unit (ReLU)


activation function is designed to transform the input values in a neural network. It operates by replacing all negative input values with zero while leaving positive input values unchanged.


This characteristic of ReLU allows the neural network to effectively learn intricate non-linear relationships between the input and output variables. The utilization of softmax activation


is implemented in the ultimate layer to classify four distinct diseases affecting grape leaves. Fully connected layers are commonly located at the terminal stage of convolutional neural


network (CNN) architectures. These layers establish connections between each neuron in a given layer and every neuron in the subsequent layer. This connectivity allows the network to capture


and comprehend high-level features, facilitating the generation of predictions. In the context of multiclass classification, it is common to employ a softmax activation function in the


output layer. This function facilitates the generation of a probability distribution across various classes. The loss function serves to measure the discrepancy between the predicted labels


and the actual labels. The cross-entropy loss function is frequently employed in classification tasks, often in conjunction with the softmax activation function. The primary goal of training


is to minimize the loss function by iteratively adjusting the weights and biases of the network using the backpropagation algorithm. The utilization of a sparse categorical cross-entropy


loss function is implemented in this study. The backpropagation algorithm is utilized to calculate the gradients of the loss function in relation to the parameters of the neural network.


An optimization algorithm, such as stochastic gradient descent (SGD) or one of its variants, then updates the network's weights and biases according to these gradients; the Adam optimizer is employed in this research. During the training phase, the model's predictions are iteratively refined through backpropagation.
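A minimal Keras sketch of a CNN along these lines, with three 3 × 3 convolutional layers each followed by max-pooling, a four-way softmax output, sparse categorical cross-entropy, and the Adam optimizer, is given below; the filter counts and the width of the dense layer are illustrative assumptions, since the paper does not report them.

```python
from tensorflow.keras import layers, models

def build_cnn(input_shape=(256, 256, 3), num_classes=4):
    """Baseline CNN sketch: three 3x3 conv layers, each followed by max-pooling."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),    # filter counts are assumed
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),            # dense width is assumed
        layers.Dense(num_classes, activation="softmax"), # four grape leaf classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```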


The initial model employed in this study was a Convolutional neural network (CNN) model, which did not incorporate any form of data augmentation. The model was trained using a dataset consisting of labeled photographs. The


model's performance was evaluated using an additional validation dataset, and the training accuracy and loss were subsequently recorded. Despite the model's elevated training


accuracy, a notable disparity between the training and validation accuracy was observed, thereby indicating the potential occurrence of overfitting. In order to mitigate overfitting and


enhance generalization, a convolutional neural network (CNN) model with augmentation techniques was employed. The training dataset underwent two picture augmentation techniques, namely zoom


and horizontal flip. The inclusion of supplementary data expanded the scope and diversity of the training dataset, thereby augmenting the model's capacity for generalization. The


monitoring and comparison of training and validation accuracies were conducted to evaluate the influence of augmentation. Algorithm 1 provides a comprehensive description of the training procedure for Convolutional neural networks (CNNs).
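Assuming the hypothetical train_gen/val_gen generators and build_cnn helper sketched earlier, the training loop behind Algorithm 1 might be exercised as follows, with per-epoch training and validation accuracy recorded for comparison.

```python
# Train the baseline CNN and track training/validation accuracy per epoch.
model = build_cnn()

history = model.fit(
    train_gen,
    validation_data=val_gen,
    epochs=30,   # the epoch count is an assumption
)

# history.history holds the per-epoch curves of the kind plotted in Figs. 6-8.
print(history.history["accuracy"][-1], history.history["val_accuracy"][-1])
```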


TRANSFER LEARNING Deep learning algorithms present considerable obstacles, as they necessitate extensive datasets and prolonged training periods due to the multitude of weights and millions of parameters found within deep networks. Data augmentation is a methodology employed to increase the size of a dataset by implementing


transformations on images, thus reducing the occurrence of overfitting. Moreover, the utilization of Graphical Processing Units (GPUs) enables the effective allocation of computational


resources for the purpose of training deep neural networks. The process of integrating these components requires significant effort and financial resources. Transfer learning has emerged as


a viable approach to achieving precise classification with a reduced number of training samples. The concept of transfer learning entails the utilization of a pre-existing model that was


initially created for a specific task, but is subsequently employed as a foundation for a distinct yet interconnected task. This practice leads to enhanced performance, as visually


represented in Fig. 4. The goal of this approach is to apply knowledge and representations learned on one task to a related one. Convolutional neural network (CNN) architectures can be pre-trained, allowing researchers to use features derived from the final layer. These features can be combined with different classifiers prior to the implementation of fully


connected layers. The pre-training of these architectures has been shown to result in enhanced performance 33. Pan et al. 34 provide a comprehensive analysis in which they elucidate the


concept of transfer learning and illustrate its practical application in leveraging pre-existing features acquired from one dataset to facilitate training on another dataset. The existing


body of literature presents a wide range of Convolutional neural network (CNN) architectures, which are carefully chosen for specific tasks based on various considerations such as


classification accuracy, model complexity, and computational efficiency. The model employed in the study undergoes a series of trials, from which the most successful and efficient model is


selected. VGG16 The VGG16 architecture, a Convolutional neural network (CNN) proposed by 35, is widely acknowledged as one of the most effective vision models currently in existence. The


nomenclature "VGG16" is derived from its architecture, which contains sixteen layers with learnable parameters (weights). The aforementioned robust model


accepts a 224 × 224 pixel image as its input and generates a vector of dimensions 1000, which signifies the probabilities associated with each class. The VGG16 architecture consists of a


total of 13 convolutional layers, 3 fully connected layers, and 5 pooling layers, giving 21 layers in all, of which only sixteen carry learnable weights. The pooling layers are implemented by


employing 2 × 2 filters with a stride of 2, resulting in a reduction of spatial dimensions. In contrast, the convolutional layers employ 3 × 3 filters with a stride of 1 and consistently


apply identical padding. There are three fully connected layers (FC) that terminate the network. Within the realm of architecture, the rectified linear unit (ReLU) is employed as the


activation function for each hidden layer, while the final fully connected layer uses a softmax activation to support multiclass classification, assigning a probability to each class. The network is large, with approximately 138 million parameters, which enables it to learn complex representations and perform well on a wide range of visual tasks.
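For reference, the pretrained VGG16 convolutional base can be loaded with a standard Keras call (shown here as generic usage, not code taken from the paper); include_top=False drops the 1000-class ImageNet head so the learned features can be reused.

```python
from tensorflow.keras.applications import VGG16

# Load VGG16 with ImageNet weights but without the 1000-class classifier head.
base_model = VGG16(weights="imagenet", include_top=False, input_shape=(256, 256, 3))
base_model.trainable = False   # freeze the pretrained weights for transfer learning

base_model.summary()           # about 14.7 million parameters in the convolutional base
```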


PROPOSED DEEP CONVOLUTIONAL NEURAL NETWORK CLASSIFIER MODEL BASED ON VGG16 Figure 5 depicts the system implementation of the deep convolutional neural network (DCNN) classifier. The


Convolutional neural network (CNN) is trained utilizing the ImageNet datasets, which encompass a vast array of generic images. However, the dataset lacks a substantial number of images


specifically pertaining to grape leaf diseases. As a result, the utilization of a pre-existing network would prove insufficient for accurately detecting and classifying diseases affecting


grape leaves. Therefore, it becomes imperative to modify and adapt the existing network to effectively address this specific task. In order to tackle this challenge, we present a proposed


model for a feature-based deep convolutional neural network (DCNN) classifier designed for consistent adaptability. The proposed model leverages a deep learning


architecture, wherein a pre-trained model such as VGG16 serves as the fundamental basis. Using the proposed deep convolutional neural network (DCNN) classifier model, the initial 1000


classes in ImageNet are replaced with a more targeted classification over the four grape leaf classes considered in this study (as shown in Fig. 5). Algorithm 2 gives a detailed explanation of how the proposed DCNN classifier model is trained, outlining the steps required for accurate disease detection on grape leaves.
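The paper does not list the exact configuration of the added layers, so the following is only a plausible sketch of the described design under stated assumptions: a frozen VGG16 base, three additional convolution/max-pooling blocks (filter counts assumed), and a four-way softmax head.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_dcnn_classifier(input_shape=(256, 256, 3), num_classes=4):
    """Sketch of a VGG16-based DCNN: frozen base plus three extra Conv/MaxPooling blocks."""
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # reuse ImageNet features; only the added layers are trained

    model = models.Sequential([
        base,
        layers.Conv2D(256, (3, 3), activation="relu", padding="same"),  # assumed filters
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),  # Black Rot, ESCA, Leaf Blight, Healthy
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```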


EXPERIMENTAL RESULTS SYSTEM SPECIFICATION The code was implemented on a Windows 11 system with an Intel Core i5 processor and 8 GB of RAM. Anaconda, an integrated Python distribution that streamlines deployment and package


management, enabled the code execution. Python served as the primary programming language for writing the code, leveraging its versatility and extensive libraries for ML tasks. PERFORMANCE


EVALUATION The present investigation focuses on the issue of disease identification in grape leaves, specifically as a multiclass classification problem. In this study, a deep convolutional


neural network (DCNN) classifier model is employed, which is based on the VGG16 pretrained model. The primary objective of this model is to accurately classify images into different disease


categories. To evaluate its performance, the DCNN model is compared with a conventional CNN model. Various techniques are utilized to improve the performance of the classification model,


such as data augmentation or non-augmentation, preprocessing, and adjusting the number of epochs. In contrast to previous methodologies, such as those presented by Ji et al. 9, our


proposed methodology exhibits superior performance. The CNN model, as shown in Fig. 6, shows signs of overfitting because the training accuracy is higher than the validation accuracy. This


discrepancy can be attributed to the limited diversity present in the training data, resulting in constrained generalization capabilities. Applying augmentation techniques is one of the most effective ways to prevent overfitting and improve model generalizability, which raises both training and validation accuracy (see Fig. 6). The accuracy of the CNN


model exhibits a consistent upward trend as the number of epochs increases, whereas the loss curve demonstrates rapid convergence (see Fig. 7). In the context of the DCNN classification


model (as depicted in Fig. 8), it is observed that the training accuracy and loss exhibit a positive trend over the course of training. However, in the absence of augmentation techniques,


the model encounters difficulties in effectively generalizing to novel data, resulting in a decline in validation accuracy. The performance metrics for each convolutional neural network


(CNN) model on different varieties of grape leaf diseases are illustrated in Fig. 9. Through the process of augmentation, it has been observed that the performance metric values of all


disease varieties consistently surpass the threshold of 90%, with the exception of Black Rot. The CNN model with augmentation demonstrates a classification accuracy of 100% for the


Healthy leaf variety. The CNN model augmented with additional data exhibits superior performance in terms of F1-score, Recall, Precision, and Accuracy, achieving a remarkable 96% (as shown


in Fig. 10) compared to the CNN model without augmentation. Figure 11 displays the performance indicators of the DCNN classifier model for each variety of grape leaf disease. With the


exception of Black Rot, all disease types exhibit performance measure values exceeding 95%. The DCNN classifier model demonstrates a precision rate of 100% in accurately classifying the


Healthy and Leaf Blight varieties. Based on the data presented in Table 2, it is apparent that the DCNN classifier model exhibits superior performance in terms of F1-score, Recall,


Precision, and Accuracy. Specifically, the DCNN model achieves a remarkable 99% in these metrics, surpassing the CNN model utilized in this investigation by a margin of 3%. The DCNN


Classifier Model, which was implemented using the open-source Keras framework built on TensorFlow, demonstrates a notable precision of 99% when evaluated on the test data. This surpasses the


previously reported results shown in Table 2. To assess the performance of the models, a confusion matrix over the 1805 test images is displayed in Fig. 12; its entries are whole counts. Analysis of the confusion matrix shows that the DCNN model outperforms the other two models: the Healthy class is classified perfectly, followed by ESCA, Leaf Blight, and Black Rot.
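As a hedged illustration of how these metrics and the confusion matrix can be computed with scikit-learn, assuming a hypothetical non-shuffled test generator test_gen built like the ones above:

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Predict classes for the held-out test images (test_gen is assumed, created with shuffle=False).
probs = model.predict(test_gen)
y_pred = np.argmax(probs, axis=1)
y_true = test_gen.classes

# Per-class precision, recall and F1-score, overall accuracy, and the confusion matrix.
print(classification_report(y_true, y_pred, target_names=list(test_gen.class_indices)))
print(confusion_matrix(y_true, y_pred))
```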


COMPARATIVE ANALYSIS The performance comparison between the suggested DCNN model and a number of models from the literature is shown in Table 3. A test accuracy of 98.57% was achieved by Ji et al. 9 using the UnitedModel architecture. Liu et al. 10 obtained a somewhat lower test accuracy of 97.22% using the DICNN


architecture. Hasan et al. 11 reported a 91.37% test accuracy using a CNN architecture. Out of all the models tested, the Deep Convolutional neural network (DCNN) architecture utilized in


this study yielded the best test accuracy, at 99.06%. The DCNN model developed in this study is based on the popular and extensively used VGG16 architecture and exploits the information encoded in the pretrained VGG16 layers, while the additional convolutional and max-pooling layers added to the DCNN classifier learn task-specific features and provide the flexibility required to capture complex patterns in the grape leaf disease data. Overall, this approach improves performance by combining the benefits of transfer learning with adaptability to the specifics of the grape leaf disease identification task. CONCLUSION This article presents a proposed method that is effective in distinguishing between leaves that


are healthy and those that are diseased. The research employs Convolutional neural networks (CNN) both with and without augmentation techniques, in addition to a DCNN Classifier model that


is based on the VGG16 architecture. Augmentation has been recognized as a valuable technique for improving generalization capabilities and mitigating overfitting issues in convolutional


neural network (CNN) models. The Deep Convolutional Neural Network (DCNN) model outperforms the other models because it builds on the VGG16 architecture and adds supplementary Convolutional Neural Network (CNN) layers, which highlights its suitability for image classification tasks. Demonstrating a training accuracy of 99.18% and a test accuracy of 99.06%, the deep


convolutional neural network (DCNN) classifier shows considerable promise for accurately detecting grape leaf diseases. To further improve the performance of grape leaf disease classification,


future research could look into new augmentation techniques, optimization of hyperparameters, and integration of state-of-the-art deep learning architectures. DATA AVAILABILITY The datasets


generated and/or analysed during the current study are available in the [Rajarshi Mandal] repository, [https://www.kaggle.com/datasets/rm1000/grape-disease-dataset-original]. REFERENCES *


Gavhale, K. R. & Gawande, U. An overview of the research on plant leaves disease detection using image processing techniques. _IOSR J. Comput. Eng. (IOSR-JCE)_ 16(1), 10–16 (2014).


Article  Google Scholar  * Ampatzidis, Y., De Luigi, B. & Andrea, L. iPathology: Robotic applications and management of plants and plant diseases. _Sustainability_ 9(6), 1010 (2017).


Article  Google Scholar  * Zhang, X., Qiao, Y., Meng, F., Fan, C. & Zhang, M. Identification of maize leaf diseases using improved deep convolutional neural networks. _IEEE Access_ 6,


30370–30377 (2018). Article  Google Scholar  * Rathnakumar, A. J. & Balakrishnan, S. Machine learning based grape leaf disease detection. _J. Adv. Res. Dyn. Control Syst._ 10(08),


775–780 (2018). Google Scholar  * Nagaraju, M., Chawla, P., Upadhyay, S. & Tiwari, R. Convolution network model-based leaf disease detection using augmentation techniques. _Expert Syst._


39(4), e12885 (2022). Article  Google Scholar  * Nagaraju, M., Chawla, P., & Tiwari, R. (2022). An effective image augmentation approach for maize crop disease recognition and


classification. In: _International conference on computational intelligence and smart communication_ (63–72). Springer Nature Switzerland. * Ferentinos, K. P. Deep learning models for plant


disease detection and diagnosis. _Comput. Electr. Agric._ 145, 311–318 (2018). Article  Google Scholar  * Barbedo, J. G. A. Plant disease identification from individual lesions


and spots using deep learning. _Biosyst. Eng._ 180, 96–107 (2019). Article  Google Scholar  * Ji, M., Zhang, L. & Wu, Q. Automatic grape leaf diseases identification via


UnitedModel based on multiple convolutional neural networks. _Inf. Process. Agric._ 7(3), 418–426 (2020). Google Scholar  * Liu, B. _et al._ Grape leaf disease identification using improved


deep convolutional neural networks. _Front. Plant Sci._ 11, 1082 (2020). Article  PubMed  PubMed Central  Google Scholar  * Hasan, M. A. _et al._ Identification of grape leaf diseases using


convolutional neural network. _J. Phys._ 1641(1), 012007 (2020). Google Scholar  * Hasan, M. A., Riyanto, Y. & Riana, D. Klasifikasi penyakit citra daun anggur menggunakan model


CNN-VGG16. _J. Teknologi dan Sistem Komputer_ 9(4), 218–223 (2021). Article  Google Scholar  * Mohammed, K. K., Darwish, A. & Hassenian, A. E. Artificial intelligent system for grape leaf diseases classification. In _Artificial Intelligence for Sustainable Development: Theory, Practice and Future Applications_ 19–29 (2021). * Lu, X. _et al._ A hybrid model of


ghost-convolution enlightened transformer for effective diagnosis of grape leaf disease and pest. _J. King Saud Univ.-Comput. Inf. Sci._ 34(5), 1755–1767 (2022). Google Scholar  * Ansari, A.


S. _et al._ Improved support vector machine and image processing enabled methodology for detection and classification of grape leaf disease. _J. Food Qual._ 2022, 9502475 (2022). Article 


Google Scholar  * Suo, J. _et al._ Casm-amfmnet: a network based on coordinate attention shuffle mechanism and asymmetric multi-scale fusion module for classification of grape leaf diseases.


_Front. Plant Sci._ 13, 846767 (2022). Article  PubMed  PubMed Central  Google Scholar  * Thakur, P. S., Sheorey, T. & Ojha, A. VGG-ICNN: A lightweight CNN model for crop disease


identification. _Multimedia Tools Appl_ 82(1), 497–520 (2023). Article  Google Scholar  * Ashokkumar, K., Parthasarathy, S., Nandhini, S. & Ananthajothi, K. Prediction of grape leaf


through digital image using FRCNN. _Measur. Sens._ 24, 100447 (2022). Article  Google Scholar  * Yağ, İ. & Altan, A. Artificial intelligence-based robust hybrid algorithm design and


implementation for real-time detection of plant diseases in agricultural environments. _Biology_ 11(12), 1732 (2022). Article  PubMed  PubMed Central  Google Scholar  * Jeong, S., Jeong, S.


& Bong, J. Detection of tomato leaf miner using deep neural network. _Sensors_ 22(24), 9959 (2022). Article  ADS  PubMed  PubMed Central  Google Scholar  * Javidan, S. M., Banakar, A.,


Vakilian, K. A. & Ampatzidis, Y. Diagnosis of grape leaf diseases using automatic K-means clustering and machine learning. _Smart Agric. Technol._ 3, 100081 (2023). Article  Google


Scholar  * Alajas, O. J. Y. _et al._ Grape pseudocercospora leaf specked area estimation using hybrid genetic algorithm and recurrent neural network. _J. Adv. Comput. Intell. Intell. Inf._


27(1), 35–43 (2023). Article  Google Scholar  * Coulibaly, S., Kamsu-Foguem, B., Kamissoko, D. & Traore, D. Deep neural networks with transfer learning in millet crop images. _Comput.


Indus._ 108, 115–120 (2019). Article  Google Scholar  * Chen, J., Chen, J., Zhang, D., Sun, Y. & Nanehkaran, Y. A. Using deep transfer learning for image-based plant disease


identification. _Comput. Electr. Agric._ 173, 105393 (2020). Article  Google Scholar  * Abbas, A., Jain, S., Gour, M. & Vankudothu, S. Tomato plant disease detection using transfer


learning with C-GAN synthetic images. _Comput. Electr. Agric._ 187, 106279 (2021). Article  Google Scholar  * Prasad, K. V., Hanumesh, V., Kumar Swamy, K. & Renuka, S. Pumpkin seeds


classification: Artificial neural network and machine learning methods. _J. Int. Acad. Phys. Sci._ 27(1), 22–23 (2023). Google Scholar  * Hanumesh, V., Prasad, K. V., Renuka, S. &


Kumar Swamy, K. Multiclass classification of dry beans using artificial neural network. _J. Int. Acad. Phys. Sci._ 27(2), 109–124 (2023). Google Scholar  * Prasad, K. V., Vaidya, H. &


Shobha, Y. Multi-class brain tumour classification using convolutional neural network. _J. Int. Acad. Phys. Sci._ 27(2), 125–137 (2023). Google Scholar  *


https://www.kaggle.com/datasets/rm1000/grape-disease-dataset-original * Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural


networks. In: _Advances in neural information processing systems_, 25. * Kaur, P. _et al._ Recognition of leaf disease using hybrid convolutional neural network by applying feature


reduction. _Sensors_ 22(2), 575 (2022). Article  ADS  PubMed  PubMed Central  Google Scholar  * Mishra, A. M., Harnal, S., Gautam, V., Tiwari, R. & Upadhyay, S. Weed density estimation


in soya bean crop using deep convolutional neural networks in smart agriculture. _J. Plant Dis. Protect._ 129(3), 593–604 (2022). Article  CAS  Google Scholar  * Yosinski, J., Clune, J.,


Bengio, Y., & Lipson, H. (2014) “How transferable are features in deep neural networks?”. _Advances in neural information processing systems_, 27 * Pan, S. J. & Yang, Q. A survey on


transfer learning. _IEEE Trans. Knowl. Data Eng._ 22(10), 1345–1359 (2009). Article  Google Scholar  * Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. Preprint at arXiv:1409.1556 (2014). ACKNOWLEDGEMENTS "This study is supported via funding from Prince Sattam bin Abdulaziz University


project number (PSAU/2024/R/1445)". AUTHOR INFORMATION AUTHORS AND AFFILIATIONS * Department of Studies in Mathematics, Vijayanagara Sri Krishnadevaraya University, Ballari, Karnataka,


India Kerehalli Vinayaka Prasad & Hanumesh Vaidya * Department of Mathematics, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, Karnataka, India


Choudhari Rajashekhar * Department of Studies in Computer Science, Vijayanagara Sri Krishnadevaraya University, Ballari, Karnataka, India Kumar Swamy Karekal & Renuka Sali * Department


of Mathematics, College of Science and Humanities in Alkharj, Prince Sattam Bin Abdulaziz University, Alkharj, 11942, Saudi Arabia Kottakkaran Sooppy Nisar Authors * Kerehalli Vinayaka Prasad * Hanumesh Vaidya * Choudhari Rajashekhar * Kumar Swamy Karekal * Renuka Sali * Kottakkaran Sooppy Nisar CONTRIBUTIONS "K.V.P., C.R., H.V., K.S.K., R.S., K.S.N. wrote the main manuscript text and C.R., H.V., R.S. prepared figures.


All authors reviewed the manuscript." CORRESPONDING AUTHOR Correspondence to Kottakkaran Sooppy Nisar. ETHICS DECLARATIONS COMPETING INTERESTS The authors declare no competing


interests. ADDITIONAL INFORMATION PUBLISHER'S NOTE Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. RIGHTS AND


PERMISSIONS OPEN ACCESS This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any


medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The


images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is


not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission


directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. ABOUT THIS ARTICLE CITE THIS ARTICLE Prasad,


K.V., Vaidya, H., Rajashekhar, C. _et al._ Multiclass classification of diseased grape leaf identification using deep convolutional neural network (DCNN) classifier. _Sci Rep_ 14, 9002


(2024). https://doi.org/10.1038/s41598-024-59562-x * Received: 29 November 2023 * Accepted: 12 April 2024 * Published: 18 April 2024 * DOI: https://doi.org/10.1038/s41598-024-59562-x KEYWORDS * Convolutional neural network * Deep neural network


classifier * Visual Geometry Group * Support vector machine * Transfer learning

