PlantVillage2018 Potato Leaf Disease Detection Dataset and a Keras-CNN Detection Example

Abstract:

Collection: AI Cases - CV - Agriculture
Dataset: PlantVillage2018 potato leaf disease detection dataset
Dataset value: accurate disease identification; advancing smart agriculture.
Solution: TensorFlow-Keras framework, CNN model

1. Problem Description

Early detection of potato leaf diseases is challenging because of variation in crop varieties, disease symptoms, and environmental factors, all of which make the diseases hard to detect at an early stage. Various machine learning techniques have been developed to detect potato leaf diseases, but existing methods do not generalize across crop varieties and diseases because the models were trained and tested on plant leaf images from specific regions.

In this study, a multi-level deep learning model for potato leaf disease recognition was developed. At the first level, it uses YOLOv5 image segmentation to extract potato leaves from images of potato plants. At the second level, a novel deep learning technique based on a convolutional neural network detects early blight and late blight from the potato leaf images. The proposed model was trained and tested on the Potato Leaf Disease dataset, which contains 4,072 images collected from the central region of Punjab, Pakistan. The proposed technique achieved 99.75% accuracy on this dataset, and its performance was also evaluated on the PlantVillage dataset. Compared with state-of-the-art models, it delivered strong results in both accuracy and computational cost.

2. Dataset Contents

The Potato Disease Leaf Dataset (PLD) was released by PlantVillage for research on plant disease classification, focusing on diseases of potato leaves. It contains images of several common potato leaf conditions, including healthy leaves, early blight, and late blight. The dataset was released in 2018.

  • Training
    • Healthy: 816 images
    • Early_Blight: 1,303 images
    • Late_Blight: 1,132 images
  • Validation
    • Healthy: 102 images
    • Early_Blight: 163 images
    • Late_Blight: 151 images
  • Testing
    • Healthy: 102 images
    • Early_Blight: 162 images
    • Late_Blight: 141 images
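The per-class counts above can be checked after unpacking the archive. The sketch below assumes the directory layout `PLD_3_Classes_256/<split>/<class>/` used later in this article; the helper name `count_images` is invented for illustration.

```python
import pathlib

def count_images(root):
    """Count the files in each split/class sub-directory under root."""
    root = pathlib.Path(root)
    counts = {}
    for class_dir in sorted(p for p in root.rglob('*') if p.is_dir()):
        n = sum(1 for f in class_dir.iterdir() if f.is_file())
        if n:
            counts[str(class_dir.relative_to(root))] = n
    return counts

# e.g. count_images('PLD_3_Classes_256') should report the
# Training/Validation/Testing counts listed above
```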

Sample images from the dataset:

Dataset License

Database Contents License (DbCL) v1.0

Citation Requirements

The potato leaf images constitute the first dataset of its kind created in the Pakistan region. If you download and use the images in your research, please cite the paper (https://doi.org/10.3390/electronics10172064):

Multi-Level Deep Learning Model for Potato Leaf Disease Recognition

3. Detection Example

Approach

Farmers suffer economic losses and crop waste every year from various diseases of potato plants. Early blight and late blight are the major potato leaf diseases, and they are estimated to cause significant losses in potato yield. The images are therefore classified into three categories:

  • Healthy leaves
  • Early blight leaves
  • Late blight leaves

A convolutional neural network (CNN) model is built with the Keras library for image classification.

Install tensorflow-gpu 2.x

For TensorFlow installation steps, see the document "Installing the Deep Learning Framework TensorFlow" in the Ai-Basic chapter. Taking tensorflow_gpu-2.6.0 as an example, the compatibility table shows that this version requires Python 3.6-3.9.

conda create -n tensorflow-gpu-2-6-p3-9 python=3.9
conda activate tensorflow-gpu-2-6-p3-9
conda install tensorflow-gpu==2.6
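After installation, the TensorFlow version and GPU visibility can be checked from Python (an empty GPU list means TensorFlow will fall back to the CPU):

```python
import tensorflow as tf

# Confirm the installed version and whether a GPU is visible
print(tf.__version__)
print('GPUs:', tf.config.list_physical_devices('GPU'))
```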

Import Libraries

import os
import pathlib

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import models, layers

Load the Dataset

Read the dataset from the image directory:

# Batch size and image size (consistent with the model summary below)
Batch_Size = 32
Image_Size = 256

Current_Dir = os.getcwd()
dataset_dir = pathlib.Path(Current_Dir) / 'PLD_3_Classes_256'
dataset = tf.keras.preprocessing.image_dataset_from_directory(
   dataset_dir, batch_size = Batch_Size, image_size = (Image_Size, Image_Size), shuffle = True)

Split the Dataset

Split the dataset into training, validation, and test sets in an 8:1:1 ratio:

def split_dataset(ds, train_split=0.8, val_split=0.1, test_split=0.1, shuffle=True, shuffle_size=10000):
   if shuffle:
       ds = ds.shuffle(shuffle_size, seed = 10)
       
   ds_size = len(ds)
   train_size = int(train_split * ds_size)
   val_size = int(val_split * ds_size)
   
   train_ds = ds.take(train_size)
   val_ds = ds.skip(train_size).take(val_size)
   test_ds = ds.skip(train_size).skip(val_size)
   
   return train_ds, val_ds, test_ds
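As a quick sanity check, the function can be exercised on a small synthetic tf.data dataset; the function is repeated here so the snippet runs on its own. With 10 batches, an 8:1:1 split yields 8, 1, and 1 batches:

```python
import tensorflow as tf

def split_dataset(ds, train_split=0.8, val_split=0.1, test_split=0.1, shuffle=True, shuffle_size=10000):
    # Same function as above
    if shuffle:
        ds = ds.shuffle(shuffle_size, seed=10)
    ds_size = len(ds)
    train_size = int(train_split * ds_size)
    val_size = int(val_split * ds_size)
    train_ds = ds.take(train_size)
    val_ds = ds.skip(train_size).take(val_size)
    test_ds = ds.skip(train_size).skip(val_size)
    return train_ds, val_ds, test_ds

# 10 synthetic batches stand in for the image batches
ds = tf.data.Dataset.range(100).batch(10)
train_ds, val_ds, test_ds = split_dataset(ds)
print(len(list(train_ds)), len(list(val_ds)), len(list(test_ds)))  # 8 1 1
```

In the training code below, the three splits of the real image dataset are referred to as train_data, val_data, and test_data.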

Build the Model

The following code builds a convolutional neural network (CNN) model for image classification with the Keras library. An explanation of the code:

  1. Input and preprocessing layers:
    • Input_Shape defines the shape of the input images.
    • resize_and_rescale and data_augmentation are preprocessing layers that resize the images and increase data diversity.
  2. Convolutional and pooling layers:
    • The model contains several convolutional layers (Conv2D) and max-pooling layers (MaxPool2D).
    • The convolutional layers use varying numbers of filters (filters) with 3×3 kernels (kernel_size) and ReLU activation.
    • The pooling layers apply max pooling with 2×2 windows, reducing the size of the feature maps.
  3. Fully connected layers:
    • The Flatten layer flattens the 3D feature maps output by the convolutional layers into a 1D vector so they can feed into the fully connected layers.
    • The first fully connected layer (Dense) has 128 neurons with ReLU activation.
    • The second fully connected layer has 64 neurons with softmax activation for the multi-class task.
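The model below references `resize_and_rescale`, `data_augmentation`, and `Input_Shape`, none of which are defined elsewhere in this article. A plausible sketch is given here; the exact augmentation choices (flip directions, rotation factor) are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import models, layers

Image_Size = 256
Input_Shape = (Image_Size, Image_Size, 3)

# Resize to a uniform size and scale pixel values into [0, 1]
resize_and_rescale = models.Sequential([
    layers.Resizing(Image_Size, Image_Size),
    layers.Rescaling(1.0 / 255),
])

# Random flips and small rotations to diversify the training data
data_augmentation = models.Sequential([
    layers.RandomFlip('horizontal_and_vertical'),
    layers.RandomRotation(0.2),
])
```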
model = models.Sequential([
   # Adjust the size of the image and rescale its pixel values to a specific range
   resize_and_rescale,

   # Increasing the diversity of training data by applying a series of random transformations to the original image
   data_augmentation,
   
   # The first convolutional layer consists of 16 3x3 convolution kernels and ReLU activation function
   layers.Conv2D(filters = 16, kernel_size = (3, 3), activation = 'relu', input_shape = Input_Shape),

   # The first maximum pooling layer, 2x2 pooling window
   layers.MaxPool2D((2, 2)),

   # The second convolutional layer consists of 64 3x3 convolution kernels and ReLU activation function
   layers.Conv2D(64, (3, 3), activation = 'relu'),

   # The second maximum pooling layer, 2x2 pooling window
   layers.MaxPool2D((2, 2)),

   # The third convolutional layer consists of 128 3x3 convolution kernels and ReLU activation function
   layers.Conv2D(128, (3, 3), activation = 'relu'),

   # The third maximum pooling layer, 2x2 pooling window
   layers.MaxPool2D((2, 2)),

   # The fourth convolutional layer consists of 64 3x3 convolution kernels and ReLU activation function
   layers.Conv2D(64, (3, 3), activation = 'relu'),

   # The fourth maximum pooling layer, 2x2 pooling window
   layers.MaxPool2D((2, 2)),

   # The fifth convolutional layer consists of 128 3x3 convolution kernels and ReLU activation function
   layers.Conv2D(128, (3, 3), activation = 'relu'),

   # The fifth maximum pooling layer, 2x2 pooling window
   layers.MaxPool2D((2, 2)),

   # The sixth convolutional layer consists of 64 3x3 convolution kernels and ReLU activation function
   layers.Conv2D(64, (3, 3), activation = 'relu'),

   # The sixth maximum pooling layer, 2x2 pooling window
   layers.MaxPool2D((2, 2)),

   # Flatten layer, flatten the 3D output into one dimension for connection to the fully connected layer
   layers.Flatten(),

   # The first fully connected layer, consisting of 128 neurons and a ReLU activation function
   layers.Dense(128, activation = 'relu'),

   # The second fully connected layer, consisting of 64 neurons and a softmax activation function, is used for multi classification tasks
   layers.Dense(64, activation = 'softmax'),
])

Output:

Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape             Param #  
=================================================================
sequential_3 (Sequential)   (32, 256, 256, 3)         0        
_________________________________________________________________
sequential_4 (Sequential)   (32, 256, 256, 3)         0        
_________________________________________________________________
conv2d (Conv2D)             (32, 254, 254, 16)       448      
_________________________________________________________________
max_pooling2d (MaxPooling2D) (32, 127, 127, 16)       0        
_________________________________________________________________
conv2d_1 (Conv2D)           (32, 125, 125, 64)       9280      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (32, 62, 62, 64)         0        
_________________________________________________________________
conv2d_2 (Conv2D)           (32, 60, 60, 128)         73856    
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (32, 30, 30, 128)         0        
_________________________________________________________________
conv2d_3 (Conv2D)           (32, 28, 28, 64)         73792    
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (32, 14, 14, 64)         0        
_________________________________________________________________
conv2d_4 (Conv2D)           (32, 12, 12, 128)         73856    
...
Total params: 346,176
Trainable params: 346,176
Non-trainable params: 0

Train the Model
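The article does not show the compile step that must precede model.fit. A minimal sketch follows, assuming the integer labels produced by image_dataset_from_directory by default; a tiny stand-in model is defined so the snippet runs on its own (in the tutorial, `model` is the CNN built above):

```python
import tensorflow as tf
from tensorflow.keras import models, layers

# Stand-in for the CNN defined above, so this snippet is self-contained
model = models.Sequential([
    tf.keras.Input(shape=(8, 8, 3)),
    layers.Flatten(),
    layers.Dense(3, activation='softmax'),
])

# Sparse categorical cross-entropy matches the integer class labels that
# image_dataset_from_directory produces by default
model.compile(optimizer = 'adam',
              loss = tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics = ['accuracy'])
```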

Epochs = 50

# batch_size is omitted here: the batches are already formed by
# image_dataset_from_directory, and Keras rejects batch_size for dataset inputs
history = model.fit(train_data, epochs = Epochs, verbose = 1, validation_data = val_data)

Training log:

Epoch 1/50
102/102 [==============================] - 110s 345ms/step - loss: 0.8258 - accuracy: 0.7889 - val_loss: 0.6926 - val_accuracy: 0.7891
Epoch 2/50
102/102 [==============================] - 36s 329ms/step - loss: 0.6639 - accuracy: 0.8009 - val_loss: 0.7018 - val_accuracy: 0.7708
Epoch 3/50
102/102 [==============================] - 38s 355ms/step - loss: 0.6671 - accuracy: 0.7966 - val_loss: 0.6512 - val_accuracy: 0.8073
Epoch 4/50
102/102 [==============================] - 43s 401ms/step - loss: 0.6588 - accuracy: 0.7990 - val_loss: 0.6797 - val_accuracy: 0.7839
Epoch 5/50
102/102 [==============================] - 42s 393ms/step - loss: 0.6469 - accuracy: 0.8036 - val_loss: 0.7158 - val_accuracy: 0.7786
Epoch 6/50
...

Analyze the Results

Plot the training and validation curves of the model:

# Curves recorded by model.fit
train_acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
train_loss = history.history['loss']
val_loss = history.history['val_loss']

plt.figure(figsize = (15, 15))
plt.subplot(2, 3, 1)
plt.plot(range(Epochs), train_acc, label = 'Training Accuracy')
plt.plot(range(Epochs), val_acc, label = 'Validation Accuracy')
plt.legend(loc = 'lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(2, 3, 2)
plt.plot(range(Epochs), train_loss, label = 'Training Loss')
plt.plot(range(Epochs), val_loss, label = 'Validation Loss')
plt.legend(loc = 'upper right')
plt.title('Training and Validation Loss')

plt.show()

Visualize Predictions

Show a batch of potato leaf images with their actual labels, predicted labels, and confidence:

# Class names are inferred from the sub-directory names
class_name = dataset.class_names

plt.figure(figsize = (16, 16))

for batch_image, batch_label in train_data.take(1):
    # Predict the whole batch once, rather than once per image
    batch_prediction = model.predict(batch_image)

    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        image = batch_image[i].numpy().astype('uint8')
        label = class_name[batch_label[i]]

        plt.imshow(image)

        predicted_class = class_name[np.argmax(batch_prediction[i])]
        confidence = round(np.max(batch_prediction[i]) * 100, 2)

        plt.title(f'Actual: {label},\nPrediction: {predicted_class},\nConfidence: {confidence}%')
        plt.axis('off')

plt.show()

Final Result

The model reaches a final accuracy above 98%.
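The reported figure can be reproduced by running model.evaluate on the held-out test split. A self-contained sketch follows, with a stand-in model and synthetic data; in the tutorial, `model` and `test_data` come from the earlier steps:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import models, layers

# Stand-ins so the snippet runs on its own
model = models.Sequential([
    tf.keras.Input(shape=(4, 4, 3)),
    layers.Flatten(),
    layers.Dense(3, activation='softmax'),
])
model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics = ['accuracy'])

x = np.random.rand(16, 4, 4, 3).astype('float32')
y = np.random.randint(0, 3, size=(16,))
test_data = tf.data.Dataset.from_tensor_slices((x, y)).batch(8)

# evaluate returns the loss followed by each compiled metric
loss, acc = model.evaluate(test_data, verbose = 0)
print(f'test loss: {loss:.4f}, test accuracy: {acc:.4f}')
```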

4. Get the Case Bundle

The file package can be downloaded after logging in.

