
TensorFlow Case 2: Monkeypox Recognition, and a Loss-Function Bug

  • 🍨 This post is a study-log entry for the 🔗 365-day deep learning training camp
  • 🍖 Original author: K同学啊

    Contents

    • 1. The bug
    • 2. Model construction
      • 1. Data processing
        • 1. Import libraries
        • 2. Inspect the data directory
        • 3. Load the data
        • 4. Visualize the data
      • 2. Memory optimization
      • 3. Model construction
      • 4. Model training
        • 1. Hyperparameter setup
        • 2. Training
      • 5. Results
      • 6. Image prediction
      • 7. Attempted improvement

1. The bug

🤔 Thought process:

I first used tf.keras.losses.BinaryCrossentropy(from_logits=False) as the loss function, but I never updated the output layer to match: for that binary loss, the output layer should be a single neuron with a sigmoid activation. Since I left the output layer at 2 neurons, accuracy was stuck hovering around 0.6. I swapped in quite a few network architectures 😢 before realizing the loss function was the real culprit; switching to the multi-class loss tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), which matches a 2-neuron logits output, fixed it 😄.
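
To make the mismatch concrete, here is a minimal sketch (my own illustration, not part of the original code) of the two consistent output-layer/loss pairings in Keras:

import tensorflow as tf
from tensorflow.keras import layers

# Pairing A: binary head — 1 sigmoid neuron, the loss consumes probabilities.
head_a = layers.Dense(1, activation='sigmoid')
loss_a = tf.keras.losses.BinaryCrossentropy(from_logits=False)

# Pairing B: 2-class head — 2 raw-logit neurons, the loss applies softmax itself.
head_b = layers.Dense(2)  # no activation: the outputs are logits
loss_b = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

The bug above was mixing pairing A's loss with pairing B's head.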


📖 Takeaways:

tf.keras.losses.BinaryCrossentropy

tf.keras.losses.BinaryCrossentropy(from_logits=False) is TensorFlow's loss function for binary classification tasks. It computes the binary cross-entropy loss, a measure of the discrepancy between the model's predicted probability distribution and the true labels.

  • from_logits=False

    • Default: False.
    • Meaning: the model's outputs have already been through an activation function (e.g. sigmoid), i.e. they are probabilities between 0 and 1.
    • Effect: the loss is computed directly from those output probabilities.

Binary cross-entropy formula

With from_logits=False, the binary cross-entropy loss is computed as:

$$\text{loss} = -\bigl(y \cdot \log(p) + (1 - y) \cdot \log(1 - p)\bigr)$$

where:

  • $y$ is the true label (0 or 1).
  • $p$ is the model's predicted probability (between 0 and 1).
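
A quick numeric check of this formula against the Keras implementation (values invented for the example):

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=False)
y_true = [1.0, 0.0]
y_pred = [0.9, 0.2]  # predicted probabilities
print(bce(y_true, y_pred).numpy())  # ≈ 0.164, the mean of -log(0.9) and -log(1 - 0.2)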

tf.keras.losses.SparseCategoricalCrossentropy

tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) is TensorFlow's loss function for multi-class classification tasks. It computes the sparse categorical cross-entropy loss, which applies when the labels are integers (rather than one-hot encoded).

Parameters

  • from_logits=True
    • Default: False (so it must be set to True explicitly here).
    • Meaning: the model's outputs are raw, unactivated values (logits).
    • Effect: the loss applies a softmax to the outputs internally, then computes the categorical cross-entropy.

Categorical cross-entropy formula

With from_logits=True, the categorical cross-entropy loss is computed as:

$$\text{loss} = -\sum_{i} y_i \log(\text{softmax}(z_i))$$

where:

  • $y_i$ is the indicator for the true class (the sparse variant takes the integer class index and converts it to this one-hot form internally).
  • $z_i$ is the model's raw output for class $i$ (a logit, not passed through any activation).
  • $\text{softmax}(z_i)$ is the probability for class $i$ after the softmax.

Categorical cross-entropy loss

A commonly used loss function, particularly suited to multi-class tasks. It measures the discrepancy between the model's predicted probability distribution and the true labels.
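
As a small illustration (values invented for the example), the sparse loss takes integer labels and raw logits directly:

import tensorflow as tf

scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
logits = [[2.0, 0.5]]  # raw 2-class outputs for one sample
labels = [0]           # integer class index, not one-hot
print(scce(labels, logits).numpy())  # ≈ 0.201, i.e. -log(softmax([2.0, 0.5])[0])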

2. Model construction

1. Data processing

1. Import libraries

import tensorflow as tf
from tensorflow.keras import models, layers
import numpy as np

# Check whether a GPU is available
gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]  # If there are several GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # Grow GPU memory allocation on demand
    tf.config.set_visible_devices([gpu0], "GPU")

gpus

Output:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

2. Inspect the data directory

import os, PIL, pathlib

data_dir = './data/'
data_dir = pathlib.Path(data_dir)  # Convert to a pathlib.Path object

data_paths = data_dir.glob('*')  # Iterate over the entries under the directory
classnames = [str(path).split('/')[1] for path in data_paths]
classnames

Output:

['Monkeypox', 'Others']
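
One caveat: str(path).split('/') assumes Unix-style path separators. A more portable sketch (same directory layout assumed) reads the class names off the pathlib objects directly:

classnames = [p.name for p in data_dir.glob('*') if p.is_dir()]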

3. Load the data

batch_size = 32
heights = 224
widths = 224

# Training split
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    validation_split=0.2,
    batch_size=batch_size,
    image_size=(heights, widths),  # image_size expects (height, width)
    subset='training',
    seed=42,
    shuffle=True
)

# Validation split
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    validation_split=0.2,
    batch_size=batch_size,
    image_size=(heights, widths),
    subset='validation',
    seed=42,
    shuffle=True
)
Output:

Found 2142 files belonging to 2 classes.
Using 1714 files for training.
Found 2142 files belonging to 2 classes.
Using 428 files for validation.
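
image_dataset_from_directory also records the inferred class order on the dataset itself; it is worth checking it against classnames here, before the .cache()/.prefetch() calls in the memory-optimization step below (the wrapped datasets no longer expose this attribute):

print(train_ds.class_names)  # expected: ['Monkeypox', 'Others']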

4. Visualize the data

import matplotlib.pyplot as plt 

plt.figure(figsize=(20, 10))
for images, labels in train_ds.take(1):
    for i in range(20):
        plt.subplot(5, 10, i + 1)
        
        plt.imshow(images[i].numpy().astype('uint8'))
        plt.title(classnames[labels[i]])
        
        plt.axis('off')


[Figure: a grid of 20 sample training images with their class names as titles]

# Inspect the batch format
for images, labels in train_ds:
    print('(N, H, W, C): ', images.shape)
    print('class_labels: ', labels)
    break

Output:

(N, H, W, C):  (32, 224, 224, 3)
class_labels:  tf.Tensor([0 0 1 0 1 0 0 1 1 0 0 0 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 0 1 1 1 1], shape=(32,), dtype=int32)

2. Memory optimization

AUTOTUNE = tf.data.experimental.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
vals_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
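
Here cache() keeps the decoded images in memory after the first epoch, shuffle(1000) reshuffles through a 1000-element buffer, and prefetch(AUTOTUNE) prepares upcoming batches while the GPU is busy with the current one, so the input pipeline is less likely to become the bottleneck.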

3. Model construction

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(heights, widths, 3)),

    layers.Conv2D(16, (3, 3), activation='relu'),  # Conv layer 1, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # Pooling layer 1, 2x2 window

    layers.Conv2D(32, (3, 3), activation='relu'),  # Conv layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # Pooling layer 2, 2x2 window
    layers.Dropout(0.3),

    layers.Conv2D(64, (3, 3), activation='relu'),  # Conv layer 3, 3x3 kernels
    layers.Dropout(0.3),

    layers.Flatten(),                       # Flatten: bridge the conv stack to the dense layers
    layers.Dense(128, activation='relu'),   # Fully connected layer for further feature extraction
    layers.Dense(len(classnames))           # Output layer: one raw logit per class
])

model.summary()  # Print the network architecture

Output:
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 rescaling_1 (Rescaling)     (None, 224, 224, 3)       0         
                                                                 
 conv2d_3 (Conv2D)           (None, 222, 222, 16)      448       
                                                                 
 average_pooling2d_2 (Averag  (None, 111, 111, 16)     0         
 ePooling2D)                                                     
                                                                 
 conv2d_4 (Conv2D)           (None, 109, 109, 32)      4640      
                                                                 
 average_pooling2d_3 (Averag  (None, 54, 54, 32)       0         
 ePooling2D)                                                     
                                                                 
 dropout_2 (Dropout)         (None, 54, 54, 32)        0         
                                                                 
 conv2d_5 (Conv2D)           (None, 52, 52, 64)        18496     
                                                                 
 dropout_3 (Dropout)         (None, 52, 52, 64)        0         
                                                                 
 flatten_1 (Flatten)         (None, 173056)            0         
                                                                 
 dense_2 (Dense)             (None, 128)               22151296  
                                                                 
 dense_3 (Dense)             (None, 2)                 258       
                                                                 
=================================================================
Total params: 22,175,138
Trainable params: 22,175,138
Non-trainable params: 0
_________________________________________________________________
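
Almost all of the 22,175,138 parameters sit in the first Dense layer: Flatten emits 52 × 52 × 64 = 173,056 values, and 173,056 × 128 weights + 128 biases = 22,151,296 parameters, which is why the dense head dominates the model size here.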

4. Model training

1. Hyperparameter setup

# Optimizer and learning rate
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

model.compile(
    optimizer=opt,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),  # matches the 2-neuron logits output
    metrics=['accuracy']  # track accuracy
)
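
For contrast with the bug in section 1, the single-neuron binary setup would be compiled like this (a sketch only, not what is trained below; it also requires the output layer to be layers.Dense(1, activation='sigmoid')):

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
    metrics=['accuracy']
)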

2. Training

from tensorflow.keras.callbacks import ModelCheckpoint

epochs = 50

checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

result = model.fit(
    x=train_ds,                # batching is handled by the dataset, so no batch_size here
    validation_data=vals_ds,
    epochs=epochs,
    callbacks=[checkpointer]   # callback that keeps the best weights
)

Output:
Epoch 1/50
52/54 [===========================>..] - ETA: 0s - loss: 0.7122 - accuracy: 0.5418
Epoch 1: val_accuracy improved from -inf to 0.60280, saving model to best_model.h5
54/54 [==============================] - 4s 38ms/step - loss: 0.7114 - accuracy: 0.5420 - val_loss: 0.6610 - val_accuracy: 0.6028
Epoch 2/50
54/54 [==============================] - ETA: 0s - loss: 0.6555 - accuracy: 0.6429
Epoch 2: val_accuracy improved from 0.60280 to 0.61449, saving model to best_model.h5
54/54 [==============================] - 3s 65ms/step - loss: 0.6555 - accuracy: 0.6429 - val_loss: 0.6723 - val_accuracy: 0.6145
Epoch 3/50
53/54 [============================>.] - ETA: 0s - loss: 0.6227 - accuracy: 0.6736
Epoch 3: val_accuracy did not improve from 0.61449
54/54 [==============================] - 1s 24ms/step - loss: 0.6238 - accuracy: 0.6727 - val_loss: 0.7243 - val_accuracy: 0.6145
Epoch 4/50
53/54 [============================>.] - ETA: 0s - loss: 0.5910 - accuracy: 0.6813
Epoch 4: val_accuracy improved from 0.61449 to 0.63785, saving model to best_model.h5
54/54 [==============================] - 2s 32ms/step - loss: 0.5907 - accuracy: 0.6820 - val_loss: 0.6972 - val_accuracy: 0.6379
Epoch 5/50
52/54 [===========================>..] - ETA: 0s - loss: 0.6150 - accuracy: 0.6618
Epoch 5: val_accuracy improved from 0.63785 to 0.65421, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.6157 - accuracy: 0.6622 - val_loss: 0.6427 - val_accuracy: 0.6542
Epoch 6/50
52/54 [===========================>..] - ETA: 0s - loss: 0.5473 - accuracy: 0.7200
Epoch 6: val_accuracy improved from 0.65421 to 0.67523, saving model to best_model.h5
54/54 [==============================] - 2s 29ms/step - loss: 0.5468 - accuracy: 0.7205 - val_loss: 0.6319 - val_accuracy: 0.6752
Epoch 7/50
52/54 [===========================>..] - ETA: 0s - loss: 0.5197 - accuracy: 0.7412
Epoch 7: val_accuracy improved from 0.67523 to 0.68458, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.5226 - accuracy: 0.7363 - val_loss: 0.5572 - val_accuracy: 0.6846
Epoch 8/50
53/54 [============================>.] - ETA: 0s - loss: 0.5101 - accuracy: 0.7384
Epoch 8: val_accuracy improved from 0.68458 to 0.68925, saving model to best_model.h5
54/54 [==============================] - 2s 32ms/step - loss: 0.5118 - accuracy: 0.7375 - val_loss: 0.6184 - val_accuracy: 0.6893
Epoch 9/50
52/54 [===========================>..] - ETA: 0s - loss: 0.4747 - accuracy: 0.7679
Epoch 9: val_accuracy improved from 0.68925 to 0.78037, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.4743 - accuracy: 0.7695 - val_loss: 0.4770 - val_accuracy: 0.7804
Epoch 10/50
53/54 [============================>.] - ETA: 0s - loss: 0.4504 - accuracy: 0.7895
Epoch 10: val_accuracy did not improve from 0.78037
54/54 [==============================] - 1s 22ms/step - loss: 0.4528 - accuracy: 0.7870 - val_loss: 0.4698 - val_accuracy: 0.7640
Epoch 11/50
53/54 [============================>.] - ETA: 0s - loss: 0.4583 - accuracy: 0.7753
Epoch 11: val_accuracy did not improve from 0.78037
54/54 [==============================] - 1s 24ms/step - loss: 0.4571 - accuracy: 0.7760 - val_loss: 0.4528 - val_accuracy: 0.7734
Epoch 12/50
53/54 [============================>.] - ETA: 0s - loss: 0.4225 - accuracy: 0.8044
Epoch 12: val_accuracy improved from 0.78037 to 0.79206, saving model to best_model.h5
54/54 [==============================] - 2s 36ms/step - loss: 0.4219 - accuracy: 0.8057 - val_loss: 0.4540 - val_accuracy: 0.7921
Epoch 13/50
54/54 [==============================] - ETA: 0s - loss: 0.4011 - accuracy: 0.8291
Epoch 13: val_accuracy improved from 0.79206 to 0.80140, saving model to best_model.h5
54/54 [==============================] - 3s 48ms/step - loss: 0.4011 - accuracy: 0.8291 - val_loss: 0.4250 - val_accuracy: 0.8014
Epoch 14/50
52/54 [===========================>..] - ETA: 0s - loss: 0.3779 - accuracy: 0.8339
Epoch 14: val_accuracy did not improve from 0.80140
54/54 [==============================] - 1s 23ms/step - loss: 0.3813 - accuracy: 0.8326 - val_loss: 0.4555 - val_accuracy: 0.7850
Epoch 15/50
52/54 [===========================>..] - ETA: 0s - loss: 0.3603 - accuracy: 0.8442
Epoch 15: val_accuracy improved from 0.80140 to 0.82944, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.3605 - accuracy: 0.8454 - val_loss: 0.3814 - val_accuracy: 0.8294
Epoch 16/50
53/54 [============================>.] - ETA: 0s - loss: 0.3405 - accuracy: 0.8561
Epoch 16: val_accuracy improved from 0.82944 to 0.85047, saving model to best_model.h5
54/54 [==============================] - 1s 28ms/step - loss: 0.3387 - accuracy: 0.8576 - val_loss: 0.3755 - val_accuracy: 0.8505
Epoch 17/50
54/54 [==============================] - ETA: 0s - loss: 0.3223 - accuracy: 0.8658
Epoch 17: val_accuracy did not improve from 0.85047
54/54 [==============================] - 1s 22ms/step - loss: 0.3223 - accuracy: 0.8658 - val_loss: 0.4021 - val_accuracy: 0.8364
Epoch 18/50
54/54 [==============================] - ETA: 0s - loss: 0.3203 - accuracy: 0.8611
Epoch 18: val_accuracy did not improve from 0.85047
54/54 [==============================] - 1s 24ms/step - loss: 0.3203 - accuracy: 0.8611 - val_loss: 0.3645 - val_accuracy: 0.8458
Epoch 19/50
52/54 [===========================>..] - ETA: 0s - loss: 0.3138 - accuracy: 0.8780
Epoch 19: val_accuracy did not improve from 0.85047
54/54 [==============================] - 1s 22ms/step - loss: 0.3111 - accuracy: 0.8792 - val_loss: 0.3717 - val_accuracy: 0.8505
Epoch 20/50
54/54 [==============================] - ETA: 0s - loss: 0.2977 - accuracy: 0.8810
Epoch 20: val_accuracy improved from 0.85047 to 0.86916, saving model to best_model.h5
54/54 [==============================] - 2s 29ms/step - loss: 0.2977 - accuracy: 0.8810 - val_loss: 0.3575 - val_accuracy: 0.8692
Epoch 21/50
53/54 [============================>.] - ETA: 0s - loss: 0.2802 - accuracy: 0.8960
Epoch 21: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 23ms/step - loss: 0.2775 - accuracy: 0.8979 - val_loss: 0.3989 - val_accuracy: 0.8505
Epoch 22/50
52/54 [===========================>..] - ETA: 0s - loss: 0.2712 - accuracy: 0.9012
Epoch 22: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 23ms/step - loss: 0.2691 - accuracy: 0.9020 - val_loss: 0.4104 - val_accuracy: 0.8248
Epoch 23/50
53/54 [============================>.] - ETA: 0s - loss: 0.2792 - accuracy: 0.8930
Epoch 23: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 22ms/step - loss: 0.2763 - accuracy: 0.8950 - val_loss: 0.3594 - val_accuracy: 0.8668
Epoch 24/50
52/54 [===========================>..] - ETA: 0s - loss: 0.2571 - accuracy: 0.9000
Epoch 24: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 24ms/step - loss: 0.2557 - accuracy: 0.9008 - val_loss: 0.3951 - val_accuracy: 0.8318
Epoch 25/50
54/54 [==============================] - ETA: 0s - loss: 0.2302 - accuracy: 0.9137
Epoch 25: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 28ms/step - loss: 0.2302 - accuracy: 0.9137 - val_loss: 0.3504 - val_accuracy: 0.8692
Epoch 26/50
53/54 [============================>.] - ETA: 0s - loss: 0.2428 - accuracy: 0.9132
Epoch 26: val_accuracy did not improve from 0.86916
54/54 [==============================] - 2s 32ms/step - loss: 0.2410 - accuracy: 0.9137 - val_loss: 0.4068 - val_accuracy: 0.8505
Epoch 27/50
53/54 [============================>.] - ETA: 0s - loss: 0.2375 - accuracy: 0.9078
Epoch 27: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 24ms/step - loss: 0.2353 - accuracy: 0.9090 - val_loss: 0.3579 - val_accuracy: 0.8668
Epoch 28/50
53/54 [============================>.] - ETA: 0s - loss: 0.2171 - accuracy: 0.9257
Epoch 28: val_accuracy improved from 0.86916 to 0.88551, saving model to best_model.h5
54/54 [==============================] - 2s 38ms/step - loss: 0.2174 - accuracy: 0.9247 - val_loss: 0.3274 - val_accuracy: 0.8855
Epoch 29/50
53/54 [============================>.] - ETA: 0s - loss: 0.2106 - accuracy: 0.9233
Epoch 29: val_accuracy did not improve from 0.88551
54/54 [==============================] - 1s 23ms/step - loss: 0.2109 - accuracy: 0.9230 - val_loss: 0.3738 - val_accuracy: 0.8715
Epoch 30/50
53/54 [============================>.] - ETA: 0s - loss: 0.2144 - accuracy: 0.9251
Epoch 30: val_accuracy did not improve from 0.88551
54/54 [==============================] - 1s 22ms/step - loss: 0.2170 - accuracy: 0.9236 - val_loss: 0.3435 - val_accuracy: 0.8808
Epoch 31/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1972 - accuracy: 0.9376
Epoch 31: val_accuracy did not improve from 0.88551
54/54 [==============================] - 1s 23ms/step - loss: 0.1988 - accuracy: 0.9352 - val_loss: 0.3614 - val_accuracy: 0.8738
Epoch 32/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1830 - accuracy: 0.9352
Epoch 32: val_accuracy did not improve from 0.88551
54/54 [==============================] - 1s 23ms/step - loss: 0.1833 - accuracy: 0.9341 - val_loss: 0.3529 - val_accuracy: 0.8808
Epoch 33/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1834 - accuracy: 0.9315
Epoch 33: val_accuracy improved from 0.88551 to 0.89019, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.1845 - accuracy: 0.9306 - val_loss: 0.3385 - val_accuracy: 0.8902
Epoch 34/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1749 - accuracy: 0.9370
Epoch 34: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1786 - accuracy: 0.9358 - val_loss: 0.3647 - val_accuracy: 0.8855
Epoch 35/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1767 - accuracy: 0.9358
Epoch 35: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1764 - accuracy: 0.9358 - val_loss: 0.3402 - val_accuracy: 0.8855
Epoch 36/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1593 - accuracy: 0.9442
Epoch 36: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1614 - accuracy: 0.9434 - val_loss: 0.3344 - val_accuracy: 0.8879
Epoch 37/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1565 - accuracy: 0.9370
Epoch 37: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 22ms/step - loss: 0.1590 - accuracy: 0.9370 - val_loss: 0.4124 - val_accuracy: 0.8785
Epoch 38/50
53/54 [============================>.] - ETA: 0s - loss: 0.1798 - accuracy: 0.9293
Epoch 38: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 22ms/step - loss: 0.1816 - accuracy: 0.9277 - val_loss: 0.3567 - val_accuracy: 0.8762
Epoch 39/50
53/54 [============================>.] - ETA: 0s - loss: 0.1399 - accuracy: 0.9590
Epoch 39: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1445 - accuracy: 0.9551 - val_loss: 0.3856 - val_accuracy: 0.8832
Epoch 40/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1514 - accuracy: 0.9479
Epoch 40: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1507 - accuracy: 0.9487 - val_loss: 0.3333 - val_accuracy: 0.8879
Epoch 41/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1339 - accuracy: 0.9564
Epoch 41: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1322 - accuracy: 0.9562 - val_loss: 0.3422 - val_accuracy: 0.8832
Epoch 42/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1304 - accuracy: 0.9539
Epoch 42: val_accuracy improved from 0.89019 to 0.89252, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.1350 - accuracy: 0.9522 - val_loss: 0.3840 - val_accuracy: 0.8925
Epoch 43/50
54/54 [==============================] - ETA: 0s - loss: 0.1250 - accuracy: 0.9580
Epoch 43: val_accuracy did not improve from 0.89252
54/54 [==============================] - 2s 37ms/step - loss: 0.1250 - accuracy: 0.9580 - val_loss: 0.4118 - val_accuracy: 0.8832
Epoch 44/50
53/54 [============================>.] - ETA: 0s - loss: 0.1283 - accuracy: 0.9518
Epoch 44: val_accuracy did not improve from 0.89252
54/54 [==============================] - 2s 33ms/step - loss: 0.1293 - accuracy: 0.9504 - val_loss: 0.4486 - val_accuracy: 0.8668
Epoch 45/50
53/54 [============================>.] - ETA: 0s - loss: 0.1331 - accuracy: 0.9548
Epoch 45: val_accuracy improved from 0.89252 to 0.89486, saving model to best_model.h5
54/54 [==============================] - 2s 33ms/step - loss: 0.1337 - accuracy: 0.9545 - val_loss: 0.3383 - val_accuracy: 0.8949
Epoch 46/50
53/54 [============================>.] - ETA: 0s - loss: 0.1126 - accuracy: 0.9655
Epoch 46: val_accuracy did not improve from 0.89486
54/54 [==============================] - 1s 22ms/step - loss: 0.1125 - accuracy: 0.9650 - val_loss: 0.3808 - val_accuracy: 0.8832
Epoch 47/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1270 - accuracy: 0.9576
Epoch 47: val_accuracy did not improve from 0.89486
54/54 [==============================] - 1s 23ms/step - loss: 0.1263 - accuracy: 0.9574 - val_loss: 0.3838 - val_accuracy: 0.8808
Epoch 48/50
52/54 [===========================>..] - ETA: 0s - loss: 0.0988 - accuracy: 0.9642
Epoch 48: val_accuracy did not improve from 0.89486
54/54 [==============================] - 1s 22ms/step - loss: 0.0987 - accuracy: 0.9638 - val_loss: 0.3463 - val_accuracy: 0.8925
Epoch 49/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1000 - accuracy: 0.9697
Epoch 49: val_accuracy did not improve from 0.89486
54/54 [==============================] - 1s 23ms/step - loss: 0.0979 - accuracy: 0.9702 - val_loss: 0.3449 - val_accuracy: 0.8855
Epoch 50/50
53/54 [============================>.] - ETA: 0s - loss: 0.0835 - accuracy: 0.9703
Epoch 50: val_accuracy improved from 0.89486 to 0.89720, saving model to best_model.h5
54/54 [==============================] - 2s 31ms/step - loss: 0.0863 - accuracy: 0.9697 - val_loss: 0.3432 - val_accuracy: 0.8972

5. Results

acc = result.history['accuracy']
val_acc = result.history['val_accuracy']

loss = result.history['loss']
val_loss = result.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()


[Figure: training/validation accuracy (left) and loss (right) curves over the 50 epochs]

6. Image prediction

# Load the best weights
model.load_weights('best_model.h5')

# Pick an image to predict
from PIL import Image
import numpy as np

img = Image.open("./data/Monkeypox/M01_02_11.jpg")  # choose the image you want to predict
plt.imshow(img)
image = tf.image.resize(np.array(img), [heights, widths])  # convert to an array before resizing

img_array = tf.expand_dims(image, 0)  # add a batch dimension

predictions = model.predict(img_array)  # run the trained model
print("Predicted class:", classnames[np.argmax(predictions)])
Output:

1/1 [==============================] - 0s 30ms/step
Predicted class: Monkeypox
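
Since the network's last layer emits raw logits, np.argmax works directly, but a softmax is needed to read the outputs as probabilities. A small sketch:

probs = tf.nn.softmax(predictions[0])  # logits -> probabilities; argmax is unchanged
for name, p in zip(classnames, probs):
    print(f"{name}: {float(p):.3f}")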

[Figure: the test image displayed with plt.imshow]

7. Attempted improvement

🤔 Idea: add BatchNormalization layers as extra regularization, as in the code below:

num_classes = 2

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(heights, widths, 3)),

    layers.Conv2D(16, (3, 3), activation='relu'),  # Conv layer 1, 3x3 kernels
    layers.BatchNormalization(),
    layers.AveragePooling2D((2, 2)),               # Pooling layer 1, 2x2 window

    layers.Conv2D(32, (3, 3), activation='relu'),  # Conv layer 2, 3x3 kernels
    layers.BatchNormalization(),
    layers.AveragePooling2D((2, 2)),               # Pooling layer 2, 2x2 window
    layers.Dropout(0.3),

    layers.Conv2D(64, (3, 3), activation='relu'),  # Conv layer 3, 3x3 kernels
    layers.BatchNormalization(),
    layers.Dropout(0.3),

    layers.Flatten(),                       # Flatten: bridge the conv stack to the dense layers
    layers.Dense(128, activation='relu'),   # Fully connected layer for further feature extraction
    layers.Dense(num_classes)               # Output layer: raw logits
])

model.summary()  # Print the network architecture

🤞 Result: training accuracy did climb faster, but validation accuracy did not follow, so overall there was no real gain; validation loss actually went up, so the result ended up slightly worse.

Note: for some reason this figure would not upload, so feel free to run the code above yourself and check the result.

