
Week T11: Optimizer Comparison Experiment

  • This post is a study log from the 365-day deep learning training camp
  • Original author: K同学啊

This week's task is to examine how different optimizers, and different optimizer hyperparameter settings, affect the model. An optimizer comparison like this can also be included in a paper to strengthen its experimental section.
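Before the Keras experiment, the core difference between the two optimizers compared below can be sketched in plain NumPy. This is a minimal illustration on a one-dimensional quadratic loss, not the author's code; the learning rates and the toy loss are made up for demonstration:

```python
import numpy as np

def grad(w):
    # gradient of the toy loss f(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

# Plain SGD: step directly along the negative gradient.
w_sgd, lr = 0.0, 0.1
for _ in range(100):
    w_sgd -= lr * grad(w_sgd)

# Adam: keep running averages of the gradient (m) and squared gradient (v)
# and scale each step by their bias-corrected ratio.
w_adam, m, v = 0.0, 0.0, 0.0
beta1, beta2, eps = 0.9, 0.999, 1e-8
for t in range(1, 101):
    g = grad(w_adam)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w_adam -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(w_sgd, w_adam)  # both approach the minimum at w = 3
```

SGD's step shrinks with the gradient, while Adam's per-parameter scaling keeps its step size roughly constant until it is near the optimum, which is one reason Adam often converges faster early in training, as the curves later in this post show.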

My environment:
●OS: Ubuntu 22.04
●Language: Python 3.8.10
●Editor: Jupyter Notebook
●Deep learning framework: tensorflow-gpu 2.9.0
●GPU: RTX 3090 (24 GB) × 1
●Dataset: Hollywood celebrity face recognition dataset

I. Set up the GPU

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    gpu0 = gpus[0]  # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

from tensorflow import keras
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import warnings, os, PIL, pathlib

warnings.filterwarnings("ignore")  # suppress warnings

plt.rcParams['font.sans-serif']    = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False       # display minus signs correctly

II. Load the data

  1. Load the data
data_dir    = "./T11/data"
data_dir    = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Output:

Total number of images: 1800
batch_size = 16
img_height = 336
img_width  = 336
"""
For a detailed introduction to image_dataset_from_directory(), see:
https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

Output:

Found 1800 files belonging to 17 classes.
Using 1440 files for training.
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

Output:

Found 1800 files belonging to 17 classes.
Using 360 files for validation.
class_names = train_ds.class_names
print(class_names)

Output:

['Angelina Jolie', 'Brad Pitt', 'Denzel Washington', 'Hugh Jackman', 'Jennifer Lawrence', 'Johnny Depp', 'Kate Winslet', 'Leonardo DiCaprio', 'Megan Fox', 'Natalie Portman', 'Nicole Kidman', 'Robert Downey Jr', 'Sandra Bullock', 'Scarlett Johansson', 'Tom Cruise', 'Tom Hanks', 'Will Smith']
  2. Check the data
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

Output:

(16, 336, 336, 3)
(16,)
  3. Configure the dataset
AUTOTUNE = tf.data.AUTOTUNE

def train_preprocessing(image, label):
    return (image / 255.0, label)

train_ds = (
    train_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)    # the preprocessing function can be set here
#     .batch(batch_size)         # batch_size was already set in image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)

val_ds = (
    val_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)    # the preprocessing function can be set here
#     .batch(batch_size)         # batch_size was already set in image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)
  4. Visualize the data
plt.figure(figsize=(10, 8))  # figure width 10, height 8
plt.suptitle("Data preview")

for images, labels in train_ds.take(1):
    for i in range(15):
        plt.subplot(4, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        # show the image
        plt.imshow(images[i])
        # show the label (labels are already 0-based indices into class_names)
        plt.xlabel(class_names[labels[i]])

plt.show()

Output:

[Figure: a 4×5 grid of sample training images with their class labels]

III. Build the model

from tensorflow.keras.layers import Dropout, Dense, BatchNormalization
from tensorflow.keras.models import Model

def create_model(optimizer='adam'):
    # load the pretrained model
    vgg16_base_model = tf.keras.applications.vgg16.VGG16(
        weights='imagenet',
        include_top=False,
        input_shape=(img_width, img_height, 3),
        pooling='avg')
    for layer in vgg16_base_model.layers:
        layer.trainable = False

    X = vgg16_base_model.output
    X = Dense(170, activation='relu')(X)
    X = BatchNormalization()(X)
    X = Dropout(0.5)(X)

    output = Dense(len(class_names), activation='softmax')(X)
    vgg16_model = Model(inputs=vgg16_base_model.input, outputs=output)

    vgg16_model.compile(optimizer=optimizer,
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])
    return vgg16_model

model1 = create_model(optimizer=tf.keras.optimizers.Adam())
model2 = create_model(optimizer=tf.keras.optimizers.SGD())
model1.summary()

Output:

Model: "model"
_________________________________________________________________
 Layer (type)                              Output Shape            Param #
=================================================================
 input_1 (InputLayer)                      [(None, 336, 336, 3)]   0
 block1_conv1 (Conv2D)                     (None, 336, 336, 64)    1792
 block1_conv2 (Conv2D)                     (None, 336, 336, 64)    36928
 block1_pool (MaxPooling2D)                (None, 168, 168, 64)    0
 block2_conv1 (Conv2D)                     (None, 168, 168, 128)   73856
 block2_conv2 (Conv2D)                     (None, 168, 168, 128)   147584
 block2_pool (MaxPooling2D)                (None, 84, 84, 128)     0
 block3_conv1 (Conv2D)                     (None, 84, 84, 256)     295168
 block3_conv2 (Conv2D)                     (None, 84, 84, 256)     590080
 block3_conv3 (Conv2D)                     (None, 84, 84, 256)     590080
 block3_pool (MaxPooling2D)                (None, 42, 42, 256)     0
 block4_conv1 (Conv2D)                     (None, 42, 42, 512)     1180160
 block4_conv2 (Conv2D)                     (None, 42, 42, 512)     2359808
 block4_conv3 (Conv2D)                     (None, 42, 42, 512)     2359808
 block4_pool (MaxPooling2D)                (None, 21, 21, 512)     0
 block5_conv1 (Conv2D)                     (None, 21, 21, 512)     2359808
 block5_conv2 (Conv2D)                     (None, 21, 21, 512)     2359808
 block5_conv3 (Conv2D)                     (None, 21, 21, 512)     2359808
 block5_pool (MaxPooling2D)                (None, 10, 10, 512)     0
 global_average_pooling2d (GlobalAveragePooling2D)  (None, 512)    0
 dense (Dense)                             (None, 170)             87210
 batch_normalization (BatchNormalization)  (None, 170)             680
 dropout (Dropout)                         (None, 170)             0
 dense_1 (Dense)                           (None, 17)              2907
=================================================================
Total params: 14,805,485
Trainable params: 90,457
Non-trainable params: 14,715,028
_________________________________________________________________
model2.summary()

Output:

Model: "model_1"
_________________________________________________________________
 Layer (type)                                Output Shape            Param #
=================================================================
 input_2 (InputLayer)                        [(None, 336, 336, 3)]   0
 block1_conv1 (Conv2D)                       (None, 336, 336, 64)    1792
 block1_conv2 (Conv2D)                       (None, 336, 336, 64)    36928
 block1_pool (MaxPooling2D)                  (None, 168, 168, 64)    0
 block2_conv1 (Conv2D)                       (None, 168, 168, 128)   73856
 block2_conv2 (Conv2D)                       (None, 168, 168, 128)   147584
 block2_pool (MaxPooling2D)                  (None, 84, 84, 128)     0
 block3_conv1 (Conv2D)                       (None, 84, 84, 256)     295168
 block3_conv2 (Conv2D)                       (None, 84, 84, 256)     590080
 block3_conv3 (Conv2D)                       (None, 84, 84, 256)     590080
 block3_pool (MaxPooling2D)                  (None, 42, 42, 256)     0
 block4_conv1 (Conv2D)                       (None, 42, 42, 512)     1180160
 block4_conv2 (Conv2D)                       (None, 42, 42, 512)     2359808
 block4_conv3 (Conv2D)                       (None, 42, 42, 512)     2359808
 block4_pool (MaxPooling2D)                  (None, 21, 21, 512)     0
 block5_conv1 (Conv2D)                       (None, 21, 21, 512)     2359808
 block5_conv2 (Conv2D)                       (None, 21, 21, 512)     2359808
 block5_conv3 (Conv2D)                       (None, 21, 21, 512)     2359808
 block5_pool (MaxPooling2D)                  (None, 10, 10, 512)     0
 global_average_pooling2d_1 (GlobalAveragePooling2D)  (None, 512)   0
 dense_2 (Dense)                             (None, 170)             87210
 batch_normalization_1 (BatchNormalization)  (None, 170)             680
 dropout_1 (Dropout)                         (None, 170)             0
 dense_3 (Dense)                             (None, 17)              2907
=================================================================
Total params: 14,805,485
Trainable params: 90,457
Non-trainable params: 14,715,028
_________________________________________________________________
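As a sanity check on the summaries above, the parameter counts of the custom head can be reproduced by hand. This is pure-Python arithmetic using the standard Keras parameter formulas, not part of the original post:

```python
# Dense layer: inputs * units + units (bias)
dense = 512 * 170 + 170   # 87,210 -- the Dense(170) head on the 512-dim pooled features
out   = 170 * 17  + 17    # 2,907  -- the 17-class softmax layer

# BatchNormalization stores 4 vectors of size 170:
# gamma and beta are trainable; the moving mean/variance are not.
bn_total     = 4 * 170    # 680
bn_trainable = 2 * 170    # 340

trainable = dense + out + bn_trainable
print(dense, out, bn_total, trainable)   # 87210 2907 680 90457
```

The 90,457 trainable parameters match the summary exactly; the frozen VGG16 backbone plus the BatchNorm moving statistics account for the 14,715,028 non-trainable parameters.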

IV. Train the model

NO_EPOCHS = 50

history_model1 = model1.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)
print('-------------------- model1 vs model2 separator --------------------')
history_model2 = model2.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)

Output:

Epoch 1/50
90/90 [==============================] - 9s 63ms/step - loss: 2.7587 - accuracy: 0.1743 - val_loss: 2.6669 - val_accuracy: 0.1306
Epoch 2/50
90/90 [==============================] - 5s 56ms/step - loss: 2.0662 - accuracy: 0.3403 - val_loss: 2.4666 - val_accuracy: 0.1944
Epoch 3/50
90/90 [==============================] - 5s 56ms/step - loss: 1.7279 - accuracy: 0.4674 - val_loss: 2.2326 - val_accuracy: 0.3083
Epoch 4/50
90/90 [==============================] - 5s 56ms/step - loss: 1.5328 - accuracy: 0.5063 - val_loss: 1.9590 - val_accuracy: 0.3806
Epoch 5/50
90/90 [==============================] - 5s 56ms/step - loss: 1.3340 - accuracy: 0.5660 - val_loss: 1.9163 - val_accuracy: 0.3778
Epoch 6/50
90/90 [==============================] - 5s 56ms/step - loss: 1.2230 - accuracy: 0.6174 - val_loss: 1.6120 - val_accuracy: 0.4583
Epoch 7/50
90/90 [==============================] - 5s 56ms/step - loss: 1.1022 - accuracy: 0.6708 - val_loss: 1.6064 - val_accuracy: 0.4750
Epoch 8/50
90/90 [==============================] - 5s 56ms/step - loss: 1.0177 - accuracy: 0.6854 - val_loss: 1.5709 - val_accuracy: 0.4889
Epoch 9/50
90/90 [==============================] - 5s 56ms/step - loss: 0.9350 - accuracy: 0.7097 - val_loss: 1.9145 - val_accuracy: 0.4139
Epoch 10/50
90/90 [==============================] - 5s 57ms/step - loss: 0.8492 - accuracy: 0.7465 - val_loss: 1.8292 - val_accuracy: 0.4667
Epoch 11/50
90/90 [==============================] - 5s 56ms/step - loss: 0.7936 - accuracy: 0.7549 - val_loss: 1.6844 - val_accuracy: 0.5167
Epoch 12/50
90/90 [==============================] - 5s 57ms/step - loss: 0.7355 - accuracy: 0.7750 - val_loss: 1.8864 - val_accuracy: 0.4722
Epoch 13/50
90/90 [==============================] - 5s 56ms/step - loss: 0.6526 - accuracy: 0.7965 - val_loss: 2.0205 - val_accuracy: 0.4639
Epoch 14/50
90/90 [==============================] - 5s 57ms/step - loss: 0.6154 - accuracy: 0.8160 - val_loss: 1.8515 - val_accuracy: 0.4694
Epoch 15/50
90/90 [==============================] - 5s 57ms/step - loss: 0.5688 - accuracy: 0.8236 - val_loss: 1.5597 - val_accuracy: 0.5500
Epoch 16/50
90/90 [==============================] - 5s 57ms/step - loss: 0.5693 - accuracy: 0.8236 - val_loss: 1.5927 - val_accuracy: 0.5500
Epoch 17/50
90/90 [==============================] - 5s 56ms/step - loss: 0.5225 - accuracy: 0.8424 - val_loss: 1.5935 - val_accuracy: 0.5472
Epoch 18/50
90/90 [==============================] - 5s 57ms/step - loss: 0.4764 - accuracy: 0.8556 - val_loss: 1.9192 - val_accuracy: 0.4833
Epoch 19/50
90/90 [==============================] - 5s 56ms/step - loss: 0.4587 - accuracy: 0.8618 - val_loss: 2.0954 - val_accuracy: 0.5056
Epoch 20/50
90/90 [==============================] - 5s 56ms/step - loss: 0.4435 - accuracy: 0.8632 - val_loss: 1.9991 - val_accuracy: 0.5250
Epoch 21/50
90/90 [==============================] - 5s 56ms/step - loss: 0.3940 - accuracy: 0.8882 - val_loss: 1.9208 - val_accuracy: 0.4694
Epoch 22/50
90/90 [==============================] - 5s 57ms/step - loss: 0.3999 - accuracy: 0.8729 - val_loss: 1.8739 - val_accuracy: 0.5222
Epoch 23/50
90/90 [==============================] - 5s 57ms/step - loss: 0.3670 - accuracy: 0.8993 - val_loss: 1.7193 - val_accuracy: 0.5750
Epoch 24/50
90/90 [==============================] - 5s 57ms/step - loss: 0.3245 - accuracy: 0.9097 - val_loss: 1.6382 - val_accuracy: 0.5778
Epoch 25/50
90/90 [==============================] - 5s 57ms/step - loss: 0.3051 - accuracy: 0.9056 - val_loss: 1.7925 - val_accuracy: 0.5111
Epoch 26/50
90/90 [==============================] - 5s 57ms/step - loss: 0.3031 - accuracy: 0.9132 - val_loss: 1.9919 - val_accuracy: 0.5556
Epoch 27/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2988 - accuracy: 0.9118 - val_loss: 1.6263 - val_accuracy: 0.5806
Epoch 28/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2925 - accuracy: 0.9097 - val_loss: 1.8371 - val_accuracy: 0.5722
Epoch 29/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2667 - accuracy: 0.9167 - val_loss: 1.9677 - val_accuracy: 0.5611
Epoch 30/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2545 - accuracy: 0.9146 - val_loss: 1.8921 - val_accuracy: 0.5667
Epoch 31/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2576 - accuracy: 0.9278 - val_loss: 2.1348 - val_accuracy: 0.5556
Epoch 32/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2679 - accuracy: 0.9174 - val_loss: 1.9040 - val_accuracy: 0.5611
Epoch 33/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2430 - accuracy: 0.9215 - val_loss: 2.3460 - val_accuracy: 0.5389
Epoch 34/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2625 - accuracy: 0.9215 - val_loss: 2.3996 - val_accuracy: 0.5222
Epoch 35/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2309 - accuracy: 0.9347 - val_loss: 2.2831 - val_accuracy: 0.5194
Epoch 36/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1993 - accuracy: 0.9417 - val_loss: 2.1004 - val_accuracy: 0.4917
Epoch 37/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1996 - accuracy: 0.9424 - val_loss: 1.7912 - val_accuracy: 0.5917
Epoch 38/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2246 - accuracy: 0.9236 - val_loss: 2.3634 - val_accuracy: 0.5250
Epoch 39/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1955 - accuracy: 0.9326 - val_loss: 2.3584 - val_accuracy: 0.5111
Epoch 40/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2164 - accuracy: 0.9278 - val_loss: 2.6774 - val_accuracy: 0.4611
Epoch 41/50
90/90 [==============================] - 5s 57ms/step - loss: 0.2209 - accuracy: 0.9319 - val_loss: 1.9872 - val_accuracy: 0.5694
Epoch 42/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1910 - accuracy: 0.9451 - val_loss: 2.3987 - val_accuracy: 0.5278
Epoch 43/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1984 - accuracy: 0.9312 - val_loss: 2.2877 - val_accuracy: 0.5500
Epoch 44/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1807 - accuracy: 0.9486 - val_loss: 2.8193 - val_accuracy: 0.5000
Epoch 45/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1740 - accuracy: 0.9507 - val_loss: 2.5531 - val_accuracy: 0.4944
Epoch 46/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1689 - accuracy: 0.9521 - val_loss: 2.2834 - val_accuracy: 0.5417
Epoch 47/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1873 - accuracy: 0.9403 - val_loss: 2.9073 - val_accuracy: 0.4583
Epoch 48/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1489 - accuracy: 0.9563 - val_loss: 2.3967 - val_accuracy: 0.5306
Epoch 49/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1653 - accuracy: 0.9500 - val_loss: 2.2389 - val_accuracy: 0.5500
Epoch 50/50
90/90 [==============================] - 5s 57ms/step - loss: 0.1903 - accuracy: 0.9417 - val_loss: 2.0730 - val_accuracy: 0.5639
-------------------- model1 vs model2 separator --------------------
Epoch 1/50
90/90 [==============================] - 6s 58ms/step - loss: 3.0653 - accuracy: 0.1063 - val_loss: 2.7644 - val_accuracy: 0.0889
Epoch 2/50
90/90 [==============================] - 5s 57ms/step - loss: 2.5411 - accuracy: 0.1826 - val_loss: 2.6030 - val_accuracy: 0.1639
Epoch 3/50
90/90 [==============================] - 5s 56ms/step - loss: 2.2811 - accuracy: 0.2653 - val_loss: 2.4249 - val_accuracy: 0.2583
Epoch 4/50
90/90 [==============================] - 5s 56ms/step - loss: 2.1075 - accuracy: 0.3125 - val_loss: 2.2462 - val_accuracy: 0.3222
Epoch 5/50
90/90 [==============================] - 5s 57ms/step - loss: 1.9616 - accuracy: 0.3632 - val_loss: 2.0982 - val_accuracy: 0.3222
Epoch 6/50
90/90 [==============================] - 5s 57ms/step - loss: 1.8794 - accuracy: 0.3840 - val_loss: 2.0764 - val_accuracy: 0.3444
Epoch 7/50
90/90 [==============================] - 5s 57ms/step - loss: 1.7429 - accuracy: 0.4271 - val_loss: 1.9110 - val_accuracy: 0.3750
Epoch 8/50
90/90 [==============================] - 5s 56ms/step - loss: 1.6374 - accuracy: 0.4639 - val_loss: 1.8036 - val_accuracy: 0.4167
Epoch 9/50
90/90 [==============================] - 5s 57ms/step - loss: 1.5845 - accuracy: 0.4819 - val_loss: 1.7186 - val_accuracy: 0.4250
Epoch 10/50
90/90 [==============================] - 5s 57ms/step - loss: 1.5254 - accuracy: 0.5167 - val_loss: 1.6976 - val_accuracy: 0.4556
Epoch 11/50
90/90 [==============================] - 5s 56ms/step - loss: 1.4359 - accuracy: 0.5382 - val_loss: 1.6670 - val_accuracy: 0.4583
Epoch 12/50
90/90 [==============================] - 5s 57ms/step - loss: 1.4012 - accuracy: 0.5618 - val_loss: 1.7041 - val_accuracy: 0.4417
Epoch 13/50
90/90 [==============================] - 5s 57ms/step - loss: 1.3705 - accuracy: 0.5625 - val_loss: 2.0102 - val_accuracy: 0.3972
Epoch 14/50
90/90 [==============================] - 5s 56ms/step - loss: 1.3036 - accuracy: 0.5806 - val_loss: 1.5762 - val_accuracy: 0.4778
Epoch 15/50
90/90 [==============================] - 5s 57ms/step - loss: 1.2801 - accuracy: 0.6014 - val_loss: 1.6313 - val_accuracy: 0.4444
Epoch 16/50
90/90 [==============================] - 5s 57ms/step - loss: 1.2526 - accuracy: 0.6076 - val_loss: 1.6924 - val_accuracy: 0.4333
Epoch 17/50
90/90 [==============================] - 5s 56ms/step - loss: 1.2186 - accuracy: 0.6062 - val_loss: 1.5168 - val_accuracy: 0.4639
Epoch 18/50
90/90 [==============================] - 5s 57ms/step - loss: 1.1590 - accuracy: 0.6424 - val_loss: 1.5752 - val_accuracy: 0.4889
Epoch 19/50
90/90 [==============================] - 5s 56ms/step - loss: 1.1484 - accuracy: 0.6403 - val_loss: 1.5400 - val_accuracy: 0.4722
Epoch 20/50
90/90 [==============================] - 5s 56ms/step - loss: 1.0875 - accuracy: 0.6604 - val_loss: 1.6116 - val_accuracy: 0.4861
Epoch 21/50
90/90 [==============================] - 5s 57ms/step - loss: 1.0891 - accuracy: 0.6535 - val_loss: 1.4242 - val_accuracy: 0.5389
Epoch 22/50
90/90 [==============================] - 5s 56ms/step - loss: 1.0708 - accuracy: 0.6451 - val_loss: 1.4444 - val_accuracy: 0.5306
Epoch 23/50
90/90 [==============================] - 5s 56ms/step - loss: 0.9997 - accuracy: 0.6812 - val_loss: 1.4391 - val_accuracy: 0.5278
Epoch 24/50
90/90 [==============================] - 5s 57ms/step - loss: 0.9655 - accuracy: 0.7111 - val_loss: 1.4313 - val_accuracy: 0.5167
Epoch 25/50
90/90 [==============================] - 5s 57ms/step - loss: 0.9420 - accuracy: 0.6972 - val_loss: 1.4766 - val_accuracy: 0.5083
Epoch 26/50
90/90 [==============================] - 5s 56ms/step - loss: 0.9558 - accuracy: 0.7125 - val_loss: 1.5196 - val_accuracy: 0.5278
Epoch 27/50
90/90 [==============================] - 5s 57ms/step - loss: 0.9254 - accuracy: 0.7153 - val_loss: 1.4293 - val_accuracy: 0.5389
Epoch 28/50
90/90 [==============================] - 5s 57ms/step - loss: 0.8833 - accuracy: 0.7368 - val_loss: 1.4984 - val_accuracy: 0.5167
Epoch 29/50
90/90 [==============================] - 5s 57ms/step - loss: 0.8637 - accuracy: 0.7285 - val_loss: 1.4497 - val_accuracy: 0.5417
Epoch 30/50
90/90 [==============================] - 5s 56ms/step - loss: 0.8413 - accuracy: 0.7403 - val_loss: 1.4456 - val_accuracy: 0.5667
Epoch 31/50
90/90 [==============================] - 5s 57ms/step - loss: 0.8554 - accuracy: 0.7201 - val_loss: 1.6269 - val_accuracy: 0.5028
Epoch 32/50
90/90 [==============================] - 5s 57ms/step - loss: 0.8333 - accuracy: 0.7431 - val_loss: 1.4693 - val_accuracy: 0.5333
Epoch 33/50
90/90 [==============================] - 5s 57ms/step - loss: 0.7793 - accuracy: 0.7542 - val_loss: 1.4383 - val_accuracy: 0.5667
Epoch 34/50
90/90 [==============================] - 5s 56ms/step - loss: 0.8054 - accuracy: 0.7417 - val_loss: 1.4326 - val_accuracy: 0.5583
Epoch 35/50
90/90 [==============================] - 5s 57ms/step - loss: 0.7325 - accuracy: 0.7646 - val_loss: 1.4527 - val_accuracy: 0.5500
Epoch 36/50
90/90 [==============================] - 5s 57ms/step - loss: 0.7828 - accuracy: 0.7410 - val_loss: 1.5415 - val_accuracy: 0.5250
Epoch 37/50
90/90 [==============================] - 5s 57ms/step - loss: 0.7757 - accuracy: 0.7556 - val_loss: 1.7324 - val_accuracy: 0.5083
Epoch 38/50
90/90 [==============================] - 5s 57ms/step - loss: 0.7465 - accuracy: 0.7632 - val_loss: 1.4701 - val_accuracy: 0.5444
Epoch 39/50
90/90 [==============================] - 5s 57ms/step - loss: 0.7566 - accuracy: 0.7646 - val_loss: 1.7689 - val_accuracy: 0.4333
Epoch 40/50
90/90 [==============================] - 5s 57ms/step - loss: 0.7488 - accuracy: 0.7632 - val_loss: 1.7090 - val_accuracy: 0.4917
Epoch 41/50
90/90 [==============================] - 5s 57ms/step - loss: 0.6995 - accuracy: 0.7812 - val_loss: 1.3353 - val_accuracy: 0.5778
Epoch 42/50
90/90 [==============================] - 5s 57ms/step - loss: 0.6801 - accuracy: 0.7799 - val_loss: 1.3661 - val_accuracy: 0.5806
Epoch 43/50
90/90 [==============================] - 5s 57ms/step - loss: 0.6918 - accuracy: 0.7861 - val_loss: 1.5759 - val_accuracy: 0.5528
Epoch 44/50
90/90 [==============================] - 5s 57ms/step - loss: 0.6696 - accuracy: 0.7861 - val_loss: 1.5987 - val_accuracy: 0.5389
Epoch 45/50
90/90 [==============================] - 5s 57ms/step - loss: 0.6537 - accuracy: 0.7972 - val_loss: 1.5512 - val_accuracy: 0.5333
Epoch 46/50
90/90 [==============================] - 5s 57ms/step - loss: 0.6393 - accuracy: 0.7972 - val_loss: 2.0156 - val_accuracy: 0.4528
Epoch 47/50
90/90 [==============================] - 5s 56ms/step - loss: 0.6270 - accuracy: 0.8104 - val_loss: 1.5352 - val_accuracy: 0.5194
Epoch 48/50
90/90 [==============================] - 5s 57ms/step - loss: 0.6495 - accuracy: 0.7944 - val_loss: 1.7281 - val_accuracy: 0.5139
Epoch 49/50
90/90 [==============================] - 5s 57ms/step - loss: 0.5989 - accuracy: 0.8083 - val_loss: 1.4802 - val_accuracy: 0.5833
Epoch 50/50
90/90 [==============================] - 5s 57ms/step - loss: 0.6235 - accuracy: 0.8083 - val_loss: 1.4240 - val_accuracy: 0.5417

V. Evaluate the model

  1. Accuracy and loss curves
from matplotlib.ticker import MultipleLocator

plt.rcParams['savefig.dpi'] = 300  # saved-figure DPI
plt.rcParams['figure.dpi']  = 300  # display DPI

acc1     = history_model1.history['accuracy']
acc2     = history_model2.history['accuracy']
val_acc1 = history_model1.history['val_accuracy']
val_acc2 = history_model2.history['val_accuracy']

loss1     = history_model1.history['loss']
loss2     = history_model2.history['loss']
val_loss1 = history_model1.history['val_loss']
val_loss2 = history_model2.history['val_loss']

epochs_range = range(len(acc1))

plt.figure(figsize=(16, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc1, label='Training Accuracy-Adam')
plt.plot(epochs_range, acc2, label='Training Accuracy-SGD')
plt.plot(epochs_range, val_acc1, label='Validation Accuracy-Adam')
plt.plot(epochs_range, val_acc2, label='Validation Accuracy-SGD')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
# set the x-axis tick interval to 1 epoch
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss1, label='Training Loss-Adam')
plt.plot(epochs_range, loss2, label='Training Loss-SGD')
plt.plot(epochs_range, val_loss1, label='Validation Loss-Adam')
plt.plot(epochs_range, val_loss2, label='Validation Loss-SGD')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
# set the x-axis tick interval to 1 epoch
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.show()
Output:

[Figure: training/validation accuracy and loss curves, Adam vs. SGD]
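The validation curves fluctuate heavily from epoch to epoch, which makes the two optimizers harder to compare by eye. A simple moving average can smooth each curve before plotting; this helper is a sketch that is not part of the original post, and the `window` size and sample values are illustrative:

```python
import numpy as np

def smooth(values, window=5):
    """Moving average with edge padding, so the output length matches the input."""
    v = np.asarray(values, dtype=float)
    kernel = np.ones(window) / window
    padded = np.pad(v, (window // 2, window - 1 - window // 2), mode='edge')
    return np.convolve(padded, kernel, mode='valid')

# e.g. plt.plot(epochs_range, smooth(val_acc1)) to see the trend more clearly
noisy = [0.13, 0.19, 0.31, 0.38, 0.38, 0.46, 0.48, 0.49, 0.41, 0.47]
print(smooth(noisy, window=3).round(3))
```

Because the smoothed curve has the same length as the raw one, it can be dropped into the existing `plt.plot` calls without changing `epochs_range`.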

  2. Model evaluation
def test_accuracy_report(model):
    score = model.evaluate(val_ds, verbose=0)
    print('Loss function: %s, accuracy:' % score[0], score[1])

test_accuracy_report(model1)

Output:

Loss function: 2.072991371154785, accuracy: 0.5638889074325562
test_accuracy_report(model2)

Output:

Loss function: 1.4239585399627686, accuracy: 0.5416666865348816
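Collecting the final numbers printed above, the comparison fits in a few lines. The values are copied from the outputs; the dict layout is just for illustration:

```python
# Final validation metrics from the two 50-epoch runs above.
results = {
    'Adam': {'val_loss': 2.0730, 'val_accuracy': 0.5639},
    'SGD':  {'val_loss': 1.4240, 'val_accuracy': 0.5417},
}

best_acc  = max(results, key=lambda k: results[k]['val_accuracy'])
best_loss = min(results, key=lambda k: results[k]['val_loss'])
print(best_acc, best_loss)  # Adam SGD
```

In this run, Adam converges faster and ends with slightly higher validation accuracy, while SGD overfits less and ends with a clearly lower validation loss; both observations match the curves above, and in a paper either metric could justify the choice depending on the goal.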

