TensorFlow Case 4 -- Face Recognition (choosing the loss function, using the VGG16 model, and an improved implementation)
- 🍨 This post is a study log from the 🔗 365-day deep learning training camp
- 🍖 Original author: K同学啊
Preface
- Counting the earlier PyTorch version, this model has taken quite a bit of time, yet the results still fall short of expectations; the main problem is the large gap between training and validation accuracy.
- My guess is that the model is not expressive enough, but simply stacking more layers leads to model degradation. One possible fix is to try a ResNet-style model; I will update on that later.
- This round of changes to VGG16 touches three aspects, explained in detail below.
- Feel free to bookmark and follow; I will keep updating.
1. Knowledge and API Notes
1. Model Introduction and Improvements
The VGG16 model
VGG16 is a classic baseline model made up of 13 convolutional layers and 3 fully connected layers, as illustrated below:
Changes made to VGG16 in this experiment
- Freeze the 13 convolutional layers and only retrain the classifier head
- Insert a BN layer and a global average pooling layer before the fully connected layers; the pooling reduces dimensionality, since VGG16's computation cost is high
- Add Dropout layers between the fully connected layers
- The modified code:
```python
# Load the official VGG16 model without its top layers
vgg16_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(256, 256, 3))

# Freeze the convolutional weights
for layer in vgg16_model.layers:
    layer.trainable = False

# Take the convolutional output
x = vgg16_model.output
# Add a BN layer
x = layers.BatchNormalization()(x)
# Global average pooling to cut the computation cost
x = layers.GlobalAveragePooling2D()(x)
# Fully connected layers with Dropout
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.5)(x)
predict = layers.Dense(len(classnames))(x)  # logits; softmax is applied inside the loss

# Build the model
model = models.Model(inputs=vgg16_model.input, outputs=predict)
model.summary()
```
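The `GlobalAveragePooling2D` step above collapses the 8×8×512 feature map coming out of `block5_pool` into a 512-dimensional vector by averaging each channel over its spatial positions, which is why it is so much cheaper than a `Flatten` + `Dense` head. A minimal numpy sketch of that operation (my own illustration, not part of the original code):

```python
import numpy as np

def global_average_pool(feature_map):
    """Average each channel over its spatial dimensions: (H, W, C) -> (C,)."""
    return feature_map.mean(axis=(0, 1))

# A toy feature map shaped like VGG16's block5_pool output for a 256x256 input
fmap = np.arange(8 * 8 * 512, dtype=np.float32).reshape(8, 8, 512)
pooled = global_average_pool(fmap)
print(pooled.shape)  # (512,)
```

With `Flatten` the head's first Dense layer would see 8·8·512 = 32768 inputs instead of 512, multiplying its parameter count by 64.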
Result
The best result was loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750. I suspect the simplest way to push accuracy further is to bring in a ResNet-style network; I will try that later.
2. API Notes
Loss functions
Choosing the loss (match it to the label_mode used when loading the data):
1. binary_crossentropy (log loss)
The loss paired with a sigmoid output, for binary classification problems.
2. categorical_crossentropy (multi-class log loss)
The loss paired with a softmax output. Use categorical_crossentropy when the labels are one-hot encoded (label_mode='categorical'); for integer-encoded labels (label_mode='int'), use sparse_categorical_crossentropy instead.
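To make the formula concrete, here is a small pure-Python sketch (my own illustration, not from the original post) of what categorical_crossentropy computes for a single sample: with a one-hot label, the loss reduces to the negative log of the probability the model assigns to the true class.

```python
import math

def categorical_crossentropy(y_true, y_pred):
    """Cross-entropy between a one-hot label and a probability vector."""
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred))

y_true = [0.0, 1.0, 0.0]   # one-hot label, true class is index 1
y_pred = [0.1, 0.7, 0.2]   # softmax output of a model
loss = categorical_crossentropy(y_true, y_pred)
print(round(loss, 4))      # 0.3567, i.e. -log(0.7)
```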
Loading VGG16 in TensorFlow
1. Load VGG16 with its top fully connected layers

```python
from tensorflow.keras.applications import VGG16

# Load VGG16 with its top fully connected layers, using ImageNet pretrained weights
model = VGG16(include_top=True, weights='imagenet', input_shape=(224, 224, 3))
model.summary()
```
2. Load VGG16 without its top fully connected layers

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# Load VGG16 without the top fully connected layers, using ImageNet pretrained weights
base_model = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

# Freeze the convolutional base (optional)
for layer in base_model.layers:
    layer.trainable = False

# Take the output of the convolutional base
x = base_model.output
# Add new fully connected layers -- adapt this step to your own task
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)  # 2 output classes

# Build the new model
model = Model(inputs=base_model.input, outputs=predictions)
model.summary()
```
3. Use a custom input tensor

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input

# Define the input tensor
input_tensor = Input(shape=(224, 224, 3))

# Load VGG16 with a custom input tensor
model = VGG16(include_top=True, weights='imagenet', input_tensor=input_tensor)
model.summary()
```
4. Without pretrained weights

```python
from tensorflow.keras.applications import VGG16

# Load VGG16 without pretrained weights
model = VGG16(include_top=True, weights=None, input_shape=(224, 224, 3))
model.summary()
```
2. Face Recognition Walkthrough
1. Data Preparation
1. Imports

```python
import tensorflow as tf
from tensorflow.keras import datasets, models, layers
import numpy as np

# List all GPUs
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    gpu0 = gpus[0]                                        # if there are several, take the first
    tf.config.experimental.set_memory_growth(gpu0, True)  # grow GPU memory as needed
    tf.config.set_visible_devices([gpu0], "GPU")          # make only the first GPU visible
gpus  # show the GPUs
```
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
2. Inspecting the data directory
In the data folder, each person's images are stored in a separate subfolder.

```python
import os, PIL, pathlib

data_dir = './data/'
data_dir = pathlib.Path(data_dir)

# The subfolder names under the data directory are the class names
classnames = os.listdir(data_dir)
classnames
```
['Angelina Jolie','Brad Pitt','Denzel Washington','Hugh Jackman','Jennifer Lawrence','Johnny Depp','Kate Winslet','Leonardo DiCaprio','Megan Fox','Natalie Portman','Nicole Kidman','Robert Downey Jr','Sandra Bullock','Scarlett Johansson','Tom Cruise','Tom Hanks','Will Smith']
3、数据划分
batch_size = 32train_ds = tf.keras.preprocessing.image_dataset_from_directory('./data/',batch_size=batch_size,shuffle=True,validation_split=0.2, # 验证集 0.1,训练集 0.9subset='training',seed=42,label_mode='categorical', # 使用独热编码编码数据分类image_size=(256, 256)
)val_ds = tf.keras.preprocessing.image_dataset_from_directory('./data/',batch_size=batch_size,shuffle=True,validation_split=0.2,seed=42,subset='validation', image_size=(256, 256),label_mode='categorical' # 使用独热编码对数据进行分类
)
Found 1800 files belonging to 17 classes.
Using 1440 files for training.
Found 1800 files belonging to 17 classes.
Using 360 files for validation.
```python
# Print the shape of one batch
for image, label in train_ds.take(1):
    print(image.shape)
    print(label.shape)
```
(32, 256, 256, 3)
(32, 17)
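Each label batch has shape (32, 17) because label_mode='categorical' yields a one-hot row of length 17 per image; np.argmax maps a row back to its class index, which is how the plotting code recovers class names. A small numpy sketch with a shortened, made-up class list:

```python
import numpy as np

classnames = ['Angelina Jolie', 'Brad Pitt', 'Denzel Washington']  # shortened example list
one_hot = np.array([0.0, 1.0, 0.0])   # one-hot label for class index 1
idx = int(np.argmax(one_hot))         # position of the 1
print(classnames[idx])                # Brad Pitt
```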
4. Visualizing the data

```python
# Show one random batch of images
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 10))
for images, labels in train_ds.take(1):
    for i in range(20):
        plt.subplot(5, 10, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(classnames[np.argmax(labels[i], axis=0)])
        plt.axis('off')
plt.show()
```
2. Building the Modified VGG16
Changes to VGG16:
- Freeze the 13 convolutional layers and only retrain the classifier head
- Insert a BN layer and a global average pooling layer before the fully connected layers; the pooling reduces dimensionality, since VGG16's computation cost is high
- Add Dropout layers between the fully connected layers
```python
# Load the official VGG16 model without its top layers
vgg16_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(256, 256, 3))

# Freeze the convolutional weights
for layer in vgg16_model.layers:
    layer.trainable = False

# Take the convolutional output
x = vgg16_model.output
# Add a BN layer
x = layers.BatchNormalization()(x)
# Global average pooling to cut the computation cost
x = layers.GlobalAveragePooling2D()(x)
# Fully connected layers with Dropout
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.5)(x)
predict = layers.Dense(len(classnames))(x)  # logits; softmax is applied inside the loss

# Build the model
model = models.Model(inputs=vgg16_model.input, outputs=predict)
model.summary()
```
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 input_1 (InputLayer)        [(None, 256, 256, 3)]     0
 block1_conv1 (Conv2D)       (None, 256, 256, 64)      1792
 block1_conv2 (Conv2D)       (None, 256, 256, 64)      36928
 block1_pool (MaxPooling2D)  (None, 128, 128, 64)      0
 block2_conv1 (Conv2D)       (None, 128, 128, 128)     73856
 block2_conv2 (Conv2D)       (None, 128, 128, 128)     147584
 block2_pool (MaxPooling2D)  (None, 64, 64, 128)       0
 block3_conv1 (Conv2D)       (None, 64, 64, 256)       295168
 block3_conv2 (Conv2D)       (None, 64, 64, 256)       590080
 block3_conv3 (Conv2D)       (None, 64, 64, 256)       590080
 block3_pool (MaxPooling2D)  (None, 32, 32, 256)       0
 block4_conv1 (Conv2D)       (None, 32, 32, 512)       1180160
 block4_conv2 (Conv2D)       (None, 32, 32, 512)       2359808
 block4_conv3 (Conv2D)       (None, 32, 32, 512)       2359808
 block4_pool (MaxPooling2D)  (None, 16, 16, 512)       0
 block5_conv1 (Conv2D)       (None, 16, 16, 512)       2359808
 block5_conv2 (Conv2D)       (None, 16, 16, 512)       2359808
 block5_conv3 (Conv2D)       (None, 16, 16, 512)       2359808
 block5_pool (MaxPooling2D)  (None, 8, 8, 512)         0
 batch_normalization (BatchNormalization)  (None, 8, 8, 512)  2048
 global_average_pooling2d (GlobalAveragePooling2D)  (None, 512)  0
 dense (Dense)               (None, 1024)              525312
 dropout (Dropout)           (None, 1024)              0
 dense_1 (Dense)             (None, 512)               524800
 dropout_1 (Dropout)         (None, 512)               0
 dense_2 (Dense)             (None, 17)                8721
=================================================================
Total params: 15,775,569
Trainable params: 15,774,545
Non-trainable params: 1,024
_________________________________________________________________
3. Model Training
1. Hyperparameters

```python
# Initial learning rate
learning_rate = 1e-3

# Exponentially decaying learning-rate schedule
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    learning_rate,
    decay_steps=60,   # decay once every 60 steps
    decay_rate=0.96,  # multiply the rate by 0.96 each time
    staircase=True
)

# Optimizer driven by the schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```
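With staircase=True, ExponentialDecay multiplies the initial rate by decay_rate once per full decay_steps interval, i.e. lr = initial_lr * decay_rate ** (step // decay_steps). A pure-Python sketch of that formula (my own illustration of the schedule's documented behavior, not TensorFlow code):

```python
def staircase_exponential_decay(initial_lr, step, decay_steps=60, decay_rate=0.96):
    """Mirror tf.keras.optimizers.schedules.ExponentialDecay with staircase=True."""
    return initial_lr * decay_rate ** (step // decay_steps)

for step in (0, 59, 60, 120):
    print(step, staircase_exponential_decay(1e-3, step))
# steps 0 and 59 keep 0.001; step 60 drops to 0.00096; step 120 to 0.0009216
```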
```python
# Compile the model; the head outputs logits, so from_logits=True
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
```
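from_logits=True matters here because the final Dense layer has no softmax activation: the model emits raw scores (logits) and the loss applies softmax internally before taking the cross-entropy. A pure-Python sketch of that composition (illustrative only, not TensorFlow's actual implementation):

```python
import math

def softmax(logits):
    """Numerically stable softmax."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def crossentropy_from_logits(y_true_onehot, logits):
    """What CategoricalCrossentropy(from_logits=True) computes for one sample."""
    probs = softmax(logits)
    return -sum(t * math.log(p) for t, p in zip(y_true_onehot, probs))

logits = [2.0, 1.0, 0.1]  # made-up raw scores from the model
print(crossentropy_from_logits([1.0, 0.0, 0.0], logits))  # -log(softmax(logits)[0])
```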
2、模型正式训练
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStoppingepoches = 100# 训练模型中最佳模型
checkpointer = ModelCheckpoint('best_model.h5',monitor='val_accuracy', # 被检测参数verbose=1,save_best_only=True,save_weights_only=True
)# 设置早停
earlystopper = EarlyStopping(monitor='val_accuracy',verbose=1, # 信息模型patience=20, min_delta=0.01, # 20次没有提示0.01,则停止
)history = model.fit(train_ds, validation_data=val_ds,epochs=epoches,callbacks=[checkpointer, earlystopper] # 设置回调函数
)
Epoch 1/100
2024-11-01 19:31:48.093783: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8101
2024-11-01 19:31:50.361608: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
45/45 [==============================] - ETA: 0s - loss: 2.8548 - accuracy: 0.0826
Epoch 1: val_accuracy improved from -inf to 0.13056, saving model to best_model.h5
45/45 [==============================] - 14s 205ms/step - loss: 2.8548 - accuracy: 0.0826 - val_loss: 8.0561 - val_accuracy: 0.1306
Epoch 2/100
45/45 [==============================] - ETA: 0s - loss: 2.7271 - accuracy: 0.1181
Epoch 2: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.7271 - accuracy: 0.1181 - val_loss: 3.7047 - val_accuracy: 0.0639
Epoch 3/100
45/45 [==============================] - ETA: 0s - loss: 2.6583 - accuracy: 0.1354
Epoch 3: val_accuracy did not improve from 0.13056
45/45 [==============================] - 7s 144ms/step - loss: 2.6583 - accuracy: 0.1354 - val_loss: 8.0687 - val_accuracy: 0.0806
Epoch 4/100
45/45 [==============================] - ETA: 0s - loss: 2.5833 - accuracy: 0.1444
Epoch 4: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.5833 - accuracy: 0.1444 - val_loss: 4.7184 - val_accuracy: 0.1000
Epoch 5/100
45/45 [==============================] - ETA: 0s - loss: 2.5115 - accuracy: 0.1576
Epoch 5: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.5115 - accuracy: 0.1576 - val_loss: 61.5911 - val_accuracy: 0.0639
Epoch 6/100
45/45 [==============================] - ETA: 0s - loss: 2.4402 - accuracy: 0.1674
Epoch 6: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.4402 - accuracy: 0.1674 - val_loss: 4.6790 - val_accuracy: 0.0944
Epoch 7/100
45/45 [==============================] - ETA: 0s - loss: 2.3911 - accuracy: 0.1951
Epoch 7: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.3911 - accuracy: 0.1951 - val_loss: 2.7717 - val_accuracy: 0.1028
Epoch 8/100
45/45 [==============================] - ETA: 0s - loss: 2.3331 - accuracy: 0.1931
Epoch 8: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 144ms/step - loss: 2.3331 - accuracy: 0.1931 - val_loss: 8.2605 - val_accuracy: 0.0639
Epoch 9/100
45/45 [==============================] - ETA: 0s - loss: 2.2922 - accuracy: 0.2021
Epoch 9: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.2922 - accuracy: 0.2021 - val_loss: 51.5976 - val_accuracy: 0.0306
Epoch 10/100
45/45 [==============================] - ETA: 0s - loss: 2.2182 - accuracy: 0.2313
Epoch 10: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.2182 - accuracy: 0.2313 - val_loss: 4.3942 - val_accuracy: 0.0611
Epoch 11/100
45/45 [==============================] - ETA: 0s - loss: 2.2049 - accuracy: 0.2361
Epoch 11: val_accuracy improved from 0.13056 to 0.17778, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 2.2049 - accuracy: 0.2361 - val_loss: 2.4072 - val_accuracy: 0.1778
Epoch 12/100
45/45 [==============================] - ETA: 0s - loss: 2.1242 - accuracy: 0.2576
Epoch 12: val_accuracy improved from 0.17778 to 0.18056, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 2.1242 - accuracy: 0.2576 - val_loss: 2.6218 - val_accuracy: 0.1806
Epoch 13/100
45/45 [==============================] - ETA: 0s - loss: 2.0634 - accuracy: 0.2639
Epoch 13: val_accuracy did not improve from 0.18056
45/45 [==============================] - 6s 142ms/step - loss: 2.0634 - accuracy: 0.2639 - val_loss: 14.2102 - val_accuracy: 0.1556
Epoch 14/100
45/45 [==============================] - ETA: 0s - loss: 2.0379 - accuracy: 0.2861
Epoch 14: val_accuracy did not improve from 0.18056
45/45 [==============================] - 6s 143ms/step - loss: 2.0379 - accuracy: 0.2861 - val_loss: 931.4739 - val_accuracy: 0.1556
Epoch 15/100
45/45 [==============================] - ETA: 0s - loss: 1.9782 - accuracy: 0.3063
Epoch 15: val_accuracy improved from 0.18056 to 0.21667, saving model to best_model.h5
45/45 [==============================] - 7s 144ms/step - loss: 1.9782 - accuracy: 0.3063 - val_loss: 2.3025 - val_accuracy: 0.2167
Epoch 16/100
45/45 [==============================] - ETA: 0s - loss: 1.9299 - accuracy: 0.3306
Epoch 16: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.9299 - accuracy: 0.3306 - val_loss: 2.2587 - val_accuracy: 0.2000
Epoch 17/100
45/45 [==============================] - ETA: 0s - loss: 1.8289 - accuracy: 0.3590
Epoch 17: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.8289 - accuracy: 0.3590 - val_loss: 2.5047 - val_accuracy: 0.1722
Epoch 18/100
45/45 [==============================] - ETA: 0s - loss: 1.7912 - accuracy: 0.3694
Epoch 18: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 142ms/step - loss: 1.7912 - accuracy: 0.3694 - val_loss: 3.1102 - val_accuracy: 0.1722
Epoch 19/100
45/45 [==============================] - ETA: 0s - loss: 1.7762 - accuracy: 0.3764
Epoch 19: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 142ms/step - loss: 1.7762 - accuracy: 0.3764 - val_loss: 2.7225 - val_accuracy: 0.2083
Epoch 20/100
45/45 [==============================] - ETA: 0s - loss: 1.7182 - accuracy: 0.3979
Epoch 20: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.7182 - accuracy: 0.3979 - val_loss: 3.4486 - val_accuracy: 0.1528
Epoch 21/100
45/45 [==============================] - ETA: 0s - loss: 1.6341 - accuracy: 0.4208
Epoch 21: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.6341 - accuracy: 0.4208 - val_loss: 2.7709 - val_accuracy: 0.1806
Epoch 22/100
45/45 [==============================] - ETA: 0s - loss: 1.5667 - accuracy: 0.4486
Epoch 22: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 144ms/step - loss: 1.5667 - accuracy: 0.4486 - val_loss: 4.2764 - val_accuracy: 0.1583
Epoch 23/100
45/45 [==============================] - ETA: 0s - loss: 1.4579 - accuracy: 0.4875
Epoch 23: val_accuracy improved from 0.21667 to 0.26111, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 1.4579 - accuracy: 0.4875 - val_loss: 32579.7422 - val_accuracy: 0.2611
Epoch 24/100
45/45 [==============================] - ETA: 0s - loss: 1.4373 - accuracy: 0.4854
Epoch 24: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 145ms/step - loss: 1.4373 - accuracy: 0.4854 - val_loss: 8038.8555 - val_accuracy: 0.1972
Epoch 25/100
45/45 [==============================] - ETA: 0s - loss: 1.3630 - accuracy: 0.5139
Epoch 25: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 145ms/step - loss: 1.3630 - accuracy: 0.5139 - val_loss: 2.3408 - val_accuracy: 0.2528
Epoch 26/100
45/45 [==============================] - ETA: 0s - loss: 1.3181 - accuracy: 0.5375
Epoch 26: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 144ms/step - loss: 1.3181 - accuracy: 0.5375 - val_loss: 2.1877 - val_accuracy: 0.2500
Epoch 27/100
45/45 [==============================] - ETA: 0s - loss: 1.2544 - accuracy: 0.5583
Epoch 27: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 144ms/step - loss: 1.2544 - accuracy: 0.5583 - val_loss: 2.6184 - val_accuracy: 0.1861
Epoch 28/100
45/45 [==============================] - ETA: 0s - loss: 1.1877 - accuracy: 0.5813
Epoch 28: val_accuracy did not improve from 0.26111
45/45 [==============================] - 6s 144ms/step - loss: 1.1877 - accuracy: 0.5813 - val_loss: 3.0485 - val_accuracy: 0.2500
Epoch 29/100
45/45 [==============================] - ETA: 0s - loss: 1.0968 - accuracy: 0.6132
Epoch 29: val_accuracy did not improve from 0.26111
45/45 [==============================] - 6s 143ms/step - loss: 1.0968 - accuracy: 0.6132 - val_loss: 61754.2734 - val_accuracy: 0.1917
Epoch 30/100
45/45 [==============================] - ETA: 0s - loss: 1.0537 - accuracy: 0.6424
Epoch 30: val_accuracy improved from 0.26111 to 0.26667, saving model to best_model.h5
45/45 [==============================] - 7s 148ms/step - loss: 1.0537 - accuracy: 0.6424 - val_loss: 2.3469 - val_accuracy: 0.2667
Epoch 31/100
45/45 [==============================] - ETA: 0s - loss: 1.0427 - accuracy: 0.6306
Epoch 31: val_accuracy did not improve from 0.26667
45/45 [==============================] - 6s 143ms/step - loss: 1.0427 - accuracy: 0.6306 - val_loss: 3.4498 - val_accuracy: 0.2250
Epoch 32/100
45/45 [==============================] - ETA: 0s - loss: 1.0697 - accuracy: 0.6403
Epoch 32: val_accuracy improved from 0.26667 to 0.37222, saving model to best_model.h5
45/45 [==============================] - 7s 146ms/step - loss: 1.0697 - accuracy: 0.6403 - val_loss: 2.8960 - val_accuracy: 0.3722
Epoch 33/100
45/45 [==============================] - ETA: 0s - loss: 0.9062 - accuracy: 0.6840
Epoch 33: val_accuracy did not improve from 0.37222
45/45 [==============================] - 6s 143ms/step - loss: 0.9062 - accuracy: 0.6840 - val_loss: 102.1351 - val_accuracy: 0.3028
Epoch 34/100
45/45 [==============================] - ETA: 0s - loss: 0.8220 - accuracy: 0.7118
Epoch 34: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.8220 - accuracy: 0.7118 - val_loss: 3.1855 - val_accuracy: 0.2583
Epoch 35/100
45/45 [==============================] - ETA: 0s - loss: 0.7424 - accuracy: 0.7431
Epoch 35: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.7424 - accuracy: 0.7431 - val_loss: 34309.0664 - val_accuracy: 0.3028
Epoch 36/100
45/45 [==============================] - ETA: 0s - loss: 0.7257 - accuracy: 0.7535
Epoch 36: val_accuracy did not improve from 0.37222
45/45 [==============================] - 6s 144ms/step - loss: 0.7257 - accuracy: 0.7535 - val_loss: 89.2148 - val_accuracy: 0.2361
Epoch 37/100
45/45 [==============================] - ETA: 0s - loss: 0.6695 - accuracy: 0.7799
Epoch 37: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 146ms/step - loss: 0.6695 - accuracy: 0.7799 - val_loss: 3590.8940 - val_accuracy: 0.1889
Epoch 38/100
45/45 [==============================] - ETA: 0s - loss: 0.5841 - accuracy: 0.7917
Epoch 38: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 145ms/step - loss: 0.5841 - accuracy: 0.7917 - val_loss: 5.1283 - val_accuracy: 0.2222
Epoch 39/100
45/45 [==============================] - ETA: 0s - loss: 0.5989 - accuracy: 0.7840
Epoch 39: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 145ms/step - loss: 0.5989 - accuracy: 0.7840 - val_loss: 3.7647 - val_accuracy: 0.2833
Epoch 40/100
45/45 [==============================] - ETA: 0s - loss: 0.5431 - accuracy: 0.8181
Epoch 40: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.5431 - accuracy: 0.8181 - val_loss: 3.9703 - val_accuracy: 0.3028
Epoch 41/100
45/45 [==============================] - ETA: 0s - loss: 0.4810 - accuracy: 0.8333
Epoch 41: val_accuracy improved from 0.37222 to 0.40278, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.4810 - accuracy: 0.8333 - val_loss: 2.7934 - val_accuracy: 0.4028
Epoch 42/100
45/45 [==============================] - ETA: 0s - loss: 0.5016 - accuracy: 0.8278
Epoch 42: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.5016 - accuracy: 0.8278 - val_loss: 58485.9453 - val_accuracy: 0.2583
Epoch 43/100
45/45 [==============================] - ETA: 0s - loss: 0.4782 - accuracy: 0.8424
Epoch 43: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 144ms/step - loss: 0.4782 - accuracy: 0.8424 - val_loss: 3.6065 - val_accuracy: 0.3694
Epoch 44/100
45/45 [==============================] - ETA: 0s - loss: 0.3587 - accuracy: 0.8785
Epoch 44: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3587 - accuracy: 0.8785 - val_loss: 5.5882 - val_accuracy: 0.3806
Epoch 45/100
45/45 [==============================] - ETA: 0s - loss: 0.3143 - accuracy: 0.8889
Epoch 45: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3143 - accuracy: 0.8889 - val_loss: 2.7883 - val_accuracy: 0.3861
Epoch 46/100
45/45 [==============================] - ETA: 0s - loss: 0.3707 - accuracy: 0.8757
Epoch 46: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3707 - accuracy: 0.8757 - val_loss: 3.2097 - val_accuracy: 0.3583
Epoch 47/100
45/45 [==============================] - ETA: 0s - loss: 0.3418 - accuracy: 0.8799
Epoch 47: val_accuracy did not improve from 0.40278
45/45 [==============================] - 6s 144ms/step - loss: 0.3418 - accuracy: 0.8799 - val_loss: 3.1672 - val_accuracy: 0.4028
Epoch 48/100
45/45 [==============================] - ETA: 0s - loss: 0.3202 - accuracy: 0.8931
Epoch 48: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3202 - accuracy: 0.8931 - val_loss: 16.9275 - val_accuracy: 0.3944
Epoch 49/100
45/45 [==============================] - ETA: 0s - loss: 0.2668 - accuracy: 0.9118
Epoch 49: val_accuracy improved from 0.40278 to 0.41944, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.2668 - accuracy: 0.9118 - val_loss: 2.8230 - val_accuracy: 0.4194
Epoch 50/100
45/45 [==============================] - ETA: 0s - loss: 0.2676 - accuracy: 0.9021
Epoch 50: val_accuracy did not improve from 0.41944
45/45 [==============================] - 7s 144ms/step - loss: 0.2676 - accuracy: 0.9021 - val_loss: 2671.1196 - val_accuracy: 0.3639
Epoch 51/100
45/45 [==============================] - ETA: 0s - loss: 0.2152 - accuracy: 0.9306
Epoch 51: val_accuracy improved from 0.41944 to 0.45556, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.2152 - accuracy: 0.9306 - val_loss: 2.5370 - val_accuracy: 0.4556
Epoch 52/100
45/45 [==============================] - ETA: 0s - loss: 0.1308 - accuracy: 0.9611
Epoch 52: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 144ms/step - loss: 0.1308 - accuracy: 0.9611 - val_loss: 2.9426 - val_accuracy: 0.4444
Epoch 53/100
45/45 [==============================] - ETA: 0s - loss: 0.1306 - accuracy: 0.9556
Epoch 53: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.1306 - accuracy: 0.9556 - val_loss: 3.2494 - val_accuracy: 0.3917
Epoch 54/100
45/45 [==============================] - ETA: 0s - loss: 0.1515 - accuracy: 0.9500
Epoch 54: val_accuracy did not improve from 0.45556
45/45 [==============================] - 6s 144ms/step - loss: 0.1515 - accuracy: 0.9500 - val_loss: 4461.8813 - val_accuracy: 0.3611
Epoch 55/100
45/45 [==============================] - ETA: 0s - loss: 0.2079 - accuracy: 0.9285
Epoch 55: val_accuracy did not improve from 0.45556
45/45 [==============================] - 6s 144ms/step - loss: 0.2079 - accuracy: 0.9285 - val_loss: 4.7424 - val_accuracy: 0.3917
Epoch 56/100
45/45 [==============================] - ETA: 0s - loss: 0.2407 - accuracy: 0.9076
Epoch 56: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.2407 - accuracy: 0.9076 - val_loss: 3.3555 - val_accuracy: 0.3889
Epoch 57/100
45/45 [==============================] - ETA: 0s - loss: 0.1948 - accuracy: 0.9333
Epoch 57: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.1948 - accuracy: 0.9333 - val_loss: 3.4168 - val_accuracy: 0.3861
Epoch 58/100
45/45 [==============================] - ETA: 0s - loss: 0.1534 - accuracy: 0.9431
Epoch 58: val_accuracy improved from 0.45556 to 0.47222, saving model to best_model.h5
45/45 [==============================] - 7s 146ms/step - loss: 0.1534 - accuracy: 0.9431 - val_loss: 2.7895 - val_accuracy: 0.4722
Epoch 59/100
45/45 [==============================] - ETA: 0s - loss: 0.1457 - accuracy: 0.9549
Epoch 59: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.1457 - accuracy: 0.9549 - val_loss: 6.3610 - val_accuracy: 0.3444
Epoch 60/100
45/45 [==============================] - ETA: 0s - loss: 0.2078 - accuracy: 0.9306
Epoch 60: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.2078 - accuracy: 0.9306 - val_loss: 3.5834 - val_accuracy: 0.4056
Epoch 61/100
45/45 [==============================] - ETA: 0s - loss: 0.2005 - accuracy: 0.9361
Epoch 61: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.2005 - accuracy: 0.9361 - val_loss: 4.0683 - val_accuracy: 0.3861
Epoch 62/100
45/45 [==============================] - ETA: 0s - loss: 0.1815 - accuracy: 0.9375
Epoch 62: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1815 - accuracy: 0.9375 - val_loss: 3.1445 - val_accuracy: 0.4611
Epoch 63/100
45/45 [==============================] - ETA: 0s - loss: 0.1027 - accuracy: 0.9722
Epoch 63: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1027 - accuracy: 0.9722 - val_loss: 3.0654 - val_accuracy: 0.4500
Epoch 64/100
45/45 [==============================] - ETA: 0s - loss: 0.1370 - accuracy: 0.9535
Epoch 64: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1370 - accuracy: 0.9535 - val_loss: 3.1589 - val_accuracy: 0.4667
Epoch 65/100
45/45 [==============================] - ETA: 0s - loss: 0.1530 - accuracy: 0.9576
Epoch 65: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1530 - accuracy: 0.9576 - val_loss: 19.4580 - val_accuracy: 0.3722
Epoch 66/100
45/45 [==============================] - ETA: 0s - loss: 0.1092 - accuracy: 0.9625
Epoch 66: val_accuracy did not improve from 0.47222
45/45 [==============================] - 6s 143ms/step - loss: 0.1092 - accuracy: 0.9625 - val_loss: 263474.1250 - val_accuracy: 0.2639
Epoch 67/100
45/45 [==============================] - ETA: 0s - loss: 0.1094 - accuracy: 0.9639
Epoch 67: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.1094 - accuracy: 0.9639 - val_loss: 50495.4219 - val_accuracy: 0.4222
Epoch 68/100
45/45 [==============================] - ETA: 0s - loss: 0.0843 - accuracy: 0.9694
Epoch 68: val_accuracy improved from 0.47222 to 0.47500, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 0.0843 - accuracy: 0.9694 - val_loss: 20.9734 - val_accuracy: 0.4750
Epoch 69/100
45/45 [==============================] - ETA: 0s - loss: 0.1767 - accuracy: 0.9458
Epoch 69: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1767 - accuracy: 0.9458 - val_loss: 1322.2261 - val_accuracy: 0.3583
Epoch 70/100
45/45 [==============================] - ETA: 0s - loss: 0.1305 - accuracy: 0.9479
Epoch 70: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.1305 - accuracy: 0.9479 - val_loss: 4.3810 - val_accuracy: 0.3889
Epoch 71/100
45/45 [==============================] - ETA: 0s - loss: 0.1202 - accuracy: 0.9569
Epoch 71: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.1202 - accuracy: 0.9569 - val_loss: 144.1233 - val_accuracy: 0.1361
Epoch 72/100
45/45 [==============================] - ETA: 0s - loss: 0.0746 - accuracy: 0.9785
Epoch 72: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.0746 - accuracy: 0.9785 - val_loss: 3.0208 - val_accuracy: 0.4417
Epoch 73/100
45/45 [==============================] - ETA: 0s - loss: 0.1549 - accuracy: 0.9542
Epoch 73: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1549 - accuracy: 0.9542 - val_loss: 4.0066 - val_accuracy: 0.4333
Epoch 74/100
45/45 [==============================] - ETA: 0s - loss: 0.1743 - accuracy: 0.9444
Epoch 74: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1743 - accuracy: 0.9444 - val_loss: 373.7328 - val_accuracy: 0.4250
Epoch 75/100
45/45 [==============================] - ETA: 0s - loss: 0.1104 - accuracy: 0.9611
Epoch 75: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1104 - accuracy: 0.9611 - val_loss: 4.0707 - val_accuracy: 0.4222
Epoch 76/100
45/45 [==============================] - ETA: 0s - loss: 0.1021 - accuracy: 0.9639
Epoch 76: val_accuracy did not improve from 0.47500
45/45 [==============================] - 6s 144ms/step - loss: 0.1021 - accuracy: 0.9639 - val_loss: 4.0057 - val_accuracy: 0.3944
Epoch 77/100
45/45 [==============================] - ETA: 0s - loss: 0.1100 - accuracy: 0.9618
Epoch 77: val_accuracy did not improve from 0.47500
45/45 [==============================] - 6s 143ms/step - loss: 0.1100 - accuracy: 0.9618 - val_loss: 4.1805 - val_accuracy: 0.4389
Epoch 78/100
45/45 [==============================] - ETA: 0s - loss: 0.0505 - accuracy: 0.9847
Epoch 78: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750
Epoch 78: early stopping
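The early stop in the log above follows directly from the callback's settings: with patience=20 and min_delta=0.01, training halts once 20 consecutive epochs fail to beat the best val_accuracy by at least 0.01 (last improvement at epoch 58, stop at epoch 78). A pure-Python sketch of that patience logic, run on a made-up accuracy curve:

```python
def stop_epoch(val_accuracies, patience=20, min_delta=0.01):
    """Return the (1-based) epoch at which early stopping would trigger, or None."""
    best = float('-inf')
    wait = 0
    for epoch, acc in enumerate(val_accuracies, start=1):
        if acc - best > min_delta:   # counts as an improvement
            best = acc
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Made-up curve: improves for three epochs, then plateaus
curve = [0.10, 0.15, 0.30] + [0.30] * 30
print(stop_epoch(curve, patience=20))  # 23: twenty epochs after the last improvement
```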
4. Results and Prediction
1. Results

```python
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
- The validation loss is hard to tame: in every run I tried there was at least one epoch where it exploded to a huge value, something I never saw with my earlier PyTorch version. The best result of this run was loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750.
- Validation accuracy simply will not climb. There are likely several causes: the training set is small, and the model's computation cost is on the heavy side. Combining it with a ResNet-style network should do better; I will update that later.
2、预测
from PIL import Image # 加载模型权重
model.load_weights('best_model.h5')# 加载图片
img = Image.open("./data/Brad Pitt/001_c04300ef.jpg")
image = tf.image.resize(img, [256, 256])img_array = tf.expand_dims(image, 0) # 新插入一个元素predict = model.predict(img_array)
print("预测结果: ", classnames[np.argmax(predict)])
1/1 [==============================] - 0s 384ms/step
预测结果: Brad Pitt
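Note that predict contains logits, since the head has no softmax; np.argmax still picks the right class because softmax is monotonic, but to report a confidence you would apply softmax first. A small numpy sketch with made-up logits:

```python
import numpy as np

def softmax(logits):
    exps = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exps / exps.sum()

logits = np.array([1.2, 4.5, 0.3])  # hypothetical model output for one image
probs = softmax(logits)
print(int(np.argmax(logits)) == int(np.argmax(probs)))  # True: argmax is unchanged
print(float(probs.max()))  # confidence of the predicted class
```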