
Training a Sentiment Analysis Model on the IMDB Dataset with a Subword-Level Tokenizer (BPE)

In this post, I will show how to build a simple sentiment analysis model with Python and TensorFlow/Keras. The model uses a subword-level tokenizer (Byte Pair Encoding, BPE) to split the text and is trained on the IMDB movie review dataset. The workflow covers reading the data, training the tokenizer, building the model, and training it.

1. The Dataset

We use the classic aclImdb dataset; refer to the earlier blog post for how to download it. It is a binary-classification collection of movie review texts, labeled as positive or negative. The dataset is split into a training set and a test set, each containing both positive and negative reviews. We start by reading the dataset in and doing some basic processing.

def read_imdb_data(directory):
    texts, labels = [], []
    for label_type in ['pos', 'neg']:
        dir_name = os.path.join(directory, label_type)
        for fname in os.listdir(dir_name):
            if fname.endswith('.txt'):
                with open(os.path.join(dir_name, fname), encoding='utf-8') as f:
                    texts.append(f.read())
                labels.append(1 if label_type == 'pos' else 0)
    return texts, labels

In this code, we walk the directory tree, read each text file, and attach a label to every review: 1 for positive, 0 for negative.
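
For reference, here is a minimal usage sketch, assuming the dataset has been unpacked into an aclImdb/ folder in the working directory:

import os  # used by read_imdb_data

train_texts, train_labels = read_imdb_data('aclImdb/train')
print(len(train_texts))   # 25000 reviews in the standard aclImdb training split
print(train_labels[:3])   # e.g. [1, 1, 1] -- the 'pos' directory is read first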

2. Custom Subword-Level Tokenizer

The tokenizer is one of the core pieces of text processing. We use the BPE (Byte Pair Encoding) tokenizer from the tokenizers library to split text at the subword level. BPE is a widely used tokenization algorithm that handles vocabulary sparsity effectively.

def train_tokenizer(train_texts):
    tokenizer = Tokenizer(BPE())
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(vocab_size=30000,
                         special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
    tokenizer.train_from_iterator(train_texts, trainer)
    tokenizer.save("custom_tokenizer.json")
    return tokenizer

We set the vocabulary size to 30,000 and define a few special tokens, for example <pad> for padding sequences and <unk> for out-of-vocabulary words. The train_from_iterator method trains the tokenizer on the training texts.
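
As a quick sanity check on the trained tokenizer (illustrative only, since the exact subword splits depend on the merges learned from the data), you can encode a sample sentence and inspect the result:

tokenizer = train_tokenizer(train_texts)
encoding = tokenizer.encode("this movie was surprisingly good")
print(encoding.tokens)   # subword strings, e.g. ['this', 'movie', 'was', 'surpr', 'isingly', 'good']
print(encoding.ids)      # the integer IDs that will be fed to the model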

3. Tokenizing the Text and Preparing the Data

After tokenizing the data, we still need to pad or truncate the resulting sequences so that every input fed to the model has the same length.

def tokenize_texts(texts, tokenizer, max_length=200):
    tokenized_texts = [tokenizer.encode(text.lower()).ids for text in texts]
    return np.array([t[:max_length] if len(t) > max_length
                     else np.pad(t, (0, max_length - len(t)), 'constant')
                     for t in tokenized_texts])

Here we tokenize each text and pad or truncate it to a fixed length (200 subwords in this case), so that all input sequences have the same dimension.
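
A quick shape check (assuming train_texts and the trained tokenizer from the previous steps) confirms that every row ends up with exactly 200 token IDs:

X_train = tokenize_texts(train_texts, tokenizer, max_length=200)
print(X_train.shape)   # (number_of_reviews, 200)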

4. Building the Model

We use a simple convolutional neural network (CNN) to process the tokenized sequences and classify their sentiment.

def build_model():
    tokenizer = Tokenizer.from_file("custom_tokenizer.json")
    model = models.Sequential([
        layers.Input(shape=(200,)),
        layers.Embedding(input_dim=tokenizer.get_vocab_size(), output_dim=200),
        layers.Conv1D(filters=200, kernel_size=5, activation='relu'),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(filters=200, kernel_size=5, activation='relu'),
        layers.MaxPooling1D(pool_size=2),
        layers.GlobalMaxPooling1D(),
        layers.Dense(200, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(64, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

The model first maps the input sequence to dense vectors through an embedding layer, then passes it through several convolution and pooling layers followed by a global pooling layer, and finally produces the classification through fully connected layers. We use a sigmoid activation for the binary output and binary_crossentropy as the loss function.
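
For readers following the tensor shapes, here is the bookkeeping for the convolutional stack (kernel_size=5 with the default 'valid' padding), which matches the model summary printed in the run output below:

# (batch, 200)       token IDs after padding/truncation
# (batch, 200, 200)  after the Embedding layer (output_dim=200)
# (batch, 196, 200)  after Conv1D(200, 5): 200 - 5 + 1 = 196 time steps
# (batch, 98, 200)   after MaxPooling1D(2)
# (batch, 94, 200)   after the second Conv1D(200, 5)
# (batch, 47, 200)   after the second MaxPooling1D(2)
# (batch, 200)       after GlobalMaxPooling1D: one maximum per filter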

5. Training the Model

def train_model(model, train_texts, train_labels):
    callbacks = [
        tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=4, mode='min', verbose=1),
        tf.keras.callbacks.ModelCheckpoint("model.h5", monitor='val_loss', save_best_only=True,
                                           mode='min', verbose=1),
        tf.keras.callbacks.TensorBoard(log_dir="logs")
    ]
    model.fit(train_texts, train_labels, epochs=20, batch_size=100,
              validation_split=0.1, callbacks=callbacks)

During training we use early stopping (EarlyStopping) and checkpointing (ModelCheckpoint) to guard against overfitting, and TensorBoard to track the training run.
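
One detail worth noting: as configured here, EarlyStopping stops training but keeps the weights from the last epoch in memory (restore_best_weights defaults to False), while ModelCheckpoint writes the best weights to model.h5. If you prefer to evaluate the best checkpoint rather than the final epoch, a small optional sketch is:

# Optionally reload the best checkpoint written by ModelCheckpoint before evaluation.
best_model = tf.keras.models.load_model("model.h5")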

6. Evaluating the Model

def evaluate_model(model, test_texts, test_labels):
    test_loss, test_acc = model.evaluate(test_texts, test_labels)
    print(f"Test Accuracy: {test_acc}")

We gauge the model's performance by evaluating its accuracy on the test set. You can also try the model's predictions on some custom texts of your own.
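
For example, a minimal prediction sketch that reuses the saved tokenizer and the tokenize_texts helper from above (the complete script at the end does the same for a longer list of sentences):

tokenizer = Tokenizer.from_file("custom_tokenizer.json")
samples = ["This movie was amazing!", "I did not like this movie at all."]
probs = model.predict(tokenize_texts(samples, tokenizer))
print(probs)   # probabilities close to 1 mean positive, close to 0 mean negative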

7. Summary

In this post we saw how to train a sentiment classification model with a custom subword-level tokenizer. The approach is not limited to sentiment analysis and extends to other natural language processing tasks. I hope this article helps you better understand text preprocessing and the construction of deep learning models.

Practice

Complete Code

import os
import numpy as np
from sklearn.model_selection import train_test_split
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace
import tensorflow as tf
from keras import layers, models

# Constants
MAX_LEN = 200
VOCAB_SIZE = 30000
TRAIN_DIR = 'aclImdb/train'
TEST_DIR = 'aclImdb/test'


def read_imdb_data(directory):
    texts, labels = [], []
    for label_type in ['pos', 'neg']:
        dir_name = os.path.join(directory, label_type)
        for fname in os.listdir(dir_name):
            if fname.endswith('.txt'):
                with open(os.path.join(dir_name, fname), encoding='utf-8') as f:
                    texts.append(f.read())
                labels.append(1 if label_type == 'pos' else 0)  # 1 for pos, 0 for neg
    return texts, labels


def get_dataset():
    # Load the data
    train_texts, train_labels = read_imdb_data(TRAIN_DIR)
    test_texts, test_labels = read_imdb_data(TEST_DIR)

    # Merge and split dataset for training and validation
    texts = train_texts + test_texts
    labels = train_labels + test_labels
    train_texts, test_texts, train_labels, test_labels = train_test_split(
        texts, labels, test_size=0.2, random_state=42)
    print(f"Train set size: {len(train_texts)}, Test set size: {len(test_texts)}")
    return train_texts, train_labels, test_texts, test_labels


def tokenize_texts(texts, tokenizer, max_length=MAX_LEN):
    tokenized_texts = [tokenizer.encode(text.lower()).ids for text in texts]
    # Padding/truncating to fixed length
    return np.array([t[:max_length] if len(t) > max_length
                     else np.pad(t, (0, max_length - len(t)), 'constant')
                     for t in tokenized_texts])


def train_tokenizer(train_texts):
    # Initialize and train the BPE tokenizer
    tokenizer = Tokenizer(BPE())
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(vocab_size=VOCAB_SIZE,
                         special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
    tokenizer.train_from_iterator(train_texts, trainer)
    tokenizer.save("custom_tokenizer.json")
    return tokenizer


def get_tokenized_dataset(train_texts, train_labels, test_texts, test_labels):
    tokenizer = Tokenizer.from_file("custom_tokenizer.json")
    tokenized_train_texts = tokenize_texts(train_texts, tokenizer)
    tokenized_test_texts = tokenize_texts(test_texts, tokenizer)
    return (np.array(tokenized_train_texts), np.array(train_labels),
            np.array(tokenized_test_texts), np.array(test_labels))


def build_model():
    tokenizer = Tokenizer.from_file("custom_tokenizer.json")
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(input_dim=tokenizer.get_vocab_size(), output_dim=MAX_LEN),
        layers.Conv1D(filters=MAX_LEN, kernel_size=5, activation='relu'),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(filters=MAX_LEN, kernel_size=5, activation='relu'),
        layers.MaxPooling1D(pool_size=2),
        layers.GlobalMaxPooling1D(),
        layers.Dense(MAX_LEN, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(64, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.summary()
    return model


def train_model(model, train_texts, train_labels):
    callbacks = [
        tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=4, mode='min', verbose=1),
        tf.keras.callbacks.ModelCheckpoint("model.h5", monitor='val_loss', save_best_only=True,
                                           mode='min', verbose=1),
        tf.keras.callbacks.TensorBoard(log_dir="logs")
    ]
    model.fit(train_texts, train_labels, epochs=20, batch_size=100,
              validation_split=0.1, callbacks=callbacks)


def evaluate_model(model, test_texts, test_labels):
    test_loss, test_acc = model.evaluate(test_texts, test_labels)
    print(f"Test Accuracy: {test_acc}")

    # Example predictions
    new_texts = [
        "This movie was amazing!",
        "I did not like this movie at all.",
        "I didn't like this movie at all.",
        "this is bad movie ",
        "This is good movie",
        "This isn't good movie",
        "This is not good movie",
        "I don't like this movie at all",
        "i think this is bad movie"
    ]
    tokenizer = Tokenizer.from_file("custom_tokenizer.json")
    tokenized_new_texts = tokenize_texts(new_texts, tokenizer)
    predictions = model.predict(tokenized_new_texts)
    print(predictions)  # Output probabilities


if __name__ == '__main__':
    train_texts, train_labels, test_texts, test_labels = get_dataset()
    train_tokenizer(train_texts)
    train_texts, train_labels, test_texts, test_labels = get_tokenized_dataset(
        train_texts, train_labels, test_texts, test_labels)
    model = build_model()
    train_model(model, train_texts, train_labels)
    evaluate_model(model, test_texts, test_labels)

Run Output

Train set size: 40000, Test set size: 10000
Model: "sequential"
_________________________________________________________________
 Layer (type)                               Output Shape       Param #
=======================================================================
 embedding (Embedding)                      (None, 200, 200)   6000000
 conv1d (Conv1D)                            (None, 196, 200)   200200
 max_pooling1d (MaxPooling1D)               (None, 98, 200)    0
 conv1d_1 (Conv1D)                          (None, 94, 200)    200200
 max_pooling1d_1 (MaxPooling1D)             (None, 47, 200)    0
 global_max_pooling1d (GlobalMaxPooling1D)  (None, 200)        0
 dense (Dense)                              (None, 200)        40200
 dropout (Dropout)                          (None, 200)        0
 dense_1 (Dense)                            (None, 64)         12864
 dropout_1 (Dropout)                        (None, 64)         0
 dense_2 (Dense)                            (None, 1)          65
=======================================================================
Total params: 6,453,529
Trainable params: 6,453,529
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
360/360 [==============================] - ETA: 0s - loss: 0.5572 - accuracy: 0.6817
Epoch 1: val_loss improved from inf to 0.40582, saving model to model.h5
360/360 [==============================] - 34s 94ms/step - loss: 0.5572 - accuracy: 0.6817 - val_loss: 0.4058 - val_accuracy: 0.8253
Epoch 2/20
360/360 [==============================] - ETA: 0s - loss: 0.3114 - accuracy: 0.8716
Epoch 2: val_loss improved from 0.40582 to 0.32963, saving model to model.h5
360/360 [==============================] - 33s 91ms/step - loss: 0.3114 - accuracy: 0.8716 - val_loss: 0.3296 - val_accuracy: 0.8560
Epoch 3/20
360/360 [==============================] - ETA: 0s - loss: 0.1894 - accuracy: 0.9292
Epoch 3: val_loss did not improve from 0.32963
360/360 [==============================] - 32s 88ms/step - loss: 0.1894 - accuracy: 0.9292 - val_loss: 0.3775 - val_accuracy: 0.8548
Epoch 4/20
360/360 [==============================] - ETA: 0s - loss: 0.0958 - accuracy: 0.9679
Epoch 4: val_loss did not improve from 0.32963
360/360 [==============================] - 31s 87ms/step - loss: 0.0958 - accuracy: 0.9679 - val_loss: 0.4851 - val_accuracy: 0.8558
Epoch 5/20
360/360 [==============================] - ETA: 0s - loss: 0.0492 - accuracy: 0.9834
Epoch 5: val_loss did not improve from 0.32963
360/360 [==============================] - 31s 87ms/step - loss: 0.0492 - accuracy: 0.9834 - val_loss: 0.5948 - val_accuracy: 0.8515
Epoch 6/20
360/360 [==============================] - ETA: 0s - loss: 0.0320 - accuracy: 0.9896
Epoch 6: val_loss did not improve from 0.32963
360/360 [==============================] - 32s 89ms/step - loss: 0.0320 - accuracy: 0.9896 - val_loss: 0.7611 - val_accuracy: 0.8533
Epoch 6: early stopping
313/313 [==============================] - 3s 10ms/step - loss: 0.7179 - accuracy: 0.8512
Test Accuracy: 0.8512000441551208
1/1 [==============================] - 0s 96ms/step
[[9.9767965e-01]
 [2.1777144e-01]
 [3.8228199e-01]
 [3.7854039e-03]
 [5.3338373e-01]
 [4.3341559e-01]
 [3.9853936e-01]
 [3.9961618e-01]
 [1.2732150e-04]]

Process finished with exit code 0


