
PyTorch in Practice: CNN Captcha Recognition

1 Requirements

GitHub - xhh890921/cnn-captcha-pytorch: hands-on AI project "Captcha Recognition" by 小黑黑讲AI


2 Interface

  1. Meaning
    • In the optim.Adam interface, the lr parameter is the learning rate. The learning rate is a key hyperparameter of the optimization algorithm: it determines the step size by which the model parameters are updated along the gradient direction in each iteration. In short, it controls how fast the model learns.
  2. How it works
    • Take gradient descent as an example. In each iteration, the parameters are updated as θ ← θ − lr · ∇θL(θ), where θ are the model parameters, lr is the learning rate, and ∇θL(θ) is the gradient of the loss function with respect to the parameters. In the Adam optimizer the update is more involved (it maintains first- and second-moment estimates of the gradient), but the learning rate lr plays a similar role.
    • Adam adjusts the direction and size of each parameter update using the gradient's first-moment estimate (similar to a mean) and second-moment estimate (similar to a variance), and lr then scales the resulting step. For example, when lr is large, the update steps are large and the model moves through parameter space quickly; when lr is small, the steps are small and the model moves slowly.
  3. Effect on training
    • Learning rate too large
      • If the learning rate is set too high, the model may fail to converge during training, and gradients may even explode. For example, when training a neural network, the parameters may be over-updated in every iteration, so the loss grows instead of shrinking. In a simple linear regression model, an overly large learning rate can make the parameters "jump over" the optimum, and because the steps are so large, it is hard to get back near it.
    • Learning rate too small
      • If the learning rate is set too low, training becomes very slow and needs many more iterations to converge well, which increases training time and compute cost. For example, when training a complex deep model (such as a convolutional network for image recognition), an overly small learning rate may take several times, or even tens of times, longer to match the results of a well-chosen learning rate.
  4. Choosing a suitable learning rate
    • Rules of thumb: start with common values such as 0.001 or 0.0001 and watch the model's early behavior, e.g. how fast and how stably the loss decreases.
    • Learning rate scheduling: adjust the learning rate dynamically over the course of training. For example, use a larger learning rate early on so the model quickly learns the broad patterns in the data, then gradually decrease it so the model can fine-tune the parameters toward the optimum. Common schedules include step decay (lowering the rate at fixed training stages) and cosine annealing (lowering it along a cosine curve).
    • Hyperparameter search: use search algorithms such as Grid Search, Random Search, or the more advanced Bayesian Optimization to find a suitable learning rate. These methods try different values within a range and pick the best one based on validation-set performance.
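The effect of the learning rate described above can be sketched without any deep learning framework. The snippet below (plain Python, a hypothetical gradient_descent helper, not part of the project) minimizes f(w) = (w − 3)² with the update rule w ← w − lr · 2(w − 3) at three different learning rates:

```python
# Minimal sketch: gradient descent on f(w) = (w - 3)^2,
# whose gradient is 2 * (w - 3) and whose minimum is at w = 3.
def gradient_descent(lr, steps=50, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)   # gradient of the loss at w
        w = w - lr * grad    # update rule: w <- w - lr * grad
    return w

print(gradient_descent(0.1))    # well-chosen lr: converges close to 3
print(gradient_descent(1.5))    # too large: the iterates diverge
print(gradient_descent(0.001))  # too small: still far from 3 after 50 steps
```

With lr = 0.1 the distance to the optimum shrinks by a factor of 0.8 per step; with lr = 1.5 it doubles per step, which is the one-dimensional analogue of the divergence described above.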

3 Example

config.json

{
    "train_data_path": "./data/train-digit/",
    "test_data_path": "./data/test-digit/",
    "train_num": 2000,
    "test_num": 1000,
    "characters": "0123456789",
    "digit_num": 1,
    "img_width": 200,
    "img_height": 100,
    "resize_width": 128,
    "resize_height": 128,
    "batch_size": 128,
    "epoch_num": 200,
    "learning_rate": 0.0001,
    "model_save_path": "./model/",
    "model_name": "captcha.1digit.2k",
    "test_model_path": "./model/captcha.1digit.2k"
}
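A minimal sketch of how the scripts below consume this config: they load it with the standard json module and derive the number of output classes as character-set size times characters per image (the config values are inlined here so the snippet is self-contained):

```python
import json

# Values copied from the config above (only the two fields needed here)
config = json.loads('{"characters": "0123456789", "digit_num": 1}')

# class_num = size of the character set * characters per image
class_num = len(config["characters"]) * config["digit_num"]
print(class_num)  # 10
```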

generate.py

# Import the captcha module ImageCaptcha and the random module
from captcha.image import ImageCaptcha
import random

# generate_data creates captcha images.
# num: number of captcha images to generate
# count: number of characters per captcha
# chars: the character set used in the captcha
# path: directory where the images are saved
# width and height: image width and height
def generate_data(num, count, chars, path, width, height):
    # Loop to generate num captcha images
    for i in range(num):
        # Print the index of the current captcha
        print("generate %d" % (i))
        # Create the captcha generator with ImageCaptcha
        generator = ImageCaptcha(width=width, height=height)
        random_str = ""  # holds the characters drawn on the captcha
        # Append count characters to random_str
        for j in range(count):
            # Pick each character from chars with random.choice
            choose = random.choice(chars)
            random_str += choose
        # Call generate_image to create the captcha image img
        img = generator.generate_image(random_str)
        # Add noise dots to the captcha
        generator.create_noise_dots(img, '#000000', 4, 40)
        # Add a noise curve to the captcha
        generator.create_noise_curve(img, '#000000')
        # File name convention: captcha string random_str,
        # underscore, sample index
        file_name = path + random_str + '_' + str(i) + '.jpg'
        img.save(file_name)  # save the file

import json
import os

if __name__ == '__main__':
    # Open the config.json configuration file
    with open("config.json", "r") as f:
        # Parse the JSON with json.load; the result is stored in config
        config = json.load(f)
    # Read each parameter from the config with config["name"]
    train_data_path = config["train_data_path"]  # training data path
    test_data_path = config["test_data_path"]    # test data path
    train_num = config["train_num"]    # number of training samples
    test_num = config["test_num"]      # number of test samples
    characters = config["characters"]  # character set used in the captcha
    digit_num = config["digit_num"]    # number of characters per image
    img_width = config["img_width"]    # image width
    img_height = config["img_height"]  # image height
    # Check whether the data directories exist;
    # create them if they do not
    if not os.path.exists(train_data_path):
        os.makedirs(train_data_path)
    if not os.path.exists(test_data_path):
        os.makedirs(test_data_path)
    # Call generate_data to create the training data
    generate_data(train_num, digit_num, characters,
                  train_data_path, img_width, img_height)
    # Call generate_data to create the test data
    generate_data(test_num, digit_num, characters,
                  test_data_path, img_width, img_height)

dataset.py

from torch.utils.data import Dataset
from PIL import Image
import torch
import os

# CaptchaDataset inherits Dataset and reads the captcha data
class CaptchaDataset(Dataset):
    # __init__ receives the data directory data_dir, a data
    # transform object, and the captcha character set characters
    def __init__(self, data_dir, transform, characters):
        self.file_list = list()  # paths of all samples
        # List all files in data_dir with os.listdir
        files = os.listdir(data_dir)
        for file in files:  # iterate over files
            # Join the directory and file name into a full path
            path = os.path.join(data_dir, file)
            # Append path to the file_list list
            self.file_list.append(path)
        # Keep the transform object in the instance
        self.transform = transform
        # Build a character-to-index dictionary,
        # using the character set passed in from outside
        self.char2int = {}
        for i, char in enumerate(characters):
            self.char2int[char] = i

    def __len__(self):
        # Return the number of samples in the dataset;
        # overriding this method enables the len(dataset) syntax
        return len(self.file_list)

    # Given an index, return the corresponding sample and label;
    # overriding this method enables dataset[i] to fetch sample i
    def __getitem__(self, index):
        file_path = self.file_list[index]  # path of the sample
        # Open the file and convert('L') to grayscale: color is not
        # needed to recognize the characters, and grayscale
        # improves the model's robustness
        image = Image.open(file_path).convert('L')
        # Apply the transform, turning the image into a tensor
        image = self.transform(image)
        # Read the character label from the file name
        label_char = os.path.basename(file_path).split('_')[0]
        # Convert the character label label_char to indices
        label = list()
        for char in label_char:  # iterate over label_char
            # Map each character to its index and append it to label
            label.append(self.char2int[char])
        # Convert label to a tensor, used as the training label
        label = torch.tensor(label, dtype=torch.long)
        return image, label  # return image and label

from torch.utils.data import DataLoader
from torchvision import transforms
import json

if __name__ == '__main__':
    with open("config.json", "r") as f:
        config = json.load(f)
    height = config["resize_height"]  # image height
    width = config["resize_width"]    # image width
    # Define the preprocessing pipeline with transforms.Compose,
    # adding the Resize and ToTensor operations to transform
    transform = transforms.Compose(
        [transforms.Resize((height, width)),  # resize the image
         transforms.ToTensor()])              # convert it to a tensor
    data_path = config["train_data_path"]  # training data path
    characters = config["characters"]      # captcha character set
    batch_size = config["batch_size"]
    epoch_num = config["epoch_num"]
    # Create the CaptchaDataset object dataset
    dataset = CaptchaDataset(data_path, transform, characters)
    # Create the data loader data_load:
    # dataset is the dataset,
    # batch_size sets the size of each mini-batch,
    # shuffle=True reshuffles the data every epoch
    data_load = DataLoader(dataset,
                           batch_size=batch_size,
                           shuffle=True)
    # Simulate the data reading of mini-batch gradient descent.
    # The outer loop iterates over the epochs; each epoch
    # traverses the full training set.
    for epoch in range(epoch_num):
        print("epoch = %d" % (epoch))
        # The inner loop traverses the data in mini-batches
        # using the data loader:
        # batch_idx is the index of the current batch,
        # data and label are the batch's data and labels
        for batch_idx, (data, label) in enumerate(data_load):
            print("batch_idx = %d label = %s" % (batch_idx, label))
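The dataset encodes each sample's label in its file name as "<captcha string>_<index>.jpg". The label parsing in __getitem__ can be sketched in isolation (a hypothetical parse_label helper, not part of the project):

```python
import os

# Mirror dataset.py's parsing: take the part of the file name
# before '_' and map each character through char2int.
def parse_label(file_path, characters="0123456789"):
    char2int = {c: i for i, c in enumerate(characters)}
    label_char = os.path.basename(file_path).split('_')[0]
    return [char2int[c] for c in label_char]

print(parse_label("./data/train-digit/7_42.jpg"))    # [7]
print(parse_label("./data/train-digit/305_12.jpg"))  # [3, 0, 5]
```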

model.py

import torch.nn as nn

# CNNModel inherits the Module class from torch.nn
class CNNModel(nn.Module):
    # Define the convolutional neural network.
    # The constructor receives the training image height and width,
    # the number of characters per image digit_num,
    # and the number of classes class_num
    def __init__(self, height, width, digit_num, class_num):
        super(CNNModel, self).__init__()
        self.digit_num = digit_num  # keep digit_num in the instance
        # First convolutional block conv1, containing one
        # convolution, one ReLU activation, one 2x2 max pooling,
        # and one dropout layer
        self.conv1 = nn.Sequential(
            # The convolution is defined with Conv2d:
            # 1 input channel, 32 output channels, a 3x3 kernel,
            # and padding='same', which keeps the output feature
            # map the same size as the input
            nn.Conv2d(1, 32, kernel_size=3, padding='same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(0.25))
        # Second block, same structure as conv1
        self.conv2 = nn.Sequential(
            # 32 input channels and 64 output channels
            nn.Conv2d(32, 64, kernel_size=3, padding='same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(0.25))
        # Third block, same structure as conv1
        self.conv3 = nn.Sequential(
            # 64 input channels and 64 output channels
            nn.Conv2d(64, 64, kernel_size=3, padding='same'),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(0.25))
        # After the three convolutional blocks, compute the number
        # of inputs of the fully connected layer, input_num:
        # the image height and width each divided by 8, times the
        # 64 output feature maps. The division by 8 comes from the
        # three 2x2 max poolings, which shrink the height and width
        # to 1/8 of the original.
        input_num = (height // 8) * (width // 8) * 64
        self.fc1 = nn.Sequential(
            nn.Linear(input_num, 1024),
            nn.ReLU(),
            nn.Dropout(0.25))
        # The output layer has class_num neurons
        self.fc2 = nn.Sequential(
            nn.Linear(1024, class_num))
        # Training uses the cross-entropy loss CrossEntropyLoss,
        # which applies softmax internally, so no explicit softmax
        # is defined here.

    # Forward pass. The input x is a 4-D tensor with dimensions:
    # batch size, input channels, image height, image width.
    def forward(self, x):  # [n, 1, 128, 128]
        # Pass the tensor x through each layer in order;
        # every layer changes the shape of the tensor
        out = self.conv1(x)    # [n, 32, 64, 64]
        out = self.conv2(out)  # [n, 64, 32, 32]
        out = self.conv3(out)  # [n, 64, 16, 16]
        # Use view to flatten the tensor from n*64*16*16 to n*16384
        out = out.view(out.size(0), -1)  # [n, 16384]
        out = self.fc1(out)  # [n, 1024]
        # After the three convolutional blocks and two fully
        # connected layers, the result is an n*class_num tensor
        out = self.fc2(out)  # [n, class_num]
        # Use the digit_num passed at initialization to reshape the
        # final output to n * digit_num * (character classes)
        out = out.view(out.size(0), self.digit_num, -1)
        return out

import json

if __name__ == '__main__':
    with open("config.json", "r") as f:
        config = json.load(f)
    height = config["resize_height"]   # image height
    width = config["resize_width"]     # image width
    characters = config["characters"]  # captcha character set
    digit_num = config["digit_num"]
    class_num = len(characters) * digit_num
    # Create a CNNModel instance
    model = CNNModel(height, width, digit_num, class_num)
    print(model)  # print it to inspect the model structure
    print("")
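The input_num arithmetic in CNNModel can be checked on its own: three 2x2 max poolings halve the height and width three times, so the flattened feature count is (height // 8) * (width // 8) * out_channels. A small sketch (hypothetical fc_input_size helper, not part of the project):

```python
# Number of inputs of the first fully connected layer after
# pool_layers 2x2 max poolings over an out_channels feature map.
def fc_input_size(height, width, out_channels=64, pool_layers=3):
    factor = 2 ** pool_layers  # each pooling halves height and width
    return (height // factor) * (width // factor) * out_channels

print(fc_input_size(128, 128))  # 16 * 16 * 64 = 16384
```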

train.py

# Import the CaptchaDataset class from dataset.py
from dataset import CaptchaDataset
# Import the CNNModel class from model.py
from model import CNNModel

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torch import optim
import json
import os

if __name__ == '__main__':
    # Open the configuration file
    with open("config.json", "r") as f:
        config = json.load(f)
    # Read resize_height and resize_width: the final size the
    # images are scaled to; they are used to build transform
    height = config["resize_height"]  # image height
    width = config["resize_width"]    # image width
    # Define the preprocessing pipeline with transforms.Compose
    transform = transforms.Compose(
        [transforms.RandomRotation(10),       # random rotation augmentation
         transforms.Resize((height, width)),  # resize the image
         transforms.ToTensor()])              # convert it to a tensor
    train_data_path = config["train_data_path"]  # training data path
    characters = config["characters"]  # captcha character set
    batch_size = config["batch_size"]  # mini-batch size
    epoch_num = config["epoch_num"]    # number of epochs
    digit_num = config["digit_num"]    # characters per image
    learning_rate = config["learning_rate"]  # learning rate
    # Number of classes class_num = character-set size * characters per image
    class_num = len(characters) * digit_num
    model_save_path = config["model_save_path"]  # model save path
    model_name = config["model_name"]            # model name
    model_save_name = model_save_path + "/" + model_name
    # Create the model directory
    if not os.path.exists(model_save_path):
        os.makedirs(model_save_path)
    print("resize_height = %d" % (height))
    print("resize_width = %d" % (width))
    print("train_data_path = %s" % (train_data_path))
    print("characters = %s" % (characters))
    print("batch_size = %d" % (batch_size))
    print("epoch_num = %d" % (epoch_num))
    print("digit_num = %d" % (digit_num))
    print("class_num = %d" % (class_num))
    print("learning_rate = %lf" % (learning_rate))
    print("model_save_name = %s" % (model_save_name))
    print("")
    # Create the CaptchaDataset object train_data
    train_data = CaptchaDataset(train_data_path, transform, characters)
    # Create the data loader train_load with DataLoader:
    # train_data is the training set,
    # batch_size sets the size of each mini-batch,
    # shuffle=True reshuffles the data every epoch.
    # With 2000 training samples and a batch size of 128, the data
    # splits into 16 mini-batches: the first 15 hold 128 samples
    # each and the last holds 80. 15*128+80=2000
    train_load = DataLoader(train_data,
                            batch_size=batch_size,
                            shuffle=True)
    # Use the GPU if CUDA is available, otherwise the CPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Create a CNNModel object and move it to the device
    model = CNNModel(height, width,
                     digit_num, class_num).to(device)
    model.train()
    # The learning rate must be specified. It defaults to 0.001;
    # here it is set to 0.0001, because on more complex data a
    # smaller learning rate makes training more stable
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    criterion = nn.CrossEntropyLoss()  # cross-entropy loss function
    print("Begin training:")
    # Training runs for epoch_num epochs (raised from 50 to 200)
    for epoch in range(epoch_num):  # outer loop over the epochs
        # The inner loop traverses the data of one epoch in
        # mini-batches using train_load:
        # batch_idx is the index of the current batch,
        # (data, label) is the batch's data and labels.
        for batch_idx, (data, label) in enumerate(train_load):
            # Move the data and labels to the device
            data, label = data.to(device), label.to(device)
            # Run the current model on data; the result is output
            output = model(data)
            # Compute the loss by accumulating the loss of
            # every captcha position
            loss = torch.tensor(0.0).to(device)
            for i in range(digit_num):  # loop over the positions
                # The model output for position i is output[:, i, :]
                # and its label is label[:, i]; compute the
                # cross-entropy of this position with criterion
                # and add it to loss
                loss += criterion(output[:, i, :], label[:, i])
            loss.backward()        # compute the gradients of the loss
            optimizer.step()       # update the model parameters
            optimizer.zero_grad()  # reset the gradients for the next step
            # Compute the accuracy acc of this training batch
            predicted = torch.argmax(output, dim=2)
            correct = (predicted == label).all(dim=1).sum().item()
            acc = correct / data.size(0)
            # In each epoch, print the loss every 10 batches
            if batch_idx % 10 == 0:
                print(f"Epoch {epoch + 1}/{epoch_num} "
                      f"| Batch {batch_idx}/{len(train_load)} "
                      f"| Loss: {loss.item():.4f} "
                      f"| accuracy {correct}/{data.size(0)}={acc:.3f}")
        # Save a checkpoint every 10 epochs, for debugging
        if (epoch + 1) % 10 == 0:
            checkpoint = model_save_path + "/check.epoch" + str(epoch + 1)
            torch.save(model.state_dict(), checkpoint)
            print("checkpoint saved : %s" % (checkpoint))
    # Finally, save the trained model at the configured path
    torch.save(model.state_dict(), model_save_name)
    print("model saved : %s" % (model_save_name))
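The batch accuracy in the training loop counts a sample as correct only when every captcha position matches. That "all positions correct" rule can be sketched in plain Python (a hypothetical batch_accuracy helper, not part of the project):

```python
# predicted and labels are per-sample lists of character indices,
# e.g. [3, 7] for a two-character captcha.
def batch_accuracy(predicted, labels):
    # A sample counts only if the whole sequence matches
    correct = sum(1 for p, l in zip(predicted, labels) if p == l)
    return correct, correct / len(labels)

pred = [[3, 7], [1, 2], [9, 9]]
true = [[3, 7], [1, 5], [9, 9]]
print(batch_accuracy(pred, true))  # 2 of the 3 samples fully correct
```

Note that [1, 2] vs [1, 5] counts as wrong even though the first position matches, which is exactly what `(predicted == label).all(dim=1)` enforces in the tensor version.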

test.py

from dataset import CaptchaDataset
from model import CNNModel

import torch
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
import json

if __name__ == '__main__':
    with open("config.json", "r") as f:
        config = json.load(f)
    height = config["resize_height"]  # image height
    width = config["resize_width"]    # image width
    # Define the transform object transform:
    # resize the images and convert them to tensors
    transform = transforms.Compose(
        [transforms.Resize((height, width)),
         transforms.ToTensor()])
    test_data_path = config["test_data_path"]  # test data path
    characters = config["characters"]  # captcha character set
    digit_num = config["digit_num"]
    class_num = len(characters) * digit_num
    test_model_path = config["test_model_path"]
    print("resize_height = %d" % (height))
    print("resize_width = %d" % (width))
    print("test_data_path = %s" % (test_data_path))
    print("characters = %s" % (characters))
    print("digit_num = %d" % (digit_num))
    print("class_num = %d" % (class_num))
    print("test_model_path = %s" % (test_model_path))
    print("")
    # Build the test dataset with CaptchaDataset
    test_data = CaptchaDataset(test_data_path, transform, characters)
    # Read test_data with a DataLoader; with no extra arguments
    # it yields the samples one at a time
    test_loader = DataLoader(test_data)
    # Use the GPU if CUDA is available, otherwise the CPU
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    # Create a CNNModel object and move it to the device
    model = CNNModel(height, width, digit_num, class_num).to(device)
    model.eval()
    # Call load_state_dict to load the trained model weights
    # from test_model_path
    model.load_state_dict(torch.load(test_model_path))
    right = 0  # number of correctly predicted samples
    all = 0    # total number of samples
    # Iterate over the data in test_loader:
    # x is the sample's feature tensor, y is its label
    for (x, y) in test_loader:
        x, y = x.to(device), y.to(device)  # move the data to the device
        pred = model(x)  # predict x with the model; the result is pred
        # pred.argmax(dim=2).squeeze(0) is the predicted character
        # sequence of the captcha; y.squeeze(0) is the true sequence
        if torch.equal(pred.argmax(dim=2).squeeze(0), y.squeeze(0)):
            right += 1  # if they match, increment right
        all += 1  # increment all on every iteration
    # After the loop, compute the model's accuracy
    acc = right * 1.0 / all
    print("test accuracy = %d / %d = %.3lf" % (right, all, acc))

D:\Python310\python.exe D:/project/PycharmProjects/CNN/train.py
resize_height = 128
resize_width = 128
train_data_path = ./data/train-digit/
characters = 0123456789
batch_size = 128
epoch_num = 200
digit_num = 1
class_num = 10
learning_rate = 0.000100
model_save_name = ./model//captcha.1digit.2k

Begin training:
Epoch 1/200 | Batch 0/16 | Loss: 2.3091 | accuracy 15/128=0.117
Epoch 1/200 | Batch 10/16 | Loss: 2.3238 | accuracy 10/128=0.078
Epoch 2/200 | Batch 0/16 | Loss: 2.3016 | accuracy 14/128=0.109
Epoch 2/200 | Batch 10/16 | Loss: 2.3000 | accuracy 15/128=0.117
Epoch 3/200 | Batch 0/16 | Loss: 2.3062 | accuracy 13/128=0.102
Epoch 3/200 | Batch 10/16 | Loss: 2.3053 | accuracy 12/128=0.094
Epoch 4/200 | Batch 0/16 | Loss: 2.3071 | accuracy 15/128=0.117
Epoch 4/200 | Batch 10/16 | Loss: 2.3018 | accuracy 18/128=0.141
Epoch 5/200 | Batch 0/16 | Loss: 2.2999 | accuracy 14/128=0.109
Epoch 5/200 | Batch 10/16 | Loss: 2.3003 | accuracy 17/128=0.133
Epoch 6/200 | Batch 0/16 | Loss: 2.3056 | accuracy 10/128=0.078
Epoch 6/200 | Batch 10/16 | Loss: 2.3008 | accuracy 17/128=0.133
Epoch 7/200 | Batch 0/16 | Loss: 2.3007 | accuracy 10/128=0.078
Epoch 7/200 | Batch 10/16 | Loss: 2.3061 | accuracy 10/128=0.078
Epoch 8/200 | Batch 0/16 | Loss: 2.3027 | accuracy 16/128=0.125
Epoch 8/200 | Batch 10/16 | Loss: 2.3041 | accuracy 11/128=0.086
Epoch 9/200 | Batch 0/16 | Loss: 2.3063 | accuracy 14/128=0.109
Epoch 9/200 | Batch 10/16 | Loss: 2.3000 | accuracy 12/128=0.094
Epoch 10/200 | Batch 0/16 | Loss: 2.2981 | accuracy 17/128=0.133
Epoch 10/200 | Batch 10/16 | Loss: 2.3018 | accuracy 17/128=0.133
checkpoint saved : ./model//check.epoch10
Epoch 11/200 | Batch 0/16 | Loss: 2.3048 | accuracy 13/128=0.102
Epoch 11/200 | Batch 10/16 | Loss: 2.3009 | accuracy 18/128=0.141
Epoch 12/200 | Batch 0/16 | Loss: 2.3007 | accuracy 5/128=0.039
Epoch 12/200 | Batch 10/16 | Loss: 2.3052 | accuracy 13/128=0.102
Epoch 13/200 | Batch 0/16 | Loss: 2.3016 | accuracy 15/128=0.117
Epoch 13/200 | Batch 10/16 | Loss: 2.2970 | accuracy 16/128=0.125
Epoch 14/200 | Batch 0/16 | Loss: 2.2986 | accuracy 19/128=0.148
Epoch 14/200 | Batch 10/16 | Loss: 2.3021 | accuracy 14/128=0.109
Epoch 15/200 | Batch 0/16 | Loss: 2.2987 | accuracy 17/128=0.133
Epoch 15/200 | Batch 10/16 | Loss: 2.3041 | accuracy 14/128=0.109
Epoch 16/200 | Batch 0/16 | Loss: 2.2994 | accuracy 16/128=0.125
Epoch 16/200 | Batch 10/16 | Loss: 2.3019 | accuracy 16/128=0.125
Epoch 17/200 | Batch 0/16 | Loss: 2.2933 | accuracy 14/128=0.109
Epoch 17/200 | Batch 10/16 | Loss: 2.2991 | accuracy 12/128=0.094
Epoch 18/200 | Batch 0/16 | Loss: 2.3012 | accuracy 16/128=0.125
Epoch 18/200 | Batch 10/16 | Loss: 2.3045 | accuracy 13/128=0.102
Epoch 19/200 | Batch 0/16 | Loss: 2.2907 | accuracy 25/128=0.195
Epoch 19/200 | Batch 10/16 | Loss: 2.3016 | accuracy 10/128=0.078
Epoch 20/200 | Batch 0/16 | Loss: 2.3050 | accuracy 13/128=0.102
Epoch 20/200 | Batch 10/16 | Loss: 2.2988 | accuracy 14/128=0.109
checkpoint saved : ./model//check.epoch20
Epoch 21/200 | Batch 0/16 | Loss: 2.2999 | accuracy 17/128=0.133
Epoch 21/200 | Batch 10/16 | Loss: 2.2937 | accuracy 15/128=0.117
Epoch 22/200 | Batch 0/16 | Loss: 2.3047 | accuracy 16/128=0.125
Epoch 22/200 | Batch 10/16 | Loss: 2.2853 | accuracy 18/128=0.141
Epoch 23/200 | Batch 0/16 | Loss: 2.2850 | accuracy 19/128=0.148
Epoch 23/200 | Batch 10/16 | Loss: 2.2959 | accuracy 13/128=0.102
Epoch 24/200 | Batch 0/16 | Loss: 2.2884 | accuracy 18/128=0.141
Epoch 24/200 | Batch 10/16 | Loss: 2.2940 | accuracy 18/128=0.141
Epoch 25/200 | Batch 0/16 | Loss: 2.2775 | accuracy 18/128=0.141
Epoch 25/200 | Batch 10/16 | Loss: 2.2858 | accuracy 15/128=0.117
Epoch 26/200 | Batch 0/16 | Loss: 2.2522 | accuracy 27/128=0.211
Epoch 26/200 | Batch 10/16 | Loss: 2.3032 | accuracy 16/128=0.125
Epoch 27/200 | Batch 0/16 | Loss: 2.2583 | accuracy 24/128=0.188
Epoch 27/200 | Batch 10/16 | Loss: 2.2422 | accuracy 28/128=0.219
Epoch 28/200 | Batch 0/16 | Loss: 2.2255 | accuracy 29/128=0.227
Epoch 28/200 | Batch 10/16 | Loss: 2.2325 | accuracy 16/128=0.125
Epoch 29/200 | Batch 0/16 | Loss: 2.1752 | accuracy 28/128=0.219
Epoch 29/200 | Batch 10/16 | Loss: 2.2192 | accuracy 23/128=0.180
Epoch 30/200 | Batch 0/16 | Loss: 2.2291 | accuracy 18/128=0.141
Epoch 30/200 | Batch 10/16 | Loss: 2.1861 | accuracy 25/128=0.195
checkpoint saved : ./model//check.epoch30
Epoch 31/200 | Batch 0/16 | Loss: 2.1700 | accuracy 35/128=0.273
Epoch 31/200 | Batch 10/16 | Loss: 2.0598 | accuracy 33/128=0.258
Epoch 32/200 | Batch 0/16 | Loss: 2.1042 | accuracy 29/128=0.227
Epoch 32/200 | Batch 10/16 | Loss: 2.0796 | accuracy 27/128=0.211
Epoch 33/200 | Batch 0/16 | Loss: 2.1144 | accuracy 23/128=0.180
Epoch 33/200 | Batch 10/16 | Loss: 2.1632 | accuracy 26/128=0.203
Epoch 34/200 | Batch 0/16 | Loss: 2.0593 | accuracy 38/128=0.297
Epoch 34/200 | Batch 10/16 | Loss: 2.0564 | accuracy 37/128=0.289
Epoch 35/200 | Batch 0/16 | Loss: 1.9282 | accuracy 42/128=0.328
Epoch 35/200 | Batch 10/16 | Loss: 2.0059 | accuracy 36/128=0.281
Epoch 36/200 | Batch 0/16 | Loss: 2.0065 | accuracy 35/128=0.273
Epoch 36/200 | Batch 10/16 | Loss: 1.9090 | accuracy 42/128=0.328
Epoch 37/200 | Batch 0/16 | Loss: 1.9358 | accuracy 39/128=0.305
Epoch 37/200 | Batch 10/16 | Loss: 1.9197 | accuracy 45/128=0.352
Epoch 38/200 | Batch 0/16 | Loss: 1.9248 | accuracy 42/128=0.328
Epoch 38/200 | Batch 10/16 | Loss: 1.9072 | accuracy 40/128=0.312
Epoch 39/200 | Batch 0/16 | Loss: 1.9429 | accuracy 41/128=0.320
Epoch 39/200 | Batch 10/16 | Loss: 1.9401 | accuracy 39/128=0.305
Epoch 40/200 | Batch 0/16 | Loss: 1.8600 | accuracy 44/128=0.344
Epoch 40/200 | Batch 10/16 | Loss: 1.8164 | accuracy 46/128=0.359
checkpoint saved : ./model//check.epoch40
Epoch 41/200 | Batch 0/16 | Loss: 1.8458 | accuracy 48/128=0.375
Epoch 41/200 | Batch 10/16 | Loss: 1.7130 | accuracy 54/128=0.422
Epoch 42/200 | Batch 0/16 | Loss: 1.6807 | accuracy 53/128=0.414
Epoch 42/200 | Batch 10/16 | Loss: 1.8174 | accuracy 41/128=0.320
Epoch 43/200 | Batch 0/16 | Loss: 1.8646 | accuracy 40/128=0.312
Epoch 43/200 | Batch 10/16 | Loss: 1.6046 | accuracy 54/128=0.422
Epoch 44/200 | Batch 0/16 | Loss: 1.7627 | accuracy 43/128=0.336
Epoch 44/200 | Batch 10/16 | Loss: 1.7279 | accuracy 48/128=0.375
Epoch 45/200 | Batch 0/16 | Loss: 1.6728 | accuracy 50/128=0.391
Epoch 45/200 | Batch 10/16 | Loss: 1.6171 | accuracy 53/128=0.414
Epoch 46/200 | Batch 0/16 | Loss: 1.6969 | accuracy 51/128=0.398
Epoch 46/200 | Batch 10/16 | Loss: 1.6196 | accuracy 48/128=0.375
Epoch 47/200 | Batch 0/16 | Loss: 1.6617 | accuracy 56/128=0.438
Epoch 47/200 | Batch 10/16 | Loss: 1.5410 | accuracy 67/128=0.523
Epoch 48/200 | Batch 0/16 | Loss: 1.6146 | accuracy 55/128=0.430
Epoch 48/200 | Batch 10/16 | Loss: 1.7213 | accuracy 44/128=0.344
Epoch 49/200 | Batch 0/16 | Loss: 1.5919 | accuracy 61/128=0.477
Epoch 49/200 | Batch 10/16 | Loss: 1.5982 | accuracy 51/128=0.398
Epoch 50/200 | Batch 0/16 | Loss: 1.6092 | accuracy 59/128=0.461
Epoch 50/200 | Batch 10/16 | Loss: 1.4322 | accuracy 65/128=0.508
checkpoint saved : ./model//check.epoch50
Epoch 51/200 | Batch 0/16 | Loss: 1.5115 | accuracy 65/128=0.508
Epoch 51/200 | Batch 10/16 | Loss: 1.5191 | accuracy 58/128=0.453
Epoch 52/200 | Batch 0/16 | Loss: 1.5553 | accuracy 64/128=0.500
Epoch 52/200 | Batch 10/16 | Loss: 1.5587 | accuracy 60/128=0.469
Epoch 53/200 | Batch 0/16 | Loss: 1.5137 | accuracy 61/128=0.477
Epoch 53/200 | Batch 10/16 | Loss: 1.3685 | accuracy 67/128=0.523
Epoch 54/200 | Batch 0/16 | Loss: 1.6554 | accuracy 50/128=0.391
Epoch 54/200 | Batch 10/16 | Loss: 1.4803 | accuracy 59/128=0.461
Epoch 55/200 | Batch 0/16 | Loss: 1.3825 | accuracy 66/128=0.516
Epoch 55/200 | Batch 10/16 | Loss: 1.4612 | accuracy 62/128=0.484
Epoch 56/200 | Batch 0/16 | Loss: 1.3605 | accuracy 73/128=0.570
Epoch 56/200 | Batch 10/16 | Loss: 1.4856 | accuracy 66/128=0.516
Epoch 57/200 | Batch 0/16 | Loss: 1.5354 | accuracy 51/128=0.398
Epoch 57/200 | Batch 10/16 | Loss: 1.4573 | accuracy 59/128=0.461
Epoch 58/200 | Batch 0/16 | Loss: 1.3566 | accuracy 61/128=0.477
Epoch 58/200 | Batch 10/16 | Loss: 1.3901 | accuracy 63/128=0.492
Epoch 59/200 | Batch 0/16 | Loss: 1.3130 | accuracy 70/128=0.547
Epoch 59/200 | Batch 10/16 | Loss: 1.1667 | accuracy 76/128=0.594
Epoch 60/200 | Batch 0/16 | Loss: 1.3881 | accuracy 70/128=0.547
Epoch 60/200 | Batch 10/16 | Loss: 1.2703 | accuracy 68/128=0.531
checkpoint saved : ./model//check.epoch60
Epoch 61/200 | Batch 0/16 | Loss: 1.4010 | accuracy 62/128=0.484
Epoch 61/200 | Batch 10/16 | Loss: 1.3181 | accuracy 72/128=0.562
Epoch 62/200 | Batch 0/16 | Loss: 1.2716 | accuracy 69/128=0.539
Epoch 62/200 | Batch 10/16 | Loss: 1.3523 | accuracy 62/128=0.484
Epoch 63/200 | Batch 0/16 | Loss: 1.2137 | accuracy 78/128=0.609
Epoch 63/200 | Batch 10/16 | Loss: 1.2490 | accuracy 75/128=0.586
Epoch 64/200 | Batch 0/16 | Loss: 1.2601 | accuracy 77/128=0.602
Epoch 64/200 | Batch 10/16 | Loss: 1.2207 | accuracy 72/128=0.562
Epoch 65/200 | Batch 0/16 | Loss: 1.1812 | accuracy 73/128=0.570
Epoch 65/200 | Batch 10/16 | Loss: 1.2019 | accuracy 74/128=0.578
Epoch 66/200 | Batch 0/16 | Loss: 1.0996 | accuracy 77/128=0.602
Epoch 66/200 | Batch 10/16 | Loss: 1.1076 | accuracy 72/128=0.562
Epoch 67/200 | Batch 0/16 | Loss: 1.2806 | accuracy 71/128=0.555
Epoch 67/200 | Batch 10/16 | Loss: 1.2237 | accuracy 74/128=0.578
Epoch 68/200 | Batch 0/16 | Loss: 1.1196 | accuracy 81/128=0.633
Epoch 68/200 | Batch 10/16 | Loss: 1.1982 | accuracy 78/128=0.609
Epoch 69/200 | Batch 0/16 | Loss: 1.0038 | accuracy 93/128=0.727
Epoch 69/200 | Batch 10/16 | Loss: 1.2466 | accuracy 72/128=0.562
Epoch 70/200 | Batch 0/16 | Loss: 1.0274 | accuracy 79/128=0.617
Epoch 70/200 | Batch 10/16 | Loss: 1.0536 | accuracy 82/128=0.641
checkpoint saved : ./model//check.epoch70
Epoch 71/200 | Batch 0/16 | Loss: 1.1594 | accuracy 79/128=0.617
Epoch 71/200 | Batch 10/16 | Loss: 1.0447 | accuracy 80/128=0.625
Epoch 72/200 | Batch 0/16 | Loss: 1.2550 | accuracy 68/128=0.531
Epoch 72/200 | Batch 10/16 | Loss: 1.1217 | accuracy 79/128=0.617
Epoch 73/200 | Batch 0/16 | Loss: 1.0504 | accuracy 78/128=0.609
Epoch 73/200 | Batch 10/16 | Loss: 1.2043 | accuracy 77/128=0.602
Epoch 74/200 | Batch 0/16 | Loss: 1.0929 | accuracy 74/128=0.578
Epoch 74/200 | Batch 10/16 | Loss: 1.0416 | accuracy 82/128=0.641
Epoch 75/200 | Batch 0/16 | Loss: 0.9702 | accuracy 89/128=0.695
Epoch 75/200 | Batch 10/16 | Loss: 0.9303 | accuracy 95/128=0.742
Epoch 76/200 | Batch 0/16 | Loss: 0.8531 | accuracy 93/128=0.727
Epoch 76/200 | Batch 10/16 | Loss: 1.0092 | accuracy 87/128=0.680
Epoch 77/200 | Batch 0/16 | Loss: 1.0739 | accuracy 78/128=0.609
Epoch 77/200 | Batch 10/16 | Loss: 1.0276 | accuracy 81/128=0.633
Epoch 78/200 | Batch 0/16 | Loss: 0.9078 | accuracy 91/128=0.711
Epoch 78/200 | Batch 10/16 | Loss: 0.9602 | accuracy 80/128=0.625
Epoch 79/200 | Batch 0/16 | Loss: 0.9347 | accuracy 85/128=0.664
Epoch 79/200 | Batch 10/16 | Loss: 0.9257 | accuracy 87/128=0.680
Epoch 80/200 | Batch 0/16 | Loss: 1.0276 | accuracy 84/128=0.656
Epoch 80/200 | Batch 10/16 | Loss: 0.8795 | accuracy 88/128=0.688
checkpoint saved : ./model//check.epoch80
Epoch 81/200 | Batch 0/16 | Loss: 0.7719 | accuracy 96/128=0.750
Epoch 81/200 | Batch 10/16 | Loss: 0.9031 | accuracy 90/128=0.703
Epoch 82/200 | Batch 0/16 | Loss: 0.8802 | accuracy 91/128=0.711
Epoch 82/200 | Batch 10/16 | Loss: 0.8708 | accuracy 88/128=0.688
Epoch 83/200 | Batch 0/16 | Loss: 0.8398 | accuracy 91/128=0.711
Epoch 83/200 | Batch 10/16 | Loss: 0.7149 | accuracy 99/128=0.773
Epoch 84/200 | Batch 0/16 | Loss: 0.7306 | accuracy 101/128=0.789
Epoch 84/200 | Batch 10/16 | Loss: 0.8610 | accuracy 92/128=0.719
Epoch 85/200 | Batch 0/16 | Loss: 0.8118 | accuracy 92/128=0.719
Epoch 85/200 | Batch 10/16 | Loss: 0.8698 | accuracy 94/128=0.734
Epoch 86/200 | Batch 0/16 | Loss: 0.7987 | accuracy 93/128=0.727
Epoch 86/200 | Batch 10/16 | Loss: 0.7173 | accuracy 101/128=0.789
Epoch 87/200 | Batch 0/16 | Loss: 0.7868 | accuracy 93/128=0.727
Epoch 87/200 | Batch 10/16 | Loss: 0.9372 | accuracy 80/128=0.625
Epoch 88/200 | Batch 0/16 | Loss: 0.8355 | accuracy 91/128=0.711
Epoch 88/200 | Batch 10/16 | Loss: 0.7740 | accuracy 93/128=0.727
Epoch 89/200 | Batch 0/16 | Loss: 0.8853 | accuracy 86/128=0.672
Epoch 89/200 | Batch 10/16 | Loss: 0.7612 | accuracy 91/128=0.711
Epoch 90/200 | Batch 0/16 | Loss: 0.6926 | accuracy 99/128=0.773
Epoch 90/200 | Batch 10/16 | Loss: 0.6736 | accuracy 97/128=0.758
checkpoint saved : ./model//check.epoch90
Epoch 91/200 | Batch 0/16 | Loss: 0.7096 | accuracy 95/128=0.742
Epoch 91/200 | Batch 10/16 | Loss: 0.7188 | accuracy 103/128=0.805
Epoch 92/200 | Batch 0/16 | Loss: 0.7054 | accuracy 96/128=0.750
Epoch 92/200 | Batch 10/16 | Loss: 0.6021 | accuracy 110/128=0.859
Epoch 93/200 | Batch 0/16 | Loss: 0.7780 | accuracy 96/128=0.750
Epoch 93/200 | Batch 10/16 | Loss: 0.7090 | accuracy 103/128=0.805
Epoch 94/200 | Batch 0/16 | Loss: 0.6440 | accuracy 102/128=0.797
Epoch 94/200 | Batch 10/16 | Loss: 0.8302 | accuracy 88/128=0.688
Epoch 95/200 | Batch 0/16 | Loss: 0.7757 | accuracy 96/128=0.750
Epoch 95/200 | Batch 10/16 | Loss: 0.6106 | accuracy 104/128=0.812
Epoch 96/200 | Batch 0/16 | Loss: 0.6474 | accuracy 96/128=0.750
Epoch 96/200 | Batch 10/16 | Loss: 0.6675 | accuracy 102/128=0.797
Epoch 97/200 | Batch 0/16 | Loss: 0.5350 | accuracy 106/128=0.828
Epoch 97/200 | Batch 10/16 | Loss: 0.8105 | accuracy 93/128=0.727
Epoch 98/200 | Batch 0/16 | Loss: 0.7731 | accuracy 87/128=0.680
Epoch 98/200 | Batch 10/16 | Loss: 0.6888 | accuracy 96/128=0.750
Epoch 99/200 | Batch 0/16 | Loss: 0.6044 | accuracy 106/128=0.828
Epoch 99/200 | Batch 10/16 | Loss: 0.5313 | accuracy 101/128=0.789
Epoch 100/200 | Batch 0/16 | Loss: 0.7274 | accuracy 96/128=0.750
Epoch 100/200 | Batch 10/16 | Loss: 0.6472 | accuracy 100/128=0.781
checkpoint saved : ./model//check.epoch100
Epoch 101/200 | Batch 0/16 | Loss: 0.6915 | accuracy 98/128=0.766
Epoch 101/200 | Batch 10/16 | Loss: 0.5370 | accuracy 109/128=0.852
Epoch 102/200 | Batch 0/16 | Loss: 0.5760 | accuracy 104/128=0.812
Epoch 102/200 | Batch 10/16 | Loss: 0.7622 | accuracy 93/128=0.727
Epoch 103/200 | Batch 0/16 | Loss: 0.5385 | accuracy 102/128=0.797
Epoch 103/200 | Batch 10/16 | Loss: 0.6802 | accuracy 103/128=0.805
Epoch 104/200 | Batch 0/16 | Loss: 0.5285 | accuracy 110/128=0.859
Epoch 104/200 | Batch 10/16 | Loss: 0.5555 | accuracy 110/128=0.859
Epoch 105/200 | Batch 0/16 | Loss: 0.6075 | accuracy 102/128=0.797
Epoch 105/200 | Batch 10/16 | Loss: 0.5659 | accuracy 101/128=0.789
Epoch 106/200 | Batch 0/16 | Loss: 0.4936 | accuracy 108/128=0.844
Epoch 106/200 | Batch 10/16 | Loss: 0.6707 | accuracy 102/128=0.797
Epoch 107/200 | Batch 0/16 | Loss: 0.5391 | accuracy 105/128=0.820
Epoch 107/200 | Batch 10/16 | Loss: 0.4698 | accuracy 105/128=0.820
Epoch 108/200 | Batch 0/16 | Loss: 0.4267 | accuracy 108/128=0.844
Epoch 108/200 | Batch 10/16 | Loss: 0.5509 | accuracy 102/128=0.797
Epoch 109/200 | Batch 0/16 | Loss: 0.4462 | accuracy 107/128=0.836
Epoch 109/200 | Batch 10/16 | Loss: 0.5380 | accuracy 105/128=0.820
Epoch 110/200 | Batch 0/16 | Loss: 0.4637 | accuracy 110/128=0.859
Epoch 110/200 | Batch 10/16 | Loss: 0.4375 | accuracy 109/128=0.852
checkpoint saved : ./model//check.epoch110
Epoch 111/200 | Batch 0/16 | Loss: 0.5567 | accuracy 105/128=0.820
Epoch 111/200 | Batch 10/16 | Loss: 0.4808 | accuracy 108/128=0.844
Epoch 112/200 | Batch 0/16 | Loss: 0.4961 | accuracy 109/128=0.852
Epoch 112/200 | Batch 10/16 | Loss: 0.5008 | accuracy 104/128=0.812
Epoch 113/200 | Batch 0/16 | Loss: 0.4603 | accuracy 112/128=0.875
Epoch 113/200 | Batch 10/16 | Loss: 0.4817 | accuracy 108/128=0.844
Epoch 114/200 | Batch 0/16 | Loss: 0.3971 | accuracy 111/128=0.867
Epoch 114/200 | Batch 10/16 | Loss: 0.4703 | accuracy 105/128=0.820
Epoch 115/200 | Batch 0/16 | Loss: 0.5089 | accuracy 102/128=0.797
Epoch 115/200 | Batch 10/16 | Loss: 0.4242 | accuracy 112/128=0.875
Epoch 116/200 | Batch 0/16 | Loss: 0.5037 | accuracy 103/128=0.805
Epoch 116/200 | Batch 10/16 | Loss: 0.4972 | accuracy 102/128=0.797
Epoch 117/200 | Batch 0/16 | Loss: 0.4382 | accuracy 109/128=0.852
Epoch 117/200 | Batch 10/16 | Loss: 0.3487 | accuracy 116/128=0.906
Epoch 118/200 | Batch 0/16 | Loss: 0.3746 | accuracy 112/128=0.875
Epoch 118/200 | Batch 10/16 | Loss: 0.3572 | accuracy 114/128=0.891
Epoch 119/200 | Batch 0/16 | Loss: 0.3941 | accuracy 110/128=0.859
Epoch 119/200 | Batch 10/16 | Loss: 0.4587 | accuracy 110/128=0.859
Epoch 120/200 | Batch 0/16 | Loss: 0.3700 | accuracy 114/128=0.891
Epoch 120/200 | Batch 10/16 | Loss: 0.3846 | accuracy 112/128=0.875
checkpoint saved : ./model//check.epoch120
Epoch 121/200 | Batch 0/16 | Loss: 0.4735 | accuracy 110/128=0.859
Epoch 121/200 | Batch 10/16 | Loss: 0.5561 | accuracy 104/128=0.812
Epoch 122/200 | Batch 0/16 | Loss: 0.3554 | accuracy 115/128=0.898
Epoch 122/200 | Batch 10/16 | Loss: 0.4541 | accuracy 113/128=0.883
Epoch 123/200 | Batch 0/16 | Loss: 0.4274 | accuracy 110/128=0.859
Epoch 123/200 | Batch 10/16 | Loss: 0.3901 | accuracy 112/128=0.875
Epoch 124/200 | Batch 0/16 | Loss: 0.3440 | accuracy 118/128=0.922
Epoch 124/200 | Batch 10/16 | Loss: 0.3341 | accuracy 113/128=0.883
Epoch 125/200 | Batch 0/16 | Loss: 0.3978 | accuracy 111/128=0.867
Epoch 125/200 | Batch 10/16 | Loss: 0.4012 | accuracy 113/128=0.883
Epoch 126/200 | Batch 0/16 | Loss: 0.3910 | accuracy 114/128=0.891
Epoch 126/200 | Batch 10/16 | Loss: 0.4164 | accuracy 113/128=0.883
Epoch 127/200 | Batch 0/16 | Loss: 0.3342 | accuracy 114/128=0.891
Epoch 127/200 | Batch 10/16 | Loss: 0.3473 | accuracy 120/128=0.938
Epoch 128/200 | Batch 0/16 | Loss: 0.3794 | accuracy 111/128=0.867
Epoch 128/200 | Batch 10/16 | Loss: 0.4186 | accuracy 110/128=0.859
Epoch 129/200 | Batch 0/16 | Loss: 0.3165 | accuracy 117/128=0.914
Epoch 129/200 | Batch 10/16 | Loss: 0.3586 | accuracy 112/128=0.875
Epoch 130/200 | Batch 0/16 | Loss: 0.3648 | accuracy 113/128=0.883
Epoch 130/200 | Batch 10/16 | Loss: 0.4095 | accuracy 115/128=0.898
checkpoint saved : ./model//check.epoch130
Epoch 131/200 | Batch 0/16 | Loss: 0.3751 | accuracy 114/128=0.891
Epoch 131/200 | Batch 10/16 | Loss: 0.2695 | accuracy 122/128=0.953
Epoch 132/200 | Batch 0/16 | Loss: 0.3491 | accuracy 115/128=0.898
Epoch 132/200 | Batch 10/16 | Loss: 0.2876 | accuracy 118/128=0.922
Epoch 133/200 | Batch 0/16 | Loss: 0.3161 | accuracy 116/128=0.906
Epoch 133/200 | Batch 10/16 | Loss: 0.3067 | accuracy 115/128=0.898
Epoch 134/200 | Batch 0/16 | Loss: 0.3532 | accuracy 117/128=0.914
Epoch 134/200 | Batch 10/16 | Loss: 0.3171 | accuracy 116/128=0.906
Epoch 135/200 | Batch 0/16 | Loss: 0.3430 | accuracy 113/128=0.883
Epoch 135/200 | Batch 10/16 | Loss: 0.3494 | accuracy 116/128=0.906
Epoch 136/200 | Batch 0/16 | Loss: 0.3088 | accuracy 116/128=0.906
Epoch 136/200 | Batch 10/16 | Loss: 0.3662 | accuracy 115/128=0.898
Epoch 137/200 | Batch 0/16 | Loss: 0.3178 | accuracy 117/128=0.914
Epoch 137/200 | Batch 10/16 | Loss: 0.4010 | accuracy 112/128=0.875
Epoch 138/200 | Batch 0/16 | Loss: 0.3349 | accuracy 114/128=0.891
Epoch 138/200 | Batch 10/16 | Loss: 0.3311 | accuracy 114/128=0.891
Epoch 139/200 | Batch 0/16 | Loss: 0.3263 | accuracy 115/128=0.898
Epoch 139/200 | Batch 10/16 | Loss: 0.3045 | accuracy 117/128=0.914
Epoch 140/200 | Batch 0/16 | Loss: 0.2755 | accuracy 117/128=0.914
Epoch 140/200 | Batch 10/16 | Loss: 0.2942 | accuracy 116/128=0.906
checkpoint saved : ./model//check.epoch140
Epoch 141/200 | Batch 0/16 | Loss: 0.2904 | accuracy 115/128=0.898
Epoch 141/200 | Batch 10/16 | Loss: 0.2317 | accuracy 121/128=0.945
Epoch 142/200 | Batch 0/16 | Loss: 0.4009 | accuracy 112/128=0.875
Epoch 142/200 | Batch 10/16 | Loss: 0.2950 | accuracy 117/128=0.914
Epoch 143/200 | Batch 0/16 | Loss: 0.2833 | accuracy 114/128=0.891
Epoch 143/200 | Batch 10/16 | Loss: 0.2006 | accuracy 121/128=0.945
Epoch 144/200 | Batch 0/16 | Loss: 0.3718 | accuracy 117/128=0.914
Epoch 144/200 | Batch 10/16 | Loss: 0.4305 | accuracy 106/128=0.828
Epoch 145/200 | Batch 0/16 | Loss: 0.2323 | accuracy 118/128=0.922
Epoch 145/200 | Batch 10/16 | Loss: 0.2974 | accuracy 120/128=0.938
Epoch 146/200 | Batch 0/16 | Loss: 0.2393 | accuracy 120/128=0.938
Epoch 146/200 | Batch 10/16 | Loss: 0.2414 | accuracy 120/128=0.938
Epoch 147/200 | Batch 0/16 | Loss: 0.2520 | accuracy 117/128=0.914
Epoch 147/200 | Batch 10/16 | Loss: 0.1956 | accuracy 123/128=0.961
Epoch 148/200 | Batch 0/16 | Loss: 0.3122 | accuracy 112/128=0.875
Epoch 148/200 | Batch 10/16 | Loss: 0.2806 | accuracy 119/128=0.930
Epoch 149/200 | Batch 0/16 | Loss: 0.2155 | accuracy 120/128=0.938
Epoch 149/200 | Batch 10/16 | Loss: 0.2039 | accuracy 119/128=0.930
Epoch 150/200 | Batch 0/16 | Loss: 0.2909 | accuracy 115/128=0.898
Epoch 150/200 | Batch 10/16 | Loss: 0.2923 | accuracy 119/128=0.930
checkpoint saved : ./model//check.epoch150
Epoch 151/200 | Batch 0/16 | Loss: 0.2236 | accuracy 119/128=0.930
Epoch 151/200 | Batch 10/16 | Loss: 0.2395 | accuracy 116/128=0.906
Epoch 152/200 | Batch 0/16 | Loss: 0.2158 | accuracy 122/128=0.953
Epoch 152/200 | Batch 10/16 | Loss: 0.3395 | accuracy 115/128=0.898
Epoch 153/200 | Batch 0/16 | Loss: 0.1672 | accuracy 122/128=0.953
Epoch 153/200 | Batch 10/16 | Loss: 0.2050 | accuracy 122/128=0.953
Epoch 154/200 | Batch 0/16 | Loss: 0.1663 | accuracy 123/128=0.961
Epoch 154/200 | Batch 10/16 | Loss: 0.3110 | accuracy 115/128=0.898
Epoch 155/200 | Batch 0/16 | Loss: 0.2082 | accuracy 121/128=0.945
Epoch 155/200 | Batch 10/16 | Loss: 0.1615 | accuracy 126/128=0.984
Epoch 156/200 | Batch 0/16 | Loss: 0.1987 | accuracy 120/128=0.938
Epoch 156/200 | Batch 10/16 | Loss: 0.2378 | accuracy 120/128=0.938
Epoch 157/200 | Batch 0/16 | Loss: 0.2627 | accuracy 119/128=0.930
Epoch 157/200 | Batch 10/16 | Loss: 0.2107 | accuracy 119/128=0.930
Epoch 158/200 | Batch 0/16 | Loss: 0.2405 | accuracy 117/128=0.914
Epoch 158/200 | Batch 10/16 | Loss: 0.1911 | accuracy 121/128=0.945
Epoch 159/200 | Batch 0/16 | Loss: 0.2335 | accuracy 116/128=0.906
Epoch 159/200 | Batch 10/16 | Loss: 0.1842 | accuracy 124/128=0.969
Epoch 160/200 | Batch 0/16 | Loss: 0.1570 | accuracy 122/128=0.953
Epoch 160/200 | Batch 10/16 | Loss: 0.2303 | accuracy 118/128=0.922
checkpoint saved : ./model//check.epoch160
Epoch 161/200 | Batch 0/16 | Loss: 0.1888 | accuracy 122/128=0.953
Epoch 161/200 | Batch 10/16 | Loss: 0.1389 | accuracy 123/128=0.961
Epoch 162/200 | Batch 0/16 | Loss: 0.2047 | accuracy 121/128=0.945
Epoch 162/200 | Batch 10/16 | Loss: 0.1748 | accuracy 120/128=0.938
Epoch 163/200 | Batch 0/16 | Loss: 0.1451 | accuracy 124/128=0.969
Epoch 163/200 | Batch 10/16 | Loss: 0.1395 | accuracy 124/128=0.969
Epoch 164/200 | Batch 0/16 | Loss: 0.1824 | accuracy 120/128=0.938
Epoch 164/200 | Batch 10/16 | Loss: 0.1795 | accuracy 120/128=0.938
Epoch 165/200 | Batch 0/16 | Loss: 0.1478 | accuracy 123/128=0.961
Epoch 165/200 | Batch 10/16 | Loss: 0.1997 | accuracy 123/128=0.961
Epoch 166/200 | Batch 0/16 | Loss: 0.1808 | accuracy 120/128=0.938
Epoch 166/200 | Batch 10/16 | Loss: 0.1875 | accuracy 119/128=0.930
Epoch 167/200 | Batch 0/16 | Loss: 0.1764 | accuracy 118/128=0.922
Epoch 167/200 | Batch 10/16 | Loss: 0.1592 | accuracy 124/128=0.969
Epoch 168/200 | Batch 0/16 | Loss: 0.2030 | accuracy 118/128=0.922
Epoch 168/200 | Batch 10/16 | Loss: 0.1260 | accuracy 123/128=0.961
Epoch 169/200 | Batch 0/16 | Loss: 0.1836 | accuracy 119/128=0.930
Epoch 169/200 | Batch 10/16 | Loss: 0.2194 | accuracy 120/128=0.938
Epoch 170/200 | Batch 0/16 | Loss: 0.2251 | accuracy 120/128=0.938
Epoch 170/200 | Batch 10/16 | Loss: 0.1552 | accuracy 123/128=0.961
checkpoint saved : ./model//check.epoch170
Epoch 171/200 | Batch 0/16 | Loss: 0.0859 | accuracy 127/128=0.992
Epoch 171/200 | Batch 10/16 | Loss: 0.1966 | accuracy 121/128=0.945
Epoch 172/200 | Batch 0/16 | Loss: 0.1674 | accuracy 120/128=0.938
Epoch 172/200 | Batch 10/16 | Loss: 0.1515 | accuracy 124/128=0.969
Epoch 173/200 | Batch 0/16 | Loss: 0.1992 | accuracy 115/128=0.898
Epoch 173/200 | Batch 10/16 | Loss: 0.1338 | accuracy 123/128=0.961
Epoch 174/200 | Batch 0/16 | Loss: 0.1419 | accuracy 124/128=0.969
Epoch 174/200 | Batch 10/16 | Loss: 0.1699 | accuracy 121/128=0.945
Epoch 175/200 | Batch 0/16 | Loss: 0.2120 | accuracy 120/128=0.938
Epoch 175/200 | Batch 10/16 | Loss: 0.2010 | accuracy 119/128=0.930
Epoch 176/200 | Batch 0/16 | Loss: 0.2256 | accuracy 120/128=0.938
Epoch 176/200 | Batch 10/16 | Loss: 0.1252 | accuracy 122/128=0.953
Epoch 177/200 | Batch 0/16 | Loss: 0.1566 | accuracy 123/128=0.961
Epoch 177/200 | Batch 10/16 | Loss: 0.1291 | accuracy 122/128=0.953
Epoch 178/200 | Batch 0/16 | Loss: 0.1606 | accuracy 120/128=0.938
Epoch 178/200 | Batch 10/16 | Loss: 0.1472 | accuracy 125/128=0.977
Epoch 179/200 | Batch 0/16 | Loss: 0.1642 | accuracy 121/128=0.945
Epoch 179/200 | Batch 10/16 | Loss: 0.1051 | accuracy 125/128=0.977
Epoch 180/200 | Batch 0/16 | Loss: 0.2038 | accuracy 121/128=0.945
Epoch 180/200 | Batch 10/16 | Loss: 0.1333 | accuracy 122/128=0.953
checkpoint saved : ./model//check.epoch180
Epoch 181/200 | Batch 0/16 | Loss: 0.2143 | accuracy 120/128=0.938
Epoch 181/200 | Batch 10/16 | Loss: 0.1642 | accuracy 121/128=0.945
Epoch 182/200 | Batch 0/16 | Loss: 0.1173 | accuracy 123/128=0.961
Epoch 182/200 | Batch 10/16 | Loss: 0.1296 | accuracy 125/128=0.977
Epoch 183/200 | Batch 0/16 | Loss: 0.1144 | accuracy 126/128=0.984
Epoch 183/200 | Batch 10/16 | Loss: 0.1317 | accuracy 124/128=0.969
Epoch 184/200 | Batch 0/16 | Loss: 0.1667 | accuracy 124/128=0.969
Epoch 184/200 | Batch 10/16 | Loss: 0.0716 | accuracy 126/128=0.984
Epoch 185/200 | Batch 0/16 | Loss: 0.1296 | accuracy 122/128=0.953
Epoch 185/200 | Batch 10/16 | Loss: 0.1412 | accuracy 124/128=0.969
Epoch 186/200 | Batch 0/16 | Loss: 0.1750 | accuracy 121/128=0.945
Epoch 186/200 | Batch 10/16 | Loss: 0.1369 | accuracy 121/128=0.945
Epoch 187/200 | Batch 0/16 | Loss: 0.2256 | accuracy 121/128=0.945
Epoch 187/200 | Batch 10/16 | Loss: 0.1291 | accuracy 122/128=0.953
Epoch 188/200 | Batch 0/16 | Loss: 0.1657 | accuracy 120/128=0.938
Epoch 188/200 | Batch 10/16 | Loss: 0.0768 | accuracy 126/128=0.984
Epoch 189/200 | Batch 0/16 | Loss: 0.1616 | accuracy 122/128=0.953
Epoch 189/200 | Batch 10/16 | Loss: 0.1312 | accuracy 121/128=0.945
Epoch 190/200 | Batch 0/16 | Loss: 0.1196 | accuracy 126/128=0.984
Epoch 190/200 | Batch 10/16 | Loss: 0.0910 | accuracy 128/128=1.000
checkpoint saved : ./model//check.epoch190
Epoch 191/200 | Batch 0/16 | Loss: 0.1195 | accuracy 123/128=0.961
Epoch 191/200 | Batch 10/16 | Loss: 0.1772 | accuracy 121/128=0.945
Epoch 192/200 | Batch 0/16 | Loss: 0.1274 | accuracy 124/128=0.969
Epoch 192/200 | Batch 10/16 | Loss: 0.1134 | accuracy 123/128=0.961
Epoch 193/200 | Batch 0/16 | Loss: 0.1581 | accuracy 123/128=0.961
Epoch 193/200 | Batch 10/16 | Loss: 0.0965 | accuracy 126/128=0.984
Epoch 194/200 | Batch 0/16 | Loss: 0.1425 | accuracy 123/128=0.961
Epoch 194/200 | Batch 10/16 | Loss: 0.1087 | accuracy 124/128=0.969
Epoch 195/200 | Batch 0/16 | Loss: 0.1437 | accuracy 122/128=0.953
Epoch 195/200 | Batch 10/16 | Loss: 0.1568 | accuracy 123/128=0.961
Epoch 196/200 | Batch 0/16 | Loss: 0.0746 | accuracy 127/128=0.992
Epoch 196/200 | Batch 10/16 | Loss: 0.1321 | accuracy 124/128=0.969
Epoch 197/200 | Batch 0/16 | Loss: 0.1514 | accuracy 121/128=0.945
Epoch 197/200 | Batch 10/16 | Loss: 0.1016 | accuracy 126/128=0.984
Epoch 198/200 | Batch 0/16 | Loss: 0.1348 | accuracy 123/128=0.961
Epoch 198/200 | Batch 10/16 | Loss: 0.1297 | accuracy 123/128=0.961
Epoch 199/200 | Batch 0/16 | Loss: 0.1765 | accuracy 121/128=0.945
Epoch 199/200 | Batch 10/16 | Loss: 0.1166 | accuracy 122/128=0.953
Epoch 200/200 | Batch 0/16 | Loss: 0.0859 | accuracy 126/128=0.984
Epoch 200/200 | Batch 10/16 | Loss: 0.1667 | accuracy 121/128=0.945
checkpoint saved : ./model//check.epoch200
model saved : ./model//captcha.1digit.2k

Process finished with exit code 0
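上面的训练日志每 10 个 batch 打印一次损失和批内准确率,并每 10 个 epoch 保存一次 checkpoint。下面用一个最小示意说明日志行的格式化与 checkpoint 路径的拼接(函数名仅为演示用,并非仓库中的实际实现)。另外可以注意到:config 中 model_save_path 为 "./model/",保存时再拼接一个 "/",正好产生日志中 "./model//check.epoch120" 这样的双斜杠路径。

```python
# 训练日志格式与 checkpoint 保存规律的最小示意(仅为演示,非仓库实际代码)

def format_batch_log(epoch, epoch_num, batch, batch_num, loss, correct, total):
    # 还原上方训练日志中一行的格式:损失保留 4 位小数,准确率保留 3 位小数
    return (f"Epoch {epoch}/{epoch_num} | Batch {batch}/{batch_num} | "
            f"Loss: {loss:.4f} | accuracy {correct}/{total}={correct/total:.3f}")

def checkpoint_path(save_dir, epoch):
    # save_dir 取 config 中的 "./model/",再拼接 "/" 即得到日志中的双斜杠路径
    return save_dir + "/" + "check.epoch" + str(epoch)

def should_save(epoch, interval=10):
    # 日志显示每 10 个 epoch 保存一次 checkpoint(epoch 120、130、...、200)
    return epoch % interval == 0

print(format_batch_log(120, 200, 0, 16, 0.37, 114, 128))
print(checkpoint_path("./model/", 120))
```

运行后打印的两行与日志中 Epoch 120 的记录和 checkpoint 路径一致。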

D:\Python310\python.exe D:/project/PycharmProjects/CNN/test.py
resize_height = 128
resize_width = 128
test_data_path = ./data/test-digit/
characters = 0123456789
digit_num = 1
class_num = 10
test_model_path = ./model/captcha.1digit.2k

test accuracy = 859 / 1000 = 0.859
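test.py 在 1000 张测试图片上得到 0.859 的准确率。这类准确率统计通常的做法是:对每个样本取得分最高的类别作为预测结果,与标签比较后累计正确数。下面是一个不依赖 PyTorch 的纯 Python 最小示意(名称为演示用,与 test.py 的实际实现可能不同):

```python
# 测试准确率统计的最小示意(纯 Python 演示,非 test.py 实际代码)

def count_correct(logits_batch, labels):
    # 对每个样本取得分最高的类别作为预测,与标签比较并统计正确数
    correct = 0
    for logits, label in zip(logits_batch, labels):
        pred = max(range(len(logits)), key=lambda i: logits[i])
        if pred == label:
            correct += 1
    return correct

def accuracy_line(correct, total):
    # 还原 test.py 最后一行输出的格式
    return f"test accuracy = {correct} / {total} = {correct/total:.3f}"

print(accuracy_line(859, 1000))
```

在 PyTorch 中这一步通常由 torch.argmax 在类别维度上完成,逻辑与上面的纯 Python 版本相同。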

4 参考资料

GitHub - xhh890921/cnn-captcha-pytorch: 小黑黑讲AI,AI实战项目《验证码识别》
