J1 ResNet-50: Hands-On Implementation and Analysis
- 🍨 This article is a study-log post for the 🔗 365-day deep learning training camp
- 🍖 Original author: K同学啊 | tutoring and custom projects available
I. Theoretical Background
1. The Origin of Residual Networks
ResNet was designed to tackle the degradation problem that appears as CNNs grow deeper (often discussed together with vanishing and exploding gradients). Although batch normalization keeps gradient magnitudes reasonably stable, very deep plain networks still become hard to train: as layers are added they not only converge less easily, but accuracy saturates and then drops rapidly, a decline caused by the network becoming too deep and complex to optimize.

ResNet adds an extra shortcut branch that connects the input directly to the output, so the block's output is the shortcut output plus the convolutional output. By building an identity mapping into the architecture, the whole network is nudged to converge toward the identity mapping.
A general rule for complex networks: if a structure can reach a desired solution simply by hand-setting its parameter values, that structure can also easily converge to the solution through training.
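The residual connection described above can be sketched in a few lines of numpy. This is a toy illustration, not ResNet itself: `F` is stood in for by a single affine transform plus ReLU, whereas a real residual unit stacks conv/BN/ReLU layers.

```python
import numpy as np

def residual_block(x, weight, bias):
    """Minimal residual-unit sketch: output = F(x) + x.

    `weight` and `bias` are illustrative stand-ins for the learnable
    parameters of the residual branch F.
    """
    fx = np.maximum(weight @ x + bias, 0.0)  # F(x): affine transform + ReLU
    return fx + x                            # identity shortcut

# If F collapses to zero (all-zero weights and bias), the block becomes
# exactly the identity mapping -- the behaviour ResNet makes easy to learn.
x = np.array([1.0, -2.0, 3.0])
out = residual_block(x, np.zeros((3, 3)), np.zeros(3))
print(out)  # [ 1. -2.  3.] -- identical to x
```

This is precisely why pushing the residual branch toward zero is a "simple hand-settable" solution: the block then passes its input through untouched.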

Shallower ResNets (left): the two-layer residual unit stacks two 3x3 convolutions with the same number of output channels.
Deeper ResNets (right): a 1x1 convolution first reduces the channel dimension, a 3x3 convolution follows, and a final 1x1 convolution restores the original dimension; this is known as the bottleneck structure.
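To see why the bottleneck design matters, here is a quick back-of-the-envelope parameter count, comparing a two-layer 3x3 unit against a bottleneck unit at 256 channels (counting convolution weights only, ignoring biases and BN):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a k x k convolution from c_in to c_out channels."""
    return c_in * c_out * k * k

# Basic block: two 3x3 convs at full 256-channel width.
basic = conv_params(256, 256, 3) + conv_params(256, 256, 3)

# Bottleneck: 1x1 reduce to 64, 3x3 at 64, 1x1 restore to 256.
bottleneck = (conv_params(256, 64, 1)
              + conv_params(64, 64, 3)
              + conv_params(64, 256, 1))

print(basic)       # 1179648
print(bottleneck)  # 69632
```

The bottleneck unit is roughly 17x cheaper at the same input/output width, which is what makes 50+ layer networks affordable.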
2. ResNet-50
ResNet-50 is assembled from two basic building blocks: the Conv Block, whose shortcut contains a 1x1 convolution so it can change the feature-map dimensions, and the Identity Block, whose shortcut is a pure identity so it can be stacked without changing dimensions.
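The rule for choosing between the two blocks can be stated in one line. The helper below is a hypothetical illustration (not part of any library): a projection shortcut is needed exactly when the stride or the channel count changes across the block.

```python
def needs_projection(c_in, c_out, stride):
    """True when the shortcut must be a 1x1 conv (Conv Block),
    False when a plain identity works (Identity Block)."""
    return stride != 1 or c_in != c_out

# The first unit of stage 3 halves the resolution and widens 256 -> 512
# channels, so it must be a Conv Block; the units after it match shapes
# and are Identity Blocks.
print(needs_projection(256, 512, 2))  # True  -> Conv Block
print(needs_projection(512, 512, 1))  # False -> Identity Block
```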

II. Preliminaries
1. Importing the Data
import os, PIL, pathlib
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, models

# Set the font to SimHei so Chinese characters display correctly
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False

data_dir = 'C:/Self_Learning/Deep_Learning/K_Codes/data/8_data/bird_photos/'
data_dir = pathlib.Path(data_dir)
2. Inspecting the Data
# Check the data
image_count = len(list(data_dir.glob('*/*.jpg')))
print('Total images:', image_count)
III. Data Preprocessing
1. Loading the Data
# Load the data
batch_size = 8
img_height = 224
img_width = 224

train_ds = keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset='training',
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset='validation',
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

# Check the class names
class_names = train_ds.class_names
print(class_names)
2. Visualizing the Data
# Visualize the data
plt.figure(figsize=(10, 5))
for images, labels in train_ds.take(1):
    for i in range(8):
        ax = plt.subplot(2, 4, i + 1)
        image = images[i].numpy().astype("uint8")
        plt.imshow(image)
        plt.title(class_names[labels[i]])
        plt.axis("off")
plt.show()

3. Checking the Data Shapes
# Check the data shape
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

4. Configuring the Dataset
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
IV. Training the Model
1. Building the ResNet-50 Model
# Define the ResNet-50 model
from tensorflow.keras.layers import (Input, ZeroPadding2D, Conv2D,
                                     BatchNormalization, Activation,
                                     MaxPooling2D, AveragePooling2D,
                                     Flatten, Dense)
from tensorflow.keras.models import Model

def identity_block(input_tensor, kernel_size, filters, stage, block):
    filters1, filters2, filters3 = filters
    name_base = str(stage) + block + '_identity_block_'

    x = Conv2D(filters1, (1, 1), name=name_base + 'conv1')(input_tensor)
    x = BatchNormalization(name=name_base + 'bn1')(x)
    x = Activation('relu', name=name_base + 'relu1')(x)

    x = Conv2D(filters2, kernel_size, padding='same', name=name_base + 'conv2')(x)
    x = BatchNormalization(name=name_base + 'bn2')(x)
    x = Activation('relu', name=name_base + 'relu2')(x)

    x = Conv2D(filters3, (1, 1), name=name_base + 'conv3')(x)
    x = BatchNormalization(name=name_base + 'bn3')(x)

    # Identity shortcut: input is added back unchanged
    x = layers.add([x, input_tensor], name=name_base + 'add')
    x = Activation('relu', name=name_base + 'relu4')(x)
    return x

def conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)):
    filters1, filters2, filters3 = filters
    res_name_base = str(stage) + block + '_conv_block_res_'
    name_base = str(stage) + block + '_conv_block_'

    x = Conv2D(filters1, (1, 1), strides=strides, name=name_base + 'conv1')(input_tensor)
    x = BatchNormalization(name=name_base + 'bn1')(x)
    x = Activation('relu', name=name_base + 'relu1')(x)

    x = Conv2D(filters2, kernel_size, padding='same', name=name_base + 'conv2')(x)
    x = BatchNormalization(name=name_base + 'bn2')(x)
    x = Activation('relu', name=name_base + 'relu2')(x)

    x = Conv2D(filters3, (1, 1), name=name_base + 'conv3')(x)
    x = BatchNormalization(name=name_base + 'bn3')(x)

    # Projection shortcut: 1x1 conv matches the new spatial size / channel count
    shortcut = Conv2D(filters3, (1, 1), strides=strides, name=res_name_base + 'conv')(input_tensor)
    shortcut = BatchNormalization(name=res_name_base + 'bn')(shortcut)

    x = layers.add([x, shortcut], name=name_base + 'add')
    x = Activation('relu', name=name_base + 'relu4')(x)
    return x

def ResNet50(input_shape=(224, 224, 3), classes=1000):
    img_input = Input(shape=input_shape)

    # Stem: 7x7 conv + max pool
    x = ZeroPadding2D((3, 3))(img_input)
    x = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(x)
    x = BatchNormalization(name='bn_conv1')(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((3, 3), strides=(2, 2))(x)

    # Stage 2: 3 blocks
    x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='c')

    # Stage 3: 4 blocks
    x = conv_block(x, 3, [128, 128, 512], stage=3, block='a')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='b')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='c')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='d')

    # Stage 4: 6 blocks
    x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='b')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='c')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='d')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='e')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='f')

    # Stage 5: 3 blocks
    x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c')

    # Head: global average pool + 1000-way softmax
    x = AveragePooling2D((7, 7), name='avg_pool')(x)
    x = Flatten()(x)
    x = Dense(classes, activation='softmax', name='fc1000')(x)

    model = Model(img_input, x, name='ResNet50')
    model.load_weights('C:/Self_Learning/Deep_Learning/K_Codes/data/8_data/ResNet50_weights_tf_dim_ordering_tf_kernels.h5',
                       by_name=True, skip_mismatch=True)
    return model

model = ResNet50()
model.summary()
Model: "ResNet50"
__________________________________________________________________________________________________
 Layer (type)                      Output Shape           Param #   Connected to
==================================================================================================
 input_3 (InputLayer)              [(None, 224, 224, 3)]  0         []
 zero_padding2d_2 (ZeroPadding2D)  (None, 230, 230, 3)    0         ['input_3[0][0]']
 conv1 (Conv2D)                    (None, 112, 112, 64)   9472      ['zero_padding2d_2[0][0]']
 bn_conv1 (BatchNormalization)     (None, 112, 112, 64)   256       ['conv1[0][0]']
 activation_4 (Activation)         (None, 112, 112, 64)   0         ['bn_conv1[0][0]']
 max_pooling2d_2 (MaxPooling2D)    (None, 55, 55, 64)     0         ['activation_4[0][0]']
 ... (the Conv/Identity Block layers of stages 2-5 are omitted here) ...
 avg_pool (AveragePooling2D)       (None, 1, 1, 2048)     0         ['5c_identity_block_relu4[0][0]']
 flatten (Flatten)                 (None, 2048)           0         ['avg_pool[0][0]']
 fc1000 (Dense)                    (None, 1000)           2049000   ['flatten[0][0]']
==================================================================================================
Total params: 25636712 (97.80 MB)
Trainable params: 25583592 (97.59 MB)
Non-trainable params: 53120 (207.50 KB)
__________________________________________________________________________________________________
2. Compiling the Model
# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
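A note on the loss choice: `sparse_categorical_crossentropy` expects integer class labels, which is exactly what `image_dataset_from_directory` produces, rather than one-hot vectors. For a single sample the computation reduces to the negative log-probability the model assigns to the true label, as this small numpy sketch shows:

```python
import numpy as np

def sparse_categorical_crossentropy(probs, label):
    """Loss for one sample: -log of the probability given to the true
    integer label. `probs` is the model's softmax output."""
    return -np.log(probs[label])

probs = np.array([0.7, 0.2, 0.1])  # softmax output over 3 classes
loss = sparse_categorical_crossentropy(probs, 0)
print(round(loss, 4))  # 0.3567, i.e. -log(0.7)
```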
3. Training the Model
# Train the model
epochs = 10
history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs)
Epoch 1/10
57/57 [==============================] - 476s 7s/step - loss: 2.3338 - accuracy: 0.4779 - val_loss: 95.4034 - val_accuracy: 0.2301
Epoch 2/10
57/57 [==============================] - 259s 5s/step - loss: 1.0860 - accuracy: 0.6438 - val_loss: 4.2480 - val_accuracy: 0.2920
Epoch 3/10
57/57 [==============================] - 247s 4s/step - loss: 0.7115 - accuracy: 0.7212 - val_loss: 0.9247 - val_accuracy: 0.6637
Epoch 4/10
57/57 [==============================] - 254s 4s/step - loss: 0.7610 - accuracy: 0.7456 - val_loss: 0.7742 - val_accuracy: 0.6903
Epoch 5/10
57/57 [==============================] - 276s 5s/step - loss: 0.5945 - accuracy: 0.7544 - val_loss: 0.6813 - val_accuracy: 0.7434
Epoch 6/10
57/57 [==============================] - 282s 5s/step - loss: 0.6199 - accuracy: 0.8053 - val_loss: 0.9435 - val_accuracy: 0.8053
Epoch 7/10
57/57 [==============================] - 252s 4s/step - loss: 0.5086 - accuracy: 0.8363 - val_loss: 1.8492 - val_accuracy: 0.5752
Epoch 8/10
57/57 [==============================] - 253s 4s/step - loss: 0.3915 - accuracy: 0.8827 - val_loss: 1.2361 - val_accuracy: 0.6018
Epoch 9/10
57/57 [==============================] - 247s 4s/step - loss: 0.5021 - accuracy: 0.8296 - val_loss: 2.4609 - val_accuracy: 0.5929
Epoch 10/10
57/57 [==============================] - 248s 4s/step - loss: 0.5328 - accuracy: 0.8186 - val_loss: 2.3554 - val_accuracy: 0.4779
V. Model Evaluation
1. Loss and Accuracy Curves
# Evaluate the model
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
2. Prediction
# Predict on new images
plt.figure(figsize=(10, 5))
for images, labels in val_ds.take(1):
    for i in range(8):
        ax = plt.subplot(2, 4, i + 1)
        image = images[i].numpy().astype("uint8")
        plt.imshow(image)
        img_array = tf.expand_dims(images[i], 0)
        predictions = model.predict(img_array)
        plt.title(class_names[np.argmax(predictions)])
        plt.axis("off")
1/1 [==============================] - 2s 2s/step
1/1 [==============================] - 0s 145ms/step
1/1 [==============================] - 0s 157ms/step
1/1 [==============================] - 0s 166ms/step
1/1 [==============================] - 0s 137ms/step
1/1 [==============================] - 0s 126ms/step
1/1 [==============================] - 0s 144ms/step
1/1 [==============================] - 0s 162ms/step

VI. PyTorch Version
import os
import pathlib
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

import torch
from torch import nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
from torchvision.utils import make_grid

# Set font for Chinese labels (SimHei)
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False

# Paths
data_dir = 'C:/Self_Learning/Deep_Learning/K_Codes/data/8_data/bird_photos/'
data_dir = pathlib.Path(data_dir)

# Transforms
img_height = 224
img_width = 224
batch_size = 8

transform = transforms.Compose([
    transforms.Resize((img_height, img_width)),
    transforms.ToTensor()
])

# Load datasets
train_ds = datasets.ImageFolder(data_dir, transform=transform)
class_names = train_ds.classes
num_classes = len(class_names)
print("Classes:", class_names)

# Split into train and val
train_size = int(0.8 * len(train_ds))
val_size = len(train_ds) - train_size
train_dataset, val_dataset = torch.utils.data.random_split(train_ds, [train_size, val_size])

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False)

# Visualise a batch of images
def show_batch(images, labels):
    img_grid = make_grid(images, nrow=4)
    npimg = img_grid.numpy()
    plt.figure(figsize=(10, 5))
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.title(" / ".join([class_names[label] for label in labels]))
    plt.axis("off")
    plt.show()

images, labels = next(iter(train_loader))
show_batch(images, labels)

# Define the ResNet-50 model
model = models.resnet50(pretrained=False)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Load pretrained weights if needed (optional)
# model.load_state_dict(torch.load('your_resnet50_weights.pth'))

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

# Loss & optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Training loop
epochs = 10
train_acc_history = []
val_acc_history = []
train_loss_history = []
val_loss_history = []

for epoch in range(epochs):
    model.train()
    train_loss, train_correct = 0.0, 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * inputs.size(0)
        train_correct += (outputs.argmax(1) == labels).sum().item()
    train_loss /= len(train_loader.dataset)
    train_acc = train_correct / len(train_loader.dataset)

    # Validation
    model.eval()
    val_loss, val_correct = 0.0, 0
    with torch.no_grad():
        for inputs, labels in val_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            val_loss += loss.item() * inputs.size(0)
            val_correct += (outputs.argmax(1) == labels).sum().item()
    val_loss /= len(val_loader.dataset)
    val_acc = val_correct / len(val_loader.dataset)

    train_loss_history.append(train_loss)
    val_loss_history.append(val_loss)
    train_acc_history.append(train_acc)
    val_acc_history.append(val_acc)
    print(f"Epoch {epoch+1}/{epochs}: "
          f"Train Loss {train_loss:.4f}, Acc {train_acc:.4f} | "
          f"Val Loss {val_loss:.4f}, Acc {val_acc:.4f}")

# Plot training results
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(train_acc_history, label='Train Acc')
plt.plot(val_acc_history, label='Val Acc')
plt.legend()
plt.title('Accuracy')

plt.subplot(1, 2, 2)
plt.plot(train_loss_history, label='Train Loss')
plt.plot(val_loss_history, label='Val Loss')
plt.legend()
plt.title('Loss')
plt.show()

# Predict on a validation batch
model.eval()
images, labels = next(iter(val_loader))
images = images.to(device)
outputs = model(images)
preds = outputs.argmax(1)

# Show predictions
plt.figure(figsize=(10, 5))
for i in range(8):
    ax = plt.subplot(2, 4, i + 1)
    plt.imshow(images[i].cpu().permute(1, 2, 0).numpy())
    plt.title(class_names[preds[i]])
    plt.axis("off")
plt.show()