[Neural Networks] (16) MobileNetV3 code reproduction, network analysis, with complete TensorFlow code

2023-11-08

Hello everyone. Today I'd like to share how to build the lightweight MobileNetV3 network model with TensorFlow.

MobileNetV3 makes the following changes: (1) it updates V2's inverted residual block; (2) it uses NAS (neural architecture search) to search for network parameters; (3) it redesigns the time-consuming layers. Compared with V2, MobileNetV3 improves image-classification accuracy by 3.2% while reducing latency by about 20%.

MobileNetV3 is an improvement on MobileNetV2, so it is recommended to learn MobileNetV1 and V2 first.

MobileNetV1:https://blog.csdn.net/dgvv4/article/details/123415708

MobileNetV2:https://blog.csdn.net/dgvv4/article/details/123417739


1. Core network modules

1.1 MobileNetV1's depthwise separable convolution

MobileNetV1's key building block is the depthwise separable convolution, which greatly reduces the number of parameters and the amount of computation.

In a standard convolution, each kernel processes all channels: a kernel has as many channels as the input feature map, and each kernel produces one output feature map.

A depthwise separable convolution can be understood as a depthwise convolution followed by a pointwise convolution.
The depthwise convolution handles only spatial (height and width) information; the pointwise convolution handles only cross-channel information. Together they greatly reduce the parameter count and improve computational efficiency.

Depthwise convolution: each kernel processes exactly one channel, so there are as many kernels as input channels. The per-channel outputs are stacked together, and the output has the same number of channels as the input.

Because the depthwise convolution only processes spatial information, cross-channel information is lost; a pointwise convolution is needed to bring it back.

Pointwise convolution: a 1x1 convolution applied across the channel dimension; the number of 1x1 kernels determines the number of output feature maps.
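The parameter savings above can be checked with simple arithmetic. The sizes below (a 3x3 kernel, 32 input channels, 64 output channels) are made-up illustration values, not taken from MobileNetV3 itself:

```python
# Hypothetical sizes for illustration: 3x3 kernel, 32 input channels, 64 output channels
k, c_in, c_out = 3, 32, 64

# Standard convolution: every kernel spans all input channels
standard = k * k * c_in * c_out        # 18432 weights

# Depthwise separable = depthwise (one k x k filter per channel)
#                     + pointwise (a 1x1 conv mixing the channels)
depthwise = k * k * c_in               # 288 weights
pointwise = 1 * 1 * c_in * c_out       # 2048 weights
separable = depthwise + pointwise      # 2336 weights

print(standard, separable, round(separable / standard, 3))  # 18432 2336 0.127
```

Here the separable version needs only about 13% of the weights of the standard convolution, and the ratio shrinks further as the channel count grows.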


1.2 MobileNetV2's inverted residual block

MobileNetV2 introduced the inverted residual block. The input first goes through a 1x1 convolution that expands the channel count; a depthwise convolution is then applied in this high-dimensional space; finally a 1x1 convolution reduces the channel count, using a linear activation (y=x) for the reduction. When the stride is 1 and the input and output feature maps have the same shape, a residual connection adds the input to the output; when the stride is 2 (downsampling), the reduced feature map is output directly.

Compare this with ResNet's residual block: there the input first passes through a 1x1 convolution that reduces the channel count; a standard convolution is applied in this low-dimensional space; then a 1x1 convolution raises the channel count again, with ReLU used throughout. When the stride is 1 and the input and output shapes match, a residual connection adds the input to the output; when the stride is 2 (downsampling), the reduced feature map is output directly.


1.3 MobileNetV3's improved inverted residual block

The main improvements are: (1) adding an SE attention block; (2) using new activation functions.

1.3.1 SE attention

(1) First apply global average pooling to the feature map. The pooled result is a vector with one element per channel: [b, h, w, c] ==> [b, c].

(2) The vector then passes through two fully connected layers. The first FC layer's output size is 1/4 of the input channel count; the second FC layer's output size equals the input channel count. In other words, the dimension is first reduced, then restored.

(3) Each element of the resulting vector can be read as a weight for the corresponding feature map: more important feature maps receive larger weights, while less important ones receive smaller weights.

(4) The first FC layer uses the ReLU activation; the second uses the hard_sigmoid activation.

(5) The two FC layers thus produce a vector with one weight per channel. Multiplying each channel of the original feature map by its weight yields the recalibrated feature map.

Taking the figure below as an example: after the two FC layers, the more important feature maps get larger weight values. Multiplying every element of each feature map by its channel's weight gives the new output feature map.
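The per-channel reweighting in step (5) is just a broadcast multiply. A minimal NumPy sketch, with made-up weight values for illustration:

```python
import numpy as np

# Hypothetical 4x4 feature map with 3 channels, layout [h, w, c]
features = np.ones((4, 4, 3))

# Per-channel weights, as the two FC layers would produce (values invented here)
weights = np.array([0.9, 0.1, 0.5])

# Broadcasting multiplies every element of channel i by weights[i]
recalibrated = features * weights

print(recalibrated[0, 0])  # [0.9 0.1 0.5]
```

Channel 0 (weight 0.9) is kept almost intact, while channel 1 (weight 0.1) is strongly suppressed; this is exactly what `layers.Multiply()` does in the Keras code later on.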


1.3.2 Different activation functions

The swish activation is \mathrm{swish}(x)=x\cdot\sigma(x)=\frac{x}{1+e^{-x}}. Although it improves accuracy, it is expensive to compute and differentiate and is unfriendly to quantization, especially on mobile devices.

The h_sigmoid activation is \frac{ReLU6(x+3)}{6}, where ReLU6(x)=\min(\max(x,0),6).

The h_swish activation is x\cdot\frac{ReLU6(x+3)}{6}. Replacing swish with h_swish speeds up inference and is friendly to quantization.
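The formulas above can be sanity-checked with a minimal scalar sketch in plain Python (the TensorFlow versions come later in the code):

```python
def relu6(x):
    # ReLU6(x) = min(max(x, 0), 6)
    return min(max(x, 0.0), 6.0)

def h_sigmoid(x):
    # hard sigmoid: ReLU6(x+3)/6
    return relu6(x + 3.0) / 6.0

def h_swish(x):
    # hard swish: x * ReLU6(x+3)/6
    return x * h_sigmoid(x)

print(h_sigmoid(0.0))   # 0.5
print(h_swish(0.0))     # 0.0
print(h_swish(3.0))     # 3.0 -- ReLU6(6)/6 = 1, so the input passes through
print(h_sigmoid(-3.0))  # 0.0 -- inputs at or below -3 are fully gated
```

Note how the gate saturates: for x >= 3 h_swish behaves like the identity, and for x <= -3 it outputs zero, which is what makes it cheap to compute and easy to quantize.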


1.3.3 Overall flow

The input first goes through a 1x1 convolution that expands the channels; a depthwise convolution runs in the high-dimensional space; the SE attention block then recalibrates the feature maps; finally a 1x1 convolution reduces the channels (with a linear activation). When the stride is 1 and the input and output shapes match, a residual connection adds the input to the output; when the stride is 2 (downsampling), the reduced feature map is output directly.


1.4 Redesigning the time-consuming layers

(1) Reduce the number of kernels in the first convolution layer. Cutting it from 32 to 16 leaves accuracy unchanged while reducing computation, saving 2 ms.

(2) Simplify the output head by removing redundant convolution layers. Accuracy is unchanged, and it saves 7 ms of inference time, about 11% of the whole forward pass, which is a clear speedup.


2. Code reproduction

2.1 Network architecture

The network architecture is shown in the figure. exp size is the channel count after the 1x1 expansion; #out is the channel count after the 1x1 reduction, i.e. the number of output feature maps; SE indicates whether the attention block is used; NL indicates which activation function is used; s is the stride; bneck is the inverted residual block; NBN means no batch normalization.


2.2 Building the core modules

(1) Choosing the activation functions

Following the formulas above, define the hard_sigmoid and hard_swish activation functions.

#(1) Activation function: h-sigmoid
def h_sigmoid(input_tensor):

    x = layers.Activation('hard_sigmoid')(input_tensor)

    return x

#(2) Activation function: h-swish
def h_swish(input_tensor):

    x = input_tensor * h_sigmoid(input_tensor)

    return x

(2) SE attention block

The SE block consists of global average pooling + an FC layer for channel reduction + an FC layer for channel expansion + a per-channel multiplication. To save parameters and computation, the FC layers are replaced by 1x1 convolutions.

#(3) SE attention block
def se_block(input_tensor):

    squeeze = input_tensor.shape[-1] // 4  # first FC reduces channels to 1/4
    excitation = input_tensor.shape[-1]  # second FC restores the original channel count

    # Global average pooling [b,h,w,c] ==> [b,c]
    x = layers.GlobalAveragePooling2D()(input_tensor)

    # Restore height/width dims, since 1x1 convolutions replace the FC layers below
    x = layers.Reshape(target_shape=(1, 1, x.shape[-1]))(x)  # [b,c] ==> [b,1,1,c]
    
    # First FC layer (channel reduction), replaced by a 1x1 conv to save parameters
    x = layers.Conv2D(filters=squeeze,  # reduce channels to 1/4
                      kernel_size=(1,1),  # 1x1 conv mixes channel information
                      strides=1,  # stride = 1
                      padding='same')(x)  # feature-map size unchanged

    x = layers.ReLU()(x)  # ReLU activation

    # Second FC layer (channel expansion), also replaced by a 1x1 conv
    x = layers.Conv2D(filters=excitation,   # restore the original number of channels
                      kernel_size=(1,1),
                      strides=1,
                      padding='same')(x)
    
    x = h_sigmoid(x)  # hard_sigmoid activation

    # Multiply each channel of the input feature map by its SE weight
    output = layers.Multiply()([input_tensor, x])

    return output

(3) Standard convolution block

A standard convolution block consists of a convolution + batch normalization + an activation function.

#(4) Standard convolution block
def conv_block(input_tensor, filters, kernel_size, stride, activation):

    # Choose the activation function
    if activation == 'RE':
        act = layers.ReLU()  # ReLU activation
    elif activation == 'HS':
        act = h_swish  # hard-swish activation
    
    # Standard convolution
    x = layers.Conv2D(filters, kernel_size, strides=stride, padding='same', use_bias=False)(input_tensor)
    # Batch normalization
    x = layers.BatchNormalization()(x)
    # Activation
    x = act(x)

    return x

(4) Inverted residual block

Compared with MobileNetV2's inverted residual block, this version adds the SE attention block and uses different activation functions.

#(5) Inverted residual block (bneck)
def bneck(x, expansion, filters, kernel_size, stride, se, activation):
    """
    filters: number of output channels of the bottleneck block
    se: bool, se=True enables the SE attention block
    activation: which activation to use, 'RE' or 'HS'
    """
    # Shortcut branch
    residual = x

    # Choose the activation function
    if activation == 'RE':
        act = layers.ReLU()  # ReLU activation
    elif activation == 'HS':
        act = h_swish  # hard-swish activation

    # ① 1x1 conv to expand channels
    if expansion != filters:  # the first bneck block does not expand channels

        x = layers.Conv2D(filters = expansion,  # expanded channel count
                        kernel_size = (1,1),  # 1x1 conv mixes channel information
                        strides = 1,  # only the channel dimension changes
                        padding = 'same',  # size unchanged
                        use_bias = False)(x)  # no bias when followed by BN
        
        x = layers.BatchNormalization()(x)  # batch normalization

        x = act(x)  # activation

    # ② Depthwise convolution
    x = layers.DepthwiseConv2D(kernel_size = kernel_size,  # kernel size
                               strides = stride,  # stride=2 downsamples
                               padding = 'same',  # with stride=2, height/width are halved
                               use_bias = False)(x)  # no bias when followed by BN
    
    x = layers.BatchNormalization()(x)  # batch normalization

    x = act(x)  # activation

    # ③ Optional SE attention
    if se:
        x = se_block(x)

    # ④ 1x1 conv to reduce channels
    x = layers.Conv2D(filters = filters,  # number of output feature maps
                      kernel_size = (1,1),  # 1x1 conv mixes channel information
                      strides = 1,
                      padding = 'same',
                      use_bias = False)(x)
    
    x = layers.BatchNormalization()(x)
    # a linear activation y=x is used here

    # ⑤ If the depthwise stride is 1 and the input/output shapes match, add the shortcut
    if stride == 1 and residual.shape == x.shape:
        x = layers.Add()([residual, x])

    return x  # with stride=2, return the channel-reduced feature map directly

2.3 Complete code

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model, layers

#(1) Activation function: h-sigmoid
def h_sigmoid(input_tensor):

    x = layers.Activation('hard_sigmoid')(input_tensor)

    return x

#(2) Activation function: h-swish
def h_swish(input_tensor):

    x = input_tensor * h_sigmoid(input_tensor)

    return x

#(3) SE attention block
def se_block(input_tensor):

    squeeze = input_tensor.shape[-1] // 4  # first FC reduces channels to 1/4
    excitation = input_tensor.shape[-1]  # second FC restores the original channel count

    # Global average pooling [b,h,w,c] ==> [b,c]
    x = layers.GlobalAveragePooling2D()(input_tensor)

    # Restore height/width dims, since 1x1 convolutions replace the FC layers below
    x = layers.Reshape(target_shape=(1, 1, x.shape[-1]))(x)  # [b,c] ==> [b,1,1,c]
    
    # First FC layer (channel reduction), replaced by a 1x1 conv to save parameters
    x = layers.Conv2D(filters=squeeze,  # reduce channels to 1/4
                      kernel_size=(1,1),  # 1x1 conv mixes channel information
                      strides=1,  # stride = 1
                      padding='same')(x)  # feature-map size unchanged

    x = layers.ReLU()(x)  # ReLU activation

    # Second FC layer (channel expansion), also replaced by a 1x1 conv
    x = layers.Conv2D(filters=excitation,   # restore the original number of channels
                      kernel_size=(1,1),
                      strides=1,
                      padding='same')(x)
    
    x = h_sigmoid(x)  # hard_sigmoid activation

    # Multiply each channel of the input feature map by its SE weight
    output = layers.Multiply()([input_tensor, x])

    return output

#(4) Standard convolution block
def conv_block(input_tensor, filters, kernel_size, stride, activation):

    # Choose the activation function
    if activation == 'RE':
        act = layers.ReLU()  # ReLU activation
    elif activation == 'HS':
        act = h_swish  # hard-swish activation
    
    # Standard convolution
    x = layers.Conv2D(filters, kernel_size, strides=stride, padding='same', use_bias=False)(input_tensor)
    # Batch normalization
    x = layers.BatchNormalization()(x)
    # Activation
    x = act(x)

    return x

#(5) Inverted residual block (bneck)
def bneck(x, expansion, filters, kernel_size, stride, se, activation):
    """
    filters: number of output channels of the bottleneck block
    se: bool, se=True enables the SE attention block
    activation: which activation to use, 'RE' or 'HS'
    """
    # Shortcut branch
    residual = x

    # Choose the activation function
    if activation == 'RE':
        act = layers.ReLU()  # ReLU activation
    elif activation == 'HS':
        act = h_swish  # hard-swish activation

    # ① 1x1 conv to expand channels
    if expansion != filters:  # the first bneck block does not expand channels

        x = layers.Conv2D(filters = expansion,  # expanded channel count
                        kernel_size = (1,1),  # 1x1 conv mixes channel information
                        strides = 1,  # only the channel dimension changes
                        padding = 'same',  # size unchanged
                        use_bias = False)(x)  # no bias when followed by BN
        
        x = layers.BatchNormalization()(x)  # batch normalization

        x = act(x)  # activation

    # ② Depthwise convolution
    x = layers.DepthwiseConv2D(kernel_size = kernel_size,  # kernel size
                               strides = stride,  # stride=2 downsamples
                               padding = 'same',  # with stride=2, height/width are halved
                               use_bias = False)(x)  # no bias when followed by BN
    
    x = layers.BatchNormalization()(x)  # batch normalization

    x = act(x)  # activation

    # ③ Optional SE attention
    if se:
        x = se_block(x)

    # ④ 1x1 conv to reduce channels
    x = layers.Conv2D(filters = filters,  # number of output feature maps
                      kernel_size = (1,1),  # 1x1 conv mixes channel information
                      strides = 1,
                      padding = 'same',
                      use_bias = False)(x)
    
    x = layers.BatchNormalization()(x)
    # a linear activation y=x is used here

    # ⑤ If the depthwise stride is 1 and the input/output shapes match, add the shortcut
    if stride == 1 and residual.shape == x.shape:
        x = layers.Add()([residual, x])

    return x  # with stride=2, return the channel-reduced feature map directly

#(6) Backbone network
def mobilenet(input_shape, classes):  # input image shape, number of classes

    # Build the input
    inputs = keras.Input(shape=input_shape)

    # [224,224,3] ==> [112,112,16]
    x = conv_block(inputs, filters=16, kernel_size=(3,3), stride=2, activation='HS')
    # [112,112,16] ==> [112,112,16]
    x = bneck(x, expansion=16, filters=16, kernel_size=(3,3), stride=1, se=False, activation='RE')
    # [112,112,16] ==> [56,56,24]
    x = bneck(x, expansion=64, filters=24, kernel_size=(3,3), stride=2, se=False, activation='RE')
    # [56,56,24] ==> [56,56,24]
    x = bneck(x, expansion=72, filters=24, kernel_size=(3,3), stride=1, se=False, activation='RE')
    # [56,56,24] ==> [28,28,40]
    x = bneck(x, expansion=72, filters=40, kernel_size=(5,5), stride=2, se=True, activation='RE')
    # [28,28,40] ==> [28,28,40]
    x = bneck(x, expansion=120, filters=40, kernel_size=(5,5), stride=1, se=True, activation='RE')
    # [28,28,40] ==> [28,28,40]
    x = bneck(x, expansion=120, filters=40, kernel_size=(5,5), stride=1, se=True, activation='RE')
    # [28,28,40] ==> [14,14,80]
    x = bneck(x, expansion=240, filters=80, kernel_size=(3,3), stride=2, se=False, activation='HS')
    # [14,14,80] ==> [14,14,80]
    x = bneck(x, expansion=200, filters=80, kernel_size=(3,3), stride=1, se=False, activation='HS')
    # [14,14,80] ==> [14,14,80]
    x = bneck(x, expansion=184, filters=80, kernel_size=(3,3), stride=1, se=False, activation='HS')
    # [14,14,80] ==> [14,14,80]
    x = bneck(x, expansion=184, filters=80, kernel_size=(3,3), stride=1, se=False, activation='HS')
    # [14,14,80] ==> [14,14,112]
    x = bneck(x, expansion=480, filters=112, kernel_size=(3,3), stride=1, se=True, activation='HS')
    # [14,14,112] ==> [14,14,112]
    x = bneck(x, expansion=672, filters=112, kernel_size=(3,3), stride=1, se=True, activation='HS')
    # [14,14,112] ==> [7,7,160]
    x = bneck(x, expansion=672, filters=160, kernel_size=(5,5), stride=2, se=True, activation='HS')
    # [7,7,160] ==> [7,7,160]
    x = bneck(x, expansion=960, filters=160, kernel_size=(5,5), stride=1, se=True, activation='HS')
    # [7,7,160] ==> [7,7,160]
    x = bneck(x, expansion=960, filters=160, kernel_size=(5,5), stride=1, se=True, activation='HS')

    # [7,7,160] ==> [7,7,960]
    x = conv_block(x, filters=960, kernel_size=(1,1), stride=1, activation='HS')

    # [7,7,960] ==> [1,1,960]
    x = layers.MaxPooling2D(pool_size=(7,7))(x)
    # keep an explicit [1,1,960] shape for the 1x1 convolutions below
    x = layers.Reshape(target_shape=(1,1,x.shape[-1]))(x)

    # [1,1,960] ==> [1,1,1280]
    x = layers.Conv2D(filters=1280, kernel_size=(1,1), strides=1, padding='same')(x)
    x = h_swish(x)

    # [1,1,1280] ==> [1,1,classes]
    x = layers.Conv2D(filters=classes, kernel_size=(1,1), strides=1, padding='same')(x)
    
    # [1,1,classes] ==> [None,classes]
    logits = layers.Flatten()(x)

    # Build the model
    model = Model(inputs, logits)

    return model

#(7) Build and inspect the model
if __name__ == '__main__':

    model = mobilenet(input_shape=[224,224,3], classes=1000)

    model.summary()  # print the network architecture

2.4 Inspecting the model

Call model.summary() to view the overall architecture; the model has roughly five million parameters.

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 224, 224, 3) 0                                            
__________________________________________________________________________________________________
conv2d (Conv2D)                 (None, 112, 112, 16) 432         input_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 112, 112, 16) 64          conv2d[0][0]                     
__________________________________________________________________________________________________
activation (Activation)         (None, 112, 112, 16) 0           batch_normalization[0][0]        
__________________________________________________________________________________________________
tf.math.multiply (TFOpLambda)   (None, 112, 112, 16) 0           batch_normalization[0][0]        
                                                                 activation[0][0]                 
__________________________________________________________________________________________________
depthwise_conv2d (DepthwiseConv (None, 112, 112, 16) 144         tf.math.multiply[0][0]           
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 112, 112, 16) 64          depthwise_conv2d[0][0]           
__________________________________________________________________________________________________
re_lu (ReLU)                    (None, 112, 112, 16) 0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 112, 112, 16) 256         re_lu[0][0]                      
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 112, 112, 16) 64          conv2d_1[0][0]                   
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 112, 112, 64) 1024        batch_normalization_2[0][0]      
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 112, 112, 64) 256         conv2d_2[0][0]                   
__________________________________________________________________________________________________
re_lu_1 (ReLU)                  multiple             0           batch_normalization_3[0][0]      
                                                                 batch_normalization_4[0][0]      
__________________________________________________________________________________________________
depthwise_conv2d_1 (DepthwiseCo (None, 56, 56, 64)   576         re_lu_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 56, 56, 64)   256         depthwise_conv2d_1[0][0]         
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 56, 56, 24)   1536        re_lu_1[1][0]                    
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 56, 56, 24)   96          conv2d_3[0][0]                   
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 56, 56, 72)   1728        batch_normalization_5[0][0]      
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 56, 56, 72)   288         conv2d_4[0][0]                   
__________________________________________________________________________________________________
re_lu_2 (ReLU)                  (None, 56, 56, 72)   0           batch_normalization_6[0][0]      
                                                                 batch_normalization_7[0][0]      
__________________________________________________________________________________________________
depthwise_conv2d_2 (DepthwiseCo (None, 56, 56, 72)   648         re_lu_2[0][0]                    
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 56, 56, 72)   288         depthwise_conv2d_2[0][0]         
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 56, 56, 24)   1728        re_lu_2[1][0]                    
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 56, 56, 24)   96          conv2d_5[0][0]                   
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 56, 56, 72)   1728        batch_normalization_8[0][0]      
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 56, 56, 72)   288         conv2d_6[0][0]                   
__________________________________________________________________________________________________
re_lu_3 (ReLU)                  multiple             0           batch_normalization_9[0][0]      
                                                                 batch_normalization_10[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_3 (DepthwiseCo (None, 28, 28, 72)   1800        re_lu_3[0][0]                    
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 28, 28, 72)   288         depthwise_conv2d_3[0][0]         
__________________________________________________________________________________________________
global_average_pooling2d (Globa (None, 72)           0           re_lu_3[1][0]                    
__________________________________________________________________________________________________
reshape (Reshape)               (None, 1, 1, 72)     0           global_average_pooling2d[0][0]   
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 1, 1, 18)     1314        reshape[0][0]                    
__________________________________________________________________________________________________
re_lu_4 (ReLU)                  (None, 1, 1, 18)     0           conv2d_7[0][0]                   
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 1, 1, 72)     1368        re_lu_4[0][0]                    
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 1, 1, 72)     0           conv2d_8[0][0]                   
__________________________________________________________________________________________________
multiply (Multiply)             (None, 28, 28, 72)   0           re_lu_3[1][0]                    
                                                                 activation_1[0][0]               
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 28, 28, 40)   2880        multiply[0][0]                   
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 28, 28, 40)   160         conv2d_9[0][0]                   
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 28, 28, 120)  4800        batch_normalization_11[0][0]     
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 28, 28, 120)  480         conv2d_10[0][0]                  
__________________________________________________________________________________________________
re_lu_5 (ReLU)                  (None, 28, 28, 120)  0           batch_normalization_12[0][0]     
                                                                 batch_normalization_13[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_4 (DepthwiseCo (None, 28, 28, 120)  3000        re_lu_5[0][0]                    
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 28, 28, 120)  480         depthwise_conv2d_4[0][0]         
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 120)          0           re_lu_5[1][0]                    
__________________________________________________________________________________________________
reshape_1 (Reshape)             (None, 1, 1, 120)    0           global_average_pooling2d_1[0][0] 
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 1, 1, 30)     3630        reshape_1[0][0]                  
__________________________________________________________________________________________________
re_lu_6 (ReLU)                  (None, 1, 1, 30)     0           conv2d_11[0][0]                  
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 1, 1, 120)    3720        re_lu_6[0][0]                    
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 1, 1, 120)    0           conv2d_12[0][0]                  
__________________________________________________________________________________________________
multiply_1 (Multiply)           (None, 28, 28, 120)  0           re_lu_5[1][0]                    
                                                                 activation_2[0][0]               
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 28, 28, 40)   4800        multiply_1[0][0]                 
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 28, 28, 40)   160         conv2d_13[0][0]                  
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 28, 28, 120)  4800        batch_normalization_14[0][0]     
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 28, 28, 120)  480         conv2d_14[0][0]                  
__________________________________________________________________________________________________
re_lu_7 (ReLU)                  (None, 28, 28, 120)  0           batch_normalization_15[0][0]     
                                                                 batch_normalization_16[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_5 (DepthwiseCo (None, 28, 28, 120)  3000        re_lu_7[0][0]                    
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 28, 28, 120)  480         depthwise_conv2d_5[0][0]         
__________________________________________________________________________________________________
global_average_pooling2d_2 (Glo (None, 120)          0           re_lu_7[1][0]                    
__________________________________________________________________________________________________
reshape_2 (Reshape)             (None, 1, 1, 120)    0           global_average_pooling2d_2[0][0] 
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 1, 1, 30)     3630        reshape_2[0][0]                  
__________________________________________________________________________________________________
re_lu_8 (ReLU)                  (None, 1, 1, 30)     0           conv2d_15[0][0]                  
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 1, 1, 120)    3720        re_lu_8[0][0]                    
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 1, 1, 120)    0           conv2d_16[0][0]                  
__________________________________________________________________________________________________
multiply_2 (Multiply)           (None, 28, 28, 120)  0           re_lu_7[1][0]                    
                                                                 activation_3[0][0]               
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 28, 28, 40)   4800        multiply_2[0][0]                 
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 28, 28, 40)   160         conv2d_17[0][0]                  
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 28, 28, 240)  9600        batch_normalization_17[0][0]     
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 28, 28, 240)  960         conv2d_18[0][0]                  
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 28, 28, 240)  0           batch_normalization_18[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_1 (TFOpLambda) (None, 28, 28, 240)  0           batch_normalization_18[0][0]     
                                                                 activation_4[0][0]               
__________________________________________________________________________________________________
depthwise_conv2d_6 (DepthwiseCo (None, 14, 14, 240)  2160        tf.math.multiply_1[0][0]         
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 14, 14, 240)  960         depthwise_conv2d_6[0][0]         
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 14, 14, 240)  0           batch_normalization_19[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_2 (TFOpLambda) (None, 14, 14, 240)  0           batch_normalization_19[0][0]     
                                                                 activation_5[0][0]               
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, 14, 14, 80)   19200       tf.math.multiply_2[0][0]         
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 14, 14, 80)   320         conv2d_19[0][0]                  
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (None, 14, 14, 200)  16000       batch_normalization_20[0][0]     
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 14, 14, 200)  800         conv2d_20[0][0]                  
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 14, 14, 200)  0           batch_normalization_21[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_3 (TFOpLambda) (None, 14, 14, 200)  0           batch_normalization_21[0][0]     
                                                                 activation_6[0][0]               
__________________________________________________________________________________________________
depthwise_conv2d_7 (DepthwiseCo (None, 14, 14, 200)  1800        tf.math.multiply_3[0][0]         
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, 14, 14, 200)  800         depthwise_conv2d_7[0][0]         
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 14, 14, 200)  0           batch_normalization_22[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_4 (TFOpLambda) (None, 14, 14, 200)  0           batch_normalization_22[0][0]     
                                                                 activation_7[0][0]               
__________________________________________________________________________________________________
conv2d_21 (Conv2D)              (None, 14, 14, 80)   16000       tf.math.multiply_4[0][0]         
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, 14, 14, 80)   320         conv2d_21[0][0]                  
__________________________________________________________________________________________________
conv2d_22 (Conv2D)              (None, 14, 14, 184)  14720       batch_normalization_23[0][0]     
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, 14, 14, 184)  736         conv2d_22[0][0]                  
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 14, 14, 184)  0           batch_normalization_24[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_5 (TFOpLambda) (None, 14, 14, 184)  0           batch_normalization_24[0][0]     
                                                                 activation_8[0][0]               
__________________________________________________________________________________________________
depthwise_conv2d_8 (DepthwiseCo (None, 14, 14, 184)  1656        tf.math.multiply_5[0][0]         
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, 14, 14, 184)  736         depthwise_conv2d_8[0][0]         
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 14, 14, 184)  0           batch_normalization_25[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_6 (TFOpLambda) (None, 14, 14, 184)  0           batch_normalization_25[0][0]     
                                                                 activation_9[0][0]               
__________________________________________________________________________________________________
conv2d_23 (Conv2D)              (None, 14, 14, 80)   14720       tf.math.multiply_6[0][0]         
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, 14, 14, 80)   320         conv2d_23[0][0]                  
__________________________________________________________________________________________________
conv2d_24 (Conv2D)              (None, 14, 14, 184)  14720       batch_normalization_26[0][0]     
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, 14, 14, 184)  736         conv2d_24[0][0]                  
__________________________________________________________________________________________________
activation_10 (Activation)      (None, 14, 14, 184)  0           batch_normalization_27[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_7 (TFOpLambda) (None, 14, 14, 184)  0           batch_normalization_27[0][0]     
                                                                 activation_10[0][0]              
__________________________________________________________________________________________________
depthwise_conv2d_9 (DepthwiseCo (None, 14, 14, 184)  1656        tf.math.multiply_7[0][0]         
__________________________________________________________________________________________________
batch_normalization_28 (BatchNo (None, 14, 14, 184)  736         depthwise_conv2d_9[0][0]         
__________________________________________________________________________________________________
activation_11 (Activation)      (None, 14, 14, 184)  0           batch_normalization_28[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_8 (TFOpLambda) (None, 14, 14, 184)  0           batch_normalization_28[0][0]     
                                                                 activation_11[0][0]              
__________________________________________________________________________________________________
conv2d_25 (Conv2D)              (None, 14, 14, 80)   14720       tf.math.multiply_8[0][0]         
__________________________________________________________________________________________________
batch_normalization_29 (BatchNo (None, 14, 14, 80)   320         conv2d_25[0][0]                  
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (None, 14, 14, 480)  38400       batch_normalization_29[0][0]     
__________________________________________________________________________________________________
batch_normalization_30 (BatchNo (None, 14, 14, 480)  1920        conv2d_26[0][0]                  
__________________________________________________________________________________________________
activation_12 (Activation)      (None, 14, 14, 480)  0           batch_normalization_30[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_9 (TFOpLambda) (None, 14, 14, 480)  0           batch_normalization_30[0][0]     
                                                                 activation_12[0][0]              
__________________________________________________________________________________________________
depthwise_conv2d_10 (DepthwiseC (None, 14, 14, 480)  4320        tf.math.multiply_9[0][0]         
__________________________________________________________________________________________________
batch_normalization_31 (BatchNo (None, 14, 14, 480)  1920        depthwise_conv2d_10[0][0]        
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 14, 14, 480)  0           batch_normalization_31[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_10 (TFOpLambda (None, 14, 14, 480)  0           batch_normalization_31[0][0]     
                                                                 activation_13[0][0]              
__________________________________________________________________________________________________
global_average_pooling2d_3 (Glo (None, 480)          0           tf.math.multiply_10[0][0]        
__________________________________________________________________________________________________
reshape_3 (Reshape)             (None, 1, 1, 480)    0           global_average_pooling2d_3[0][0] 
__________________________________________________________________________________________________
conv2d_27 (Conv2D)              (None, 1, 1, 120)    57720       reshape_3[0][0]                  
__________________________________________________________________________________________________
re_lu_9 (ReLU)                  (None, 1, 1, 120)    0           conv2d_27[0][0]                  
__________________________________________________________________________________________________
conv2d_28 (Conv2D)              (None, 1, 1, 480)    58080       re_lu_9[0][0]                    
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 1, 1, 480)    0           conv2d_28[0][0]                  
__________________________________________________________________________________________________
multiply_3 (Multiply)           (None, 14, 14, 480)  0           tf.math.multiply_10[0][0]        
                                                                 activation_14[0][0]              
__________________________________________________________________________________________________
conv2d_29 (Conv2D)              (None, 14, 14, 112)  53760       multiply_3[0][0]                 
__________________________________________________________________________________________________
batch_normalization_32 (BatchNo (None, 14, 14, 112)  448         conv2d_29[0][0]                  
__________________________________________________________________________________________________
conv2d_30 (Conv2D)              (None, 14, 14, 672)  75264       batch_normalization_32[0][0]     
__________________________________________________________________________________________________
batch_normalization_33 (BatchNo (None, 14, 14, 672)  2688        conv2d_30[0][0]                  
__________________________________________________________________________________________________
activation_15 (Activation)      (None, 14, 14, 672)  0           batch_normalization_33[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_11 (TFOpLambda (None, 14, 14, 672)  0           batch_normalization_33[0][0]     
                                                                 activation_15[0][0]              
__________________________________________________________________________________________________
depthwise_conv2d_11 (DepthwiseC (None, 14, 14, 672)  6048        tf.math.multiply_11[0][0]        
__________________________________________________________________________________________________
batch_normalization_34 (BatchNo (None, 14, 14, 672)  2688        depthwise_conv2d_11[0][0]        
__________________________________________________________________________________________________
activation_16 (Activation)      (None, 14, 14, 672)  0           batch_normalization_34[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_12 (TFOpLambda (None, 14, 14, 672)  0           batch_normalization_34[0][0]     
                                                                 activation_16[0][0]              
__________________________________________________________________________________________________
global_average_pooling2d_4 (Glo (None, 672)          0           tf.math.multiply_12[0][0]        
__________________________________________________________________________________________________
reshape_4 (Reshape)             (None, 1, 1, 672)    0           global_average_pooling2d_4[0][0] 
__________________________________________________________________________________________________
conv2d_31 (Conv2D)              (None, 1, 1, 168)    113064      reshape_4[0][0]                  
__________________________________________________________________________________________________
re_lu_10 (ReLU)                 (None, 1, 1, 168)    0           conv2d_31[0][0]                  
__________________________________________________________________________________________________
conv2d_32 (Conv2D)              (None, 1, 1, 672)    113568      re_lu_10[0][0]                   
__________________________________________________________________________________________________
activation_17 (Activation)      (None, 1, 1, 672)    0           conv2d_32[0][0]                  
__________________________________________________________________________________________________
multiply_4 (Multiply)           (None, 14, 14, 672)  0           tf.math.multiply_12[0][0]        
                                                                 activation_17[0][0]              
__________________________________________________________________________________________________
conv2d_33 (Conv2D)              (None, 14, 14, 112)  75264       multiply_4[0][0]                 
__________________________________________________________________________________________________
batch_normalization_35 (BatchNo (None, 14, 14, 112)  448         conv2d_33[0][0]                  
__________________________________________________________________________________________________
conv2d_34 (Conv2D)              (None, 14, 14, 672)  75264       batch_normalization_35[0][0]     
__________________________________________________________________________________________________
batch_normalization_36 (BatchNo (None, 14, 14, 672)  2688        conv2d_34[0][0]                  
__________________________________________________________________________________________________
activation_18 (Activation)      (None, 14, 14, 672)  0           batch_normalization_36[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_13 (TFOpLambda (None, 14, 14, 672)  0           batch_normalization_36[0][0]     
                                                                 activation_18[0][0]              
__________________________________________________________________________________________________
depthwise_conv2d_12 (DepthwiseC (None, 7, 7, 672)    16800       tf.math.multiply_13[0][0]        
__________________________________________________________________________________________________
batch_normalization_37 (BatchNo (None, 7, 7, 672)    2688        depthwise_conv2d_12[0][0]        
__________________________________________________________________________________________________
activation_19 (Activation)      (None, 7, 7, 672)    0           batch_normalization_37[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_14 (TFOpLambda (None, 7, 7, 672)    0           batch_normalization_37[0][0]     
                                                                 activation_19[0][0]              
__________________________________________________________________________________________________
global_average_pooling2d_5 (Glo (None, 672)          0           tf.math.multiply_14[0][0]        
__________________________________________________________________________________________________
reshape_5 (Reshape)             (None, 1, 1, 672)    0           global_average_pooling2d_5[0][0] 
__________________________________________________________________________________________________
conv2d_35 (Conv2D)              (None, 1, 1, 168)    113064      reshape_5[0][0]                  
__________________________________________________________________________________________________
re_lu_11 (ReLU)                 (None, 1, 1, 168)    0           conv2d_35[0][0]                  
__________________________________________________________________________________________________
conv2d_36 (Conv2D)              (None, 1, 1, 672)    113568      re_lu_11[0][0]                   
__________________________________________________________________________________________________
activation_20 (Activation)      (None, 1, 1, 672)    0           conv2d_36[0][0]                  
__________________________________________________________________________________________________
multiply_5 (Multiply)           (None, 7, 7, 672)    0           tf.math.multiply_14[0][0]        
                                                                 activation_20[0][0]              
__________________________________________________________________________________________________
conv2d_37 (Conv2D)              (None, 7, 7, 160)    107520      multiply_5[0][0]                 
__________________________________________________________________________________________________
batch_normalization_38 (BatchNo (None, 7, 7, 160)    640         conv2d_37[0][0]                  
__________________________________________________________________________________________________
conv2d_38 (Conv2D)              (None, 7, 7, 960)    153600      batch_normalization_38[0][0]     
__________________________________________________________________________________________________
batch_normalization_39 (BatchNo (None, 7, 7, 960)    3840        conv2d_38[0][0]                  
__________________________________________________________________________________________________
activation_21 (Activation)      (None, 7, 7, 960)    0           batch_normalization_39[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_15 (TFOpLambda (None, 7, 7, 960)    0           batch_normalization_39[0][0]     
                                                                 activation_21[0][0]              
__________________________________________________________________________________________________
depthwise_conv2d_13 (DepthwiseC (None, 7, 7, 960)    24000       tf.math.multiply_15[0][0]        
__________________________________________________________________________________________________
batch_normalization_40 (BatchNo (None, 7, 7, 960)    3840        depthwise_conv2d_13[0][0]        
__________________________________________________________________________________________________
activation_22 (Activation)      (None, 7, 7, 960)    0           batch_normalization_40[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_16 (TFOpLambda (None, 7, 7, 960)    0           batch_normalization_40[0][0]     
                                                                 activation_22[0][0]              
__________________________________________________________________________________________________
global_average_pooling2d_6 (Glo (None, 960)          0           tf.math.multiply_16[0][0]        
__________________________________________________________________________________________________
reshape_6 (Reshape)             (None, 1, 1, 960)    0           global_average_pooling2d_6[0][0] 
__________________________________________________________________________________________________
conv2d_39 (Conv2D)              (None, 1, 1, 240)    230640      reshape_6[0][0]                  
__________________________________________________________________________________________________
re_lu_12 (ReLU)                 (None, 1, 1, 240)    0           conv2d_39[0][0]                  
__________________________________________________________________________________________________
conv2d_40 (Conv2D)              (None, 1, 1, 960)    231360      re_lu_12[0][0]                   
__________________________________________________________________________________________________
activation_23 (Activation)      (None, 1, 1, 960)    0           conv2d_40[0][0]                  
__________________________________________________________________________________________________
multiply_6 (Multiply)           (None, 7, 7, 960)    0           tf.math.multiply_16[0][0]        
                                                                 activation_23[0][0]              
__________________________________________________________________________________________________
conv2d_41 (Conv2D)              (None, 7, 7, 160)    153600      multiply_6[0][0]                 
__________________________________________________________________________________________________
batch_normalization_41 (BatchNo (None, 7, 7, 160)    640         conv2d_41[0][0]                  
__________________________________________________________________________________________________
conv2d_42 (Conv2D)              (None, 7, 7, 960)    153600      batch_normalization_41[0][0]     
__________________________________________________________________________________________________
batch_normalization_42 (BatchNo (None, 7, 7, 960)    3840        conv2d_42[0][0]                  
__________________________________________________________________________________________________
activation_24 (Activation)      (None, 7, 7, 960)    0           batch_normalization_42[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_17 (TFOpLambda (None, 7, 7, 960)    0           batch_normalization_42[0][0]     
                                                                 activation_24[0][0]              
__________________________________________________________________________________________________
depthwise_conv2d_14 (DepthwiseC (None, 7, 7, 960)    24000       tf.math.multiply_17[0][0]        
__________________________________________________________________________________________________
batch_normalization_43 (BatchNo (None, 7, 7, 960)    3840        depthwise_conv2d_14[0][0]        
__________________________________________________________________________________________________
activation_25 (Activation)      (None, 7, 7, 960)    0           batch_normalization_43[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_18 (TFOpLambda (None, 7, 7, 960)    0           batch_normalization_43[0][0]     
                                                                 activation_25[0][0]              
__________________________________________________________________________________________________
global_average_pooling2d_7 (Glo (None, 960)          0           tf.math.multiply_18[0][0]        
__________________________________________________________________________________________________
reshape_7 (Reshape)             (None, 1, 1, 960)    0           global_average_pooling2d_7[0][0] 
__________________________________________________________________________________________________
conv2d_43 (Conv2D)              (None, 1, 1, 240)    230640      reshape_7[0][0]                  
__________________________________________________________________________________________________
re_lu_13 (ReLU)                 (None, 1, 1, 240)    0           conv2d_43[0][0]                  
__________________________________________________________________________________________________
conv2d_44 (Conv2D)              (None, 1, 1, 960)    231360      re_lu_13[0][0]                   
__________________________________________________________________________________________________
activation_26 (Activation)      (None, 1, 1, 960)    0           conv2d_44[0][0]                  
__________________________________________________________________________________________________
multiply_7 (Multiply)           (None, 7, 7, 960)    0           tf.math.multiply_18[0][0]        
                                                                 activation_26[0][0]              
__________________________________________________________________________________________________
conv2d_45 (Conv2D)              (None, 7, 7, 160)    153600      multiply_7[0][0]                 
__________________________________________________________________________________________________
batch_normalization_44 (BatchNo (None, 7, 7, 160)    640         conv2d_45[0][0]                  
__________________________________________________________________________________________________
conv2d_46 (Conv2D)              (None, 7, 7, 960)    153600      batch_normalization_44[0][0]     
__________________________________________________________________________________________________
batch_normalization_45 (BatchNo (None, 7, 7, 960)    3840        conv2d_46[0][0]                  
__________________________________________________________________________________________________
activation_27 (Activation)      (None, 7, 7, 960)    0           batch_normalization_45[0][0]     
__________________________________________________________________________________________________
tf.math.multiply_19 (TFOpLambda (None, 7, 7, 960)    0           batch_normalization_45[0][0]     
                                                                 activation_27[0][0]              
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 1, 1, 960)    0           tf.math.multiply_19[0][0]        
__________________________________________________________________________________________________
reshape_8 (Reshape)             (None, 1, 1, 960)    0           max_pooling2d[0][0]              
__________________________________________________________________________________________________
conv2d_47 (Conv2D)              (None, 1, 1, 1280)   1230080     reshape_8[0][0]                  
__________________________________________________________________________________________________
activation_28 (Activation)      (None, 1, 1, 1280)   0           conv2d_47[0][0]                  
__________________________________________________________________________________________________
tf.math.multiply_20 (TFOpLambda (None, 1, 1, 1280)   0           conv2d_47[0][0]                  
                                                                 activation_28[0][0]              
__________________________________________________________________________________________________
conv2d_48 (Conv2D)              (None, 1, 1, 1000)   1281000     tf.math.multiply_20[0][0]        
__________________________________________________________________________________________________
flatten (Flatten)               (None, 1000)         0           conv2d_48[0][0]                  
==================================================================================================
Total params: 5,505,598
Trainable params: 5,481,198
Non-trainable params: 24,400
__________________________________________________________________________________________________
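The parameter counts in the summary above can be verified by hand: a 1x1 `Conv2D` with bias has `c_in*c_out + c_out` parameters, a `DepthwiseConv2D` has `k*k*c_in` (no bias in this implementation), and the SE squeeze layer uses `c/4` channels as described earlier. The paired `Activation` + `tf.math.multiply` layers are the elementwise product in h-swish, `x * ReLU6(x+3)/6`. A minimal pure-Python sanity check (the helper names `conv_params` / `dw_params` / `h_swish` are illustrative, not part of the original code):

```python
def conv_params(c_in, c_out, k=1, bias=True):
    """Parameters of a standard Conv2D: k*k*c_in*c_out (+ c_out bias)."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def dw_params(c_in, k=3):
    """Parameters of a DepthwiseConv2D: k*k*c_in (no bias here)."""
    return k * k * c_in

# Figures taken directly from the summary above:
assert dw_params(200) == 1800              # depthwise_conv2d_7 (3x3)
assert dw_params(672, k=5) == 16800        # depthwise_conv2d_12 (5x5 stage)
assert conv_params(480, 120) == 57720      # conv2d_27 (SE squeeze, 480/4 = 120)
assert conv_params(120, 480) == 58080      # conv2d_28 (SE excite)
assert conv_params(960, 1280) == 1230080   # conv2d_47 (head 1x1 conv)
assert conv_params(1280, 1000) == 1281000  # conv2d_48 (classifier)

# h-swish: x * ReLU6(x + 3) / 6
def h_swish(x):
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0

assert h_swish(0.0) == 0.0
assert h_swish(3.0) == 3.0   # ReLU6 saturates: 3 * 6/6
assert h_swish(-3.0) == 0.0  # ReLU6 cuts off: -3 * 0/6
print("all checks passed")
```

The two depthwise lines also confirm the mixed kernel sizes in MobileNetV3-Large: 3x3 in the early bneck blocks, 5x5 in the deeper 14x14 and 7x7 stages.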
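The run of layers from `global_average_pooling2d_3` through `multiply_3` in the summary is one complete SE module: global average pool, a squeeze 1x1 conv with ReLU, an excite 1x1 conv with hard_sigmoid, then a channel-wise multiply with the original feature map. A hypothetical NumPy sketch of that computation (random weights for illustration only, biases omitted for brevity):

```python
import numpy as np

def hard_sigmoid(x):
    # ReLU6(x + 3) / 6
    return np.clip(x + 3.0, 0.0, 6.0) / 6.0

def se_block(x, w1, w2):
    """SE attention on an (h, w, c) feature map.

    w1: (c, c//4) squeeze weights, w2: (c//4, c) excite weights.
    """
    s = x.mean(axis=(0, 1))          # global average pool  -> (c,)
    s = np.maximum(s @ w1, 0.0)      # 1x1 conv + ReLU      -> (c//4,)
    s = hard_sigmoid(s @ w2)         # 1x1 conv + h_sigmoid -> (c,)
    return x * s                     # reweight each channel

rng = np.random.default_rng(0)
x = rng.normal(size=(14, 14, 480))           # matches tf.math.multiply_10's output
w1 = rng.normal(size=(480, 120)) * 0.05      # matches conv2d_27
w2 = rng.normal(size=(120, 480)) * 0.05      # matches conv2d_28
y = se_block(x, w1, w2)
assert y.shape == x.shape                    # SE never changes the shape
```

Because `hard_sigmoid` maps every channel score into [0, 1], the multiply can only scale channels down or pass them through, which is exactly the "important feature maps get larger weights" behavior described in section 1.3.1.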