Deep Learning: Theory and Practice

林嶔 (Lin, Chin)

Lesson 7: Modern Network Design and Logic Visualization

Preface

– The vanishing gradient problem seems to have been solved by Residual Learning, but is it really that simple?

– The weight initialization problem is tied to local minima; at present, aside from the choice of optimizer and its parameters, transfer feature learning is the only tool available, so "finding a good transfer feature learning method" matters a great deal!

– There are plenty of methods for dealing with overfitting, but its root cause is that the "number of parameters to solve" far exceeds the "amount of data". Could we then design a model with a "small parameter count" that is nonetheless complex (deep) enough?

– Let's follow in the footsteps of the ILSVRC and see how people made breakthroughs step by step. These are the championship models of past ILSVRC competitions; as time goes on, we will see how the design philosophy of Model Architecture evolved!

F01

Section 1: The Evolution of Classic Neural Networks (1)

F10

AlexNet

| Operator | Kernel | Stride | Filter | Group | Input size | Parameter size |
|---|---|---|---|---|---|---|
| CONV + ReLU + LRN | 11 | 4 | 96 | 2 | 224 * 224 * 3 | 11 * 11 * 3 * 96 / 2 ~ 17K |
| Max Pool | 3 | 2 | | | 56 * 56 * 96 | |
| CONV + ReLU + LRN | 5 | 1 | 256 | 2 | 28 * 28 * 96 | 5 * 5 * 96 * 256 / 2 ~ 307K |
| Max Pool | 3 | 2 | | | 28 * 28 * 256 | |
| CONV + ReLU | 3 | 1 | 384 | 1 | 14 * 14 * 256 | 3 * 3 * 256 * 384 / 1 ~ 884K |
| CONV + ReLU | 3 | 1 | 384 | 2 | 14 * 14 * 384 | 3 * 3 * 384 * 384 / 2 ~ 664K |
| CONV + ReLU | 3 | 1 | 256 | 2 | 12 * 12 * 384 | 3 * 3 * 384 * 256 / 2 ~ 442K |
| Max Pool | 3 | 2 | | | 12 * 12 * 256 | |
| FC + ReLU | | | 4096 | | 6 * 6 * 256 | 6 * 6 * 256 * 4096 ~ 37749K |
| FC + ReLU | | | 4096 | | 4096 | 4096 * 4096 ~ 16777K |
| FC + Softmax | | | 1000 | | 4096 | 4096 * 1000 ~ 4096K |
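The parameter counts in the rightmost column follow from the grouped-convolution formula k × k × C_in × C_out / groups. A quick sanity check (a sketch only; layer shapes are taken from the table above, biases ignored):

```python
def conv_params(k, c_in, c_out, groups=1):
    """Weights of a (possibly grouped) k x k convolution, biases ignored."""
    return k * k * c_in * c_out // groups

print(conv_params(11, 3, 96, groups=2))    # 17424    (~17K)
print(conv_params(5, 96, 256, groups=2))   # 307200   (~307K)
print(conv_params(3, 256, 384))            # 884736   (~884K)
print(conv_params(3, 384, 384, groups=2))  # 663552   (~664K)
print(conv_params(3, 384, 256, groups=2))  # 442368   (~442K)
print(6 * 6 * 256 * 4096)                  # 37748736 (~37749K)
```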

Section 1: The Evolution of Classic Neural Networks (2)

F03

Section 1: The Evolution of Classic Neural Networks (3)

F11

  1. ReLU as the non-linear activation function – this partially alleviated the vanishing gradient problem and allowed the network to reach a total depth of 8 layers

  2. Dropout – arguably the most innovative point of the whole study, mitigating the harm of overfitting to a certain extent

  3. Overlapping max pooling – a new idea, though not particularly hard to implement

  4. Data augmentation – the augmentation pipeline used in this study is remarkably complete even by today's standards, including cropping, rotation, flipping, scaling, ZCA whitening, and a series of other operations

  5. GPU acceleration for training deep convolutional networks – at the time this was a high-barrier direction nobody had thought of, and it effectively sped up neural network training

  6. The Local Response Normalization (LRN) layer – a mechanism mimicking the biological phenomenon in which strongly firing neurons suppress the weaker signals of their neighbors; later research, however, showed it to be of little use
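Although LRN later fell out of use, the operation itself is simple: each activation is divided by a term that grows with the squared activations of its neighbouring channels. A minimal numpy sketch, with the window size and constants (n, k, α, β) set to the AlexNet paper's defaults:

```python
import numpy as np

def lrn(x, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """Local Response Normalization across channels.
    x: array of shape (channels, height, width)."""
    C = x.shape[0]
    out = np.empty_like(x)
    for i in range(C):
        # neighbouring channels within a window of size n centred on channel i
        lo, hi = max(0, i - n // 2), min(C, i + n // 2 + 1)
        denom = (k + alpha * np.sum(x[lo:hi] ** 2, axis=0)) ** beta
        out[i] = x[i] / denom
    return out

x = np.random.randn(8, 4, 4).astype(np.float32)
y = lrn(x)
print(y.shape)  # (8, 4, 4)
```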

Section 1: The Evolution of Classic Neural Networks (4)

– Earlier research had long since shown that non-linear structures help model more complex functions, so we must add more non-linear structure inside the network to aid prediction!

F12

F13

– The most important contribution of this study was to propose, and experimentally demonstrate, the benefits of the 1×1 convolution kernel; later networks made heavy use of this idea in their Model Architecture. Today, most state-of-the-art networks even use more 1×1 kernels than kernels of any other size!

Section 1: The Evolution of Classic Neural Networks (5)

– Looking back at AlexNet's structure, do you find it hard to tell where an 11×11 kernel should go and where a 3×3 or a 5×5 belongs? This made the choice of Model Architecture far too open-ended, so they ran an important experiment to settle the question.

F14

F15

F16

– By comparing the six networks above, this study taught us several important points for future Model Architecture design:

  1. Deeper networks usually perform better

  2. 1×1 convolution kernels also significantly improve performance (matching the earlier study's conclusion)

  3. The Local Response Normalization layer does little to improve network performance

Section 1: The Evolution of Classic Neural Networks (6)

  1. Weight initialization – since they could not obtain a dataset larger than ImageNet, they did not address this problem

  2. Vanishing gradients – they used ReLU as the non-linear activation function; given the network was only 8 layers deep, this was not much of a problem

  3. Overfitting – the network had a total of 61 million parameters to solve, while ImageNet offered only 1.3 million training images, so they used extremely elaborate data augmentation to avoid overfitting, and also designed Dropout to tackle the same problem

  1. The number of parameters to solve seems a bit high, and it is concentrated in the fully connected layers (~97%); since convolutional networks succeed at image recognition because they resemble the biological visual system, we should emphasize stacking convolutional layers and abandon the fully connected ones

– Accordingly, by the time GoogleNet (Inception net v1) took the championship in 2014, the network had abandoned fully connected layers entirely, extracting image features with convolutional layers alone (see Going Deeper with Convolutions), and the parameter count dropped dramatically
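The "~97%" figure is simple arithmetic over AlexNet's layer shapes. A quick check (weights only, biases ignored; shapes as in the earlier table):

```python
# Convolution weights: k*k*C_in*C_out/groups, per AlexNet's layer table
conv = (11*11*3*96//2 + 5*5*96*256//2 + 3*3*256*384
        + 3*3*384*384//2 + 3*3*384*256//2)

# Fully connected weights
fc = 6*6*256*4096 + 4096*4096 + 4096*1000

total = conv + fc
print(total)       # about 61 million parameters in total
print(fc / total)  # the FC layers hold roughly 96% of them
```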

  1. AlexNet used convolution kernels of many different sizes, which made developing such networks quite difficult, so this problem had to be properly solved

– Through the efforts of Network In Network and Very Deep Convolutional Networks for Large-Scale Image Recognition, it was essentially settled that future kernel sizes would mainly be 3×3 and 1×1

– The most important idea put forward by the Network In Network study is that the convolution operation should be turned from its original linear transformation into an MLP-like structure.
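The equivalence is easy to see numerically: a 1×1 convolution applies the same linear map to the channel vector at every spatial position, so stacking 1×1 convolutions with activations in between acts like a small MLP slid across the image. A sketch with plain numpy (the weight layout is chosen for clarity, not matching any framework):

```python
import numpy as np

def conv1x1(x, w):
    """x: (C_in, H, W); w: (C_out, C_in). Same as a per-pixel FC layer."""
    C_in, H, W = x.shape
    # flatten spatial positions, apply the linear map to every channel vector
    return (w @ x.reshape(C_in, H * W)).reshape(w.shape[0], H, W)

x = np.random.randn(16, 8, 8)
w1, w2 = np.random.randn(32, 16), np.random.randn(8, 32)

# two 1x1 convolutions with ReLU in between = a 2-layer MLP at each pixel
h = np.maximum(conv1x1(x, w1), 0)
y = conv1x1(h, w2)
print(y.shape)  # (8, 8, 8)
```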

Section 1: The Evolution of Classic Neural Networks (7)

F17

F18

– Thanks to the important contribution of Network In Network described above, the final Inception Module adds a 1×1 convolution layer on each branch to achieve its non-linear fitting goal:

F19

Section 1: The Evolution of Classic Neural Networks (8)

– One more point deserves special attention: GoogleNet did not merely drop the fully connected layers; more importantly, it used 1×1 convolution kernels to compress the parameters:

– This is the original version of the Inception module; let's try to work out how many parameters it needs:

F04

  1. First the 1×1 branch: it needs 1×1×256×128 parameters

  2. Next the 3×3 branch: it needs 3×3×256×192 parameters

  3. Next the 5×5 branch: it needs 5×5×256×96 parameters
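Summing the three branches gives the total for the naive module (the pooling branch adds no weights here):

```python
branch_1x1 = 1 * 1 * 256 * 128  # 1x1 branch
branch_3x3 = 3 * 3 * 256 * 192  # 3x3 branch
branch_5x5 = 5 * 5 * 256 * 96   # 5x5 branch

total = branch_1x1 + branch_3x3 + branch_5x5
print(total)  # 1089536
```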

Section 1: The Evolution of Classic Neural Networks (9)

F05

  1. The 1×1 branch is unchanged, so it still needs 1×1×256×128 parameters

  2. Next the 3×3 branch: its first layer needs 1×1×256×64 parameters and its second layer 3×3×64×192

  3. Next the 5×5 branch: its first layer needs 1×1×256×64 parameters and its second layer 5×5×64×96

  4. Finally the Pooling branch: it needs 1×1×256×64 parameters

– This structure is called a bottleneck, because the number of feature maps goes from many to few and then back from few to many; it was later used extensively to reduce the number of parameters to solve.

– Note that compressing too much hurts network accuracy; in general, bottleneck structures rarely compress by more than a factor of 4 (here it is about a factor of 3)
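Summing the four branches of the bottleneck version and comparing against the naive module confirms the roughly three-fold compression:

```python
# naive module (previous slide)
naive = 1*1*256*128 + 3*3*256*192 + 5*5*256*96   # 1089536

# bottleneck version: 1x1 reductions before the 3x3 / 5x5 kernels
bottleneck = (1*1*256*128                 # 1x1 branch, unchanged
              + 1*1*256*64 + 3*3*64*192  # 3x3 branch
              + 1*1*256*64 + 5*5*64*96   # 5x5 branch
              + 1*1*256*64)              # pooling projection

print(bottleneck)          # 346112
print(naive / bottleneck)  # ~3.1x fewer parameters
```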

Section 1: The Evolution of Classic Neural Networks (10)

– The network that won the 2014 championship is therefore called Inception v1 net, and in 2015 the Google team went on to develop Inception v2 net and Inception v3 net.

F20

– Since a large body of research confirmed the powerful effect of Batch Normalization, the traditional convolution unit CONV+ReLU turned into CONV+BN+ReLU (or BN+ReLU+CONV).

F21

– It is worth mentioning that, through this series of refinements, networks grew ever deeper while the number of parameters to solve kept shrinking, so from this point on Dropout was rarely used; when it was, it appeared at most just before the final output layer.

– After developing Inception v3 net, the Google team entered it in the 2015 ILSVRC; however, since the research advance was relatively small, and that same competition saw the greatest breakthrough in the history of deep learning to date, it was swallowed by the river of history…

Section 1: The Evolution of Classic Neural Networks (11)

– More important than winning the competition and officially surpassing human performance is that, for the first time, the vanishing gradient problem was truly solved: the Residual Learning they developed successfully trained a network 1000 layers deep, at a time when almost no other team could train one deeper than 50 layers.

– This bombshell of a study, Deep Residual Learning for Image Recognition, eagerly anticipated by everyone, was published at CVPR 2016 and, unsurprisingly, won the conference's Best Paper Award:

Fig/F22

F06

F07

Section 1: The Evolution of Classic Neural Networks (12)

– The later DenseNet contributed another important idea: could we use shallow features to predict easy images and reserve the deeper layers for hard ones? The details can be found in the paper Densely Connected Convolutional Networks

F08

F09

– Consider a ResNet with a bottleneck: the input first passes through a 1×1 kernel, usually compressing four-fold, so the parameter count is 1×1×1024×256; next comes a 3×3 kernel with 3×3×256×256 parameters; finally another 1×1 kernel, which must restore the original input dimension, with 1×1×256×1024 parameters. The whole Module needs 1114112 parameters!

– DenseNet, on the other hand, fixes the output at 16 dimensions, and in general a 1×1 kernel comes first. Let's count the parameters: the first 1×1 kernel usually has four times the 3×3 dimension, giving 1×1×1024×64 parameters; then the 3×3 kernel needs 3×3×64×16. The whole Module needs only 74752 parameters!
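Both counts are pure arithmetic and can be checked directly:

```python
# ResNet bottleneck block on a 1024-channel input, 4x compression
resnet = 1*1*1024*256 + 3*3*256*256 + 1*1*256*1024
print(resnet)   # 1114112

# DenseNet block: 1x1 reduction to 4*16 channels, then 3x3 down to 16
densenet = 1*1*1024*64 + 3*3*64*16
print(densenet) # 74752

print(resnet / densenet)  # DenseNet's block is ~15x smaller
```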

Section 1: The Evolution of Classic Neural Networks (13)

  1. At a comparable total parameter count: deep + narrow > shallow + wide

  2. 1×1 kernels not only add non-linear expressive power to the convolutions, they can also be used to compress parameters

  3. A network is usually built by repeating a Module throughout its middle, so the key becomes designing a good Module

  4. Connections between Modules must use a Residual Connection or a Dense Connection to avoid vanishing gradients

  5. Use convolutional layers throughout the network wherever possible, rather than fully connected layers and Pooling

  6. The number of features must grow from few to many as the layers stack up

  7. Finally, don't forget the Squeeze-and-Excitation mechanism from the previous lesson, which can boost model performance substantially at the cost of very few extra parameters

– The accuracy of large networks is therefore beyond doubt, provided you have a large enough dataset plus a deep enough network.

– The question now is whether we can compress the network's parameter count even further. After all, large networks can only run in a cloud environment, which is impractical in many application scenarios, so let's look at how to reduce the parameter count even more!

Exercise 1: Reproducing a Classic Model Architecture (1)

– Download the two files resnet-18 .params and resnet-18 symbol; this is the 18-layer deep neural network mentioned earlier, trained using Kaiming He's Residual Learning research

– Also download chinese synset.txt, which describes the 1000 classes this model outputs.

– Let's try loading it

library(mxnet)

res_model <- mx.model.load("model/resnet-18", 0)
synsets <- readLines('model/chinese synset.txt', encoding = 'UTF-8')
library(OpenImageR)

img <- readImage('test.jpg')
resized_img <- resizeImage(img, 224, 224, method = 'bilinear')

imageShow(resized_img)

– Let's run a prediction; note that the image must first be reshaped into the dimensions MxNet accepts:

dim(resized_img) <- c(dim(resized_img), 1)
pred_prob  <- predict(res_model, resized_img)

pred_prob <- as.numeric(pred_prob)
names(pred_prob) <- synsets
pred_prob <- sort(pred_prob, decreasing = TRUE)
pred_prob <- formatC(pred_prob, 4, format = 'f')
head(pred_prob, 5)
## n01484850 大白鯊   n01491361 虎鯊 n02807133 游泳帽   n02640242 鱘魚 
##         "0.9206"         "0.0322"         "0.0272"         "0.0091" 
##   n02641379 雀鱔 
##         "0.0038"

Exercise 1: Reproducing a Classic Model Architecture (2)

– If you succeed, replacing the "symbol" in res_model should give exactly the same answer!

Fig/F23

Exercise 1 Answer (1)

# Model Architecture

# 224×224

data <- mx.symbol.Variable(name = 'data')
bn_data <- mx.symbol.BatchNorm(data = data, eps = "2e-05", name = 'bn_data')

# 112×112

conv0 <- mx.symbol.Convolution(data = bn_data, no_bias = TRUE, name = 'conv0',
                               kernel = c(7, 7), pad = c(3, 3), stride = c(2, 2), num_filter = 64)
bn0 <- mx.symbol.BatchNorm(data = conv0, fix_gamma = FALSE, eps = "2e-05", name = 'bn0')
relu0 <- mx.symbol.Activation(data = bn0, act_type = "relu", name = 'relu0')

# 56×56

# stage1_unit1

pooling0 <- mx.symbol.Pooling(data = relu0, pool_type = "max", name = 'pooling0',
                              kernel = c(3, 3), pad = c(1, 1), stride = c(2, 2))
stage1_unit1_bn1 <- mx.symbol.BatchNorm(data = pooling0, fix_gamma = FALSE, eps = "2e-05", name = 'stage1_unit1_bn1')
stage1_unit1_relu1 <- mx.symbol.Activation(data = stage1_unit1_bn1, act_type = "relu", name = 'stage1_unit1_relu1')
stage1_unit1_conv1 <- mx.symbol.Convolution(data = stage1_unit1_relu1, no_bias = TRUE, name = 'stage1_unit1_conv1',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 64)
stage1_unit1_bn2 <- mx.symbol.BatchNorm(data = stage1_unit1_conv1, fix_gamma = FALSE, eps = "2e-05", name = 'stage1_unit1_bn2')
stage1_unit1_relu2 <- mx.symbol.Activation(data = stage1_unit1_bn2, act_type = "relu", name = 'stage1_unit1_relu2')
stage1_unit1_conv2 <- mx.symbol.Convolution(data = stage1_unit1_relu2, no_bias = TRUE, name = 'stage1_unit1_conv2',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 64)

stage1_unit1_sc <- mx.symbol.Convolution(data = stage1_unit1_relu1, no_bias = TRUE, name = 'stage1_unit1_sc',
                                         kernel = c(1, 1), pad = c(0, 0), stride = c(1, 1), num_filter = 64)

elemwise_add_plus0 <- mx.symbol.broadcast_plus(lhs = stage1_unit1_conv2, rhs = stage1_unit1_sc, name = 'elemwise_add_plus0')

# stage1_unit2

stage1_unit2_bn1 <- mx.symbol.BatchNorm(data = elemwise_add_plus0, fix_gamma = FALSE, eps = "2e-05", name = 'stage1_unit2_bn1')
stage1_unit2_relu1 <- mx.symbol.Activation(data = stage1_unit2_bn1, act_type = "relu", name = 'stage1_unit2_relu1')
stage1_unit2_conv1 <- mx.symbol.Convolution(data = stage1_unit2_relu1, no_bias = TRUE, name = 'stage1_unit2_conv1',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 64)
stage1_unit2_bn2 <- mx.symbol.BatchNorm(data = stage1_unit2_conv1, fix_gamma = FALSE, eps = "2e-05", name = 'stage1_unit2_bn2')
stage1_unit2_relu2 <- mx.symbol.Activation(data = stage1_unit2_bn2, act_type = "relu", name = 'stage1_unit2_relu2')
stage1_unit2_conv2 <- mx.symbol.Convolution(data = stage1_unit2_relu2, no_bias = TRUE, name = 'stage1_unit2_conv2',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 64)

elemwise_add_plus1 <- mx.symbol.broadcast_plus(lhs = stage1_unit2_conv2, rhs = elemwise_add_plus0, name = 'elemwise_add_plus1')

# 28×28

# stage2_unit1

stage2_unit1_bn1 <- mx.symbol.BatchNorm(data = elemwise_add_plus1, fix_gamma = FALSE, eps = "2e-05", name = 'stage2_unit1_bn1')
stage2_unit1_relu1 <- mx.symbol.Activation(data = stage2_unit1_bn1, act_type = "relu", name = 'stage2_unit1_relu1')
stage2_unit1_conv1 <- mx.symbol.Convolution(data = stage2_unit1_relu1, no_bias = TRUE, name = 'stage2_unit1_conv1',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(2, 2), num_filter = 128)
stage2_unit1_bn2 <- mx.symbol.BatchNorm(data = stage2_unit1_conv1, fix_gamma = FALSE, eps = "2e-05", name = 'stage2_unit1_bn2')
stage2_unit1_relu2 <- mx.symbol.Activation(data = stage2_unit1_bn2, act_type = "relu", name = 'stage2_unit1_relu2')
stage2_unit1_conv2 <- mx.symbol.Convolution(data = stage2_unit1_relu2, no_bias = TRUE, name = 'stage2_unit1_conv2',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 128)

stage2_unit1_sc <- mx.symbol.Convolution(data = stage2_unit1_relu1, no_bias = TRUE, name = 'stage2_unit1_sc',
                                         kernel = c(1, 1), pad = c(0, 0), stride = c(2, 2), num_filter = 128)

elemwise_add_plus2 <- mx.symbol.broadcast_plus(lhs = stage2_unit1_conv2, rhs = stage2_unit1_sc, name = 'elemwise_add_plus2')

# stage2_unit2

stage2_unit2_bn1 <- mx.symbol.BatchNorm(data = elemwise_add_plus2, fix_gamma = FALSE, eps = "2e-05", name = 'stage2_unit2_bn1')
stage2_unit2_relu1 <- mx.symbol.Activation(data = stage2_unit2_bn1, act_type = "relu", name = 'stage2_unit2_relu1')
stage2_unit2_conv1 <- mx.symbol.Convolution(data = stage2_unit2_relu1, no_bias = TRUE, name = 'stage2_unit2_conv1',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 128)
stage2_unit2_bn2 <- mx.symbol.BatchNorm(data = stage2_unit2_conv1, fix_gamma = FALSE, eps = "2e-05", name = 'stage2_unit2_bn2')
stage2_unit2_relu2 <- mx.symbol.Activation(data = stage2_unit2_bn2, act_type = "relu", name = 'stage2_unit2_relu2')
stage2_unit2_conv2 <- mx.symbol.Convolution(data = stage2_unit2_relu2, no_bias = TRUE, name = 'stage2_unit2_conv2',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 128)

elemwise_add_plus3 <- mx.symbol.broadcast_plus(lhs = stage2_unit2_conv2, rhs = elemwise_add_plus2, name = 'elemwise_add_plus3')

# 14×14

# stage3_unit1

stage3_unit1_bn1 <- mx.symbol.BatchNorm(data = elemwise_add_plus3, fix_gamma = FALSE, eps = "2e-05", name = 'stage3_unit1_bn1')
stage3_unit1_relu1 <- mx.symbol.Activation(data = stage3_unit1_bn1, act_type = "relu", name = 'stage3_unit1_relu1')
stage3_unit1_conv1 <- mx.symbol.Convolution(data = stage3_unit1_relu1, no_bias = TRUE, name = 'stage3_unit1_conv1',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(2, 2), num_filter = 256)
stage3_unit1_bn2 <- mx.symbol.BatchNorm(data = stage3_unit1_conv1, fix_gamma = FALSE, eps = "2e-05", name = 'stage3_unit1_bn2')
stage3_unit1_relu2 <- mx.symbol.Activation(data = stage3_unit1_bn2, act_type = "relu", name = 'stage3_unit1_relu2')
stage3_unit1_conv2 <- mx.symbol.Convolution(data = stage3_unit1_relu2, no_bias = TRUE, name = 'stage3_unit1_conv2',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 256)

stage3_unit1_sc <- mx.symbol.Convolution(data = stage3_unit1_relu1, no_bias = TRUE, name = 'stage3_unit1_sc',
                                         kernel = c(1, 1), pad = c(0, 0), stride = c(2, 2), num_filter = 256)

elemwise_add_plus4 <- mx.symbol.broadcast_plus(lhs = stage3_unit1_conv2, rhs = stage3_unit1_sc, name = 'elemwise_add_plus4')

# stage3_unit2

stage3_unit2_bn1 <- mx.symbol.BatchNorm(data = elemwise_add_plus4, fix_gamma = FALSE, eps = "2e-05", name = 'stage3_unit2_bn1')
stage3_unit2_relu1 <- mx.symbol.Activation(data = stage3_unit2_bn1, act_type = "relu", name = 'stage3_unit2_relu1')
stage3_unit2_conv1 <- mx.symbol.Convolution(data = stage3_unit2_relu1, no_bias = TRUE, name = 'stage3_unit2_conv1',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 256)
stage3_unit2_bn2 <- mx.symbol.BatchNorm(data = stage3_unit2_conv1, fix_gamma = FALSE, eps = "2e-05", name = 'stage3_unit2_bn2')
stage3_unit2_relu2 <- mx.symbol.Activation(data = stage3_unit2_bn2, act_type = "relu", name = 'stage3_unit2_relu2')
stage3_unit2_conv2 <- mx.symbol.Convolution(data = stage3_unit2_relu2, no_bias = TRUE, name = 'stage3_unit2_conv2',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 256)

elemwise_add_plus5 <- mx.symbol.broadcast_plus(lhs = stage3_unit2_conv2, rhs = elemwise_add_plus4, name = 'elemwise_add_plus5')

# 7×7

# stage4_unit1

stage4_unit1_bn1 <- mx.symbol.BatchNorm(data = elemwise_add_plus5, fix_gamma = FALSE, eps = "2e-05", name = 'stage4_unit1_bn1')
stage4_unit1_relu1 <- mx.symbol.Activation(data = stage4_unit1_bn1, act_type = "relu", name = 'stage4_unit1_relu1')
stage4_unit1_conv1 <- mx.symbol.Convolution(data = stage4_unit1_relu1, no_bias = TRUE, name = 'stage4_unit1_conv1',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(2, 2), num_filter = 512)
stage4_unit1_bn2 <- mx.symbol.BatchNorm(data = stage4_unit1_conv1, fix_gamma = FALSE, eps = "2e-05", name = 'stage4_unit1_bn2')
stage4_unit1_relu2 <- mx.symbol.Activation(data = stage4_unit1_bn2, act_type = "relu", name = 'stage4_unit1_relu2')
stage4_unit1_conv2 <- mx.symbol.Convolution(data = stage4_unit1_relu2, no_bias = TRUE, name = 'stage4_unit1_conv2',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 512)

stage4_unit1_sc <- mx.symbol.Convolution(data = stage4_unit1_relu1, no_bias = TRUE, name = 'stage4_unit1_sc',
                                         kernel = c(1, 1), pad = c(0, 0), stride = c(2, 2), num_filter = 512)

elemwise_add_plus6 <- mx.symbol.broadcast_plus(lhs = stage4_unit1_conv2, rhs = stage4_unit1_sc, name = 'elemwise_add_plus6')

# stage4_unit2

stage4_unit2_bn1 <- mx.symbol.BatchNorm(data = elemwise_add_plus6, fix_gamma = FALSE, eps = "2e-05", name = 'stage4_unit2_bn1')
stage4_unit2_relu1 <- mx.symbol.Activation(data = stage4_unit2_bn1, act_type = "relu", name = 'stage4_unit2_relu1')
stage4_unit2_conv1 <- mx.symbol.Convolution(data = stage4_unit2_relu1, no_bias = TRUE, name = 'stage4_unit2_conv1',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 512)
stage4_unit2_bn2 <- mx.symbol.BatchNorm(data = stage4_unit2_conv1, fix_gamma = FALSE, eps = "2e-05", name = 'stage4_unit2_bn2')
stage4_unit2_relu2 <- mx.symbol.Activation(data = stage4_unit2_bn2, act_type = "relu", name = 'stage4_unit2_relu2')
stage4_unit2_conv2 <- mx.symbol.Convolution(data = stage4_unit2_relu2, no_bias = TRUE, name = 'stage4_unit2_conv2',
                                            kernel = c(3, 3), pad = c(1, 1), stride = c(1, 1), num_filter = 512)

elemwise_add_plus7 <- mx.symbol.broadcast_plus(lhs = stage4_unit2_conv2, rhs = elemwise_add_plus6, name = 'elemwise_add_plus7')

# Final

bn1 <- mx.symbol.BatchNorm(data = elemwise_add_plus7, fix_gamma = FALSE, eps = "2e-05", name = 'bn1')
relu1 <- mx.symbol.Activation(data = bn1, act_type = "relu", name = 'relu1')
pool1 <- mx.symbol.Pooling(data = relu1, pool_type = "avg", name = 'pool1',
                           kernel = c(7, 7), pad = c(0, 0), stride = c(7, 7))
flatten0 <- mx.symbol.Flatten(data = pool1, name = 'flatten0')
fc1 <- mx.symbol.FullyConnected(data = flatten0, num_hidden = 1000, name = 'fc1')
softmax <- mx.symbol.softmax(data = fc1, axis = 1, name = 'softmax')

Exercise 1 Answer (2)

res_model$symbol <- softmax

pred_prob  <- predict(res_model, resized_img)

pred_prob <- as.numeric(pred_prob)
names(pred_prob) <- synsets
pred_prob <- sort(pred_prob, decreasing = TRUE)
pred_prob <- formatC(pred_prob, 4, format = 'f')
head(pred_prob, 5)
## n01484850 大白鯊   n01491361 虎鯊 n02807133 游泳帽   n02640242 鱘魚 
##         "0.9206"         "0.0322"         "0.0272"         "0.0091" 
##   n02641379 雀鱔 
##         "0.0038"

Exercise 2: Reproducing the Inception Module's Inference (1)

– A list of pre-trained models is provided here

– Download the two files Inception-BN params and Inception-BN symbol, and also download chinese synset.txt

library(magrittr)
library(mxnet)

# Load the pre-trained Inception-BN model
inception_model <- mx.model.load("model/Inception-BN", 126)

#Show model architecture
all_layers <- inception_model$symbol$get.internals()
print(all_layers$outputs[grepl('output', all_layers$outputs)])
##   [1] "conv_1_output"                    "bn_1_output"                     
##   [3] "relu_1_output"                    "pool_1_output"                   
##   [5] "conv_2_red_output"                "bn_2_red_output"                 
##   [7] "relu_2_red_output"                "conv_2_output"                   
##   [9] "bn_2_output"                      "relu_2_output"                   
##  [11] "pool_2_output"                    "conv_3a_1x1_output"              
##  [13] "bn_3a_1x1_output"                 "relu_3a_1x1_output"              
##  [15] "conv_3a_3x3_reduce_output"        "bn_3a_3x3_reduce_output"         
##  [17] "relu_3a_3x3_reduce_output"        "conv_3a_3x3_output"              
##  [19] "bn_3a_3x3_output"                 "relu_3a_3x3_output"              
##  [21] "conv_3a_double_3x3_reduce_output" "bn_3a_double_3x3_reduce_output"  
##  [23] "relu_3a_double_3x3_reduce_output" "conv_3a_double_3x3_0_output"     
##  [25] "bn_3a_double_3x3_0_output"        "relu_3a_double_3x3_0_output"     
##  [27] "conv_3a_double_3x3_1_output"      "bn_3a_double_3x3_1_output"       
##  [29] "relu_3a_double_3x3_1_output"      "avg_pool_3a_pool_output"         
##  [31] "conv_3a_proj_output"              "bn_3a_proj_output"               
##  [33] "relu_3a_proj_output"              "ch_concat_3a_chconcat_output"    
##  [35] "conv_3b_1x1_output"               "bn_3b_1x1_output"                
##  [37] "relu_3b_1x1_output"               "conv_3b_3x3_reduce_output"       
##  [39] "bn_3b_3x3_reduce_output"          "relu_3b_3x3_reduce_output"       
##  [41] "conv_3b_3x3_output"               "bn_3b_3x3_output"                
##  [43] "relu_3b_3x3_output"               "conv_3b_double_3x3_reduce_output"
##  [45] "bn_3b_double_3x3_reduce_output"   "relu_3b_double_3x3_reduce_output"
##  [47] "conv_3b_double_3x3_0_output"      "bn_3b_double_3x3_0_output"       
##  [49] "relu_3b_double_3x3_0_output"      "conv_3b_double_3x3_1_output"     
##  [51] "bn_3b_double_3x3_1_output"        "relu_3b_double_3x3_1_output"     
##  [53] "avg_pool_3b_pool_output"          "conv_3b_proj_output"             
##  [55] "bn_3b_proj_output"                "relu_3b_proj_output"             
##  [57] "ch_concat_3b_chconcat_output"     "conv_3c_3x3_reduce_output"       
##  [59] "bn_3c_3x3_reduce_output"          "relu_3c_3x3_reduce_output"       
##  [61] "conv_3c_3x3_output"               "bn_3c_3x3_output"                
##  [63] "relu_3c_3x3_output"               "conv_3c_double_3x3_reduce_output"
##  [65] "bn_3c_double_3x3_reduce_output"   "relu_3c_double_3x3_reduce_output"
##  [67] "conv_3c_double_3x3_0_output"      "bn_3c_double_3x3_0_output"       
##  [69] "relu_3c_double_3x3_0_output"      "conv_3c_double_3x3_1_output"     
##  [71] "bn_3c_double_3x3_1_output"        "relu_3c_double_3x3_1_output"     
##  [73] "max_pool_3c_pool_output"          "ch_concat_3c_chconcat_output"    
##  [75] "conv_4a_1x1_output"               "bn_4a_1x1_output"                
##  [77] "relu_4a_1x1_output"               "conv_4a_3x3_reduce_output"       
##  [79] "bn_4a_3x3_reduce_output"          "relu_4a_3x3_reduce_output"       
##  [81] "conv_4a_3x3_output"               "bn_4a_3x3_output"                
##  [83] "relu_4a_3x3_output"               "conv_4a_double_3x3_reduce_output"
##  [85] "bn_4a_double_3x3_reduce_output"   "relu_4a_double_3x3_reduce_output"
##  [87] "conv_4a_double_3x3_0_output"      "bn_4a_double_3x3_0_output"       
##  [89] "relu_4a_double_3x3_0_output"      "conv_4a_double_3x3_1_output"     
##  [91] "bn_4a_double_3x3_1_output"        "relu_4a_double_3x3_1_output"     
##  [93] "avg_pool_4a_pool_output"          "conv_4a_proj_output"             
##  [95] "bn_4a_proj_output"                "relu_4a_proj_output"             
##  [97] "ch_concat_4a_chconcat_output"     "conv_4b_1x1_output"              
##  [99] "bn_4b_1x1_output"                 "relu_4b_1x1_output"              
## [101] "conv_4b_3x3_reduce_output"        "bn_4b_3x3_reduce_output"         
## [103] "relu_4b_3x3_reduce_output"        "conv_4b_3x3_output"              
## [105] "bn_4b_3x3_output"                 "relu_4b_3x3_output"              
## [107] "conv_4b_double_3x3_reduce_output" "bn_4b_double_3x3_reduce_output"  
## [109] "relu_4b_double_3x3_reduce_output" "conv_4b_double_3x3_0_output"     
## [111] "bn_4b_double_3x3_0_output"        "relu_4b_double_3x3_0_output"     
## [113] "conv_4b_double_3x3_1_output"      "bn_4b_double_3x3_1_output"       
## [115] "relu_4b_double_3x3_1_output"      "avg_pool_4b_pool_output"         
## [117] "conv_4b_proj_output"              "bn_4b_proj_output"               
## [119] "relu_4b_proj_output"              "ch_concat_4b_chconcat_output"    
## [121] "conv_4c_1x1_output"               "bn_4c_1x1_output"                
## [123] "relu_4c_1x1_output"               "conv_4c_3x3_reduce_output"       
## [125] "bn_4c_3x3_reduce_output"          "relu_4c_3x3_reduce_output"       
## [127] "conv_4c_3x3_output"               "bn_4c_3x3_output"                
## [129] "relu_4c_3x3_output"               "conv_4c_double_3x3_reduce_output"
## [131] "bn_4c_double_3x3_reduce_output"   "relu_4c_double_3x3_reduce_output"
## [133] "conv_4c_double_3x3_0_output"      "bn_4c_double_3x3_0_output"       
## [135] "relu_4c_double_3x3_0_output"      "conv_4c_double_3x3_1_output"     
## [137] "bn_4c_double_3x3_1_output"        "relu_4c_double_3x3_1_output"     
## [139] "avg_pool_4c_pool_output"          "conv_4c_proj_output"             
## [141] "bn_4c_proj_output"                "relu_4c_proj_output"             
## [143] "ch_concat_4c_chconcat_output"     "conv_4d_1x1_output"              
## [145] "bn_4d_1x1_output"                 "relu_4d_1x1_output"              
## [147] "conv_4d_3x3_reduce_output"        "bn_4d_3x3_reduce_output"         
## [149] "relu_4d_3x3_reduce_output"        "conv_4d_3x3_output"              
## [151] "bn_4d_3x3_output"                 "relu_4d_3x3_output"              
## [153] "conv_4d_double_3x3_reduce_output" "bn_4d_double_3x3_reduce_output"  
## [155] "relu_4d_double_3x3_reduce_output" "conv_4d_double_3x3_0_output"     
## [157] "bn_4d_double_3x3_0_output"        "relu_4d_double_3x3_0_output"     
## [159] "conv_4d_double_3x3_1_output"      "bn_4d_double_3x3_1_output"       
## [161] "relu_4d_double_3x3_1_output"      "avg_pool_4d_pool_output"         
## [163] "conv_4d_proj_output"              "bn_4d_proj_output"               
## [165] "relu_4d_proj_output"              "ch_concat_4d_chconcat_output"    
## [167] "conv_4e_3x3_reduce_output"        "bn_4e_3x3_reduce_output"         
## [169] "relu_4e_3x3_reduce_output"        "conv_4e_3x3_output"              
## [171] "bn_4e_3x3_output"                 "relu_4e_3x3_output"              
## [173] "conv_4e_double_3x3_reduce_output" "bn_4e_double_3x3_reduce_output"  
## [175] "relu_4e_double_3x3_reduce_output" "conv_4e_double_3x3_0_output"     
## [177] "bn_4e_double_3x3_0_output"        "relu_4e_double_3x3_0_output"     
## [179] "conv_4e_double_3x3_1_output"      "bn_4e_double_3x3_1_output"       
## [181] "relu_4e_double_3x3_1_output"      "max_pool_4e_pool_output"         
## [183] "ch_concat_4e_chconcat_output"     "conv_5a_1x1_output"              
## [185] "bn_5a_1x1_output"                 "relu_5a_1x1_output"              
## [187] "conv_5a_3x3_reduce_output"        "bn_5a_3x3_reduce_output"         
## [189] "relu_5a_3x3_reduce_output"        "conv_5a_3x3_output"              
## [191] "bn_5a_3x3_output"                 "relu_5a_3x3_output"              
## [193] "conv_5a_double_3x3_reduce_output" "bn_5a_double_3x3_reduce_output"  
## [195] "relu_5a_double_3x3_reduce_output" "conv_5a_double_3x3_0_output"     
## [197] "bn_5a_double_3x3_0_output"        "relu_5a_double_3x3_0_output"     
## [199] "conv_5a_double_3x3_1_output"      "bn_5a_double_3x3_1_output"       
## [201] "relu_5a_double_3x3_1_output"      "avg_pool_5a_pool_output"         
## [203] "conv_5a_proj_output"              "bn_5a_proj_output"               
## [205] "relu_5a_proj_output"              "ch_concat_5a_chconcat_output"    
## [207] "conv_5b_1x1_output"               "bn_5b_1x1_output"                
## [209] "relu_5b_1x1_output"               "conv_5b_3x3_reduce_output"       
## [211] "bn_5b_3x3_reduce_output"          "relu_5b_3x3_reduce_output"       
## [213] "conv_5b_3x3_output"               "bn_5b_3x3_output"                
## [215] "relu_5b_3x3_output"               "conv_5b_double_3x3_reduce_output"
## [217] "bn_5b_double_3x3_reduce_output"   "relu_5b_double_3x3_reduce_output"
## [219] "conv_5b_double_3x3_0_output"      "bn_5b_double_3x3_0_output"       
## [221] "relu_5b_double_3x3_0_output"      "conv_5b_double_3x3_1_output"     
## [223] "bn_5b_double_3x3_1_output"        "relu_5b_double_3x3_1_output"     
## [225] "max_pool_5b_pool_output"          "conv_5b_proj_output"             
## [227] "bn_5b_proj_output"                "relu_5b_proj_output"             
## [229] "ch_concat_5b_chconcat_output"     "global_pool_output"              
## [231] "flatten_output"                   "fc1_output"                      
## [233] "softmax_output"

Exercise 2: Reproducing the Inception Module's Inference (2)

library(OpenImageR)

img <- readImage('test.jpg')
resized_img <- resizeImage(img, 224, 224, method = 'bilinear')

imageShow(resized_img)

– Let's run a prediction. Pay attention to the preprocessing: casually changing this process will make the network inaccurate:

resized_img[,,1] <- resized_img[,,1] * 255 - 123.68
resized_img[,,2] <- resized_img[,,2] * 255 - 116.78
resized_img[,,3] <- resized_img[,,3] * 255 - 103.94

dim(resized_img) <- c(dim(resized_img), 1)
pred_prob  <- predict(inception_model, resized_img)

pred_prob <- as.numeric(pred_prob)
names(pred_prob) <- synsets
pred_prob <- sort(pred_prob, decreasing = TRUE)
pred_prob <- formatC(pred_prob, 4, format = 'f')
head(pred_prob, 5)
##             n01484850 大白鯊               n03045698 斗篷 
##                     "0.8704"                     "0.0248" 
## n02071294 殺人鯨,逆戟鯨,虎鯨               n03388043 噴泉 
##                     "0.0197"                     "0.0080" 
##         n03916031 香水(瓶) 
##                     "0.0069"

Exercise 2: Reproducing the Inception Module's Inference (3)

– To make it easy to check step by step whether our outputs are correct, we also export conv_3a_1x1_output and bn_3a_1x1_output for testing

# Outputs of interest
pool_2_output <- which(all_layers$outputs == 'pool_2_output') %>% all_layers$get.output()
conv_3a_1x1_output <- which(all_layers$outputs == 'conv_3a_1x1_output') %>% all_layers$get.output()
bn_3a_1x1_output <- which(all_layers$outputs == 'bn_3a_1x1_output') %>% all_layers$get.output()
avg_pool_3a_pool_output <-  which(all_layers$outputs == 'avg_pool_3a_pool_output') %>% all_layers$get.output()
ch_concat_3a_chconcat_output <- which(all_layers$outputs == 'ch_concat_3a_chconcat_output') %>% all_layers$get.output()

#Needed params
my_model <- inception_model
my_model$symbol <- ch_concat_3a_chconcat_output
my_model$arg.params <- my_model$arg.params[names(my_model$arg.params) %in% names(mx.symbol.infer.shape(ch_concat_3a_chconcat_output, data = c(224, 224, 3, 7))$arg.shapes)]
my_model$aux.params <- my_model$aux.params[names(my_model$aux.params) %in% names(mx.symbol.infer.shape(ch_concat_3a_chconcat_output, data = c(224, 224, 3, 7))$aux.shapes)]

#Build executor
out <- mx.symbol.Group(c(pool_2_output, conv_3a_1x1_output, bn_3a_1x1_output, avg_pool_3a_pool_output, ch_concat_3a_chconcat_output))
executor <- mx.simple.bind(symbol = out, data = c(224, 224, 3, 1), ctx = mx.cpu())
mx.exec.update.arg.arrays(executor, my_model$arg.params, match.name = TRUE)
mx.exec.update.aux.arrays(executor, my_model$aux.params, match.name = TRUE)
mx.exec.update.arg.arrays(executor, list(data = mx.nd.array(resized_img)), match.name = TRUE)
mx.exec.forward(executor, is.train = FALSE)
Input <- as.array(executor$ref.outputs$pool_2_output)
check1 <- as.array(executor$ref.outputs$conv_3a_1x1_output)
check2 <- as.array(executor$ref.outputs$bn_3a_1x1_output)
check3 <- as.array(executor$ref.outputs$avg_pool_3a_pool_output)
Output <- as.array(executor$ref.outputs$ch_concat_3a_chconcat_output)

Exercise 2 Answer (1)

– First, let's check our convolution inference function:

CONV_func <- function (input_array, input_weight, input_bias) {
  
  num_size <- dim(input_weight)[1]
  num_filter <- dim(input_weight)[4]
  
  if (num_size > 1) {
    
    original_dim <- dim(input_array)
    pad_size <- (num_size - 1)/2
    original_dim[1:2] <- original_dim[1:2] + pad_size * 2
    input_array_pad <- array(0, dim = original_dim)
    input_array_pad[(pad_size+1):(original_dim[1]-pad_size),(pad_size+1):(original_dim[2]-pad_size),,] <- input_array
    
  } else {
    
    input_array_pad <- input_array
    
  }
  
  out_array <- array(0, dim = c(dim(input_array)[1:2], num_filter, dim(input_array)[4]))
  
  for (l in 1:dim(out_array)[4]) {
    for (k in 1:num_filter) {
      for (j in 1:dim(out_array)[2]) {
        for (i in 1:dim(out_array)[1]) {
          out_array[i,j,k,l] <- sum(input_array_pad[i:(i+num_size-1),j:(j+num_size-1),,l] * input_weight[,,,k]) + input_bias[k]
        }
      }
    }
  }

  return(out_array)

}

my_check1 <- CONV_func(input_array = Input,
                       input_weight = as.array(my_model$arg.params$conv_3a_1x1_weight),
                       input_bias = as.array(my_model$arg.params$conv_3a_1x1_bias))

print(mean(abs(my_check1 - check1)))
## [1] 8.993416e-07

– Next, let's check our batch normalization inference function:

BN_func <- function (input_array, input_mean, input_var, input_gamma, input_beta, eps = 1e-5) {
  
  input_array_norm <- input_array
  
  for (i in 1:length(input_mean)) {
    input_array_norm[,,i,] <- (input_array[,,i,] - input_mean[i])/sqrt(input_var[i] + eps) * input_gamma[i] + input_beta[i]
  }
  
  return(input_array_norm)
  
}

my_check2 <- BN_func(input_array = my_check1,
                     input_mean = as.array(my_model$aux.params$bn_3a_1x1_moving_mean),
                     input_var = as.array(my_model$aux.params$bn_3a_1x1_moving_var),
                     input_gamma = as.array(my_model$arg.params$bn_3a_1x1_gamma),
                     input_beta = as.array(my_model$arg.params$bn_3a_1x1_beta))


print(mean(abs(my_check2 - check2)))
## [1] 2.637879e-07

– Then let's check our pooling inference function:

POOL_func <- function (input_array, pad_size = 1, num_size = 3) {
  
  original_dim <- dim(input_array)
  original_dim[1:2] <- original_dim[1:2] + pad_size * 2
  input_array_pad <- array(0, dim = original_dim)
  input_array_pad[(pad_size+1):(original_dim[1]-pad_size),(pad_size+1):(original_dim[2]-pad_size),,] <- input_array
  
  out_array <- array(0, dim = dim(input_array))
  
  for (l in 1:dim(out_array)[4]) {
    for (k in 1:dim(out_array)[3]) {
      for (j in 1:dim(out_array)[2]) {
        for (i in 1:dim(out_array)[1]) {
          out_array[i,j,k,l] <- mean(input_array_pad[i:(i+num_size-1),j:(j+num_size-1),k,l])
        }
      }
    }
  }
  
  return(out_array)
  
}

my_check3 <- POOL_func(input_array = Input)

print(mean(abs(my_check3 - check3)))
## [1] 4.177961e-08

Answer to Exercise 2 (2)

F05

print(all_layers$outputs[grepl('output', all_layers$outputs)][11:34])
##  [1] "pool_2_output"                    "conv_3a_1x1_output"              
##  [3] "bn_3a_1x1_output"                 "relu_3a_1x1_output"              
##  [5] "conv_3a_3x3_reduce_output"        "bn_3a_3x3_reduce_output"         
##  [7] "relu_3a_3x3_reduce_output"        "conv_3a_3x3_output"              
##  [9] "bn_3a_3x3_output"                 "relu_3a_3x3_output"              
## [11] "conv_3a_double_3x3_reduce_output" "bn_3a_double_3x3_reduce_output"  
## [13] "relu_3a_double_3x3_reduce_output" "conv_3a_double_3x3_0_output"     
## [15] "bn_3a_double_3x3_0_output"        "relu_3a_double_3x3_0_output"     
## [17] "conv_3a_double_3x3_1_output"      "bn_3a_double_3x3_1_output"       
## [19] "relu_3a_double_3x3_1_output"      "avg_pool_3a_pool_output"         
## [21] "conv_3a_proj_output"              "bn_3a_proj_output"               
## [23] "relu_3a_proj_output"              "ch_concat_3a_chconcat_output"
library(abind)

#1x1

conv_3a_1x1 <- CONV_func(input_array = Input,
                         input_weight = as.array(my_model$arg.params$conv_3a_1x1_weight),
                         input_bias = as.array(my_model$arg.params$conv_3a_1x1_bias))

bn_3a_1x1 <- BN_func(input_array = conv_3a_1x1,
                     input_mean = as.array(my_model$aux.params$bn_3a_1x1_moving_mean),
                     input_var = as.array(my_model$aux.params$bn_3a_1x1_moving_var),
                     input_gamma = as.array(my_model$arg.params$bn_3a_1x1_gamma),
                     input_beta = as.array(my_model$arg.params$bn_3a_1x1_beta))

relu_3a_1x1 <- bn_3a_1x1
relu_3a_1x1[relu_3a_1x1 < 0] <- 0

#3x3

conv_3a_3x3_reduce <- CONV_func(input_array = Input,
                                input_weight = as.array(my_model$arg.params$conv_3a_3x3_reduce_weight),
                                input_bias = as.array(my_model$arg.params$conv_3a_3x3_reduce_bias))

bn_3a_3x3_reduce <- BN_func(input_array = conv_3a_3x3_reduce,
                            input_mean = as.array(my_model$aux.params$bn_3a_3x3_reduce_moving_mean),
                            input_var = as.array(my_model$aux.params$bn_3a_3x3_reduce_moving_var),
                            input_gamma = as.array(my_model$arg.params$bn_3a_3x3_reduce_gamma),
                            input_beta = as.array(my_model$arg.params$bn_3a_3x3_reduce_beta))

relu_3a_3x3_reduce <- bn_3a_3x3_reduce
relu_3a_3x3_reduce[relu_3a_3x3_reduce < 0] <- 0

conv_3a_3x3 <- CONV_func(input_array = relu_3a_3x3_reduce,
                         input_weight = as.array(my_model$arg.params$conv_3a_3x3_weight),
                         input_bias = as.array(my_model$arg.params$conv_3a_3x3_bias))

bn_3a_3x3 <- BN_func(input_array = conv_3a_3x3,
                     input_mean = as.array(my_model$aux.params$bn_3a_3x3_moving_mean),
                     input_var = as.array(my_model$aux.params$bn_3a_3x3_moving_var),
                     input_gamma = as.array(my_model$arg.params$bn_3a_3x3_gamma),
                     input_beta = as.array(my_model$arg.params$bn_3a_3x3_beta))

relu_3a_3x3 <- bn_3a_3x3
relu_3a_3x3[relu_3a_3x3 < 0] <- 0

#5x5

conv_3a_double_3x3_reduce <- CONV_func(input_array = Input,
                                      input_weight = as.array(my_model$arg.params$conv_3a_double_3x3_reduce_weight),
                                      input_bias = as.array(my_model$arg.params$conv_3a_double_3x3_reduce_bias))

bn_3a_double_3x3_reduce <- BN_func(input_array = conv_3a_double_3x3_reduce,
                                  input_mean = as.array(my_model$aux.params$bn_3a_double_3x3_reduce_moving_mean),
                                  input_var = as.array(my_model$aux.params$bn_3a_double_3x3_reduce_moving_var),
                                  input_gamma = as.array(my_model$arg.params$bn_3a_double_3x3_reduce_gamma),
                                  input_beta = as.array(my_model$arg.params$bn_3a_double_3x3_reduce_beta))

relu_3a_double_3x3_reduce <- bn_3a_double_3x3_reduce
relu_3a_double_3x3_reduce[relu_3a_double_3x3_reduce < 0] <- 0

conv_3a_double_3x3_0 <- CONV_func(input_array = relu_3a_double_3x3_reduce,
                                  input_weight = as.array(my_model$arg.params$conv_3a_double_3x3_0_weight),
                                  input_bias = as.array(my_model$arg.params$conv_3a_double_3x3_0_bias))

bn_3a_double_3x3_0 <- BN_func(input_array = conv_3a_double_3x3_0,
                              input_mean = as.array(my_model$aux.params$bn_3a_double_3x3_0_moving_mean),
                              input_var = as.array(my_model$aux.params$bn_3a_double_3x3_0_moving_var),
                              input_gamma = as.array(my_model$arg.params$bn_3a_double_3x3_0_gamma),
                              input_beta = as.array(my_model$arg.params$bn_3a_double_3x3_0_beta))

relu_3a_double_3x3_0 <- bn_3a_double_3x3_0
relu_3a_double_3x3_0[relu_3a_double_3x3_0 < 0] <- 0

conv_3a_double_3x3_1 <- CONV_func(input_array = relu_3a_double_3x3_0,
                                  input_weight = as.array(my_model$arg.params$conv_3a_double_3x3_1_weight),
                                  input_bias = as.array(my_model$arg.params$conv_3a_double_3x3_1_bias))

bn_3a_double_3x3_1 <- BN_func(input_array = conv_3a_double_3x3_1,
                              input_mean = as.array(my_model$aux.params$bn_3a_double_3x3_1_moving_mean),
                              input_var = as.array(my_model$aux.params$bn_3a_double_3x3_1_moving_var),
                              input_gamma = as.array(my_model$arg.params$bn_3a_double_3x3_1_gamma),
                              input_beta = as.array(my_model$arg.params$bn_3a_double_3x3_1_beta))

relu_3a_double_3x3_1 <- bn_3a_double_3x3_1
relu_3a_double_3x3_1[relu_3a_double_3x3_1 < 0] <- 0

#Pool

avg_pool_3a_pool <- POOL_func(input_array = Input)

conv_3a_proj <-  CONV_func(input_array = avg_pool_3a_pool,
                           input_weight = as.array(my_model$arg.params$conv_3a_proj_weight),
                           input_bias = as.array(my_model$arg.params$conv_3a_proj_bias))

bn_3a_proj <- BN_func(input_array = conv_3a_proj,
                      input_mean = as.array(my_model$aux.params$bn_3a_proj_moving_mean),
                      input_var = as.array(my_model$aux.params$bn_3a_proj_moving_var),
                      input_gamma = as.array(my_model$arg.params$bn_3a_proj_gamma),
                      input_beta = as.array(my_model$arg.params$bn_3a_proj_beta))

relu_3a_proj <- bn_3a_proj
relu_3a_proj[relu_3a_proj < 0] <- 0

#Concat

My_output <- abind(relu_3a_1x1, relu_3a_3x3, relu_3a_double_3x3_1, relu_3a_proj, along = 3)

#Check answer

print(mean(abs(My_output - Output)))
## [1] 1.375312e-07

Section 2: The Evolution of Lightweight Network Architectures (1)

– So far the only effective parameter-compression technique we have seen is the 1×1 convolution kernel; we have not formally introduced any others. Let's start by learning from classic models. The figure below plots parameter count against accuracy for several classic champion models:

F25
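– As a quick, illustrative sanity check of how a 1×1 "reduce" convolution compresses parameters (the channel counts below are made up for illustration, and `conv_params` is a hypothetical helper, not any library's API), a Python sketch:

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer, ignoring biases."""
    return k * k * c_in * c_out

# Direct 3x3 convolution mapping 256 channels to 256 channels
direct = conv_params(3, 256, 256)                            # 589,824

# 1x1 reduction to 64 channels first, then 3x3 back up to 256
reduced = conv_params(1, 256, 64) + conv_params(3, 64, 256)  # 163,840

print(direct / reduced)  # roughly 3.6x fewer parameters
```

The same cross-channel mixing still happens in the 1×1 layer; the savings come purely from shrinking the channel width seen by the expensive 3×3 kernel.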

Section 2: The Evolution of Lightweight Network Architectures (2)

– The drawback of the Inception Module, however, is that it is riddled with hand-crafted design decisions, which left plenty of room for improvement in later network designs.

– ResNext targets the biggest weakness of ResNet, namely its module design. Let's look at the reasoning behind its module:

F26

– We can see that the module ResNext uses is multi-branch, but compared with the Inception Module each branch is designed more uniformly, making it easy to scale up.
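– In the ResNext paper's analysis, this bundle of identical branches is equivalent to a single grouped convolution, and grouping is also what keeps the parameter count down. An illustrative count in Python (channel widths are made up; `grouped_conv_params` is a hypothetical helper):

```python
def grouped_conv_params(k, c_in, c_out, groups):
    """Weight count of a k x k grouped convolution, ignoring biases.

    Each group maps c_in/groups input channels to c_out/groups output channels.
    """
    assert c_in % groups == 0 and c_out % groups == 0
    return groups * (k * k * (c_in // groups) * (c_out // groups))

dense   = grouped_conv_params(3, 128, 128, 1)   # 147,456 (ordinary convolution)
grouped = grouped_conv_params(3, 128, 128, 32)  # 4,608 (32 groups, ResNext's default cardinality)

print(dense // grouped)  # 32x fewer parameters at the same channel widths
```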

Section 2: The Evolution of Lightweight Network Architectures (3)

F27

– Here is the comparison between the ResNet and ResNext architectures used in the paper:

F28

– Here is a series of experimental results; we can see that the more groups there are, the more accurate the result:

F29

Section 2: The Evolution of Lightweight Network Architectures (4)

– Google's research team tried to solve this problem. Their reasoning was very similar to ResNext's, likewise starting from the Inception Module, and it led to a relatively little-known network: Xception net

F30
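– The extreme case of grouping, one group per channel, is the Depthwise Separable Convolution at the heart of Xception net: a per-channel k×k "depthwise" convolution followed by a 1×1 "pointwise" convolution that mixes the channels. An illustrative parameter count (channel widths made up; helper names are hypothetical):

```python
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution mixing the channels
    return depthwise + pointwise

std = standard_conv_params(3, 256, 256)        # 589,824
sep = depthwise_separable_params(3, 256, 256)  # 2,304 + 65,536 = 67,840
print(round(std / sep, 1))                     # roughly 8.7x fewer parameters
```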

Section 2: The Evolution of Lightweight Network Architectures (5)

– To offset the damage to accuracy caused by this information loss, Google's research team designed an experiment comparing how different activation functions affect network accuracy, and ultimately found that using no activation function at all gave the highest accuracy.

F31

Section 2: The Evolution of Lightweight Network Architectures (6)

– Of these, MobileNet v1 is fairly similar in structure to Xception net, just with a slightly different research focus, while MobileNet v2 introduced a major conceptual advance: because Depthwise Separable Convolution loses information, the Bottleneck structure we had always favored was reworked into the Inverted Bottleneck shown below:

F32

F34

F33

– Note that MobileNet v2's module design draws on the Xception net findings, so no activation function is used after the last convolution layer of the module.
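– To make the Inverted Bottleneck concrete, here is an illustrative parameter count for one MobileNet v2-style block (the channel width is made up; the expansion factor of 6 is the paper's default; biases and batch-norm parameters are ignored, and `inverted_residual_params` is a hypothetical helper):

```python
def inverted_residual_params(c, t=6):
    """Weights of one inverted-bottleneck block:
    1x1 expand -> 3x3 depthwise -> 1x1 linear project."""
    expand  = 1 * 1 * c * (t * c)   # 1x1 expansion to t*c channels
    depth   = 3 * 3 * (t * c)       # depthwise 3x3 on the expanded channels
    project = 1 * 1 * (t * c) * c   # 1x1 linear projection (no activation afterwards)
    return expand + depth + project

block = inverted_residual_params(64)  # 52,608 weights for c = 64
dense = 3 * 3 * (6 * 64) * (6 * 64)   # 1,327,104: a dense 3x3 at the expanded width
print(dense // block)                 # the depthwise step is where the savings come from
```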

Section 2: The Evolution of Lightweight Network Architectures (7)

– One seemingly feasible option is to apply group convolution to the 1×1 kernels as well, but the Depthwise Separable Convolution on the 3×3 kernels already causes substantial information loss, so extending grouping to the 1×1 kernels as well is clearly not viable on its own, and this is where a major difficulty arises.

F35
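– The operation that makes grouped 1×1 convolutions workable despite this difficulty is ShuffleNet's "channel shuffle": after a grouped convolution, the channels are interleaved so that the next grouped layer sees channels from every group. A minimal NumPy sketch (shapes and values are purely illustrative):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels: reshape to (groups, c/groups), transpose, flatten back."""
    n, c, h, w = x.shape
    assert c % groups == 0
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

x = np.arange(8).reshape(1, 8, 1, 1)           # 8 channels labeled 0..7, two groups of 4
print(channel_shuffle(x, 2).ravel().tolist())  # [0, 4, 1, 5, 2, 6, 3, 7]
```

After the shuffle, each consecutive pair of channels contains one channel from each original group, so information can flow between groups with no extra parameters.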

Section 3: Visualizing Network Logic (1)

– This study was published at CVPR 2016: Learning Deep Features for Discriminative Localization

F24

Section 3: Visualizing Network Logic (2)

library(mxnet)

res_model <- mx.model.load("model/resnet-18", 0)
synsets <- readLines('model/chinese synset.txt', encoding = 'UTF-8')
library(OpenImageR)

img<- readImage('test.jpg') 
resized_img <- resizeImage(img, 224, 224, method = 'bilinear')

imageShow(resized_img)

– Let's make a prediction; note that the image must first be reshaped into the dimensions MxNet accepts:

dim(resized_img) <- c(dim(resized_img), 1)
pred_prob  <- predict(res_model, resized_img)

pred_prob <- as.numeric(pred_prob)
names(pred_prob) <- synsets
pred_prob <- sort(pred_prob, decreasing = TRUE)
pred_prob <- formatC(pred_prob, 4, format = 'f')
head(pred_prob, 5)
## n01484850 大白鯊   n01491361 虎鯊 n02807133 游泳帽   n02640242 鱘魚 
##         "0.9206"         "0.0322"         "0.0272"         "0.0091" 
##   n02641379 雀鱔 
##         "0.0038"

Section 3: Visualizing Network Logic (3)

all_layers <- res_model$symbol$get.internals()
relu1_output <- which(all_layers$outputs == 'relu1_output') %>% all_layers$get.output()
softmax_output <- which(all_layers$outputs == 'softmax_output') %>% all_layers$get.output()
  
out <- mx.symbol.Group(c(relu1_output, softmax_output))
executor <- mx.simple.bind(symbol = out, data = c(224, 224, 3, 1), ctx = mx.cpu())
  
mx.exec.update.arg.arrays(executor, res_model$arg.params, match.name = TRUE)
mx.exec.update.aux.arrays(executor, res_model$aux.params, match.name = TRUE)
mx.exec.update.arg.arrays(executor, list(data = mx.nd.array(resized_img)), match.name = TRUE)
mx.exec.forward(executor, is.train = FALSE)

ls(executor$ref.outputs)
## [1] "relu1_output"   "softmax_output"
#Get the prediction output
pred_pos <- which.max(as.array(executor$ref.outputs$softmax_output))

#Get the weights in final fully connected layer
FC_weight <- as.array(res_model$arg.params$fc1_weight)
FC_weight <- array(FC_weight[,pred_pos], dim = c(1, 1, dim(FC_weight)[1], 1)) %>% mx.nd.array()

#Weighted sum of feature maps: the core of class activation mapping (CAM)
CAM_map <- mx.nd.broadcast.mul(lhs = executor$ref.outputs$relu1_output, rhs = FC_weight)
CAM_map <- mx.nd.sum(data = CAM_map, axis = 0:1) %>% as.array()

#Min-max rescaling to the range [0, 1]
CAM_map <- (CAM_map - min(CAM_map))/(max(CAM_map) - min(CAM_map)) 
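– The core of CAM is just this weighted sum over feature-map channels. A minimal NumPy restatement with toy shapes (a 7×7 map with 4 channels; the values are random stand-ins, not the model's real weights):

```python
import numpy as np

# Toy stand-ins: a 7x7 feature map with 4 channels and the FC weights
# connecting each channel to the predicted class.
rng = np.random.default_rng(0)
feature_maps  = rng.random((7, 7, 4))  # last conv/ReLU output, H x W x C
class_weights = rng.random(4)          # one weight per channel for this class

cam = (feature_maps * class_weights).sum(axis=-1)  # weighted sum over channels
cam = (cam - cam.min()) / (cam.max() - cam.min())  # min-max rescale to [0, 1]
print(cam.shape)  # (7, 7)
```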

Section 3: Visualizing Network Logic (4)

library(imager)

par(mar = rep(0, 4))
plot(NA, xlim = c(0.04, 0.96), ylim = c(0.96, 0.04), xaxt = "n", yaxt = "n", bty = "n")
img %>% rgb_2gray %>% as.raster() %>% rasterImage(., 0, 1, 1, 0, interpolate = FALSE)

#Define the color
  
cols <- colorRampPalette(c("#000099", "#00FEFF", "#45FE4F", "#FCFF00", "#FF9400", "#FF3100"))(256)

cols <- paste0(cols, '80')

#Enlarge the class activation mapping (7*7 to larger)

resize_CAM <- resizeImage(CAM_map, width = dim(img)[1], height = dim(img)[2], method = 'bilinear')
resize_CAM <- (resize_CAM - min(resize_CAM)) / (max(resize_CAM) - min(resize_CAM))
resize_CAM <- round(resize_CAM * 255 + 1)

#Visualization

FINAL_CAM <- cols[resize_CAM] %>% matrix(., dim(img)[1], dim(img)[2], byrow = FALSE) %>% as.raster()
rasterImage(FINAL_CAM, 0, 1, 1, 0, interpolate = FALSE)
    
#Show the prediction output
obj <- synsets[pred_pos]
legend('bottomright', paste0(substr(obj, 11, nchar(obj)), ' (prob = ', formatC(as.array(executor$ref.outputs$softmax_output)[pred_pos], 3, format = 'f'), ')'), bg = 'gray90')

Section 3: Visualizing Network Logic (5)

library(imager)

par(mar = rep(0, 4))
plot(NA, xlim = c(0.04, 0.96), ylim = c(0.96, 0.04), xaxt = "n", yaxt = "n", bty = "n")
img %>% rgb_2gray %>% as.raster() %>% rasterImage(., 0, 1, 1, 0, interpolate = FALSE)

#Define the color
  
cols <- colorRampPalette(c("#000000", "#000000", "#000000", "#000000", "#000000", "#000000", 
                           "#000004", "#3B0F70", "#8C2981", "#DE4968", "#FE9F6D", "#FCFDBF"))(256)

alpha_len <- 90
alpha_val <- rgb(0, 0, 0, seq(1/256, 200/256, length.out = alpha_len)^0.9) %>% substr(., 8, 9)
cols[1:(256 - alpha_len)] <- paste0(cols[1:(256 - alpha_len)], '00')
cols[(256 - alpha_len + 1):256] <- paste0(cols[(256 - alpha_len + 1):256], alpha_val)

#Enlarge the class activation mapping (7*7 to larger)

resize_CAM <- resizeImage(CAM_map, width = dim(img)[1], height = dim(img)[2], method = 'bilinear')
resize_CAM <- (resize_CAM - min(resize_CAM)) / (max(resize_CAM) - min(resize_CAM))
resize_CAM <- round(resize_CAM * 255 + 1)

#Visualization

FINAL_CAM <- cols[resize_CAM] %>% matrix(., dim(img)[1], dim(img)[2], byrow = FALSE) %>% as.raster()
rasterImage(FINAL_CAM, 0, 1, 1, 0, interpolate = FALSE)
    
#Show the prediction output
obj <- synsets[pred_pos]
legend('bottomright', paste0(substr(obj, 11, nchar(obj)), ' (prob = ', formatC(as.array(executor$ref.outputs$softmax_output)[pred_pos], 3, format = 'f'), ')'), bg = 'gray90')

Conclusion

– The visualization work shows that the computations of a convolutional neural network are not the black-box decisions they were once assumed to be. This matters a great deal: it gives us the chance to use this location information to design further "object segmentation" and "object recognition" models.

– Through this week's exercises our understanding of convolutional neural networks has reached a new level. Can you now see why, in the end, we were able to draw out the locations the network is looking at?