from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
# https://www.tensorflow.org/api_docs/python/tf/keras/applications/vgg16/VGG16?hl=ko
# The documentation for this model is available at the link above
pre_trained_model = VGG16(input_shape=(224, 224, 3),
                          include_top=False,
                          weights='imagenet'
                          )
# include_top=False : use only the feature-extraction part of the loaded model, not its MLP layers (feature-extraction approach)
# > The original ImageNet competition classified 1000 classes, but we only classify 3 kinds of animals,
#   so we need to build the MLP layers to fit our own problem
# weights='imagenet' : reuse the weights (w) learned on the ImageNet competition data as-is
cnn_model2 = Sequential()
# Feature-extraction part
cnn_model2.add(pre_trained_model)
# MLP (classifier) layers
cnn_model2.add(Flatten())
cnn_model2.add(Dense(128, activation='relu'))
cnn_model2.add(Dense(64, activation='relu'))
cnn_model2.add(Dense(32, activation='relu'))
cnn_model2.add(Dense(3, activation='softmax'))
cnn_model2.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg16 (Functional) (None, 7, 7, 512) 14714688
flatten_1 (Flatten) (None, 25088) 0
dense_4 (Dense) (None, 128) 3211392
dense_5 (Dense) (None, 64) 8256
dense_6 (Dense) (None, 32) 2080
dense_7 (Dense) (None, 3) 99
=================================================================
Total params: 17,936,515
Trainable params: 17,936,515
Non-trainable params: 0
_________________________________________________________________
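A quick sanity check on these numbers: the convolutional base loaded with include_top=False ends at block5_pool, so a 224x224 input comes out as a 7x7x512 feature map, and 7*7*512 = 25088 is exactly the Flatten output above. A minimal sketch to verify this (assumes the code above has already run):

print(pre_trained_model.output_shape)  # (None, 7, 7, 512)
print(7 * 7 * 512)                     # 25088, the Flatten size in the summary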
# Freeze the loaded pre-trained VGG16 model so it cannot be retrained (to reuse its well-trained w, b values as-is)
# > If it is not frozen, it gets retrained together with the newly built MLP layers, and the well-trained w, b values get distorted
pre_trained_model.trainable = False
pre_trained_model.summary()
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 0
Non-trainable params: 14,714,688
_________________________________________________________________
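If you later want to fine-tune instead of doing pure feature extraction, a common variation is to unfreeze only the last convolutional block while keeping the earlier blocks frozen. This is not part of the pipeline in this post, just a sketch of the idea:

# Sketch: partial unfreezing (fine-tuning) - train only the block5 layers of VGG16
pre_trained_model.trainable = True
for layer in pre_trained_model.layers:
    layer.trainable = layer.name.startswith('block5')
# Recompile after changing trainable flags so the change takes effect
cnn_model2.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['acc'])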
cnn_model2.compile(loss='sparse_categorical_crossentropy',
                   optimizer='adam',
                   metrics=['acc']
                   )
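sparse_categorical_crossentropy expects integer class labels (0, 1, 2) rather than one-hot vectors, which is why y_train can be passed to fit directly below. A small sketch of the two label formats (hypothetical example values):

import numpy as np
y_int = np.array([0, 2, 1])  # format expected by sparse_categorical_crossentropy
# with plain 'categorical_crossentropy' the labels would instead need to be one-hot:
# tf.keras.utils.to_categorical(y_int, num_classes=3) -> [[1,0,0],[0,0,1],[0,1,0]]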
h2 = cnn_model2.fit(X_train, y_train,
                    validation_split=0.2,
                    batch_size=128,
                    epochs=50
                    )
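One caveat: the ImageNet weights were learned on images transformed by vgg16.preprocess_input (RGB-to-BGR conversion plus per-channel mean subtraction). If X_train and X_test here hold raw 0-255 pixel arrays, applying the same preprocessing before training usually improves transfer results. A sketch, assuming raw pixel arrays:

from tensorflow.keras.applications.vgg16 import preprocess_input
# Sketch: apply the preprocessing VGG16 was trained with (assumes raw 0-255 RGB inputs)
X_train_pp = preprocess_input(X_train.astype('float32'))
X_test_pp = preprocess_input(X_test.astype('float32'))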
import matplotlib.pyplot as plt

plt.figure(figsize=(15, 10))
plt.plot(h2.history['acc'],
         label='acc',
         c='blue',
         marker='.'
         )
plt.plot(h2.history['val_acc'],
         label='val_acc',
         c='red',
         marker='.'
         )
plt.legend()
plt.show()
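If the training and validation curves diverge, that is a sign of overfitting; plotting the loss curves the same way makes this easier to spot. A sketch reusing the same history object h2:

# Sketch: loss curves for the same training run
plt.figure(figsize=(15, 10))
plt.plot(h2.history['loss'], label='loss', c='blue', marker='.')
plt.plot(h2.history['val_loss'], label='val_loss', c='red', marker='.')
plt.legend()
plt.show()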
import numpy as np
from sklearn.metrics import classification_report

pre = cnn_model2.predict(X_test)
print(classification_report(y_test, np.argmax(pre, axis=1)))
              precision    recall  f1-score   support

           0       0.77      0.77      0.77        87
           1       0.85      0.66      0.74        79
           2       0.66      0.82      0.73        74

    accuracy                           0.75       240
   macro avg       0.76      0.75      0.75       240
weighted avg       0.76      0.75      0.75       240
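To see which of the three animal classes get confused with each other, a confusion matrix is a useful complement to the report above. A sketch using scikit-learn:

from sklearn.metrics import confusion_matrix
# Sketch: rows = true class, columns = predicted class
print(confusion_matrix(y_test, np.argmax(pre, axis=1)))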