Study Log
Deep Learning (0914_day8) - Cat and Dog (VGG16)_2 - Kaggle
Cat and Dog Dataset¶
- The Cats and Dogs dataset, used to train a deep learning model
- The Cat and Dog dataset on Kaggle
import¶
In [15]:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import tensorflow as tf
import cv2
import glob
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, Activation, MaxPooling2D, GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.applications.vgg16 import VGG16
In [3]:
np.random.seed(42)
tf.random.set_seed(42)
Load Dataset¶
In [4]:
# in training_set directory
training_cats = glob.glob("../input/cat-and-dog/training_set/training_set/cats/*.jpg")
training_dogs = glob.glob("../input/cat-and-dog/training_set/training_set/dogs/*.jpg")
len(training_cats), len(training_dogs)
Out[4]:
(4000, 4005)
In [5]:
# in test_set directory
test_cats = glob.glob("../input/cat-and-dog/test_set/test_set/cats/*.jpg")
test_dogs = glob.glob("../input/cat-and-dog/test_set/test_set/dogs/*.jpg")
len(test_cats), len(test_dogs)
Out[5]:
(1011, 1012)
Visualize Data¶
In [6]:
fig, axes = plt.subplots(figsize=(22, 6), nrows=1, ncols=4)
dog_images = training_dogs[:4]
for i in range(4):
    image = cv2.cvtColor(cv2.imread(dog_images[i]), cv2.COLOR_BGR2RGB)
    axes[i].imshow(image)
fig, axes = plt.subplots(figsize=(22, 6), nrows=1, ncols=4)
cat_images = training_cats[:4]
for i in range(4):
    image = cv2.cvtColor(cv2.imread(cat_images[i]), cv2.COLOR_BGR2RGB)
    axes[i].imshow(image)
Preprocess Data (from DataFrame)¶
- ImageDataGenerator: augmentation and pixel rescaling
- flow_from_dataframe: batching, label encoding, and target (image) size
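The two generator options can be mimicked by hand on a toy array. This is a minimal NumPy sketch (not from the notebook) of what `rescale=1/255.0` and `horizontal_flip=True` do to a single image:

```python
import numpy as np

# Tiny fake 2x2 RGB image with pixel values 0..11
img = np.arange(12, dtype=np.float32).reshape(2, 2, 3)

# rescale=1/255.0 -> multiply every pixel, bringing values into [0, 1]
rescaled = img * (1 / 255.0)

# horizontal_flip=True -> (randomly, per batch) mirror along the width axis
flipped = img[:, ::-1, :]

print(rescaled.max() <= 1.0)             # True
print(flipped[0, 0, 0] == img[0, 1, 0])  # True: columns swapped
```

In the real generator the flip is applied at random to each training batch, which is why it is only used on the training set.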
In [7]:
pd.set_option("display.max_colwidth", 200)
In [8]:
train_paths = training_cats + training_dogs
train_labels = ["CAT" for _ in range(len(training_cats))] + ["Dog" for _ in range(len(training_dogs))]
train_df = pd.DataFrame({'path': train_paths, "label": train_labels})
test_paths = test_cats + test_dogs
test_labels = ["CAT" for _ in range(len(test_cats))] + ["Dog" for _ in range(len(test_dogs))]
test_df = pd.DataFrame({'path': test_paths, "label": test_labels})
print(train_df.label.value_counts())
print(test_df.label.value_counts())
Dog    4005
CAT    4000
Name: label, dtype: int64
Dog    1012
CAT    1011
Name: label, dtype: int64
In [9]:
train_df, valid_df = train_test_split(train_df, test_size=0.2, stratify=train_df['label'])
print(train_df['label'].value_counts())
print(valid_df['label'].value_counts())
print(train_df.shape, valid_df.shape)
Dog    3204
CAT    3200
Name: label, dtype: int64
Dog    801
CAT    800
Name: label, dtype: int64
(6404, 2) (1601, 2)
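The printed counts can be sanity-checked with simple arithmetic: a stratified 20% split takes one fifth of each class. A small sketch (assumed arithmetic, not notebook code):

```python
# Class sizes from the training set above
n_dogs, n_cats = 4005, 4000

# stratify=train_df['label'] with test_size=0.2 takes 20% of each class
valid_dogs, valid_cats = n_dogs // 5, n_cats // 5
print(valid_dogs, valid_cats)                      # 801 800
print(n_dogs - valid_dogs, n_cats - valid_cats)    # 3204 3200
```

These match the `value_counts()` output, confirming the class ratio is preserved in both splits.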
In [10]:
IMAGE_SIZE = 224
BATCH_SIZE = 64
train_generator = ImageDataGenerator(horizontal_flip=True, rescale=1/255.0)
train_generator_iterator = train_generator.flow_from_dataframe(
    dataframe=train_df, x_col="path", y_col="label",
    target_size=(IMAGE_SIZE, IMAGE_SIZE),
    batch_size=BATCH_SIZE, class_mode="binary")
Found 6404 validated image filenames belonging to 2 classes.
In [11]:
valid_generator = ImageDataGenerator(rescale=1/255.0)
valid_generator_iterator = valid_generator.flow_from_dataframe(
    dataframe=valid_df, x_col="path", y_col="label",
    target_size=(IMAGE_SIZE, IMAGE_SIZE),
    batch_size=BATCH_SIZE, class_mode="binary")
Found 1601 validated image filenames belonging to 2 classes.
In [12]:
test_generator = ImageDataGenerator(rescale=1/255.0)
test_generator_iterator = test_generator.flow_from_dataframe(
    dataframe=test_df, x_col="path", y_col="label",
    target_size=(IMAGE_SIZE, IMAGE_SIZE),
    batch_size=BATCH_SIZE, class_mode="binary")
Found 2023 validated image filenames belonging to 2 classes.
- Fetch a sample batch from the iterator
In [13]:
image_array, label_array = next(train_generator_iterator)
image_array.shape, label_array.shape
Out[13]:
((64, 224, 224, 3), (64,))
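One detail worth noting: with `class_mode="binary"`, `flow_from_dataframe` assigns class indices in alphabetical order of the label strings, so "CAT" maps to 0 and "Dog" to 1. A minimal sketch of that mapping (in the notebook itself you could confirm it via `train_generator_iterator.class_indices`):

```python
# flow_from_dataframe sorts the unique label strings alphabetically
# and numbers them in that order ("CAT" sorts before "Dog").
classes = sorted({"CAT", "Dog"})
class_indices = {name: i for i, name in enumerate(classes)}
print(class_indices)  # {'CAT': 0, 'Dog': 1}
```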
Create Model¶
In [13]:
def build_vgg16():
    tf.keras.backend.clear_session()
    input_tensor = Input(shape=(224, 224, 3))
    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(input_tensor)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
    x = GlobalAveragePooling2D()(x)
    # x = Dropout(rate=0.5)(x)
    # x = Dense(300, activation='relu', name='fc1')(x)
    # x = Dropout(rate=0.5)(x)
    x = Dense(50, activation='relu', name='fc2')(x)
    # x = Dropout(rate=0.5)(x)
    output = Dense(1, activation="sigmoid")(x)
    model = Model(inputs=input_tensor, outputs=output)
    return model
In [14]:
model = build_vgg16()
model.summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 224, 224, 3)]     0
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
global_average_pooling2d (Gl (None, 512)               0
fc2 (Dense)                  (None, 50)                25650
dense (Dense)                (None, 1)                 51
=================================================================
Total params: 14,740,389
Trainable params: 14,740,389
Non-trainable params: 0
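The Param # column in the summary can be verified by hand: a Conv2D layer has kernel_h × kernel_w × in_channels × filters weights plus one bias per filter, and a Dense layer has in_features × units weights plus one bias per unit. A small sketch reproducing a few of the counts above:

```python
def conv_params(k, c_in, c_out):
    """Parameters of a (k x k) Conv2D layer: kernel weights + one bias per filter."""
    return k * k * c_in * c_out + c_out

print(conv_params(3, 3, 64))    # 1792  -> block1_conv1 (RGB input, 64 filters)
print(conv_params(3, 64, 64))   # 36928 -> block1_conv2
print(512 * 50 + 50)            # 25650 -> fc2 (GAP output 512 -> 50 units)
print(50 * 1 + 1)               # 51    -> sigmoid output layer
```

MaxPooling2D and GlobalAveragePooling2D contribute 0 parameters, since they have no weights.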
Callback¶
In [15]:
checkpoint_cb = ModelCheckpoint("my_keras_model.h5", save_best_only=True, verbose=1)
early_stopping_cb = EarlyStopping(patience=12, restore_best_weights=True)
reducelr_cb = ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=5, mode="min", verbose=1)
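As a rough illustration of what `ReduceLROnPlateau(factor=0.2, patience=5)` does: every time `val_loss` fails to improve for 5 consecutive epochs, the current learning rate is multiplied by 0.2. A sketch with a hypothetical three plateaus, starting from the Adam rate of 1e-4 used when compiling:

```python
lr = 1e-4       # initial Adam learning rate
plateaus = 3    # hypothetical number of 5-epoch plateaus hit during training

# Each plateau multiplies the current learning rate by factor=0.2
for _ in range(plateaus):
    lr *= 0.2

print(f"{lr:.1e}")  # 8.0e-07
```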
Compile Model, Train¶
In [ ]:
model.compile(optimizer=Adam(0.0001), loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(
    train_generator_iterator, epochs=40,
    validation_data=valid_generator_iterator,
    callbacks=[checkpoint_cb, early_stopping_cb, reducelr_cb])
Evaluate¶
In [ ]:
model.evaluate(test_generator_iterator)
Using the VGG16 Pre-trained Model¶
In [17]:
IMAGE_SIZE = 224
input_tensor = Input(shape=(IMAGE_SIZE, IMAGE_SIZE, 3))
base_model = VGG16(include_top=False, weights="imagenet", input_tensor=input_tensor)
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(50, activation='relu', name='fc1')(x)
output = Dense(1, activation="sigmoid")(x)
model = Model(inputs=input_tensor, outputs=output)
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5 58892288/58889256 [==============================] - 0s 0us/step
In [18]:
model.summary()
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 224, 224, 3)]     0
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
global_average_pooling2d_1 ( (None, 512)               0
fc1 (Dense)                  (None, 50)                25650
dense_1 (Dense)              (None, 1)                 51
=================================================================
Total params: 14,740,389
Trainable params: 14,740,389
Non-trainable params: 0
In [19]:
checkpoint_cb = ModelCheckpoint("my_keras_model.h5", save_best_only=True, verbose=1)
early_stopping_cb = EarlyStopping(patience=12, restore_best_weights=True)
reducelr_cb = ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=5, mode="min", verbose=1)
In [ ]:
model.compile(optimizer=Adam(0.0001), loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(
    train_generator_iterator, epochs=40,
    validation_data=valid_generator_iterator,
    callbacks=[checkpoint_cb, early_stopping_cb, reducelr_cb])
In [ ]:
model = load_model("my_keras_model.h5")
model.evaluate(test_generator_iterator)
Transfer Learning¶
In [17]:
IMAGE_SIZE = 224
input_tensor = Input(shape=(IMAGE_SIZE, IMAGE_SIZE, 3))
base_model = VGG16(include_top=False, weights="imagenet", input_tensor=input_tensor)
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(50, activation='relu', name='fc1')(x)
output = Dense(1, activation="sigmoid")(x)
model = Model(inputs=input_tensor, outputs=output)
model.summary()
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_3 (InputLayer)         [(None, 224, 224, 3)]     0
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
global_average_pooling2d_1 ( (None, 512)               0
fc1 (Dense)                  (None, 50)                25650
dense_1 (Dense)              (None, 1)                 51
=================================================================
Total params: 14,740,389
Trainable params: 14,740,389
Non-trainable params: 0
In [19]:
model.layers
Out[19]:
[<tensorflow.python.keras.engine.input_layer.InputLayer at 0x7fe4dc0fc310>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4dc0fc110>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4df063710>,
 <tensorflow.python.keras.layers.pooling.MaxPooling2D at 0x7fe4dc115f90>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4dc10f450>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4dc0f9a10>,
 <tensorflow.python.keras.layers.pooling.MaxPooling2D at 0x7fe4dc105710>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4dc126e50>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4dc12ac90>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4dc115410>,
 <tensorflow.python.keras.layers.pooling.MaxPooling2D at 0x7fe4df0c8f90>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4df128490>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4dc0f2e10>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4df17cd90>,
 <tensorflow.python.keras.layers.pooling.MaxPooling2D at 0x7fe4dc12f650>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4dc0f5f10>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4df11aa90>,
 <tensorflow.python.keras.layers.convolutional.Conv2D at 0x7fe4dc0b2cd0>,
 <tensorflow.python.keras.layers.pooling.MaxPooling2D at 0x7fe4dc0b4ad0>,
 <tensorflow.python.keras.layers.pooling.GlobalAveragePooling2D at 0x7fe4dc0fcc10>,
 <tensorflow.python.keras.layers.core.Dense at 0x7fe4df0d3110>,
 <tensorflow.python.keras.layers.core.Dense at 0x7fe4dc0c6c90>]
In [21]:
for layer in model.layers:
    print(layer.name, layer.trainable)
input_3 True
block1_conv1 True
block1_conv2 True
block1_pool True
block2_conv1 True
block2_conv2 True
block2_pool True
block3_conv1 True
block3_conv2 True
block3_conv3 True
block3_pool True
block4_conv1 True
block4_conv2 True
block4_conv3 True
block4_pool True
block5_conv1 True
block5_conv2 True
block5_conv3 True
block5_pool True
global_average_pooling2d_1 True
fc1 True
dense_1 True
In [23]:
type(model.layers)
Out[23]:
list
In [26]:
for layer in model.layers[:-3]:
    layer.trainable = False
    print(layer.name, layer.trainable)
for layer in model.layers[-3:]:
    print(layer.name, layer.trainable)
input_3 False
block1_conv1 False
block1_conv2 False
block1_pool False
block2_conv1 False
block2_conv2 False
block2_pool False
block3_conv1 False
block3_conv2 False
block3_conv3 False
block3_pool False
block4_conv1 False
block4_conv2 False
block4_conv3 False
block4_pool False
block5_conv1 False
block5_conv2 False
block5_conv3 False
block5_pool False
global_average_pooling2d_1 True
fc1 True
dense_1 True
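With every layer up to block5_pool frozen, only the two Dense layers still hold trainable weights (GlobalAveragePooling2D has none). A quick check of the resulting trainable-parameter count:

```python
# Trainable parameters after freezing the convolutional base:
fc1_params = 512 * 50 + 50   # 25650 -> fc1 (GAP output 512 -> 50 units)
out_params = 50 * 1 + 1      # 51    -> sigmoid output layer
print(fc1_params + out_params)  # 25701 trainable, out of 14,740,389 total
```

Freezing the base means gradient updates touch only these 25,701 weights, which is why fine-tuning converges quickly on a small dataset like this one.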
In [27]:
checkpoint_cb = ModelCheckpoint("my_keras_model.h5", save_best_only=True, verbose=1)
early_stopping_cb = EarlyStopping(patience=12, restore_best_weights=True)
reducelr_cb = ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=5, mode="min", verbose=1)
In [28]:
model.compile(optimizer=Adam(0.0001), loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(
    train_generator_iterator, epochs=40,
    validation_data=valid_generator_iterator,
    callbacks=[checkpoint_cb, early_stopping_cb, reducelr_cb])
Epoch 1/40
101/101 [==============================] - 75s 625ms/step - loss: 0.6926 - accuracy: 0.5539 - val_loss: 0.6377 - val_accuracy: 0.7808 Epoch 00001: val_loss improved from inf to 0.63773, saving model to my_keras_model.h5 Epoch 2/40 101/101 [==============================] - 42s 415ms/step - loss: 0.6253 - accuracy: 0.7807 - val_loss: 0.5903 - val_accuracy: 0.8089 Epoch 00002: val_loss improved from 0.63773 to 0.59025, saving model to my_keras_model.h5 Epoch 3/40 101/101 [==============================] - 41s 408ms/step - loss: 0.5741 - accuracy: 0.8236 - val_loss: 0.5450 - val_accuracy: 0.8239 Epoch 00003: val_loss improved from 0.59025 to 0.54503, saving model to my_keras_model.h5 Epoch 4/40 101/101 [==============================] - 41s 409ms/step - loss: 0.5261 - accuracy: 0.8336 - val_loss: 0.5027 - val_accuracy: 0.8389 Epoch 00004: val_loss improved from 0.54503 to 0.50272, saving model to my_keras_model.h5 Epoch 5/40 101/101 [==============================] - 41s 407ms/step - loss: 0.4790 - accuracy: 0.8583 - val_loss: 0.4639 - val_accuracy: 0.8438 Epoch 00005: val_loss improved from 0.50272 to 0.46391, saving model to my_keras_model.h5 Epoch 6/40 101/101 [==============================] - 42s 415ms/step - loss: 0.4445 - accuracy: 0.8580 - val_loss: 0.4401 - val_accuracy: 0.8420 Epoch 00006: val_loss improved from 0.46391 to 0.44005, saving model to my_keras_model.h5 Epoch 7/40 101/101 [==============================] - 41s 410ms/step - loss: 0.4102 - accuracy: 0.8644 - val_loss: 0.4097 - val_accuracy: 0.8538 Epoch 00007: val_loss improved from 0.44005 to 0.40970, saving model to my_keras_model.h5 Epoch 8/40 101/101 [==============================] - 41s 409ms/step - loss: 0.3911 - accuracy: 0.8692 - val_loss: 0.3892 - val_accuracy: 0.8620 Epoch 00008: val_loss improved from 0.40970 to 0.38921, saving model to my_keras_model.h5 Epoch 9/40 101/101 [==============================] - 41s 407ms/step - loss: 0.3691 - accuracy: 0.8700 - val_loss: 0.3723 - 
val_accuracy: 0.8651
Epoch 00009: val_loss improved from 0.38921 to 0.37227, saving model to my_keras_model.h5
Epoch 10/40
101/101 [==============================] - 42s 412ms/step - loss: 0.3560 - accuracy: 0.8719 - val_loss: 0.3576 - val_accuracy: 0.8663
Epoch 00010: val_loss improved from 0.37227 to 0.35755, saving model to my_keras_model.h5
Epoch 11/40
101/101 [==============================] - 41s 408ms/step - loss: 0.3387 - accuracy: 0.8791 - val_loss: 0.3454 - val_accuracy: 0.8720
Epoch 00011: val_loss improved from 0.35755 to 0.34541, saving model to my_keras_model.h5
Epoch 12/40
101/101 [==============================] - 42s 412ms/step - loss: 0.3255 - accuracy: 0.8811 - val_loss: 0.3336 - val_accuracy: 0.8782
Epoch 00012: val_loss improved from 0.34541 to 0.33363, saving model to my_keras_model.h5
Epoch 13/40
101/101 [==============================] - 41s 410ms/step - loss: 0.3145 - accuracy: 0.8826 - val_loss: 0.3237 - val_accuracy: 0.8782
Epoch 00013: val_loss improved from 0.33363 to 0.32374, saving model to my_keras_model.h5
Epoch 14/40
101/101 [==============================] - 42s 414ms/step - loss: 0.3033 - accuracy: 0.8835 - val_loss: 0.3157 - val_accuracy: 0.8776
Epoch 00014: val_loss improved from 0.32374 to 0.31568, saving model to my_keras_model.h5
Epoch 15/40
101/101 [==============================] - 41s 410ms/step - loss: 0.2962 - accuracy: 0.8901 - val_loss: 0.3079 - val_accuracy: 0.8795
Epoch 00015: val_loss improved from 0.31568 to 0.30795, saving model to my_keras_model.h5
Epoch 16/40
101/101 [==============================] - 41s 409ms/step - loss: 0.2902 - accuracy: 0.8917 - val_loss: 0.2999 - val_accuracy: 0.8844
Epoch 00016: val_loss improved from 0.30795 to 0.29989, saving model to my_keras_model.h5
Epoch 17/40
101/101 [==============================] - 41s 407ms/step - loss: 0.2805 - accuracy: 0.8914 - val_loss: 0.2935 - val_accuracy: 0.8869
Epoch 00017: val_loss improved from 0.29989 to 0.29346, saving model to my_keras_model.h5
Epoch 18/40
101/101 [==============================] - 41s 407ms/step - loss: 0.2684 - accuracy: 0.9013 - val_loss: 0.2879 - val_accuracy: 0.8901
Epoch 00018: val_loss improved from 0.29346 to 0.28787, saving model to my_keras_model.h5
Epoch 19/40
101/101 [==============================] - 41s 407ms/step - loss: 0.2653 - accuracy: 0.9022 - val_loss: 0.2836 - val_accuracy: 0.8888
Epoch 00019: val_loss improved from 0.28787 to 0.28358, saving model to my_keras_model.h5
Epoch 20/40
101/101 [==============================] - 41s 407ms/step - loss: 0.2690 - accuracy: 0.8984 - val_loss: 0.2773 - val_accuracy: 0.8932
Epoch 00020: val_loss improved from 0.28358 to 0.27727, saving model to my_keras_model.h5
Epoch 21/40
101/101 [==============================] - 41s 409ms/step - loss: 0.2512 - accuracy: 0.9069 - val_loss: 0.2729 - val_accuracy: 0.8963
Epoch 00021: val_loss improved from 0.27727 to 0.27291, saving model to my_keras_model.h5
Epoch 22/40
101/101 [==============================] - 41s 406ms/step - loss: 0.2465 - accuracy: 0.9087 - val_loss: 0.2690 - val_accuracy: 0.8938
Epoch 00022: val_loss improved from 0.27291 to 0.26905, saving model to my_keras_model.h5
Epoch 23/40
101/101 [==============================] - 41s 411ms/step - loss: 0.2519 - accuracy: 0.9021 - val_loss: 0.2659 - val_accuracy: 0.8976
Epoch 00023: val_loss improved from 0.26905 to 0.26585, saving model to my_keras_model.h5
Epoch 24/40
101/101 [==============================] - 41s 405ms/step - loss: 0.2364 - accuracy: 0.9091 - val_loss: 0.2617 - val_accuracy: 0.8957
Epoch 00024: val_loss improved from 0.26585 to 0.26167, saving model to my_keras_model.h5
Epoch 25/40
101/101 [==============================] - 41s 408ms/step - loss: 0.2351 - accuracy: 0.9114 - val_loss: 0.2588 - val_accuracy: 0.8976
Epoch 00025: val_loss improved from 0.26167 to 0.25883, saving model to my_keras_model.h5
Epoch 26/40
101/101 [==============================] - 41s 406ms/step - loss: 0.2317 - accuracy: 0.9100 - val_loss: 0.2554 - val_accuracy: 0.8994
Epoch 00026: val_loss improved from 0.25883 to 0.25535, saving model to my_keras_model.h5
Epoch 27/40
101/101 [==============================] - 42s 411ms/step - loss: 0.2329 - accuracy: 0.9097 - val_loss: 0.2525 - val_accuracy: 0.9001
Epoch 00027: val_loss improved from 0.25535 to 0.25252, saving model to my_keras_model.h5
Epoch 28/40
101/101 [==============================] - 42s 414ms/step - loss: 0.2327 - accuracy: 0.9069 - val_loss: 0.2525 - val_accuracy: 0.9007
Epoch 00028: val_loss did not improve from 0.25252
Epoch 29/40
101/101 [==============================] - 41s 410ms/step - loss: 0.2249 - accuracy: 0.9122 - val_loss: 0.2489 - val_accuracy: 0.9026
Epoch 00029: val_loss improved from 0.25252 to 0.24894, saving model to my_keras_model.h5
Epoch 30/40
101/101 [==============================] - 41s 406ms/step - loss: 0.2280 - accuracy: 0.9083 - val_loss: 0.2446 - val_accuracy: 0.9038
Epoch 00030: val_loss improved from 0.24894 to 0.24464, saving model to my_keras_model.h5
Epoch 31/40
101/101 [==============================] - 41s 407ms/step - loss: 0.2167 - accuracy: 0.9153 - val_loss: 0.2452 - val_accuracy: 0.9038
Epoch 00031: val_loss did not improve from 0.24464
Epoch 32/40
101/101 [==============================] - 42s 415ms/step - loss: 0.2102 - accuracy: 0.9164 - val_loss: 0.2403 - val_accuracy: 0.9057
Epoch 00032: val_loss improved from 0.24464 to 0.24032, saving model to my_keras_model.h5
Epoch 33/40
101/101 [==============================] - 41s 410ms/step - loss: 0.2212 - accuracy: 0.9123 - val_loss: 0.2383 - val_accuracy: 0.9076
Epoch 00033: val_loss improved from 0.24032 to 0.23830, saving model to my_keras_model.h5
Epoch 34/40
101/101 [==============================] - 41s 409ms/step - loss: 0.2093 - accuracy: 0.9194 - val_loss: 0.2390 - val_accuracy: 0.9044
Epoch 00034: val_loss did not improve from 0.23830
Epoch 35/40
101/101 [==============================] - 41s 410ms/step - loss: 0.2196 - accuracy: 0.9096 - val_loss: 0.2347 - val_accuracy: 0.9082
Epoch 00035: val_loss improved from 0.23830 to 0.23470, saving model to my_keras_model.h5
Epoch 36/40
101/101 [==============================] - 41s 408ms/step - loss: 0.2103 - accuracy: 0.9148 - val_loss: 0.2335 - val_accuracy: 0.9069
Epoch 00036: val_loss improved from 0.23470 to 0.23347, saving model to my_keras_model.h5
Epoch 37/40
101/101 [==============================] - 42s 415ms/step - loss: 0.2125 - accuracy: 0.9112 - val_loss: 0.2314 - val_accuracy: 0.9101
Epoch 00037: val_loss improved from 0.23347 to 0.23139, saving model to my_keras_model.h5
Epoch 38/40
101/101 [==============================] - 41s 411ms/step - loss: 0.2083 - accuracy: 0.9199 - val_loss: 0.2309 - val_accuracy: 0.9076
Epoch 00038: val_loss improved from 0.23139 to 0.23087, saving model to my_keras_model.h5
Epoch 39/40
101/101 [==============================] - 41s 409ms/step - loss: 0.2043 - accuracy: 0.9198 - val_loss: 0.2286 - val_accuracy: 0.9113
Epoch 00039: val_loss improved from 0.23087 to 0.22861, saving model to my_keras_model.h5
Epoch 40/40
101/101 [==============================] - 42s 412ms/step - loss: 0.2043 - accuracy: 0.9211 - val_loss: 0.2279 - val_accuracy: 0.9088
Epoch 00040: val_loss improved from 0.22861 to 0.22795, saving model to my_keras_model.h5
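The "saving model to my_keras_model.h5" messages in the log come from a `ModelCheckpoint` callback that monitors `val_loss` and saves only when it improves. A minimal sketch of such a callback setup is below; the exact arguments (and the `patience` values in particular) are assumptions, not the notebook's actual cell:

```python
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau

# Save the model only when val_loss improves, matching the
# "val_loss improved from ... saving model to my_keras_model.h5" lines above.
checkpoint_cb = ModelCheckpoint("my_keras_model.h5", monitor="val_loss",
                                save_best_only=True, verbose=1)
# Stop training if val_loss stalls (patience value is an assumption).
early_stop_cb = EarlyStopping(monitor="val_loss", patience=8,
                              restore_best_weights=True)
# Shrink the learning rate on a plateau (factor/patience are assumptions).
reduce_lr_cb = ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=4)

# Passed to fit(), e.g.:
# model.fit(train_generator_iterator, epochs=40,
#           validation_data=validation_generator_iterator,
#           callbacks=[checkpoint_cb, early_stop_cb, reduce_lr_cb])
```

Because `save_best_only=True`, `my_keras_model.h5` always holds the weights from the best validation epoch, which is why loading it below gives the lowest-`val_loss` model rather than the last epoch's.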
In [29]:
model = load_model("my_keras_model.h5")
model.evaluate(test_generator_iterator)
32/32 [==============================] - 19s 595ms/step - loss: 0.2228 - accuracy: 0.9066
Out[29]:
[0.22282946109771729, 0.9065743684768677]
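The restored model reaches about 90.7% test accuracy. Since the network ends in a single sigmoid unit, `model.predict` returns one probability per image; mapping it back to a class name just needs a 0.5 threshold. The helper below is hypothetical, and it assumes the generator's `class_indices` were `{"cats": 0, "dogs": 1}` (the alphabetical order `flow_from_dataframe` would assign):

```python
import numpy as np

def decode_predictions(probs, threshold=0.5):
    # probs: sigmoid outputs from model.predict(), one value per image.
    # Index 0/1 mapping assumes class_indices == {"cats": 0, "dogs": 1}.
    labels = ["cats", "dogs"]
    return [labels[int(p >= threshold)] for p in np.ravel(probs)]

print(decode_predictions([0.12, 0.97]))  # ['cats', 'dogs']
```

Checking `train_generator_iterator.class_indices` in the notebook would confirm the actual label-to-index mapping before trusting the decoded names.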