Transfer learning in Keras typically means using a pre-trained model as the base and fine-tuning it on a new dataset. Below is a simple example showing how to implement transfer learning in Keras:
from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator
# Load VGG16 pre-trained on ImageNet, without its classification head
base_model = VGG16(weights='imagenet', include_top=False)
# Add a new classification head on top of the convolutional base
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
# num_classes is the number of classes in the new dataset (assumed defined)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
# Freeze the convolutional base so only the new head is trained
for layer in base_model.layers:
    layer.trainable = False
# Compile after setting trainable flags; learning_rate replaces the deprecated lr argument
model.compile(optimizer=SGD(learning_rate=1e-4, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
# Model.fit accepts generators directly; fit_generator is deprecated.
# train_generator and validation_generator are assumed to come from ImageDataGenerator.
model.fit(train_generator, steps_per_epoch=num_train_samples // batch_size, epochs=num_epochs, validation_data=validation_generator, validation_steps=num_val_samples // batch_size)
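The training and validation generators above are assumed to be produced by ImageDataGenerator. A minimal sketch using synthetic arrays in place of a real image directory (the array shapes, class count, and augmentation settings here are illustrative assumptions, not part of the original example):

```python
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Hypothetical synthetic data standing in for a real image dataset
num_classes = 3
x_train = np.random.randint(0, 256, (32, 224, 224, 3)).astype("float32")
y_train = np.eye(num_classes)[np.random.randint(0, num_classes, 32)]

# Augmentation pipeline; rescale normalises 0-255 pixel values to [0, 1]
train_datagen = ImageDataGenerator(rescale=1.0 / 255,
                                   horizontal_flip=True,
                                   rotation_range=10)
train_generator = train_datagen.flow(x_train, y_train, batch_size=8)

# Each draw yields one augmented batch of images and one-hot labels
batch_x, batch_y = next(train_generator)
```

With real data you would typically call flow_from_directory on a folder of class subdirectories instead of flow on in-memory arrays.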
During training you can unfreeze some layers of the base model as needed and fine-tune the model further. Finally, the trained model can be used to make predictions.
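The unfreeze-and-fine-tune step mentioned above can be sketched as follows. This is a self-contained illustration, not the definitive recipe: weights=None is used only to avoid downloading the ImageNet weights here, and the choice of num_classes and of unfreezing only VGG16's last convolutional block (block5) are assumptions.

```python
import numpy as np
from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.optimizers import SGD

num_classes = 10  # assumed; set to your dataset's class count

# Rebuild the model (weights=None here just to skip the ImageNet download)
base_model = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(1024, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

# Unfreeze only the block5 layers; earlier blocks stay frozen
for layer in base_model.layers:
    layer.trainable = layer.name.startswith('block5')

# Recompiling is required after changing trainable flags; use a lower
# learning rate for fine-tuning to avoid destroying pre-trained features
model.compile(optimizer=SGD(learning_rate=1e-5, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])

# Prediction on a single (random, illustrative) image
preds = model.predict(np.random.rand(1, 224, 224, 3).astype("float32"))
```

After recompiling, a further model.fit call on the same generators continues training with the block5 weights now updating.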