Two Ways to Train a Network in TensorFlow, Explained

There are two ways to train a network in TensorFlow: one iterates over batches yourself, the other hands the full data tensor (array) to model.fit().

The difference between the two:

  • In the first, you split the data into batches yourself to form an iterator, then loop over the iterator and train on each batch in turn.
  • In the second, you load all of the data into one tensor and call model.fit(), passing batch_size so that Keras splits the data into batches for you.

Method 1: training via an iterator

import tensorflow as tf
from tensorflow.keras import Input

# number of samples to keep from each split
IMAGE_SIZE = 1000

# step 1: load the dataset
(train_images, train_labels), (val_images, val_labels) = tf.keras.datasets.mnist.load_data()

# step 2: normalize the images to [0, 1]
train_images, val_images = train_images / 255.0, val_images / 255.0

# step 3: limit the dataset size
train_images = train_images[:IMAGE_SIZE]
val_images = val_images[:IMAGE_SIZE]
train_labels = train_labels[:IMAGE_SIZE]
val_labels = val_labels[:IMAGE_SIZE]

# step 4: add a channel dimension, giving shape (IMAGE_SIZE, 28, 28, 1)
train_images = tf.expand_dims(train_images, axis=3)
val_images = tf.expand_dims(val_images, axis=3)

# step 5: resize the images to (32, 32)
train_images = tf.image.resize(train_images, [32, 32])
val_images = tf.image.resize(val_images, [32, 32])

# step 6: wrap the data in batched iterators (the validation set is one big batch)
train_loader = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).batch(32)
val_loader = tf.data.Dataset.from_tensor_slices((val_images, val_labels)).batch(IMAGE_SIZE)
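The tf.data pipeline built above can optionally be tuned further with shuffling and prefetching. A minimal sketch, using random stand-in data (so it runs without a download) shaped like the article's resized images:

```python
import tensorflow as tf

# Stand-in data with the shapes the article produces after resizing.
images = tf.random.uniform([100, 32, 32, 1])
labels = tf.random.uniform([100], maxval=10, dtype=tf.int32)

# shuffle() randomizes sample order each epoch; prefetch() overlaps
# data preparation with training on the accelerator.
train_loader = (tf.data.Dataset.from_tensor_slices((images, labels))
                .shuffle(buffer_size=100)
                .batch(32)
                .prefetch(tf.data.AUTOTUNE))

for x, y in train_loader.take(1):
    print(x.shape, y.shape)  # (32, 32, 32, 1) (32,)
```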

# step 7: build the model (LeNet5 is the author's model class, defined elsewhere)
model = LeNet5()

# let the model know the input shape
model.build(input_shape=(1, 32, 32, 1))

# fixes the Output Shape column showing 'multiple' in model.summary()
model.call(Input(shape=(32, 32, 1)))
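The article never shows the LeNet5 class itself. A minimal subclassed sketch consistent with the (32, 32, 1) input and the from_logits=True loss used below; the layer sizes follow the classic LeNet-5 layout, but this is an assumption, not the author's exact model:

```python
import tensorflow as tf
from tensorflow.keras import layers

class LeNet5(tf.keras.Model):
    """Assumed LeNet-5-style stand-in for the article's model class."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.c1 = layers.Conv2D(6, 5, activation='sigmoid')   # 32x32 -> 28x28
        self.p1 = layers.AvgPool2D(2)                         # -> 14x14
        self.c2 = layers.Conv2D(16, 5, activation='sigmoid')  # -> 10x10
        self.p2 = layers.AvgPool2D(2)                         # -> 5x5
        self.flat = layers.Flatten()
        self.f1 = layers.Dense(120, activation='sigmoid')
        self.f2 = layers.Dense(84, activation='sigmoid')
        self.out = layers.Dense(num_classes)  # raw logits: pair with from_logits=True

    def call(self, x):
        x = self.p1(self.c1(x))
        x = self.p2(self.c2(x))
        return self.out(self.f2(self.f1(self.flat(x))))

model = LeNet5()
print(model(tf.zeros([1, 32, 32, 1])).shape)  # (1, 10)
```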

# step 8: compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
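SparseCategoricalCrossentropy(from_logits=True) expects raw, unnormalized scores and integer class labels; internally it applies softmax before the cross-entropy. A quick sanity check:

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

logits = tf.constant([[2.0, 0.5, 0.1]])  # raw scores, not probabilities
label = tf.constant([0])                 # integer class index, not one-hot

# Equivalent to softmax followed by categorical cross-entropy.
loss = loss_fn(label, logits)
probs = tf.nn.softmax(logits)
manual = -tf.math.log(probs[0, 0])
print(float(loss), float(manual))  # the two values match
```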

# path for saving the weights
checkpoint_path = "./weight/cp.ckpt"

# callback for saving weights
save_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                   save_best_only=True,
                                                   save_weights_only=True,
                                                   monitor='val_loss',
                                                   verbose=0)

EPOCHS = 11

for epoch in range(1, EPOCHS):  # runs epochs 1 through 10
    # mean training loss over the epoch
    train_epoch_loss_avg = tf.keras.metrics.Mean()
    # training accuracy over the epoch
    train_epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
    # mean validation loss over the epoch
    val_epoch_loss_avg = tf.keras.metrics.Mean()
    # validation accuracy over the epoch
    val_epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()

    for x, y in train_loader:
        history = model.fit(x,
                            y,
                            validation_data=val_loader,
                            callbacks=[save_callback],
                            verbose=0)

        # accumulate this batch's training loss into the running mean
        train_epoch_loss_avg.update_state(history.history['loss'][0])
        # accumulate this batch's training accuracy (training=False for inference)
        train_epoch_accuracy.update_state(y, model(x, training=False))

        val_epoch_loss_avg.update_state(history.history['val_loss'][0])
        # evaluate on the single validation batch
        val_x, val_y = next(iter(val_loader))
        val_epoch_accuracy.update_state(val_y, model(val_x, training=False))

    # .result() returns the metrics accumulated over the epoch
    print("Epoch {:d}: trainLoss: {:.3f}, trainAccuracy: {:.3%} valLoss: {:.3f}, valAccuracy: {:.3%}".format(epoch,
                                                                                                             train_epoch_loss_avg.result(),
                                                                                                             train_epoch_accuracy.result(),
                                                                                                             val_epoch_loss_avg.result(),
                                                                                                             val_epoch_accuracy.result()))
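Calling model.fit() once per batch, as above, re-runs Keras's epoch machinery and a full validation pass on every batch, which is slow. The more idiomatic way to write a manual loop is tf.GradientTape; a minimal sketch with stand-in data and a stand-in model (substitute the article's train_loader and LeNet5):

```python
import tensorflow as tf

# Stand-in data and model.
x = tf.random.uniform([64, 32, 32, 1])
y = tf.random.uniform([64], maxval=10, dtype=tf.int32)
train_loader = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),  # raw logits
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

epoch_loss = tf.keras.metrics.Mean()
for xb, yb in train_loader:
    # record the forward pass so gradients can be taken
    with tf.GradientTape() as tape:
        logits = model(xb, training=True)
        loss = loss_fn(yb, logits)
    # backpropagate and apply one optimizer step
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    epoch_loss.update_state(loss)

print(float(epoch_loss.result()))
```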

Method 2: batch training with model.fit()

import tensorflow as tf
from tensorflow.keras import Input

import model_sequential

# step 1: load the dataset
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

# step 2: normalize the images to [0, 1]
train_images, test_images = train_images / 255.0, test_images / 255.0

# step 3: add a channel dimension, giving shape (60000, 28, 28, 1)
train_images = tf.expand_dims(train_images, axis=3)
test_images = tf.expand_dims(test_images, axis=3)

# step 4: resize the images, giving shape (60000, 32, 32, 1)
train_images = tf.image.resize(train_images, [32, 32])
test_images = tf.image.resize(test_images, [32, 32])

# step 5: build the model
# model = LeNet5()
model = model_sequential.LeNet()

# let the model know the input shape
model.build(input_shape=(1, 32, 32, 1))
# model(tf.zeros([1, 32, 32, 1]))

# fixes the Output Shape column showing 'multiple' in model.summary()
model.call(Input(shape=(32, 32, 1)))
model.summary()

# step 6: compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# path for saving the weights
checkpoint_path = "./weight/cp.ckpt"

# callback for saving weights
save_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                   save_best_only=True,
                                                   save_weights_only=True,
                                                   monitor='val_loss',
                                                   verbose=1)
# step 7: train the model
history = model.fit(train_images,
                    train_labels,
                    epochs=10,
                    batch_size=32,
                    validation_data=(test_images, test_labels),
                    callbacks=[save_callback])
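Note that fit() returns a History object whose .history dict maps each compiled metric name to a list with one entry per epoch; this is why naming the model itself "history" invites confusion. A minimal sketch with a stand-in model and random data:

```python
import tensorflow as tf

# Stand-in data shaped like the article's resized MNIST images.
x = tf.random.uniform([64, 32, 32, 1])
y = tf.random.uniform([64], maxval=10, dtype=tf.int32)

model = tf.keras.Sequential([tf.keras.layers.Flatten(),
                             tf.keras.layers.Dense(10)])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(x, y, epochs=2, batch_size=32, verbose=0)

# One entry per epoch for each compiled metric.
print(sorted(history.history.keys()))  # ['accuracy', 'loss']
print(len(history.history['loss']))   # 2
```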

This concludes this article on the two ways of training a network in TensorFlow. For more on training networks with TensorFlow, search WalkonNet's earlier articles or keep browsing the related articles below, and please continue to support WalkonNet!
