Deep Learning: A TextRNN Implementation Example in TensorFlow 1.14
The model predicts the next word of a sentence.
RNN theory is left for the reader to look up elsewhere; this post only gives the implementation code for a simple example.
import tensorflow as tf
import numpy as np

tf.reset_default_graph()

sentences = ['i love damao', 'i like mengjun', 'we love all']

words = list(set(" ".join(sentences).split()))
word2idx = {v: k for k, v in enumerate(words)}
idx2word = {k: v for k, v in enumerate(words)}

V = len(words)   # vocabulary size
step = 2         # sequence length (number of input words)
hidden = 5       # hidden layer size
dim = 50         # word-embedding dimension

# Build inputs and labels
def make_batch(sentences):
    input_batch = []
    target_batch = []
    for sentence in sentences:
        words = sentence.split()
        input = [word2idx[word] for word in words[:-1]]
        target = word2idx[words[-1]]
        input_batch.append(input)
        # one-hot encode the label; it is used later by the cross-entropy loss
        target_batch.append(np.eye(V)[target])
    return input_batch, target_batch

# Initialize the word embeddings
embedding = tf.get_variable(shape=[V, dim], initializer=tf.random_normal_initializer(), name="embedding")

X = tf.placeholder(tf.int32, [None, step])
XX = tf.nn.embedding_lookup(embedding, X)
# float32, not int32: softmax_cross_entropy_with_logits_v2 expects float labels
Y = tf.placeholder(tf.float32, [None, V])

# Define the cell
cell = tf.nn.rnn_cell.BasicRNNCell(hidden)

# Outputs at every time step and the final hidden state
# outputs: [batch_size, step, hidden]   hiddens: [batch_size, hidden]
outputs, hiddens = tf.nn.dynamic_rnn(cell, XX, dtype=tf.float32)

# Here the state vectors of all time steps are fed to the classifier
# (you could also use only the last time step's state; see the sketch below)
W = tf.Variable(tf.random_normal([step * hidden, V]))
b = tf.Variable(tf.random_normal([V]))
L = tf.matmul(tf.reshape(outputs, [-1, step * hidden]), W) + b

# Loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=L))
optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)

# Prediction
prediction = tf.argmax(L, 1)

# Initialize TF
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Feed the training data
input_batch, target_batch = make_batch(sentences)
for epoch in range(5000):
    _, loss = sess.run([optimizer, cost], feed_dict={X: input_batch, Y: target_batch})
    if (epoch + 1) % 1000 == 0:
        print("epoch: ", '%04d' % (epoch + 1), 'cost= ', '%04f' % (loss))

# Predict on the training data
predict = sess.run([prediction], feed_dict={X: input_batch})
print([sentence.split()[:2] for sentence in sentences], '->', [idx2word[n] for n in predict[0]])
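As the comment in the code notes, you can alternatively feed only the last time step's state vector into the classifier instead of concatenating all time steps. A minimal sketch of that variant, reusing the graph definitions above (W2, b2, and logits_last are illustrative names introduced here, not part of the original code):

# Variant: classify from the final hidden state only
W2 = tf.Variable(tf.random_normal([hidden, V]))   # [hidden, V] instead of [step*hidden, V]
b2 = tf.Variable(tf.random_normal([V]))
# hiddens is the final state returned by tf.nn.dynamic_rnn, shape [batch_size, hidden]
logits_last = tf.matmul(hiddens, W2) + b2
cost2 = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=logits_last))

Both formulations learn this toy task; the last-state version has fewer classifier parameters because its weight matrix does not scale with step.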
Printed output:
epoch: 1000 cost= 0.008979
epoch: 2000 cost= 0.002754
epoch: 3000 cost= 0.001283
epoch: 4000 cost= 0.000697
epoch: 5000 cost= 0.000406
[['i', 'love'], ['i', 'like'], ['we', 'love']] -> ['damao', 'mengjun', 'all']
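Once trained, the same graph can also be run on a new two-word prompt. A small usage sketch (the prompt "we like" is an example of mine; any two words from the training vocabulary work):

test_input = [[word2idx[w] for w in 'we like'.split()]]  # shape [1, step]
pred = sess.run(prediction, feed_dict={X: test_input})
print('we like ->', idx2word[pred[0]])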
This concludes the detailed walkthrough of the TextRNN TensorFlow 1.14 implementation example. For more material on TextRNN and TensorFlow for deep learning, please follow the other related articles on WalkonNet!