Tensorflow: why is the zip() function used in a step involving applying gradients?

I am working through assignment 6 of the Udacity Deep Learning course. I am not sure why the zip() function is used in the steps that apply gradients.
Here is the relevant code:
import tensorflow as tf

# Define the loss function: project the concatenated LSTM outputs
# through the softmax weights and compare against the labels.
logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b)
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits, tf.concat(0, train_labels)))

# Optimizer.
global_step = tf.Variable(0)
# staircase=True means the learning rate decays at discrete time steps
learning_rate = tf.train.exponential_decay(10.0, global_step, 5000, 0.1, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
# Separate the result of compute_gradients, clip the gradients,
# then pair them back up before applying them.
gradients, v = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
optimizer = optimizer.apply_gradients(zip(gradients, v), global_step=global_step)
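To check my understanding, I tried rewriting the clipping step without zip(), using plain list comprehensions. This is my own rewrite, not part of the assignment, so the names grads_and_vars, variables, clipped, and pairs are mine:

grads_and_vars = optimizer.compute_gradients(loss)
# Pull the gradients and the variables out of the (gradient, variable) pairs
gradients = [g for g, _ in grads_and_vars]
variables = [var for _, var in grads_and_vars]
clipped, _ = tf.clip_by_global_norm(gradients, 1.25)
# Pair each clipped gradient back up with its variable by index
pairs = [(clipped[i], variables[i]) for i in range(len(variables))]
optimizer = optimizer.apply_gradients(pairs, global_step=global_step)

This seems to behave the same, which makes the zip() version look like shorthand for the unpairing and re-pairing above.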
What is the purpose of the zip() function here?
Why are gradients and v stored this way? I thought zip(*iterable) just returned a zip object.
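For reference, here is a minimal pure-Python sketch of what I think zip(*iterable) does with a list of pairs (toy placeholder values of my own, no TensorFlow involved):

# Stand-in for the list of (gradient, variable) pairs
# that compute_gradients would return
grads_and_vars = [('grad_w', 'w'), ('grad_b', 'b')]

# zip(*...) "transposes" the list: one tuple of all the gradients,
# one tuple of all the variables
gradients, v = zip(*grads_and_vars)
print(gradients)  # ('grad_w', 'grad_b')
print(v)          # ('w', 'b')

# Zipping them again restores the (gradient, variable) pairing
print(list(zip(gradients, v)))  # [('grad_w', 'w'), ('grad_b', 'b')]

So my confusion is why the code unpacks the pairs only to zip them back together again before apply_gradients.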