
I have two distinct groups of summaries: one collected once per batch and one collected once per epoch. How can I use merge_all_summaries(key='???') to collect each group separately? Doing it manually is always an option, but it seems like there should be a better way. How do I use multiple summary collections in TensorFlow?

An illustration of how I think it should work:

    # once per batch
    tf.scalar_summary("loss", graph.loss)
    tf.scalar_summary("batch_acc", batch_accuracy)

    # once per epoch
    gradients = tf.gradients(graph.loss, [W, D])
    tf.histogram_summary("embedding/W", W, collections='per_epoch')
    tf.histogram_summary("embedding/D", D, collections='per_epoch')

    tf.merge_all_summaries()                 # -> (MergeSummary...) :)
    tf.merge_all_summaries(key='per_epoch')  # -> None              :(
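For comparison, the manual merge mentioned above would look roughly like this with the same pre-1.0 API (a sketch; the variable names are just illustrative):

    # Keep references to the individual summary ops and merge them explicitly.
    loss_summary = tf.scalar_summary("loss", graph.loss)
    acc_summary = tf.scalar_summary("batch_acc", batch_accuracy)
    w_summary = tf.histogram_summary("embedding/W", W)
    d_summary = tf.histogram_summary("embedding/D", D)

    per_batch_summaries = tf.merge_summary([loss_summary, acc_summary])
    per_epoch_summaries = tf.merge_summary([w_summary, d_summary])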

I found this question first, but I was looking for two summary groups that aren't special in any way. For a slightly different use case, the approach in https://stackoverflow.com/questions/42418029/unable-to-use-summary-merge-in-tensorboard-for-separate-training-and-evaluation is a bit simpler: you can simply use the names of the summaries. – Maikefer
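One way to read that name-based approach with the post-1.0 API is to pull the summary ops out of the default collection and merge them by op name (a sketch; the filter strings below are assumptions, not taken from the linked answer):

    # Group the default-collection summaries into two merged ops by op name.
    all_summaries = tf.get_collection(tf.GraphKeys.SUMMARIES)
    per_batch_summaries = tf.summary.merge(
        [s for s in all_summaries if "loss" in s.op.name or "batch" in s.op.name])
    per_epoch_summaries = tf.summary.merge(
        [s for s in all_summaries if "embedding" in s.op.name])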

Answer


Problem solved. The collections argument of the summary ops should be a list. Solution:

    # once per batch
    tf.scalar_summary("loss", graph.loss)
    tf.scalar_summary("batch_acc", batch_accuracy)

    # once per epoch
    tf.histogram_summary("embedding/W", W, collections=['per_epoch'])
    tf.histogram_summary("embedding/D", D, collections=['per_epoch'])

    tf.merge_all_summaries()                 # -> (MergeSummary...) :)
    tf.merge_all_summaries(key='per_epoch')  # -> (MergeSummary...) :)

Edit: the syntax has changed in newer versions of TF:

    # once per batch
    tf.summary.scalar("loss", graph.loss)
    tf.summary.scalar("batch_acc", batch_accuracy)

    # once per epoch
    tf.summary.histogram("embedding/W", W, collections=['per_epoch'])
    tf.summary.histogram("embedding/D", D, collections=['per_epoch'])

    tf.summary.merge_all()                 # -> (MergeSummary...) :)
    tf.summary.merge_all(key='per_epoch')  # -> (MergeSummary...) :)
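
To show how the two merged ops are actually used during training, here is a minimal, self-contained sketch with the TF 1.x graph API. The tiny model, the log directory, and the loop sizes are placeholders, not part of the original answer:

    import tensorflow as tf

    # Hypothetical stand-ins for the real graph (graph.loss, W, D, ...).
    x = tf.placeholder(tf.float32, shape=[None], name="x")
    W = tf.Variable(tf.random_normal([10]), name="W")
    loss = tf.reduce_mean(tf.square(x)) + tf.reduce_mean(W)

    tf.summary.scalar("loss", loss)                                    # default collection
    tf.summary.histogram("embedding/W", W, collections=['per_epoch'])  # custom collection

    per_batch_summaries = tf.summary.merge_all()                 # only the default set
    per_epoch_summaries = tf.summary.merge_all(key='per_epoch')  # only the 'per_epoch' set

    writer = tf.summary.FileWriter("/tmp/summaries", tf.get_default_graph())

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        step = 0
        for epoch in range(3):
            for batch in range(5):  # dummy training loop
                summ = sess.run(per_batch_summaries, feed_dict={x: [float(batch)]})
                writer.add_summary(summ, step)    # written every batch
                step += 1
            summ = sess.run(per_epoch_summaries)  # W's histogram needs no feed
            writer.add_summary(summ, step)        # written once per epoch
    writer.close()

Both summary sets then land in the same log directory for TensorBoard, one updated at batch resolution and the other at epoch resolution.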