
I am trying to get TensorBoard to visualize the network graph, but no graph shows up. Below is a simple CNN for MNIST classification; the code comes from the TensorBoard tutorial.

Code:

import os 
import tensorflow as tf 
import urllib 

GIST_URL = 'https://gist.githubusercontent.com/dandelionmane/4f02ab8f1451e276fea1f165a20336f1/raw/dfb8ee95b010480d56a73f324aca480b3820c180' 
LOGDIR = '/tmp/mnist_tutorial/' 

### MNIST EMBEDDINGS ### 
mnist = tf.contrib.learn.datasets.mnist.read_data_sets(train_dir=LOGDIR + 'data', one_hot=True) 

# Define a simple convolutional layer 
def conv_layer(input, channels_in, channels_out): 
    w = tf.Variable(tf.zeros([5, 5, channels_in, channels_out])) 
    b = tf.Variable(tf.zeros([channels_out])) 
    conv = tf.nn.conv2d(input, w, strides=[1, 1, 1, 1], padding="SAME") 
    act = tf.nn.relu(conv + b) 
    return act 

def fc_layer(input, channels_in, channels_out): 
    w = tf.Variable(tf.zeros([channels_in, channels_out])) 
    b = tf.Variable(tf.zeros([channels_out])) 
    act = tf.nn.relu(tf.matmul(input, w) + b) 
    return act 

def make_hparam_string(learning_rate, use_two_fc, use_two_conv): 
    conv_param = "conv=2" if use_two_conv else "conv=1" 
    fc_param = "fc=2" if use_two_fc else "fc=1" 
    return "lr_%.0E,%s,%s" % (learning_rate, conv_param, fc_param) 

# Setup placeholders, and reshape the data 
x = tf.placeholder(tf.float32, shape=[None, 784]) 
y = tf.placeholder(tf.float32, shape=[None, 10]) 
x_image = tf.reshape(x, [-1, 28, 28, 1]) 
# Create the network 
conv1 = conv_layer(x_image, 1, 32) 
pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME") 
conv2 = conv_layer(pool1, 32, 64) 
pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME") 
flattened = tf.reshape(pool2, [-1, 7 * 7 * 64]) 
fc1 = fc_layer(flattened, 7 * 7 * 64, 1024) 
logits = fc_layer(fc1, 1024, 10) 



# Compute cross entropy as our loss function 
cross_entropy = tf.reduce_mean(
     tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) 
# Use an AdamOptimizer to train the network 
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) 
# compute the accuracy 
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) 
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 

sess = tf.Session() 
# Initialize all the variables 
sess.run(tf.global_variables_initializer()) 
hparam = make_hparam_string(0.1, True, True) 

writer = tf.summary.FileWriter(LOGDIR+hparam) 
writer.add_graph(sess.graph) 
# Train for 20 steps 
for i in range(20): 
    batch = mnist.train.next_batch(100) 
    # Occasionally report accuracy 
    if i % 5 == 0: 
        [train_accuracy] = sess.run([accuracy], feed_dict={x: batch[0], y: batch[1]}) 
        print("step %d, training accuracy %g" % (i, train_accuracy)) 
    # Run the training step 
    sess.run(train_step, feed_dict={x: batch[0], y: batch[1]}) 
writer.close() 

The graph simply is not there. Why? I did close the writer, too (as mentioned in the related post "there is no graph with tensorboard"). I don't know what I am missing.

Tensorboard:

$ tree mnist_tutorial/ 
mnist_tutorial/ 
├── data 
│   ├── t10k-images-idx3-ubyte.gz 
│   ├── t10k-labels-idx1-ubyte.gz 
│   ├── train-images-idx3-ubyte.gz 
│   └── train-labels-idx1-ubyte.gz 
└── lr_1E-01,conv=2,fc=2 
    └── events.out.tfevents.1503327291.neon-2.local 

2 directories, 5 files 

What should the TensorBoard LOGDIR be? I assume it is lr_1E-01,conv=2,fc=2, since that directory contains the events file and is what was passed to FileWriter.


I just copy-pasted and ran your code, started TensorBoard afterwards, and it simply worked. Could the mistake be that you started TensorBoard with --logdir pointing at the wrong directory? It should point at /tmp/mnist_tutorial, not at the hparam subdirectory you pass to the writer at construction time. –
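
For example (using the LOGDIR from the question), TensorBoard would be launched against the parent directory so it can discover every run subdirectory underneath it:

$ tensorboard --logdir /tmp/mnist_tutorial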


@amo-ej1 I am confused about why that is. Shouldn't I be using 'writer = tf.summary.FileWriter(LOGDIR + hparam)'? What should I pass to FileWriter then? –


I point it at /tmp/mnist_tutorial. If you then have several runs, you can put them in different subdirectories (this is discussed here: https://stackoverflow.com/questions/36182380/how-do-display-different-runs-in-tensorboard) –
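
A minimal sketch of that multi-run layout (assuming TF 1.x; the learning rates here are made-up illustration values, reusing the run-name scheme from the question):

import tensorflow as tf 

LOGDIR = '/tmp/mnist_tutorial/' 

# Stand-in graph; in the real code this would be the CNN built above. 
graph = tf.Graph() 
with graph.as_default(): 
    tf.constant(0, name='dummy') 

# One FileWriter per run, each writing into its own subdirectory under 
# LOGDIR; pointing --logdir at LOGDIR then lists the runs side by side. 
for lr in [1e-1, 1e-4]: 
    run_name = 'lr_%.0E,conv=2,fc=2' % lr 
    writer = tf.summary.FileWriter(LOGDIR + run_name, graph=graph) 
    writer.close()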

Answer


Are you using the Windows build of TensorFlow? Try the following code:

tf.train.write_graph(sess.graph_def, LOGDIR+hparam, 'graph.pb', False)
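
If in doubt whether anything was written, a quick sanity check (a sketch assuming TF 1.x and the LOGDIR/hparam values from the question) is to parse the file back into a GraphDef:

import tensorflow as tf 

path = LOGDIR + hparam + '/graph.pb' 
# Read in binary mode, since as_text=False above writes a binary protobuf. 
with tf.gfile.GFile(path, 'rb') as f: 
    graph_def = tf.GraphDef() 
    graph_def.ParseFromString(f.read()) 
print('%d nodes in the serialized graph' % len(graph_def.node))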