Similar to the Caffe framework, where you can watch the learned filters during CNN training and see them convolved with input images, I wonder whether the same can be done with TensorFlow. How can I visualize the learned filters in TensorFlow?
A Caffe example can be viewed at this link:
http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb
Thanks for your help!
To see just a few conv1 filters in TensorBoard, you can use this code (it is for cifar10):
# this should be a part of the inference(images) function in cifar10.py file
# conv1
with tf.variable_scope('conv1') as scope:
    kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64],
                                         stddev=1e-4, wd=0.0)
    conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
    biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope.name)
    _activation_summary(conv1)

with tf.variable_scope('visualization'):
    # scale weights to [0, 1]; type is still float
    x_min = tf.reduce_min(kernel)
    x_max = tf.reduce_max(kernel)
    kernel_0_to_1 = (kernel - x_min) / (x_max - x_min)

    # to tf.image_summary format [batch_size, height, width, channels]
    kernel_transposed = tf.transpose(kernel_0_to_1, [3, 0, 1, 2])

    # this will display a random 3 of the 64 filters in conv1
    # (in TensorFlow >= 1.0 this op is tf.summary.image(..., max_outputs=3))
    tf.image_summary('conv1/filters', kernel_transposed, max_images=3)
I also wrote a simple gist to display all 64 conv1 filters in a grid.
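The scaling, transpose, and grid layout above can be sketched framework-free with NumPy (the `kernel` array and the `filter_grid` helper here are hypothetical stand-ins, not part of the cifar10 code):

```python
import numpy as np

# hypothetical conv1 kernel: [height, width, in_channels, out_channels]
kernel = np.random.rand(5, 5, 3, 64).astype(np.float32)

# scale weights to [0, 1] for display
x_min, x_max = kernel.min(), kernel.max()
kernel_0_to_1 = (kernel - x_min) / (x_max - x_min)

# move the output-channel axis to the front: [64, 5, 5, 3],
# i.e. one small RGB "image" per filter
kernel_transposed = np.transpose(kernel_0_to_1, (3, 0, 1, 2))

def filter_grid(filters, rows, cols, pad=1):
    """Tile [n, h, w, c] filters into one image with `pad` pixels between them."""
    n, h, w, c = filters.shape
    grid = np.ones((rows * (h + pad) - pad, cols * (w + pad) - pad, c),
                   dtype=filters.dtype)
    for i in range(min(n, rows * cols)):
        r, col = divmod(i, cols)
        grid[r * (h + pad):r * (h + pad) + h,
             col * (w + pad):col * (w + pad) + w] = filters[i]
    return grid

# all 64 filters in an 8x8 grid: 8 * (5 + 1) - 1 = 47 pixels per side
grid = filter_grid(kernel_transposed, rows=8, cols=8)
print(grid.shape)  # (47, 47, 3)
```

The resulting `grid` array can be saved with any image library, or fed (with a leading batch dimension) to the image summary op as a single picture instead of 64 separate ones.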
See also [How to visualize the learned filters in TensorFlow?](http://stackoverflow.com/q/39361943/562769) –
[How can I visualize the weights (variables) in a CNN in TensorFlow?](http://stackoverflow.com/questions/33783672/how-can-i-visualize-the-weightsvariables-in-cnn-in-tensorflow) –
You can use the [tensorflow debugger](https://github.com/ericjang/tdb) tool – fabrizioM