2017-08-04

I am trying to create my own Deep Dream algorithm with the code below, but the Deep Dream code does not produce recognisable patterns:

import tensorflow as tf 
import matplotlib.pyplot as plt 
import numpy as np 
import inception 

img = np.random.rand(1,500,500,3) 
net = inception.get_inception_model() 
tf.import_graph_def(net['graph_def'], name='inception') 
graph = tf.get_default_graph() 
sess = tf.Session() 
layer = graph.get_tensor_by_name('inception/mixed5b_pool_reduce_pre_relu:0') 
gradient = tf.gradients(tf.reduce_mean(layer), graph.get_tensor_by_name('inception/input:0')) 
softmax = sess.graph.get_tensor_by_name('inception/softmax2:0') 
iters = 100 
init = tf.global_variables_initializer() 

sess.run(init) 
for i in range(iters): 
    prediction = sess.run(softmax, \ 
          {'inception/input:0': img}) 
    grad = sess.run(gradient[0], \ 
          {'inception/input:0': img}) 
    grad = (grad-np.mean(grad))/np.std(grad) 
    img = grad 
    plt.imshow(img[0]) 
    plt.savefig('output/'+str(i+1)+'.png') 
    plt.close('all') 

However, even after running the loop for 100 iterations, the resulting picture still looks random (I have attached an image of it to this question). Can someone help me optimise my code?

Answer


Getting Deep Dream to work with the Inception network is a little fiddly. In the CADL course whose helper libraries you are borrowing, the instructor chose to use VGG16 as the tutorial network. If you use that instead, and make a few small modifications to your code, you should get something that works (it will sort of work if you swap the Inception network back in, but the results will look more disappointing):

import tensorflow as tf 
import matplotlib.pyplot as plt 
import numpy as np 
import vgg16 as vgg 

# Note reduced range of image, your noise function was drowning 
# out the few textures that you were getting 
img = np.random.rand(1,500,500,3) * 0.1 + 0.45 
net = vgg.get_vgg_model() 
tf.import_graph_def(net['graph_def'], name='vgg') 
graph = tf.get_default_graph() 
sess = tf.Session() 
layer = graph.get_tensor_by_name('vgg/pool4:0') 
gradient = tf.gradients(tf.reduce_mean(layer), 
    graph.get_tensor_by_name('vgg/images:0')) 

# You don't need to define or use the softmax layer - TensorFlow 
# is smart enough to resolve the computation graph for gradients 
# without explicitly running the whole network forward first 
iters = 100 
# You don't need to init the network variables, everything you need 
# is set by the import, plus the placeholder. 

for i in range(iters): 
    grad = sess.run(gradient[0], {'vgg/images:0': img}) 

    # You can use all sorts of normalisation, this one is from CADL 
    grad /= (np.max(np.abs(grad))+1e-7) 

    # You forgot to use += here, and it is best to use a 
    # step size even after gradient normalisation 
    img += 0.25 * grad 
    # Re-normalise the image, to prevent over-saturation 
    img = 0.98 * (img - 0.5) + 0.5 
    img = np.clip(img, 0.0, 1.0) 
    plt.imshow(img[0]) 
    plt.savefig('output/'+str(i+1)+'.png') 
    plt.close('all') 
    print(i) 

Doing all of this gets images that are clearly working, but they still need some refinement:

[Image: Deep Dream output after 100 iterations]

To get the better, full-colour images of the kind you have probably seen online, you need many more variations. For example, you could re-normalise or slightly blur the image between each iteration.
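As a concrete sketch of that suggestion (assuming scipy is available; the answer does not name a library, and `smooth_step` is a hypothetical helper, not part of the code above), one iteration with a light Gaussian blur folded in could look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_step(img, grad, step_size=0.25, blur_sigma=0.5):
    """One Deep Dream update with a light blur between iterations.

    img, grad: arrays of shape (1, H, W, 3). The blur is applied only
    over the spatial axes (sigma 0 on the batch and channel axes).
    """
    # Same gradient normalisation and step as in the answer's loop
    grad = grad / (np.max(np.abs(grad)) + 1e-7)
    img = img + step_size * grad
    # Slight blur between iterations smooths out high-frequency noise
    img = gaussian_filter(img, sigma=(0, blur_sigma, blur_sigma, 0))
    # Re-normalise towards grey to prevent over-saturation, then clip
    img = 0.98 * (img - 0.5) + 0.5
    return np.clip(img, 0.0, 1.0)

img = np.random.rand(1, 64, 64, 3) * 0.1 + 0.45
grad = np.random.randn(1, 64, 64, 3)
out = smooth_step(img, grad)
```

In the loop above, `grad` would come from `sess.run(gradient[0], ...)` as before; only the update step changes.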

If you want to get more sophisticated, you could try the TensorFlow Jupyter notebook walk-through, although it is somewhat harder to understand from first principles because it combines several ideas.
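One of the ideas combined in that notebook is multi-scale ("octave") processing: running the gradient updates from coarse to fine image scales. A minimal sketch of the scheme (assuming scipy; `dream_step` here is a stand-in for one real gradient-ascent update such as the loop body above, and `run_octaves` is a hypothetical name):

```python
import numpy as np
from scipy.ndimage import zoom

def dream_step(img):
    # Placeholder for one gradient-ascent update at the current scale;
    # in real code this would call sess.run(gradient[0], ...) as above.
    return np.clip(img + 0.01, 0.0, 1.0)

def run_octaves(img, n_octaves=3, octave_scale=1.4, steps_per_octave=5):
    """Process the image at several scales, coarsest first."""
    _, h, w, _ = img.shape
    for octave in reversed(range(n_octaves)):
        factor = octave_scale ** octave
        # Downsample to the current octave's resolution
        small = zoom(img, (1, 1 / factor, 1 / factor, 1), order=1)
        for _ in range(steps_per_octave):
            small = dream_step(small)
        # Upsample back to the original resolution before the next octave
        sh, sw = small.shape[1], small.shape[2]
        img = zoom(small, (1, h / sh, w / sw, 1), order=1)
    return img

img = np.random.rand(1, 32, 32, 3) * 0.1 + 0.45
out = run_octaves(img)
```

The coarse octaves establish large-scale structure cheaply, and the finer octaves then add detail on top of it.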


BTW, yes, I took the CADL course, and made this video using Deep Dream: https://www.youtube.com/watch?v=RD9uc2u557w - but in practice this is possible without unpicking Google's Deep Dream code into all of the individual steps it needs. –


Thank you, this works really well. PS: you are very skilled. –