
Answers

5

First, you need to have the network architecture in memory. You can get the network architecture from here.

Once you have that program with you, use the model as follows:

import tensorflow as tf
from inception_resnet_v2 import inception_resnet_v2, inception_resnet_v2_arg_scope

slim = tf.contrib.slim

height = 299
width = 299
channels = 3

X = tf.placeholder(tf.float32, shape=[None, height, width, channels])
with slim.arg_scope(inception_resnet_v2_arg_scope()):
    logits, end_points = inception_resnet_v2(X, num_classes=1001, is_training=False)

With this you have the whole network in memory. Now you can initialize the network from the checkpoint file (.ckpt) using tf.train.Saver:

saver = tf.train.Saver() 
sess = tf.Session() 
saver.restore(sess, "/home/pramod/Downloads/inception_resnet_v2_2016_08_30.ckpt") 

If you want to do bottleneck feature extraction, it is simple: say you want the features from the last layer, then you just declare predictions = end_points["Logits"]. If you want some other intermediate layer, you can get its name from the inception_resnet_v2.py program above.

After that you can call: output = sess.run(predictions, feed_dict={X: batch_images})
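
For example, here is a minimal sketch of pulling an intermediate endpoint instead of the logits. The endpoint name 'PreLogitsFlatten' is taken from inception_resnet_v2.py and may differ in your copy; batch_images is only a random placeholder batch:

import numpy as np

# Pick an intermediate endpoint; see inception_resnet_v2.py for the available names.
features = end_points["PreLogitsFlatten"]

# Placeholder input batch with the expected shape [batch, height, width, channels].
batch_images = np.random.rand(4, height, width, channels).astype(np.float32)
batch_features = sess.run(features, feed_dict={X: batch_images})
print(batch_features.shape)  # e.g. (4, 1536) for Inception-ResNet-v2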

+2

I think that along with the ckpt file you can also look for a .meta file for the network. The .meta file can be used to re-create the network using the tf.train.import_meta_graph() function, like this: saver = tf.train.import_meta_graph('The_model.meta'). –
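
(For reference, a minimal sketch of what that comment describes, assuming a .meta file exists next to the checkpoint; 'The_model.meta' and 'The_model.ckpt' are placeholder names:)

import tensorflow as tf

sess = tf.Session()
# Rebuild the graph from the .meta file, then load the weights from the checkpoint.
saver = tf.train.import_meta_graph('The_model.meta')
saver.restore(sess, 'The_model.ckpt')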

+0

Yes, that can be done. But I have not seen a .meta file for the above network. If you come across one, please comment with a link to it. –

+0

tf.trainable_variables() also lists the variables of the Inception model, even though those variables are not trainable. – Tulsi

1

No, you don't.

As for how to use the checkpoint file (.ckpt file):

1. This article (TensorFlow-Slim image classification library) tells you how to train a model from scratch.

2. The following is example code based on the google blog:

import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
import urllib2

from datasets import imagenet
from nets import inception
from preprocessing import inception_preprocessing

slim = tf.contrib.slim

batch_size = 3
image_size = inception.inception_v3.default_image_size

checkpoints_dir = '/root/code/model'
checkpoints_filename = 'inception_resnet_v2_2016_08_30.ckpt'
model_name = 'InceptionResnetV2'
sess = tf.InteractiveSession()

def classify_from_url(url):
    image_string = urllib2.urlopen(url).read()
    image = tf.image.decode_jpeg(image_string, channels=3)
    processed_image = inception_preprocessing.preprocess_image(image, image_size, image_size, is_training=False)
    processed_images = tf.expand_dims(processed_image, 0)

    # Create the model, use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_resnet_v2_arg_scope()):
        logits, _ = inception.inception_resnet_v2(processed_images, num_classes=1001, is_training=False)
    probabilities = tf.nn.softmax(logits)

    init_fn = slim.assign_from_checkpoint_fn(
        os.path.join(checkpoints_dir, checkpoints_filename),
        slim.get_model_variables(model_name))

    init_fn(sess)
    np_image, probabilities = sess.run([image, probabilities])
    probabilities = probabilities[0, 0:]
    sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x: x[1])]

    plt.figure()
    plt.imshow(np_image.astype(np.uint8))
    plt.axis('off')
    plt.show()

    names = imagenet.create_readable_names_for_imagenet_labels()
    for i in range(5):
        index = sorted_inds[i]
        print('Probability %0.2f%% => [%s]' % (probabilities[index] * 100, names[index]))
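
With the script above in place, a single call runs the whole pipeline; the URL below is only a placeholder for any publicly reachable JPEG:

# Example usage (placeholder URL).
classify_from_url('https://example.com/some_image.jpg')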
+0

An example of using the pre-trained model can also be found in the following ipython notebook: https://github.com/tensorflow/models/blob/master/slim/slim_walkthrough.ipynb –
