
Answers

The first step is to make sure your exported graph has a placeholder, and ops that can accept JPEG data. Note that CloudML assumes you are sending batches of images, so we have to use tf.map_fn to decode and resize each image in the batch. Depending on the model, additional preprocessing of the data may be required, e.g. normalization. This is shown below:

# Number of channels in the input image 
CHANNELS = 3 

# Dimensions of resized images (input to the neural net) 
HEIGHT = 200 
WIDTH = 200 

# A placeholder for a batch of images 
images_placeholder = tf.placeholder(dtype=tf.string, shape=(None,)) 

# The CloudML Prediction API always "feeds" the Tensorflow graph with 
# dynamic batch sizes e.g. (?,). decode_jpeg only processes scalar 
# strings because it cannot guarantee a batch of images would have 
# the same output size. We use tf.map_fn to give decode_jpeg a scalar 
# string from dynamic batches. 
def decode_and_resize(image_str_tensor): 
    """Decodes jpeg string, resizes it and returns a uint8 tensor.""" 

    image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS) 

    # Note resize expects a batch_size, but tf.map_fn suppresses that index, 
    # thus we have to expand then squeeze. Resize returns float32 in the 
    # range [0, uint8_max]. 
    image = tf.expand_dims(image, 0) 
    image = tf.image.resize_bilinear(
     image, [HEIGHT, WIDTH], align_corners=False) 
    image = tf.squeeze(image, axis=[0]) 
    image = tf.cast(image, dtype=tf.uint8) 
    return image 

decoded_images = tf.map_fn(
    decode_and_resize, images_placeholder, back_prop=False, dtype=tf.uint8) 

# convert_image_dtype, also scales [0, uint8_max] -> [0, 1). 
images = tf.image.convert_image_dtype(decoded_images, dtype=tf.float32) 

# Then shift images to [-1, 1) (useful for some models such as Inception) 
images = tf.subtract(images, 0.5) 
images = tf.multiply(images, 2.0) 

# ... 

Also, we need to make sure the inputs are labeled correctly; in this case, it is essential that the name of the input (the key in the map) end in _bytes. When base64-encoded data is sent, that is how the CloudML prediction service knows it needs to decode the data:

inputs = {"image_bytes": images_placeholder.name} 
tf.add_to_collection("inputs", json.dumps(inputs)) 

The gcloud command expects the data to be in a format like this:

{"image_bytes": {"b64": "dGVzdAo="}} 

(Note that if image_bytes is the only input to your model, this can be simplified to {"b64": "dGVzdAo="}.)
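As a sanity check, that JSON body can be produced with Python's standard library; note that dGVzdAo= is simply the base64 encoding of the bytes "test\n". This is a minimal sketch, where make_instance is a hypothetical helper name (the image_bytes key comes from the export step above):

```python
import base64
import json

def make_instance(image_data):
    """Build one CloudML prediction instance from raw JPEG bytes."""
    encoded = base64.b64encode(image_data).decode("ascii")
    # The {"b64": ...} wrapper tells the service to base64-decode the value.
    return {"image_bytes": {"b64": encoded}}

# "test\n" encodes to the example value shown above.
print(json.dumps(make_instance(b"test\n")))
# {"image_bytes": {"b64": "dGVzdAo="}}
```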

For example, to create this from a file on disk, you could try something like:

echo "{\"image_bytes\": {\"b64\": \"`base64 image.jpg`\"}}" > instances 
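If you'd rather avoid shell quoting, a minimal Python sketch that does the same thing as the echo command above (the image.jpg and instances paths mirror the shell example; write_instances is a hypothetical helper name):

```python
import base64
import json

def write_instances(image_path, out_path="instances"):
    """Write a one-line JSON instance file for gcloud, like the echo command."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    with open(out_path, "w") as f:
        json.dump({"image_bytes": {"b64": b64}}, f)
        f.write("\n")

# write_instances("image.jpg")
```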

Then send it to the service like so:

gcloud beta ml predict --instances=instances --model=my_model 

Note that when sending data to the service directly, the body of your request needs to be wrapped in an "instances" list. So the gcloud command above actually sends the following in the body of the HTTP request to the service:

{"instances" : [{"image_bytes": {"b64": "dGVzdAo="}}]} 
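If you are building that HTTP body yourself rather than going through gcloud, it can be constructed like this (a sketch; make_request_body is a hypothetical helper, and actually sending the request with an authorized HTTP client is omitted):

```python
import base64
import json

def make_request_body(jpeg_blobs):
    """Wrap base64-encoded images in the "instances" list the API expects."""
    instances = [
        {"image_bytes": {"b64": base64.b64encode(blob).decode("ascii")}}
        for blob in jpeg_blobs
    ]
    return json.dumps({"instances": instances})

print(make_request_body([b"test\n"]))
# {"instances": [{"image_bytes": {"b64": "dGVzdAo="}}]}
```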

Thanks for your answer! Maybe I don't understand what I have to do. In fact, when I send a request, it returns: Error: 'Prediction failed:'. I wrote up my question [here](http://stackoverflow.com/questions/41261701/how-make-correct-predictions-of-jpeg-image-in-cloud-ml) –


Just to pile onto the previous answer...

Google published a blog post about an image recognition task, along with some related code, which directly addresses your problem and where you may find more. It includes an images_to_json.py file to help build the JSON request.
