
Minimal working example of a TensorFlow Serving client

I am working through the basic TensorFlow Serving example. I followed the MNIST example, except that instead of classification I want to use one numpy array to predict another numpy array.

To do this I first trained my neural network:

x = tf.placeholder("float", [None, n_input],name ="input_values") 

weights = { 
    'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])), 
    'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 
    'encoder_h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3])), 
    'decoder_h1': tf.Variable(tf.random_normal([n_hidden_3, n_hidden_2])), 
    'decoder_h2': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])), 
    'decoder_h3': tf.Variable(tf.random_normal([n_hidden_1, n_input])), 
} 
biases = { 
    'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])), 
    'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])), 
    'encoder_b3': tf.Variable(tf.random_normal([n_hidden_3])), 
    'decoder_b1': tf.Variable(tf.random_normal([n_hidden_2])), 
    'decoder_b2': tf.Variable(tf.random_normal([n_hidden_1])), 
    'decoder_b3': tf.Variable(tf.random_normal([n_input])), 
} 

# Building the encoder 
def encoder(x): 
    # Encoder hidden layer #1 with tanh activation 
    layer_1 = tf.nn.tanh(tf.matmul(x, weights['encoder_h1'])+biases['encoder_b1']) 
    print(layer_1.shape) 
    # Encoder hidden layer #2 with tanh activation 
    layer_2 = tf.nn.tanh(tf.matmul(layer_1, weights['encoder_h2'])+biases['encoder_b2']) 
    print(layer_2.shape) 
    # Encoder hidden layer #3 with tanh activation 
    layer_3 = tf.nn.tanh(tf.matmul(layer_2, weights['encoder_h3'])+biases['encoder_b3']) 
    print(layer_3.shape) 
    return layer_3 


# Building the decoder 
def decoder(x): 
    # Decoder hidden layer #1 with tanh activation 
    layer_1 = tf.nn.tanh(tf.matmul(x, weights['decoder_h1'])+biases['decoder_b1']) 
    print(layer_1.shape) 
    # Decoder hidden layer #2 with tanh activation 
    layer_2 = tf.nn.tanh(tf.matmul(layer_1, weights['decoder_h2'])+biases['decoder_b2']) 
    # Decoder output layer #3 with tanh activation 
    layer_3 = tf.nn.tanh(tf.matmul(layer_2, weights['decoder_h3'])+biases['decoder_b3']) 
    return layer_3 

# Construct model 
encoder_op = encoder(x) 
decoder_op = decoder(encoder_op) 

# Prediction 
y = decoder_op 



# Objective functions 
y_ = tf.placeholder("float", [None,n_input],name="predict") 
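The training loop itself isn't shown in the question; a minimal sketch of the missing piece might look like the following, where learning_rate, training_epochs, batch_xs, and model_path are stand-ins for values defined elsewhere in the training script:

# Minimal autoencoder training sketch: reconstruct the input, so the 
# target fed to y_ is the same batch fed to x 
cost = tf.reduce_mean(tf.pow(y_ - y, 2)) 
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost) 
init = tf.global_variables_initializer() 
saver = tf.train.Saver() 

with tf.Session() as sess: 
    sess.run(init) 
    for epoch in range(training_epochs): 
        _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y_: batch_xs}) 
    saver.save(sess, model_path)  # checkpoint restored by the export code below 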

Next, it was suggested that I save my network like so:

import os 
import sys 

from tensorflow.python.saved_model import builder as saved_model_builder 
from tensorflow.python.saved_model import utils 
from tensorflow.python.saved_model import tag_constants, signature_constants 
from tensorflow.python.saved_model.signature_def_utils_impl import  build_signature_def, predict_signature_def 
from tensorflow.contrib.session_bundle import exporter 

with tf.Session() as sess: 
    # Initialize variables 
    sess.run(init) 

    # Restore model weights from previously saved model 
    saver.restore(sess, model_path) 
    print("Model restored from file: %s" % save_path) 

    export_path = '/tmp/AE_model/6' 
    print('Exporting trained model to', export_path) 
    builder = tf.saved_model.builder.SavedModelBuilder(export_path) 


    signature = predict_signature_def(inputs={'inputs': x}, 
            outputs={'outputs': y}) 

    builder.add_meta_graph_and_variables(sess=sess, 
             tags=[tag_constants.SERVING], 
             signature_def_map={'predict': signature}) 

    builder.save() 


    print('Done exporting!') 
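As a sanity check before serving, the export can be loaded back in a fresh session to confirm the signature was written correctly. This is just a sketch, assuming the export path above:

import tensorflow as tf 

with tf.Session(graph=tf.Graph()) as sess: 
    # Load the SavedModel exported above and look up the 'predict' signature 
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], '/tmp/AE_model/6') 
    signature = meta_graph.signature_def['predict'] 
    print(signature.inputs['inputs'].name)   # e.g. "input_values:0" 
    print(signature.outputs['outputs'].name) 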

Next I followed the instructions to build and run my server on localhost:9000:

bazel build //tensorflow_serving/model_servers:tensorflow_model_server 

Then I started the server:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_base_path=/tmp/AE_model/ 

The problem

Now I would like to write a program so that I can pass Mat vectors from a C++ program in Eclipse (I use a LOT of libraries) to my server, so that I can make some kind of prediction.

I first thought about using inception_client.cc as a reference. However, it seems that I need Bazel to compile it, because I cannot find prediction_service.grpc.pb.h anywhere :(

So it seems that my only other option is to call a script using Python, from which I get the following output:

<grpc.beta._client_adaptations._Rendezvous object at 0x7f9bcf8cb850> 

Any help with this issue would be appreciated.

Thanks!

EDIT:

I reinstalled protobuf and grpc and ran the commands as suggested.

My command was slightly different and I had to use it outside of my serving folder (in Ubuntu 14.04):

sudo protoc -I=serving -I serving/tensorflow --grpc_out=. --plugin=protoc-gen-grpc=`which grpc_cpp_plugin` serving/tensorflow_serving/apis/*.proto 

This generated the .grpc.pb.h files. I dragged them to the /apis/ folder and the errors went down. Now I get the error:

/tensorflow/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:1:42: fatal error: unsupported/Eigen/CXX11/Tensor: No such file or directory 

This happens even though the file does exist. Any suggestions are appreciated.

Thanks @subzero!

EDIT 2

I was able to fix the problem with Eigen by updating to the newest version of Eigen and building it from source. Next I pointed to /usr/local/include/eigen3/.

Afterwards I had issues with the TensorFlow libraries. I resolved these by generating the libtensorflow_cc.so library, following lababidi's suggestion: https://github.com/tensorflow/tensorflow/issues/2412

I have one last issue. Everything seems to be fine, except I get the error:

undefined reference to `tensorflow::serving::PredictRequest::~PredictRequest()'

It seems that I am missing either a linker flag or a library. Does anyone know what I am missing?

+0

I'm running into the same problem as in EDIT 2. Did you ever find a solution? – Matt2048

+0

Hey, no I didn't :( I had to switch to TensorFlow C++ –

+0

I gave up on this and used a custom server and client instead – Matt2048

Answers

1

An example of a custom client and server:

Server code to add to the TensorFlow model:

import grpc 
from concurrent import futures 
import time 
import python_pb2 
import python_pb2_grpc 

class PythonServicer(python_pb2_grpc.PythonServicer): 

    def makePredictions(self, request, context): 
        # Receives the input values for the model as a string and evaluates 
        # them into an array to be passed to tensorflow 
        items = eval(str(request.items)) 

        x_feed = items 

        # "confidences" is the output of my model; replace it with the 
        # appropriate tensor from your model 
        targetEval_out = sess.run(confidences, feed_dict={x: x_feed}) 

        # The model output is put into string format to be passed back to 
        # the client. It has to be reformatted on the other end, but this 
        # method was easier to implement 
        out = str(targetEval_out.tolist()) 

        return python_pb2.value(name=out) 


print("server online") 
# Can be raised to allow a larger amount of data per message, which helps 
# when making large numbers of predictions at once 
MAX_MESSAGE_LENGTH = 4 * 1024 * 1024 
server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=10), 
    options=[('grpc.max_send_message_length', MAX_MESSAGE_LENGTH), 
             ('grpc.max_receive_message_length', MAX_MESSAGE_LENGTH)]) 
python_pb2_grpc.add_PythonServicer_to_server(PythonServicer(), server) 
server.add_insecure_port('[::]:50051') 
server.start() 
# server.start() does not block, so keep the process alive 
try: 
    while True: 
        time.sleep(86400) 
except KeyboardInterrupt: 
    server.stop(0) 
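The python_pb2 and python_pb2_grpc modules imported above are generated from the python.proto file shown further down. One way to generate them, assuming the grpcio-tools package is installed, is:

from grpc_tools import protoc 

# Writes python_pb2.py and python_pb2_grpc.py next to python.proto 
protoc.main([ 
    'grpc_tools.protoc', 
    '-I.',                  # directory containing python.proto 
    '--python_out=.', 
    '--grpc_python_out=.', 
    'python.proto', 
]) 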

Client C++ code:

#include <grpc/grpc.h> 
#include <grpc++/channel.h> 
#include <grpc++/client_context.h> 
#include <grpc++/create_channel.h> 
#include <grpc++/security/credentials.h> 
#include "python.grpc.pb.h" 

using grpc::Channel; 
using grpc::ClientContext; 
using grpc::ClientReader; 
using grpc::ClientReaderWriter; 
using grpc::ClientWriter; 
using grpc::Status; 
using python::request; 
using python::value; 
using python::Python; 

using namespace std; 


unsigned MAX_MESSAGE_LENGTH = 4 * 1024 * 1024; //can be edited to allow for larger amount of data to be transmitted per message. This can be helpful for making large numbers of predictions at once. 
grpc::ChannelArguments channel_args; 
channel_args.SetMaxReceiveMessageSize(MAX_MESSAGE_LENGTH); 
channel_args.SetMaxSendMessageSize(MAX_MESSAGE_LENGTH); 

shared_ptr<Channel> channel = CreateCustomChannel("localhost:50051", grpc::InsecureChannelCredentials(),channel_args); 
unique_ptr<python::Python::Stub>stub = python::Python::NewStub(channel); 

request r; 
r.set_items(dataInputString); //The input data should be a string that can be parsed to a python array, for example "[[1.0,2.0,3.0],[4.0,5.0,6.0]]" 
//The server code was made to be able to make multiple predictions at once, hence the multiple data arrays 
value val; 
ClientContext context; 

Status status = stub->makePredictions(&context, r, &val); 

cout << val.name() << "\n"; //This prints the returned model prediction 

The python.proto code:

syntax = "proto3"; 


package python; 

service Python { 

    rpc makePredictions(request) returns (value) {} 


} 

message request { 
    string items = 1; 
} 


message value { 
    string name = 1; 
} 

I'm not sure whether these code snippets will work on their own, since I just copied the relevant code from my current project. But hopefully this will be a good starting point for anyone who needs a TensorFlow client and server.
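For completeness, a Python counterpart to the C++ client above might look like the following sketch; it uses the same generated stubs and the same string format that the server eval()s back into an array:

import grpc 
import python_pb2 
import python_pb2_grpc 

MAX_MESSAGE_LENGTH = 4 * 1024 * 1024 
channel = grpc.insecure_channel('localhost:50051', options=[ 
    ('grpc.max_send_message_length', MAX_MESSAGE_LENGTH), 
    ('grpc.max_receive_message_length', MAX_MESSAGE_LENGTH)]) 
stub = python_pb2_grpc.PythonStub(channel) 

# Same string format the server expects to parse 
response = stub.makePredictions(python_pb2.request(items='[[1.0,2.0,3.0]]')) 
print(response.name)  # the model prediction, as a string 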

0

The pb.h file you are looking for is generated by running protoc on this file.

You can follow the instructions here to generate the header files and use them yourself. In any case, the Bazel build you ran should have generated that file somewhere in your build directory; you can set up your Eclipse project to use those include paths to compile your C++ client.

+0

Thanks for the quick response. I tried your suggestion and here is where I am at: running protoc -I=serving -I serving/tensorflow --grpc_out=. --plugin=protoc-gen-grpc=grpc_cpp_plugin serving/tensorflow_serving/apis/*.proto gives me the error: grpc_cpp_plugin: program not found or is not executable, --grpc_out: protoc-gen-grpc: Plugin failed with status code 1. Also, it seems that the original bazel build generates .pb.h files and not .grpc.pb.h files, so I cannot use those –

+0

You have to point protoc at the gRPC plugin, as in --plugin=protoc-gen-grpc=`which grpc_cpp_plugin`. That assumes it is on your $PATH. – subzero

+0

Hello, thanks for the suggestion. That worked, but now it seems I'm stuck in dependency purgatory. When I try to compile, I get complaints about Eigen and other libraries. I was able to generate some of them from the .proto files, but I haven't gotten rid of the Eigen complaints. –
