
I am trying to build a simple example that uses an LSTM RNN in TensorFlow to predict the time-series values of a target series, given a known input time series. How do I control the output dimensions of an LSTM cell in TensorFlow?

Link to example problem

What I would like to do is shown here:

what I try to accomplish formally

In essence, I think the output of cell A and the following matrix multiplication should act like this:

X = np.zeros([40, 2, 1])                 # shape (40, 2, 1)
A = np.zeros([40, 1, 2])                 # shape (40, 1, 2)
b = np.arange(0, 2).astype(np.float64)   # cast so the add below matches X and A's float64 dtype

X = tf.convert_to_tensor(X)
A = tf.convert_to_tensor(A)
b = tf.convert_to_tensor(b)

Y = tf.matmul(X, A) + b                  # batched matmul: (40, 2, 1) x (40, 1, 2) -> (40, 2, 2)
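
For reference, a quick shape check of that block (a minimal sketch, assuming TensorFlow 1.x, where static shapes can be read without running a session):

print(X.get_shape())   # (40, 2, 1)
print(A.get_shape())   # (40, 1, 2)
print(Y.get_shape())   # (40, 2, 2): each (2, 1) x (1, 2) block yields (2, 2), and b broadcasts over the last axis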

The TensorFlow code below is set up only to inspect the output sizes; it is not a functional tf.Graph/session:

import numpy as np 
import tkinter 
import matplotlib.pyplot as plt 
import tensorflow as tf 
n=40 
x = np.linspace(0,10,n) 
y1 = np.sin(x) 
y2 = np.cos(x) 

x1=np.random.normal(0,y1**2,n) 
x2=np.random.normal(0,y2**2,n) 

y1=(y1**2>0.4)*1 
y2=(y2**2>0.4)*1 

ys = np.vstack((y1,y2)) 
xs = np.vstack((x1,x2)) 

def plot_results_multiple(xs, ys):
    fig = plt.figure(facecolor='white')
    ax = fig.add_subplot(111)
    for i, data in enumerate(xs):
        plt.plot(data, label='x' + str(i))
        plt.legend()
    for i, data in enumerate(ys):
        plt.plot(data, label='y' + str(i))
        plt.legend()
    plt.show()

plot_results_multiple(xs,ys) 

xs = xs.T 
ys = ys.T 

print("Shape of arrays " +str(xs.shape) + " " +str(ys.shape)) 


batch_size = 1 
lstm_size = 1 
nseries = 2 
time_steps = 40 
nclasses = 2 

lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size,state_is_tuple=True) 
stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm] * 2, state_is_tuple=True) 

state = lstm.zero_state(batch_size, tf.float32) 
inputs = tf.unstack(xs, num=40, axis=0) 

outputs = [] 

with tf.variable_scope("RNN"): 
    for timestep in range(time_steps): 
     if timestep > 0: tf.get_variable_scope().reuse_variables() 
     output, state = lstm(tf.cast(tf.reshape(inputs[timestep],[1,nseries]),tf.float32), state) 
     print(tf.convert_to_tensor(output).get_shape()) 
     outputs.append(output) 

print(tf.convert_to_tensor(outputs).get_shape()) 
output = tf.reshape(tf.concat(outputs, 1), [-1, lstm_size]) 
softmax_w = tf.get_variable(
    "softmax_w", [time_steps, 1, nclasses], dtype=tf.float32)
print(softmax_w.get_shape()) 
softmax_b = tf.get_variable("softmax_b", [nseries], dtype=tf.float32) 
print(softmax_b.get_shape()) 
logits = tf.matmul(output, softmax_w) + softmax_b 

print(logits.get_shape()) 

I think the problem I am having is figuring out how to modify the RNN LSTM cell, since it currently outputs a 1x1 tensor from a 2x1 input, whereas I am expecting a 2x1 output. Any kind of help is appreciated.
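
For comparison, a common pattern is to keep the cell output at lstm_size and project each timestep with a [lstm_size, nclasses] weight matrix, which also makes the final matmul well-formed. This is only a sketch built on the variables above (outputs, lstm_size, nclasses); the names proj_w and proj_b are illustrative, not part of the question:

proj_w = tf.get_variable("proj_w", [lstm_size, nclasses], dtype=tf.float32)  # hypothetical projection weights
proj_b = tf.get_variable("proj_b", [nclasses], dtype=tf.float32)             # hypothetical projection bias

flat = tf.reshape(tf.concat(outputs, 1), [-1, lstm_size])  # (40, 1): one row per timestep
proj_logits = tf.matmul(flat, proj_w) + proj_b             # (40, 2): nclasses values per timestep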


Looks like it is controlled by the number of hidden neurons, which is the first argument of tf.contrib.rnn.BasicLSTMCell(lstm_size, state_is_tuple=True) – derek
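
A minimal sketch of that point (assuming TensorFlow 1.x, where tf.contrib.rnn is available): the first argument, num_units, sets the width of each per-timestep output, regardless of the input width.

import tensorflow as tf

cell = tf.contrib.rnn.BasicLSTMCell(2, state_is_tuple=True)  # num_units = 2
state = cell.zero_state(1, tf.float32)                       # batch_size = 1
x_t = tf.zeros([1, 2], tf.float32)                           # one [batch, nseries] input step
out_t, state = cell(x_t, state)
print(out_t.get_shape())                                     # (1, 2): output width follows num_units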

Answer