
TensorFlow ValueError: Variable already exists, disallowed

I am using TensorFlow to predict financial time series over different time periods. To split the input data, I create subsamples and run them through a for loop. However, I get the following ValueError:

ValueError: Variable rnn/basic_lstm_cell/weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

Without the subsampling this code works well. Below is my code.

    import tensorflow as tf
    import numpy as np
    import matplotlib
    import os
    import matplotlib.pyplot as plt

    class lstm:
        def __init__(self, x, y):
            # train Parameters
            self.seq_length = 50
            self.data_dim = x.shape[1]
            self.hidden_dim = self.data_dim * 2
            self.output_dim = 1
            self.learning_rate = 0.0001
            self.iterations = 5  # originally 500

        def model(self, x, y):
            # build a dataset
            dataX = []
            dataY = []
            for i in range(0, len(y) - self.seq_length):
                _x = x[i:i + self.seq_length]
                _y = y[i + self.seq_length]
                dataX.append(_x)
                dataY.append(_y)

            train_size = int(len(dataY) * 0.7977)
            test_size = len(dataY) - train_size
            trainX, testX = np.array(dataX[0:train_size]), np.array(dataX[train_size:len(dataX)])
            trainY, testY = np.array(dataY[0:train_size]), np.array(dataY[train_size:len(dataY)])
            print(train_size, test_size)

            # input placeholders
            X = tf.placeholder(tf.float32, [None, self.seq_length, self.data_dim])
            Y = tf.placeholder(tf.float32, [None, 1])

            # build a LSTM network
            cell = tf.contrib.rnn.BasicLSTMCell(num_units=self.hidden_dim, state_is_tuple=True, activation=tf.tanh)
            outputs, _states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
            self.Y_pred = tf.contrib.layers.fully_connected(outputs[:, -1], self.output_dim, activation_fn=None)
            # We use the last cell's output

            # cost/loss
            loss = tf.reduce_sum(tf.square(self.Y_pred - Y))  # sum of the squares
            # optimizer
            optimizer = tf.train.AdamOptimizer(self.learning_rate)
            train = optimizer.minimize(loss)

            # RMSE
            targets = tf.placeholder(tf.float32, [None, 1])
            predictions = tf.placeholder(tf.float32, [None, 1])
            rmse = tf.sqrt(tf.reduce_mean(tf.square(targets - predictions)))

            # training
            with tf.Session() as sess:
                init = tf.global_variables_initializer()
                sess.run(init)

                # Training step
                for i in range(self.iterations):
                    _, step_loss = sess.run([train, loss], feed_dict={X: trainX, Y: trainY})

                # prediction
                train_predict = sess.run(self.Y_pred, feed_dict={X: trainX})
                test_predict = sess.run(self.Y_pred, feed_dict={X: testX})

            return train_predict, test_predict

    # variables definition
    tsx = []
    tsy = []
    tsr = []
    trp = []
    tep = []

    x = np.loadtxt('data.csv', delimiter=',')  # data for analysis
    y = x[:, [-1]]
    z = np.loadtxt('rb.csv', delimiter=',')  # data for time series
    z1 = z[:, 0]  # start cell
    z2 = z[:, 1]  # end cell

    for i in range(1):  # need to change to len(z)
        globals()['x_%s' % i] = x[int(z1[i]):int(z2[i]), :]  # definition of x
        tsx.append(globals()["x_%s" % i])

        globals()['y_%s' % i] = y[int(z1[i]) + 1:int(z2[i]) + 1, :]  # definition of y
        tsy.append(globals()["y_%s" % i])

        globals()['a_%s' % i] = lstm(tsx[i], tsy[i])  # definition of class

        globals()['trp_%s' % i], globals()['tep_%s' % i] = globals()["a_%s" % i].model(tsx[i], tsy[i])
        trp.append(globals()["trp_%s" % i])
        tep.append(globals()["tep_%s" % i])

Answer


Every time you call the model method, you build the computation graph of your LSTM. The second time the model method is called, TensorFlow notices that it has already created variables with the same names. If the reuse flag of the scope in which those variables are created is set to False, a ValueError is raised.
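As a rough illustration (not the poster's exact code; the placeholder shape and cell size are made up), the same error can be reproduced by building the RNN twice in the same default variable scope with the TF 1.x contrib API used in the question:

    import tensorflow as tf

    # Hypothetical input: 50 time steps x 10 features, arbitrary values.
    inputs = tf.placeholder(tf.float32, [None, 50, 10])

    # First call creates the variable rnn/basic_lstm_cell/weights.
    cell1 = tf.contrib.rnn.BasicLSTMCell(num_units=20, state_is_tuple=True, activation=tf.tanh)
    outputs1, _ = tf.nn.dynamic_rnn(cell1, inputs, dtype=tf.float32)

    # Second call tries to create the same variable again while reuse is False,
    # which raises:
    # ValueError: Variable rnn/basic_lstm_cell/weights already exists, disallowed.
    cell2 = tf.contrib.rnn.BasicLSTMCell(num_units=20, state_is_tuple=True, activation=tf.tanh)
    outputs2, _ = tf.nn.dynamic_rnn(cell2, inputs, dtype=tf.float32)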

To solve this, you have to set the reuse flag to True by calling tf.get_variable_scope().reuse_variables() at the end of your loop.

Note that you cannot add this at the beginning of the loop, because then you would be trying to reuse variables that have not yet been created.
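A minimal sketch of where the call could go in the driver loop from the question (x, y, z, z1, z2, tsx, tsy, trp, tep and the lstm class are the names from the code above; this is a sketch of the suggested placement, not a verified rewrite):

    for i in range(len(z)):
        tsx.append(x[int(z1[i]):int(z2[i]), :])
        tsy.append(y[int(z1[i]) + 1:int(z2[i]) + 1, :])

        a = lstm(tsx[i], tsy[i])
        train_predict, test_predict = a.model(tsx[i], tsy[i])
        trp.append(train_predict)
        tep.append(test_predict)

        # After the first iteration has created the LSTM and dense-layer
        # variables, mark the current variable scope as reusable so that
        # later iterations reuse them instead of trying to recreate them.
        tf.get_variable_scope().reuse_variables()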

You can find more information in the TensorFlow documentation here.


Thank you, GeertH. Your suggestion fixed the error I mentioned. However, I still get another error, shown below. ValueError: Variable fully_connected_1/weights/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
