2016-09-15

Theano - SGD with one hidden layer

I am using the logistic regression example from Newmu's tutorial on his GitHub. I want to add a hidden layer to his model, so I split the weight variable into two arrays, h_w and o_w. The problem is that when I try to build an update, the operation is not possible on a list (w = [h_w, o_w]):

"File "C:/Users/Dis/PycharmProjects/untitled/MNISTnet.py", 
line 32, in <module> 
    **update = [[w, w - gradient * 0.05]] TypeError: can't multiply sequence by non-int of type 'float'**" 

I am a beginner with theano and numpy, and the theano documentation couldn't help me. I found the stack() function, but combining the weights with w = T.stack([h_w, o_w], axis=1) gives me this error in theano:

Traceback (most recent call last): 
    File "C:\Users\Dis\PycharmProjects\untitled\MNISTnet.py", line 35, in <module> 
    gradient = T.grad(cost=cost, wrt=w) 
    File "C:\Program Files\Anaconda2\lib\site-packages\theano-0.9.0.dev1-py2.7.egg\theano\gradient.py", line 533, in grad 
    handle_disconnected(elem) 
    File "C:\Program Files\Anaconda2\lib\site-packages\theano-0.9.0.dev1-py2.7.egg\theano\gradient.py", line 520, in handle_disconnected 
    raise DisconnectedInputError(message) 
theano.gradient.DisconnectedInputError: 
Backtrace when that variable is created: 

    File "C:\Users\Dis\PycharmProjects\untitled\MNISTnet.py", line 30, in <module> 
    w = T.stack([h_w, o_w], axis=1) 

So, my question: how can I convert this list [<TensorType(float64, matrix)>, <TensorType(float64, matrix)>] into a single variable <TensorType(float64, matrix)>?

My full code is below:

import theano 
from theano import tensor as T 
import numpy as np 
from load import mnist 

def floatX(X): 
    return np.asarray(X, dtype=theano.config.floatX) 

def init_weights(shape): 
    return theano.shared(floatX(np.random.randn(*shape) * 0.01)) 

def model(X, o_w, h_w): 
    hid = T.nnet.sigmoid(T.dot(X, h_w)) 
    out = T.nnet.softmax(T.dot(hid, o_w)) 
    return out 

trX, teX, trY, teY = mnist(onehot=True) 

X = T.fmatrix() 
Y = T.fmatrix() 

h_w = init_weights((784, 625)) 
o_w = init_weights((625, 10)) 

py_x = model(X, o_w, h_w) 
y_pred = T.argmax(py_x, axis=1) 
w = [o_w, h_w] 

cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y)) 
gradient = T.grad(cost=cost, wrt=w) 
print type(gradient) 
update = [[w, w - gradient * 0.05]] 

Answer


T.grad(..) returns the gradient w.r.t. each parameter, so you can't do [w, w - gradient * 0.05]; you have to specify which parameter's gradient you are referring to. Also, it is not a good idea to use stack for multiple parameters; a simple list is enough, check this tutorial. This should work:

import theano 
from theano import tensor as T 
import numpy as np 
from load import mnist 

def floatX(X): 
    return np.asarray(X, dtype=theano.config.floatX) 

def init_weights(shape): 
    return theano.shared(floatX(np.random.randn(*shape) * 0.01)) 

def model(X, o_w, h_w): 
    hid = T.nnet.sigmoid(T.dot(X, h_w)) 
    out = T.nnet.softmax(T.dot(hid, o_w)) 
    return out 

trX, teX, trY, teY = mnist(onehot=True) 

X = T.fmatrix() 
Y = T.fmatrix() 

h_w = init_weights((784, 625)) 
o_w = init_weights((625, 10)) 

py_x = model(X, o_w, h_w) 
y_pred = T.argmax(py_x, axis=1) 
w = [o_w, h_w] 

cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y)) 
gradient = T.grad(cost=cost, wrt=w) 
print type(gradient) 
update = [[o_w, o_w - gradient[0] * 0.05], 
      [h_w, h_w - gradient[1] * 0.05]] 
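Since T.grad with a list wrt returns one gradient per parameter in the same order, the update list can also be built with zip instead of indexing by hand. This sketch demonstrates the same per-parameter SGD step with NumPy arrays standing in for the Theano shared variables:

```python
import numpy as np

# Stand-ins for the parameter list w = [o_w, h_w] and the matching
# gradient list that T.grad(cost, wrt=w) would return.
params = [np.array([1.0, 2.0]), np.array([3.0])]
grads = [np.array([0.2, 0.4]), np.array([1.0])]

# Pair each parameter with its own gradient; zip preserves the order,
# so no manual gradient[0] / gradient[1] indexing is needed.
update = [(p, p - g * 0.05) for p, g in zip(params, grads)]

new_params = [new for _, new in update]
```

The same pattern, update = [(p, p - g * 0.05) for p, g in zip(w, gradient)], keeps working unchanged if you later add more layers to the parameter list.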

I recommend going through the Theano tutorials to get started.