Technical info: Theano GRU RNN Adam optimization
OS: Mac OS X 10.9.5
IDE: Eclipse Mars.1 Release (4.5.1), with PyDev and the Anaconda interpreter (grammar version 3.4)
GPU: NVIDIA GeForce GT 650M
Libs: numpy, aeosa, sphinx-1.3.1, Theano 0.7, nltk-3.1
My background: I am very new to Theano and numpy, and I have not taken a formal course in machine learning or discrete mathematics.
The recurrent neural network for natural language processing that I am currently using is taken from here:
https://github.com/dennybritz/rnn-tutorial-gru-lstm/blob/master/gru_theano.py
The only change made to this file is replacing references to the hardcoded string 'float32' with theano.config.floatX.
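For example, a cast that was hardcoded before now picks up Theano's configured float type (a representative before/after, not an exact line from that file; word_dim and hidden_dim are placeholder names):

# before: hardcoded dtype
E = np.random.uniform(-0.1, 0.1, (word_dim, hidden_dim)).astype('float32')
# after: respects the dtype set via THEANO_FLAGS / .theanorc
E = np.random.uniform(-0.1, 0.1, (word_dim, hidden_dim)).astype(theano.config.floatX)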
I am also using the utils.py and train.py modules included in the repository, with only minor modifications.
The Adam optimization code I plan to implement in place of the example repository's SGD/rmsprop code is found here: 3210
It is reproduced here (again with references to theano.config.floatX replacing the hardcoded 'float32'):
import theano as th
from theano import shared as thsh
import theano.tensor as T
import numpy as np
def adam(loss, all_params, learning_rate=0.001, b1=0.9, b2=0.999, e=1e-8, gamma=1-1e-8):
    """
    ADAM update rules
    Default values are taken from [Kingma2014]

    References:
    [Kingma2014] Kingma, Diederik, and Jimmy Ba.
    "Adam: A Method for Stochastic Optimization."
    arXiv preprint arXiv:1412.6980 (2014).
    http://arxiv.org/pdf/1412.6980v4.pdf
    """
    updates = []
    all_grads = th.grad(loss, all_params)
    alpha = learning_rate
    t = thsh(np.float32(1))
    b1_t = b1 * gamma ** (t - 1)  # (Decay the first moment running average coefficient)

    for theta_previous, g in zip(all_params, all_grads):
        m_previous = thsh(np.zeros(theta_previous.get_value().shape).astype(th.config.floatX))
        v_previous = thsh(np.zeros(theta_previous.get_value().shape).astype(th.config.floatX))

        m = b1_t * m_previous + (1 - b1_t) * g   # (Update biased first moment estimate)
        v = b2 * v_previous + (1 - b2) * g ** 2  # (Update biased second raw moment estimate)
        m_hat = m / (1 - b1 ** t)                # (Compute bias-corrected first moment estimate)
        v_hat = v / (1 - b2 ** t)                # (Compute bias-corrected second raw moment estimate)
        theta = theta_previous - (alpha * m_hat) / (T.sqrt(v_hat) + e)  # (Update parameters)

        updates.append((m_previous, m))
        updates.append((v_previous, v))
        updates.append((theta_previous, theta))
    updates.append((t, t + 1.))  # t is shared across all parameters, so step it once per update
    return updates
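For reference, this is how I understand adam() is meant to be wired into a compiled step function (a minimal sketch under my own assumptions; cost, params, x, and y are placeholder names, not identifiers from the tutorial):

# `cost` is a scalar loss expression, `params` a list of theano shared
# variables, and x, y the symbolic inputs (all placeholder names)
updates = adam(cost, params, learning_rate=0.001)
train_step = th.function([x, y], [], updates=updates)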
My question is this:
How would you modify the GRUTheano module to use the Adam method above in place of the built-in SGD/rmsprop functionality?
It looks like the main changes would be to lines 99-126 of GRUTheano:
# SGD parameters
learning_rate = T.scalar('learning_rate')
decay = T.scalar('decay')

# rmsprop cache updates
mE = decay * self.mE + (1 - decay) * dE ** 2
mU = decay * self.mU + (1 - decay) * dU ** 2
mW = decay * self.mW + (1 - decay) * dW ** 2
mV = decay * self.mV + (1 - decay) * dV ** 2
mb = decay * self.mb + (1 - decay) * db ** 2
mc = decay * self.mc + (1 - decay) * dc ** 2

self.sgd_step = theano.function(
    [x, y, learning_rate, theano.Param(decay, default=0.9)],
    [],
    updates=[(E, E - learning_rate * dE / T.sqrt(mE + 1e-6)),
             (U, U - learning_rate * dU / T.sqrt(mU + 1e-6)),
             (W, W - learning_rate * dW / T.sqrt(mW + 1e-6)),
             (V, V - learning_rate * dV / T.sqrt(mV + 1e-6)),
             (b, b - learning_rate * db / T.sqrt(mb + 1e-6)),
             (c, c - learning_rate * dc / T.sqrt(mc + 1e-6)),
             (self.mE, mE),
             (self.mU, mU),
             (self.mW, mW),
             (self.mV, mV),
             (self.mb, mb),
             (self.mc, mc)
             ])
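In case it helps frame an answer, here is my first, untested guess at the replacement: drop the rmsprop cache machinery and let adam() build the whole update list over the six parameters. I'm assuming cost is the scalar cross-entropy expression built earlier in __theano_build__ (the tutorial calls it o_error; I may be misreading) and that adam() is importable in this module's scope:

# Untested sketch: adam() supplies every update, so the cache shared
# variables (self.mE ... self.mc) and the learning_rate/decay inputs
# to sgd_step should no longer be needed.
adam_updates = adam(cost, [E, U, W, V, b, c], learning_rate=0.001)
self.sgd_step = theano.function(
    [x, y],
    [],
    updates=adam_updates)

If that is roughly right, the call sites in train.py would also need to change, since sgd_step would no longer take learning_rate and decay arguments.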