Function approximator and Q-learning

I am trying to implement Q-learning with an action-value function approximator. I am using openai-gym with the 'MountainCar-v0' environment to test my algorithm. My problem is that it does not converge or find the goal.
Basically, the approximator works as follows: you feed in 2 features, position and velocity, together with one of the 3 actions in one-hot encoding: 0 -> [1,0,0], 1 -> [0,1,0] and 2 -> [0,0,1]. For a particular action, the output is the action-value approximation Q_approx(s,a).
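For concreteness, here is a minimal sketch of that per-action lookup (my own illustration; the `q_values` helper is a hypothetical name and assumes a Keras `model` taking the 5-dimensional state-plus-one-hot input, as in the code below):

```python
import numpy as np

# Hypothetical helper: evaluate Q_approx(s, a) for every action by
# concatenating the 2 state features with each one-hot action vector
# and running one forward pass per action.
def q_values(model, state):
    one_hot = np.eye(3)  # rows are [1,0,0], [0,1,0], [0,0,1]
    qa = np.zeros(3)
    for a in range(3):
        features = np.concatenate([state, one_hot[a]]).reshape(1, 5)
        qa[a] = model.predict(features)  # Q_approx(s, a)
    return qa

# The greedy action is the argmax over the 3 separate forward passes:
# a = np.argmax(q_values(model, state))
```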
I know that, conventionally, the input is just the state (2 features) and the output layer contains 1 output per action. The big difference I see is that I run the feed-forward pass 3 times (once per action) and take the max, whereas in the standard implementation you run it once and take the max over the outputs.
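For comparison, a sketch of that conventional architecture (again only an illustration, not my actual code; `standard_model` is a hypothetical name, with layer sizes mirroring the network below):

```python
from keras.models import Sequential
from keras.layers import Dense

# Conventional DQN-style network: the state goes in, and the output
# layer produces one Q-value per action.
standard_model = Sequential()
standard_model.add(Dense(20, activation="relu", input_dim=2))  # 2 state features
standard_model.add(Dense(10, activation="relu"))
standard_model.add(Dense(3))  # Q(s,0), Q(s,1), Q(s,2)
standard_model.compile(optimizer="rmsprop", loss="mse")

# A single forward pass yields all three action values at once:
# qa = standard_model.predict(state.reshape(1, 2))[0]
# a = int(np.argmax(qa))
```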
Maybe my implementation is just completely wrong and I am thinking about it the wrong way. Pasting the code here is a bit messy, but I am just experimenting with it:
```python
import gym
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

env = gym.make('MountainCar-v0')

# The mean reward over 20 episodes
mean_rewards = np.zeros(20)
# Feature holder: 2 state features + 3 one-hot action entries
features = np.zeros(5)
# Q_a value holder, one entry per action
qa_vals = np.zeros(3)

one_hot = {
    0: np.asarray([1, 0, 0]),
    1: np.asarray([0, 1, 0]),
    2: np.asarray([0, 0, 1])
}

model = Sequential()
model.add(Dense(20, activation="relu", input_dim=5))
model.add(Dense(10, activation="relu"))
model.add(Dense(1))
model.compile(optimizer='rmsprop',
              loss='mse',
              metrics=['accuracy'])

epsilon_greedy = 0.1
discount = 0.9
batch_size = 16

# Experience replay containing features and target
experience = np.ones((10 * 300, 5 + 1))
fill_index = 0        # next write position in the ring buffer
filled_once = False   # True once the buffer has wrapped around at least once

# Ring buffer insert
def add_exp(features, target, index):
    global filled_once
    if index >= experience.shape[0]:
        index = 0
        filled_once = True
    experience[index, 0:5] = features
    experience[index, 5] = target
    index += 1
    return index

for e in range(0, 100000):
    obs = env.reset()
    old_obs = None
    new_obs = obs
    rewards = 0
    loss = 0
    for i in range(0, 300):
        if old_obs is not None:
            # Find max_a' Q(s_(t+1), a'): one forward pass per action
            features[0:2] = new_obs
            for j, pa in enumerate([0, 1, 2]):
                features[2:5] = one_hot[pa]
                qa_vals[j] = model.predict(features.reshape(-1, 5))
            rewards += reward
            target = reward + discount * np.max(qa_vals)
            # Store the transition (s_t, a_t) together with its TD target
            features[0:2] = old_obs
            features[2:5] = one_hot[a]
            fill_index = add_exp(features, target, fill_index)
            # Pick the next action epsilon-greedily
            if np.random.random() < epsilon_greedy:
                a = env.action_space.sample()
            else:
                a = np.argmax(qa_vals)
        else:
            a = env.action_space.sample()
        obs, reward, done, info = env.step(a)
        old_obs = new_obs
        new_obs = obs
        if done:
            break
        # Train on a random minibatch once the buffer has been filled
        if filled_once:
            samples_ids = np.random.choice(experience.shape[0], batch_size)
            loss += model.train_on_batch(experience[samples_ids, 0:5],
                                         experience[samples_ids, 5].reshape(-1))[0]
    mean_rewards[e % 20] = rewards
    print("e = {} and loss = {}".format(e, loss))
    if e % 50 == 0:
        print("e = {} and mean = {}".format(e, mean_rewards.mean()))
```
Thanks in advance!
I have heard of using the actions as input features before, but I have not heard of it working well. I think you are better off doing this the conventional way and treating the actions as outputs. Mathematically the two networks will be very different. – Andnp
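In case it helps, a common way to train that action-as-output variant is to copy the network's own predictions as the target vector and overwrite only the entry for the action actually taken, so the other outputs receive zero error. A minimal sketch, assuming the hypothetical `standard_model` from above (`train_step` is likewise a made-up helper name):

```python
import numpy as np

def train_step(standard_model, state, action, reward, next_state, done, discount=0.9):
    # Bootstrap target: r + gamma * max_a' Q(s', a'), or just r on terminal steps.
    q_next = standard_model.predict(next_state.reshape(1, 2))[0]
    target = standard_model.predict(state.reshape(1, 2))  # shape (1, 3)
    target[0, action] = reward if done else reward + discount * np.max(q_next)
    # Only the taken action's output differs from the prediction,
    # so only that Q-value is pushed toward the TD target.
    return standard_model.train_on_batch(state.reshape(1, 2), target)
```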