2017-06-19

What I am trying to do here: using a tennis match statistics data set as input, I built a neural network model that predicts the match result (1 or 0). The problem: MXNet classification in Python always gives the same prediction.

Following the official mxnet documentation, I wrote the program below. I have tried different parameter configurations such as batch_size, the hidden unit sizes, act_type, and learning_rate, but no matter what modification I try, the accuracy always stays around 0.5 and the model always predicts everything as 1 or everything as 0.
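A constant prediction like this can also come from a skewed label distribution, so checking the class balance before blaming the network is worthwhile. A minimal sketch, using synthetic stand-in labels since dm_lbl.csv is not inlined here:

```python
import numpy as np

# Synthetic stand-in labels; the real ones come from dm_lbl.csv.
train_lbl = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])

# A model that always outputs one class often points to a skewed label
# distribution (or to features on very different scales), so inspect the
# per-class fractions first.
classes, counts = np.unique(train_lbl, return_counts=True)
fractions = counts.astype(float) / counts.sum()
print(dict(zip(classes.tolist(), fractions.tolist())))
```

If one class dominates, predicting it constantly already achieves high accuracy, and a plain softmax loss will happily converge there.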

import numpy as np 
from sklearn.preprocessing import normalize 
import mxnet as mx 
import logging 
import warnings 
warnings.filterwarnings("ignore", category=DeprecationWarning) 

logging.basicConfig(level=logging.DEBUG, format='%(asctime)-15s %(message)s') 

batch_size = 100 
train_data = np.loadtxt("dm.csv",delimiter=",") 
train_data = normalize(train_data, norm='l1', axis=0) 
train_lbl = np.loadtxt("dm_lbl.csv",delimiter=",") 
eval_data = np.loadtxt("dw.csv",delimiter=",") 
eval_data = normalize(eval_data, norm='l1', axis=0) 
eval_lbl = np.loadtxt("dw_lbl.csv",delimiter=",") 


train_iter = mx.io.NDArrayIter(train_data, train_lbl, batch_size=batch_size, shuffle=True) 
val_iter = mx.io.NDArrayIter(eval_data, eval_lbl, batch_size=batch_size) 


data = mx.sym.var('data') 
# The first fully-connected layer and the corresponding activation function 
fc1 = mx.sym.FullyConnected(data=data, num_hidden=220) 
#bn1 = mx.sym.BatchNorm(data = fc1, name="bn1") 
act1 = mx.sym.Activation(data=fc1, act_type="sigmoid") 

# The second fully-connected layer and the corresponding activation function 
fc2 = mx.sym.FullyConnected(data=act1, num_hidden = 220) 
#bn2 = mx.sym.BatchNorm(data = fc2, name="bn2") 
act2 = mx.sym.Activation(data=fc2, act_type="sigmoid") 

# The third fully-connected layer and the corresponding activation function 
fc3 = mx.sym.FullyConnected(data=act2, num_hidden = 110) 
#bn3 = mx.sym.BatchNorm(data = fc3, name="bn3") 
act3 = mx.sym.Activation(data=fc3, act_type="sigmoid") 

# output class(es) 
fc4 = mx.sym.FullyConnected(data=act3, num_hidden=2) 
# Softmax with cross entropy loss 
mlp = mx.sym.SoftmaxOutput(data=fc4, name='softmax') 

mod = mx.mod.Module(symbol=mlp, 
        context=mx.cpu(), 
        data_names=['data'], 
        label_names=['softmax_label']) 
mod.fit(train_iter, 
     eval_data=val_iter, 
     optimizer='sgd', 
     optimizer_params={'learning_rate':0.03}, 
     eval_metric='rmse', 
     num_epoch=10, 
     batch_end_callback = mx.callback.Speedometer(batch_size, 100)) # report progress every 100 batches

prob = mod.predict(val_iter).asnumpy() 
#print(prob) 

for unit in prob: 
    print 'Classified as %d with probability %f' % (unit.argmax(), max(unit)) 

Here is the log output:

2017-06-19 17:18:34,961 Epoch[0] Train-rmse=0.500574 
2017-06-19 17:18:34,961 Epoch[0] Time cost=0.007 
2017-06-19 17:18:34,968 Epoch[0] Validation-rmse=0.500284 
2017-06-19 17:18:34,975 Epoch[1] Train-rmse=0.500703 
2017-06-19 17:18:34,975 Epoch[1] Time cost=0.007 
2017-06-19 17:18:34,982 Epoch[1] Validation-rmse=0.500301 
2017-06-19 17:18:34,990 Epoch[2] Train-rmse=0.500713 
2017-06-19 17:18:34,990 Epoch[2] Time cost=0.008 
2017-06-19 17:18:34,998 Epoch[2] Validation-rmse=0.500302 
2017-06-19 17:18:35,005 Epoch[3] Train-rmse=0.500713 
2017-06-19 17:18:35,005 Epoch[3] Time cost=0.007 
2017-06-19 17:18:35,012 Epoch[3] Validation-rmse=0.500302 
2017-06-19 17:18:35,019 Epoch[4] Train-rmse=0.500713 
2017-06-19 17:18:35,019 Epoch[4] Time cost=0.007 
2017-06-19 17:18:35,027 Epoch[4] Validation-rmse=0.500302 
2017-06-19 17:18:35,035 Epoch[5] Train-rmse=0.500713 
2017-06-19 17:18:35,035 Epoch[5] Time cost=0.008 
2017-06-19 17:18:35,042 Epoch[5] Validation-rmse=0.500302 
2017-06-19 17:18:35,049 Epoch[6] Train-rmse=0.500713 
2017-06-19 17:18:35,049 Epoch[6] Time cost=0.007 
2017-06-19 17:18:35,056 Epoch[6] Validation-rmse=0.500302 
2017-06-19 17:18:35,064 Epoch[7] Train-rmse=0.500712 
2017-06-19 17:18:35,064 Epoch[7] Time cost=0.008 
2017-06-19 17:18:35,071 Epoch[7] Validation-rmse=0.500302 
2017-06-19 17:18:35,079 Epoch[8] Train-rmse=0.500712 
2017-06-19 17:18:35,079 Epoch[8] Time cost=0.007 
2017-06-19 17:18:35,085 Epoch[8] Validation-rmse=0.500301 
2017-06-19 17:18:35,093 Epoch[9] Train-rmse=0.500712 
2017-06-19 17:18:35,093 Epoch[9] Time cost=0.007 
2017-06-19 17:18:35,099 Epoch[9] Validation-rmse=0.500301 
Classified as 0 with probability 0.530638 
Classified as 0 with probability 0.530638 
Classified as 0 with probability 0.530638 
. 
. 
. 
Classified as 0 with probability 0.530638 

Can someone please point out where I am going wrong here?

python version == 2.7.10 
mxnet == 0.10.0 
numpy==1.12.0 

From the data set, I removed some non-informative columns and the header row, then converted it to csv format.

train_data.shape == (491, 22) 
train_lbl.shape == (491,) 
eval_data.shape == (452, 22) 
eval_lbl.shape == (452,) 
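The preprocessing step described above can be sketched roughly like this; the column names below are hypothetical stand-ins, since the real file's headers may differ:

```python
import pandas as pd

# Hypothetical raw frame standing in for the tournament file; the actual
# column names are assumptions.
raw = pd.DataFrame({
    "Player1": ["A", "B"],
    "Player2": ["C", "D"],
    "FSP.1": [61, 72],
    "FSP.2": [58, 65],
    "Result": [1, 0],
})

# Drop the non-informative identifier columns, split the label column off,
# and write header-free CSVs in the shape np.loadtxt expects.
features = raw.drop(columns=["Player1", "Player2", "Result"])
labels = raw["Result"]
features.to_csv("dm.csv", header=False, index=False)
labels.to_csv("dm_lbl.csv", header=False, index=False)
```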

I have uploaded the data sets used in this program [here](https://www.dropbox.com/sh/f7d79gu4w9xgbdx/AABvKPiwN0axn4iVpOH4p_sya?dl=0)

Answer


The network definition looks fine. Could you print train_iter and val_iter to check whether the data is still what you expect after normalization? Also, which columns did you remove from the raw data?
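To make that check concrete, here is a small numpy sketch of what `normalize(..., norm='l1', axis=0)` does to the matrix (assuming sklearn's column-wise behaviour), which you can compare against the batches the iterators yield:

```python
import numpy as np

# Small stand-in matrix; the real train_data is (491, 22).
data = np.array([[1.0, 10.0],
                 [3.0, 30.0]])

# sklearn's normalize(data, norm='l1', axis=0) scales each COLUMN so its
# absolute values sum to 1 -- i.e. per feature over the whole data set,
# not per sample.
normed = data / np.abs(data).sum(axis=0, keepdims=True)
print(normed)
print(normed.sum(axis=0))  # column sums are 1
```

Note that this squeezes every feature's values into a very small range when the data set is large, which can make sigmoid layers slow to train; standardizing to zero mean and unit variance is a common alternative.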


Thank you very much for the comment, @Naveen. train_iter and val_iter look fine and roughly normally distributed, but with or without the normalization code the results are equally bad: no accuracy improvement. I removed quite a few columns from the raw data, such as the player names. Could you take a look at the files I uploaded? The link has the exact data used in this program.
