2017-04-04

I am currently trying to create a neural network to predict poker hands. I am quite new to machine learning and neural networks and could use some help! I found some tutorials on how to build a neural network, and I am trying to adapt the code below to this data set, but PyCharm keeps crashing when I run it. Here is the code: neural network classifier for poker

import numpy as np 
import pandas as pnd 
# sigmoid function 


def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))
# InputData 
training_data = pnd.read_csv("train.csv") 
print(training_data) 
training_data = training_data.drop(['hand'], axis=1) 
print(training_data) 
X = np.array(training_data) 

# output data 
training_data = pnd.read_csv("train.csv") 
print(training_data) 
training_data = training_data.drop(
    ['S1', 'C1', 'S2', 'C2', 'S3', 'C3', 'S4', 'C4', 'S5', 'C5'], axis=1)
print(training_data) 
Y = np.array(training_data).T 
print(Y) 
# input dataset 
# seed random numbers to make calculation 
# deterministic (just a good practice) 
np.random.seed(1) 
# initialize weights randomly with mean 0 
syn0 = 2 * np.random.random((10, 25011)) - 1 
syn1 = 2*np.random.random((10, 1)) - 1 

for j in range(10000): 
    # Feed forward through layers 0, 1, and 2 
    l0 = X 
    l1 = nonlin(np.dot(l0, syn0)) 
    l2 = nonlin(np.dot(l1, syn1)) 
    # how much did we miss the target value? 
    l2_error = Y - l2  # note: the target array built above is named Y, not y
    if (j % 10000) == 0:
        print("Error:" + str(np.mean(np.abs(l2_error))))
    # in what direction is the target value 
    # were we really sure? if so, don't change too much. 
    l2_delta = l2_error * nonlin(l2, deriv=True) 
    # how much did each l1 value contribute to the l2 error (according to the weights)? 
    l1_error = l2_delta.dot(syn1.T) 
    # in what direction is the target l1? 
    # were we really sure? if so, don't change too much. 
    l1_delta = l1_error * nonlin(l1, deriv=True) 
    syn1 += l1.T.dot(l2_delta) 
    syn0 += l0.T.dot(l1_delta) 
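For comparison, here is a minimal sketch of the same two-layer network with shapes that actually line up: syn0 must map the 10 input columns to the hidden size, and syn1 the hidden size to the 10 one-hot output classes. The hidden size (16), the learning rate, and the synthetic stand-in data are assumptions for illustration, not part of the original question:

```python
import numpy as np


def nonlin(x, deriv=False):
    # sigmoid, or its derivative when given a sigmoid *output*
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))


rng = np.random.default_rng(1)
n_samples, n_features, n_hidden, n_classes = 100, 10, 16, 10

# synthetic stand-ins for the card columns and the one-hot hand class
X = rng.integers(1, 14, size=(n_samples, n_features)) / 13.0
Y = np.eye(n_classes)[rng.integers(0, n_classes, size=n_samples)]

syn0 = 2 * rng.random((n_features, n_hidden)) - 1   # (10, 16)
syn1 = 2 * rng.random((n_hidden, n_classes)) - 1    # (16, 10)

lr = 0.01  # small rate keeps the summed-over-samples updates stable
for j in range(500):
    l0 = X                              # (100, 10)
    l1 = nonlin(np.dot(l0, syn0))       # (100, 16)
    l2 = nonlin(np.dot(l1, syn1))       # (100, 10)
    l2_error = Y - l2
    l2_delta = l2_error * nonlin(l2, deriv=True)
    l1_error = l2_delta.dot(syn1.T)
    l1_delta = l1_error * nonlin(l1, deriv=True)
    syn1 += lr * l1.T.dot(l2_delta)
    syn0 += lr * l0.T.dot(l1_delta)

print(l2.shape)  # (100, 10)
```

Note the intermediate l1 here is only (100, 16); with the shapes in the question it would have had 25011 columns per sample.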

Here is a snippet of my data set: Data set snippet

And here is the description of the data set I am using: Attribute information:

1) S1 "Suit of card #1" 
Ordinal (1-4) representing {Hearts, Spades, Diamonds, Clubs} 

2) C1 "Rank of card #1" 
Numerical (1-13) representing (Ace, 2, 3, ... , Queen, King) 

3) S2 "Suit of card #2" 
Ordinal (1-4) representing {Hearts, Spades, Diamonds, Clubs} 

4) C2 "Rank of card #2" 
Numerical (1-13) representing (Ace, 2, 3, ... , Queen, King) 

5) S3 "Suit of card #3" 
Ordinal (1-4) representing {Hearts, Spades, Diamonds, Clubs} 

6) C3 "Rank of card #3" 
Numerical (1-13) representing (Ace, 2, 3, ... , Queen, King) 

7) S4 "Suit of card #4" 
Ordinal (1-4) representing {Hearts, Spades, Diamonds, Clubs} 

8) C4 "Rank of card #4" 
Numerical (1-13) representing (Ace, 2, 3, ... , Queen, King) 

9) S5 "Suit of card #5" 
Ordinal (1-4) representing {Hearts, Spades, Diamonds, Clubs} 

10) C5 "Rank of card #5" 
Numerical (1-13) representing (Ace, 2, 3, ... , Queen, King) 

11) CLASS "Poker Hand" 
Ordinal (0-9) 

0: Nothing in hand; not a recognized poker hand 
1: One pair; one pair of equal ranks within five cards 
2: Two pairs; two pairs of equal ranks within five cards 
3: Three of a kind; three equal ranks within five cards 
4: Straight; five cards, sequentially ranked with no gaps 
5: Flush; five cards with the same suit 
6: Full house; pair + different rank three of a kind 
7: Four of a kind; four equal ranks within five cards 
8: Straight flush; straight + flush 
9: Royal flush; {Ace, King, Queen, Jack, Ten} + flush 
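Since CLASS is one of ten discrete hands rather than a continuous value, a sigmoid network like the one in the question would normally predict it as ten one-hot outputs instead of a single 0-9 number. A minimal encoding sketch (the label values below are made up):

```python
import numpy as np

hand = np.array([0, 1, 5, 9])      # example CLASS labels in 0-9
one_hot = np.eye(10)[hand]         # one row per example, one column per class

print(one_hot.shape)               # (4, 10)
print(one_hot[2])                  # [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
```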

Variables used:

Variable Definition 
X Input dataset matrix where each row is a training example 
y Output dataset matrix where each row is a training example 
l0 First Layer of the Network, specified by the input data 
l1 Second Layer of the Network, otherwise known as the hidden layer 
l2 Final Layer of the Network, which is our hypothesis, and should approximate the correct answer as we train. 
syn0 First layer of weights, Synapse 0, connecting l0 to l1. 
syn1 Second layer of weights, Synapse 1 connecting l1 to l2. 
l2_error This is the amount that the neural network "missed". 
l2_delta This is the error of the network scaled by the confidence. It's almost identical to the error except that very confident errors are muted. 
l1_error Weighting l2_delta by the weights in syn1, we can calculate the error in the middle/hidden layer. 
l1_delta This is the l1 error of the network scaled by the confidence. Again, it's almost identical to the l1_error except that confident errors are muted. 
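Given the table above, one cheap way to catch mis-sized matrices like the (10, 25011) syn0 before training is to assert that the matrix products chain. The hidden size of 16 is a hypothetical choice, and the single-column y follows the tutorial's one-output setup:

```python
import numpy as np

n_samples, n_features, n_hidden, n_out = 25010, 10, 16, 1

X = np.zeros((n_samples, n_features))      # input rows from train.csv
y = np.zeros((n_samples, n_out))           # one target per row
syn0 = np.zeros((n_features, n_hidden))    # connects l0 to l1
syn1 = np.zeros((n_hidden, n_out))         # connects l1 to l2

# the forward pass only works if the products chain:
# (samples, features) @ (features, hidden) @ (hidden, out)
assert X.shape[1] == syn0.shape[0]
assert syn0.shape[1] == syn1.shape[0]
assert y.shape == (n_samples, syn1.shape[1])
print("shapes line up")
```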

I am sure the program does not crash the *computer*. In the worst case it crashes the process. Also, some information about what error (or any other message) you get when it crashes would help pinpoint the bug. –


IMO you could cut down the explanation of how your neural network works. It is not relevant to the crash. –

Answer


First of all, you should definitely figure out whether it is your computer or the process that crashes.

If it is your computer, check your RAM specs and how much is in use (top, htop, etc.) when you start the NN.

Your syn0 matrix is 10x25011. That is 10 * 25011 * 8 / 1024 ≈ 1954 kB, i.e. about 2 MB, which is harmless on its own. The real blow-up is the forward pass: assuming train.csv is the standard 25,010-row UCI training file, l1 = nonlin(np.dot(l0, syn0)) is a 25010x25011 matrix of float64, close to 5 GB. Putting nearly 5 gigs into a single variable in Python while running a bunch of Chrome tabs makes a complete lock-up a real possibility.
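The sizes are easy to check directly with ndarray.nbytes (the 25,010-row count for train.csv is assumed):

```python
import numpy as np

n_rows = 25010                      # rows in train.csv (assumed)
syn0 = np.zeros((10, 25011))        # the weight matrix from the question

print(syn0.nbytes / 1024)           # ~1954 kB -- harmless by itself

# but l1 = X.dot(syn0) would have shape (n_rows, 25011) in float64:
l1_bytes = n_rows * 25011 * 8
print(l1_bytes / 1024 ** 3)         # ~4.66 GB -- enough to exhaust RAM
```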