2017-04-03

When I compile my code, I repeatedly get this C++ memory error:

free(): invalid next size (fast) 

However, the code has only gotten as far as creating the references. Specifically, commenting out one particular line seems to resolve the error; however, it is a very important line.

void neuron::updateWeights(layer &prevLayer) { 
    for(unsigned i = 0; i < prevLayer.size(); i++) { 
     double oldDeltaWeight = prevLayer[i].m_connections[m_index].m_deltaWeight; 
     double newDeltaWeight = eta * prevLayer[i].m_output * m_gradient + alpha * oldDeltaWeight; 
     prevLayer[i].m_connections[m_index].m_deltaWeight = newDeltaWeight; // THIS LINE 
     prevLayer[i].m_connections[m_index].m_weight += newDeltaWeight; 
    } 
} 

Any help would be greatly appreciated!

EDIT: additional code

// Header 
#include "../../Include/neuralNet.h" 

// Libraries 
#include <vector> 
#include <iostream> 
#include <cmath> 

// Namespace 
using namespace std; 

// Class constructor 
neuron::neuron(unsigned index, unsigned outputs) { 
    m_index = index; 
    for(unsigned i = 0; i < outputs; i++) { 
     m_connections.push_back(connection()); 
    } 
    // Set default neuron output 
    setOutput(1.0); 
} 

double neuron::eta = 0.15; // overall net learning rate, [0.0..1.0] 
double neuron::alpha = 0.5; // momentum, multiplier of last deltaWeight, [0.0..1.0] 

// Definition of transfer function method 
double neuron::transferFunction(double x) const { 
    return tanh(x); // -1 -> 1 
} 

// Transfer function derivation method 
double neuron::transferFunctionDerivative(double x) const { 
    return 1 - x*x; // Derivative of tanh 
} 

// Set output value 
void neuron::setOutput(double value) { 
    m_output = value; 
} 

// Forward propagate 
void neuron::recalculate(layer &previousLayer) { 

    double sum = 0.0; 
    for(unsigned i = 0; i < previousLayer.size(); i++) { 
     sum += previousLayer[i].m_output * previousLayer[i].m_connections[m_index].m_weight; 
    } 
    setOutput(transferFunction(sum)); 
} 

// Change weights based on target 
void neuron::updateWeights(layer &prevLayer) { 
    for(unsigned i = 0; i < prevLayer.size(); i++) { 
     double oldDeltaWeight = prevLayer[i].m_connections[m_index].m_deltaWeight; 
     double newDeltaWeight = eta * prevLayer[i].m_output * m_gradient + alpha * oldDeltaWeight; 
     prevLayer[i].m_connections[m_index].m_deltaWeight = newDeltaWeight; 
     prevLayer[i].m_connections[m_index].m_weight += newDeltaWeight; 
    } 
} 

// Complex math stuff 
void neuron::calculateOutputGradients(double target) { 
    double delta = target - m_output; 
    m_gradient = delta * transferFunctionDerivative(m_output); 
} 

double neuron::sumDOW(const layer &nextLayer) { 
    double sum = 0.0; 

    for(unsigned i = 1; i < nextLayer.size(); i++) { 
     sum += m_connections[i].m_weight * nextLayer[i].m_gradient; 
    } 

    return sum; 
} 

void neuron::calculateHiddenGradients(const layer &nextLayer) { 
    double dow = sumDOW(nextLayer); 
    m_gradient = dow * neuron::transferFunctionDerivative(m_output); 
} 

Also, here is where it is called:

// Update weights 
    for(unsigned layerIndex = m_layers.size() - 1; layerIndex > 0; layerIndex--) { 
     layer &currentLayer = m_layers[layerIndex]; 
     layer &previousLayer = m_layers[layerIndex - 1]; 

     for(unsigned i = 1; i < currentLayer.size(); i++) { 
      currentLayer[i].updateWeights(previousLayer); 
     } 
    }  

Please elaborate on your problem. At the moment it is unclear how you arrived at this issue, so we cannot help you. You may want to read [ask] to get a better idea of the questions we expect on Stack Overflow. You may also find the [mcve] page helpful, since the example you have provided is minimal but neither complete nor verifiable. – jaggedSpire


Hopefully that helps? I'm just very lost, since assigning a value to prevLayer[i].m_connections[m_index].m_deltaWeight crashes, but retrieving a value from it does not. – Brian


That is a runtime error (from your C library), not a compilation error. The code you posted is not a [mcve]. – melpomene

Answer


Your constructor initializes 'outputs' entries of m_connections in the class.

But you have many places that call:

m_connections[m_index] 

What happens if m_index >= outputs? Could that be your problem? Try adding an assert (http://www.cplusplus.com/reference/cassert/assert/) as the first line of the constructor:

assert(index < outputs); 

You probably have a bad (out-of-bounds) access somewhere: writing past the end of a std::vector through operator[] is undefined behaviour, and heap corruption from such a write typically surfaces later as "free(): invalid next size (fast)".