Answers
Here is a simple example by Armin Rigo: http://codespeak.net/pypy/dist/demo/bpnn.py. If you want to use something more sophisticated, there is also http://pybrain.org.
Edit: the link is broken. In any case, the current way to do neural networks in Python is probably Theano.
You might want to have a look at Monte:
Monte (python) is a Python framework for building gradient based learning machines, like neural networks, conditional random fields, logistic regression, etc. Monte contains modules (that hold parameters, a cost-function and a gradient-function) and trainers (that can adapt a module's parameters by minimizing its cost-function on training data).
Modules are usually composed of other modules, which can in turn contain other modules, etc. Gradients of decomposable systems like these can be computed with back-propagation.
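The module/trainer split described above can be sketched in plain Python. This is a minimal illustration with hypothetical class names, not Monte's actual API: a module bundles parameters with a cost-function and a gradient-function, and a trainer adapts the parameters by gradient descent on training data.

```python
import math

class LogisticModule:
    """Holds parameters, a cost-function, and a gradient-function."""

    def __init__(self, n_inputs):
        self.weights = [0.0] * n_inputs
        self.bias = 0.0

    def predict(self, x):
        z = sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
        return 1.0 / (1.0 + math.exp(-z))

    def cost(self, data):
        # Mean cross-entropy over (input, target) pairs
        eps = 1e-12
        return -sum(t * math.log(self.predict(x) + eps)
                    + (1 - t) * math.log(1 - self.predict(x) + eps)
                    for x, t in data) / len(data)

    def gradient(self, data):
        # d(cost)/d(weights) and d(cost)/d(bias)
        gw = [0.0] * len(self.weights)
        gb = 0.0
        for x, t in data:
            err = self.predict(x) - t
            for i, xi in enumerate(x):
                gw[i] += err * xi / len(data)
            gb += err / len(data)
        return gw, gb

class GradientTrainer:
    """Adapts a module's parameters by minimizing its cost on data."""

    def __init__(self, module, rate=1.0):
        self.module = module
        self.rate = rate

    def step(self, data):
        gw, gb = self.module.gradient(data)
        for i, g in enumerate(gw):
            self.module.weights[i] -= self.rate * g
        self.module.bias -= self.rate * gb

# Usage: learn an OR gate
module = LogisticModule(2)
trainer = GradientTrainer(module, rate=1.0)
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
for _ in range(2000):
    trainer.step(or_data)
```

In a real framework such modules would also compose (a module containing sub-modules), with the gradient of the composite computed by back-propagation; here a single module keeps the sketch short.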
Found on the Ubuntu forums: http://ubuntuforums.org/showthread.php?t=320257
import time
import random

# Learning rate:
# lower = slower convergence
# higher = less precise
rate = 0.2

# Create random initial weights
inWeight = [random.uniform(0, 1), random.uniform(0, 1)]

# Start neuron with no stimuli
inNeuron = [0.0, 0.0]

# Learning table (OR gate)
test = [[0.0, 0.0, 0.0],
        [0.0, 1.0, 1.0],
        [1.0, 0.0, 1.0],
        [1.0, 1.0, 1.0]]

# Calculate response from neural input
def outNeuron(midThresh):
    s = inNeuron[0] * inWeight[0] + inNeuron[1] * inWeight[1]
    return 1.0 if s > midThresh else 0.0

# Display results of test
def display(out, real):
    if out == real:
        print(str(out) + " should be " + str(real) + " ***")
    else:
        print(str(out) + " should be " + str(real))

while True:
    # Loop through each lesson in the learning table
    for i in range(len(test)):
        # Stimulate neurons with test input
        inNeuron[0] = test[i][0]
        inNeuron[1] = test[i][1]
        # Adjust weight of input #1 based on feedback, then display
        out = outNeuron(2)
        inWeight[0] += rate * (test[i][2] - out)
        display(out, test[i][2])
        # Adjust weight of input #2 based on feedback, then display
        out = outNeuron(2)
        inWeight[1] += rate * (test[i][2] - out)
        display(out, test[i][2])
    # Delay
    time.sleep(1)
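Note that the neuron above has no explicit bias weight; the fixed firing threshold (`midThresh = 2`) plays that role. A minimal sketch, not from the original post, of the same neuron with a learnable bias, trained by the same perceptron rule (the bias behaves like a weight on a constant input of 1):

```python
import random

rate = 0.2
weights = [random.uniform(0, 1), random.uniform(0, 1)]
bias = 0.0

def fire(x0, x1):
    # Threshold at zero; the bias shifts the decision boundary instead
    s = x0 * weights[0] + x1 * weights[1] + bias
    return 1.0 if s > 0 else 0.0

# OR-gate training table, as above
table = [(0.0, 0.0, 0.0), (0.0, 1.0, 1.0),
         (1.0, 0.0, 1.0), (1.0, 1.0, 1.0)]

for _ in range(100):
    for x0, x1, target in table:
        err = target - fire(x0, x1)
        # Perceptron rule: each weight moves in proportion to its input;
        # the bias moves as if its input were always 1
        weights[0] += rate * err * x0
        weights[1] += rate * err * x1
        bias += rate * err
```

Since the OR gate is linearly separable, this converges to a correct decision rule after a finite number of updates.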
Edit, from this interesting discussion: there is also a framework named Chainer: https://pypi.python.org/pypi/chainer/1.0.0
How does the bias come into play in this code? I don't quite understand. – extensa5620 2016-01-28 03:30:02
Here is a probabilistic neural network tutorial: http://www.youtube.com/watch?v=uAKu4g7lBxU
My Python implementation:
import math

data = {'o': [(0.2, 0.5), (0.5, 0.7)],
        'x': [(0.8, 0.8), (0.4, 0.5)],
        'i': [(0.8, 0.5), (0.6, 0.3), (0.3, 0.2)]}

class Prob_Neural_Network(object):

    def __init__(self, data):
        self.data = data

    def predict(self, new_point, sigma):
        # Sum the kernel responses of each class's stored points
        res_dict = {}
        np = new_point
        for k, v in self.data.items():
            res_dict[k] = sum(self.gaussian_func(np[0], np[1], p[0], p[1], sigma)
                              for p in v)
        # Return the class with the highest total response
        return max(res_dict.items(), key=lambda kv: kv[1])

    def gaussian_func(self, x, y, x_0, y_0, sigma):
        return math.e ** (-((x - x_0) ** 2 + (y - y_0) ** 2) / (2 * sigma ** 2))

prob_nn = Prob_Neural_Network(data)
res = prob_nn.predict((0.2, 0.6), 0.1)
Result:
>>> res
('o', 0.6132686067117191)
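Written out, `gaussian_func` is an unnormalized isotropic Gaussian kernel, and `predict` computes a per-class kernel sum (a Parzen-window-style score), picking the class with the largest total:

```latex
\mathrm{score}_c(\mathbf{x}) = \sum_{\mathbf{p} \in \text{class } c}
    \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{p} \rVert^2}{2\sigma^2}\right),
\qquad
\hat{c} = \arg\max_c \, \mathrm{score}_c(\mathbf{x})
```

In the result above, the 'o' point (0.2, 0.5) lies at squared distance 0.01 from the query, contributing exp(-0.5) ≈ 0.607, which dominates the winning score of ≈ 0.613.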
The link is broken... – kmace 2013-03-12 06:17:09
Thanks, I updated the answer. – bayer 2013-03-12 10:13:07