
Neural Network Implementation in Python

The perceptron learning algorithm proceeds as follows:
1. Initialize the weight coefficients w
Set each component of the weight vector w = (W1, W2, …, Wn, Wn+1) to a small non-zero random value, except Wn+1, which is set to -θ. Record these as W1(0), W2(0), …, Wn(0), together with Wn+1(0) = -θ. Here Wi(t) denotes the weight on the i-th input at time t, i = 1, 2, …, n, and Wn+1(t) is the threshold at time t.

Figure 1-10 A perceptron classification example

2. Input a sample X = (X1, X2, …, Xn+1) together with its desired output d.

The desired output d takes a different value depending on the sample's class: if X belongs to class A, set d = 1; if X belongs to class B, set d = -1. The desired output d is the teacher (supervision) signal.

3. Compute the actual output Y(t) = sgn(W1(t)X1 + W2(t)X2 + … + Wn+1(t)Xn+1)

4. Compute the error e from the actual output

e=d-Y(t)       (1-21)

5. Use the error e to update the weight coefficients (a worked numeric example follows these steps)

Wi(t+1) = Wi(t) + η·e·Xi,   i = 1, 2, …, n, n+1      (1-22)

where η is called the learning rate (weight change rate), 0 < η ≤ 1. Steps 2 through 5 are repeated over the training samples until the accumulated error becomes small enough.
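
To make the update rule (1-22) concrete, here is a small numeric sketch of a single step. The learning rate η = 0.5 matches the code below; the initial weights and the sample are chosen arbitrarily for illustration.

import numpy

eta = 0.5                                 # learning rate, same value as learnRaito below
w = numpy.array([1.0, 0.0, 0.0])          # arbitrary initial weights: [bias weight, w_x, w_y]
x = numpy.array([1.0, 1.0, 8.0])          # one sample: [bias input, x, y]
d = -1                                    # teacher signal for this sample (class B)

y = 1 if numpy.dot(w, x) >= 0 else -1     # actual output Y(t) = sgn(w . x)  -> 1
e = d - y                                 # error e = d - Y(t)               -> -2
w = w + eta * e * x                       # update (1-22): w becomes [0., -1., -8.]
print(w)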


# -*- coding: cp936 -*-
import numpy
import pylab
import sys

class neuralNetwork:
	b = 1                      # constant bias input prepended to every sample
	learnRaito = 0.5           # learning rate (eta)
	trainData = numpy.array([[b,1,3],[b,2,3],[b,1,8],[b,2,15],[b,3,7],[b,4,29],[b,4,8],[b,4,20]])
	# training data; swap in different points to train a different separating line
	trainResult = numpy.array([1,1,-1,-1,1,-1,1,-1])   # teacher signals for the training data
	weight = numpy.array([b,0,0])                      # initial weights: [bias weight, w_x, w_y]
	error = 0.001                                      # stop once the summed absolute error falls below this
	def Out(self,v):
		"""求值的取向"""
		if v>=0:
			return 1
		else:
			return -1
	
	def exceptSignal(self,oldw,inx):
		"""Perceptron output for input inx under weights oldw: sgn(w . x)."""
		ans = numpy.dot(oldw.T,inx)
		return self.Out(ans)
	
	def trainOnce(self,oldw,inx,correctResult):
		"""one training"""
		error = correctResult - self.exceptSignal(oldw,inx)
		newWeight = oldw + self.learnRaito*error*inx
		self.weight = newWeight
		return error
	
	def getAbs(self,x):
		if x<0:
			return -x
		else:
			return x
	
	def trainWeight(self):
		"""traing the weight of data"""
		error = 1
		while error > self.error:
			i = 0
			error = 0
			
			for inx in self.trainData:
				error += self.getAbs(self.trainOnce(self.weight,inx,self.trainResult[i]))
				i = i+1
			
	
	def drawTrainResult(self):
		""" draw graph of Result"""
		xor = self.trainData[:,1]#切片,获取第一列,x坐标
		yor = self.trainData[:,2]#切片,获取第二列,y坐标
		pylab.subplot(111)
		xMax = numpy.max(xor)+15
		xMin = numpy.min(xor)-5
		yMax = numpy.max(yor)+50
		yMin = numpy.min(yor)-5
		pylab.xlabel(u'xor')
		pylab.ylabel(u'yor')
		pylab.xlim(xMin,xMax)
		pylab.ylim(yMin,yMax)
		
		#draw point
		for i in range(0,len(self.trainResult)):
			if self.trainResult[i] == 1:
				pylab.plot(xor[i],yor[i],'r*')
			else:
				pylab.plot(xor[i],yor[i],'ro')
	
	def drawTestResult(self,data):
		"""Plot a test point, coloured by the trained perceptron's prediction."""
		test = numpy.array(data)
		if self.exceptSignal(self.weight,test)>0:
			pylab.plot(test[1],test[2],'b*')
		else:
			pylab.plot(test[1],test[2],'bo')
	
	def drawTrueLine(self):
		"""真实函数分界线"""
		xtest = numpy.array(range(0,20))
		ytest = xtest*2+1.68
		pylab.plot(xtest,ytest,'g--')
	
	def showGraph(self):
		pylab.show()

testData = [[1,5,11],[1,5,12],[1,4,16],[1,6,7],[1,3,12],[1,2,22]]   # test samples: [bias, x, y]
neural = neuralNetwork()
print(neural.Out(124.32423))   # sanity check of the sign function: prints 1
neural.trainWeight()
neural.drawTrainResult()
neural.drawTrueLine()
#neural.showGraph()
for test in testData:
	neural.drawTestResult(test)
print(neural.weight)   # the learned weight vector
neural.showGraph()
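
After training, the learned weights can also be used directly, without plotting. The following optional sketch classifies one extra sample (the point [1, 3, 4] is arbitrary) and prints the separating line implied by the learned weight vector; it reuses the neural object trained above.

# classify one extra sample with the trained weights: [bias input, x, y]
sample = [1, 3, 4]
print(neural.exceptSignal(neural.weight, sample))   # 1 -> below the line (class A), -1 -> above it (class B)

# the decision boundary is w0 + w1*x + w2*y = 0, i.e. y = -(w1*x + w0)/w2
w0, w1, w2 = neural.weight
if w2 != 0:
	print('learned boundary: y = %.3f*x + %.3f' % (-w1/w2, -w0/w2))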
In the plot, red points are training data and blue points are test data; circles mark points above the line and stars mark points below it. As the figure shows, the algorithm performs reasonably well.