[Shanghai Jiao Tong University] Neural Network Theory and Applications, Homework 2.pdf

Neural Network Theory and Applications
Homework Assignment 2
oxstar@SJTU
January 19, 2012

Suppose the output of each neuron in a multilayer quadratic perceptron (MLQP) network is

$$x_{kj} = f\left(\sum_{i=1}^{N_{k-1}} \left(u_{kji} x_{k-1,i}^2 + v_{kji} x_{k-1,i}\right) + b_{kj}\right)$$

for $k = 2, 3, \ldots, M$ and $j = 1, 2, \ldots, N_k$, where $u_{kji}$ and $v_{kji}$ are the weights connecting the $i$th unit in layer $k-1$ to the $j$th unit in layer $k$, $b_{kj}$ is the bias of the $j$th unit in layer $k$, $N_k$ is the number of units in layer $k$ ($1 \le k \le M$), and $f(\cdot)$ is the sigmoidal activation function.

1. Please design the corresponding backpropagation algorithm.

Ans. From the network, we have

$$\text{input} = x_1 \tag{1}$$
$$x_k = f\!\left(u_k x_{k-1}^2 + v_k x_{k-1} + b_k\right), \quad k = 2, 3, \ldots, M \tag{2}$$
$$\text{output} = x_M \tag{3}$$

The steepest descent algorithm for the approximate mean square error $\hat{F}$ is

$$u_{kji}(m+1) = u_{kji}(m) - \alpha \frac{\partial \hat{F}}{\partial u_{kji}}$$
$$v_{kji}(m+1) = v_{kji}(m) - \alpha \frac{\partial \hat{F}}{\partial v_{kji}}$$
$$b_{kj}(m+1) = b_{kj}(m) - \alpha \frac{\partial \hat{F}}{\partial b_{kj}}$$

The net input to layer $k$ is an explicit function of the weights and bias in that layer:

$$n_{kj} = \sum_{i=1}^{N_{k-1}} \left(u_{kji} x_{k-1,i}^2 + v_{kji} x_{k-1,i}\right) + b_{kj}$$

Therefore

$$\frac{\partial n_{kj}}{\partial u_{kji}} = x_{k-1,i}^2, \qquad \frac{\partial n_{kj}}{\partial v_{kji}} = x_{k-1,i}, \qquad \frac{\partial n_{kj}}{\partial b_{kj}} = 1.$$

Let

$$s_{kj} \equiv \frac{\partial \hat{F}}{\partial n_{kj}}, \qquad s_k \equiv \frac{\partial \hat{F}}{\partial n_k}.$$

Using the chain rule of calculus, we have

$$u_k(m+1) = u_k(m) - \alpha\, s_k \left(x_{k-1}^2\right)^T \tag{4}$$
$$v_k(m+1) = v_k(m) - \alpha\, s_k \left(x_{k-1}\right)^T \tag{5}$$
$$b_k(m+1) = b_k(m) - \alpha\, s_k \tag{6}$$

In order to obtain the recurrence relation, we calculate

$$\frac{\partial n_{k+1,j}}{\partial n_{ki}}
= \frac{\partial \left[\sum_{i=1}^{N_k} \left(u_{k+1,ji} x_{ki}^2 + v_{k+1,ji} x_{ki}\right) + b_{k+1,j}\right]}{\partial n_{ki}}
= u_{k+1,ji} \frac{\partial x_{ki}^2}{\partial n_{ki}} + v_{k+1,ji} \frac{\partial x_{ki}}{\partial n_{ki}}$$
$$= u_{k+1,ji} \frac{\partial f(n_{ki})^2}{\partial n_{ki}} + v_{k+1,ji} \frac{\partial f(n_{ki})}{\partial n_{ki}}
= \left(2 u_{k+1,ji} x_{ki} + v_{k+1,ji}\right) \dot{f}(n_{ki})
= \left(2 u_{k+1,ji} x_{ki} + v_{k+1,ji}\right) (1 - x_{ki})\, x_{ki}$$

or, in matrix form,

$$\frac{\partial n_{k+1}}{\partial n_k} = \left(2 u_{k+1} x_k + v_{k+1}\right) \dot{F}(n_k).$$

We can now write out the recurrence relation for the sensitivity by using the chain rule in matrix form:

$$s_k = \left(\frac{\partial n_{k+1}}{\partial n_k}\right)^T \frac{\partial \hat{F}}{\partial n_{k+1}} \tag{7}$$
$$\phantom{s_k} = \dot{F}(n_k) \left(2 u_{k+1} x_k + v_{k+1}\right)^T s_{k+1} \tag{8}$$

The starting point $s_M$ for the recurrence relation is

$$s_M = -2 \dot{F}(n_M)(t - x_M). \tag{9}$$

2. Write a program to realize it (3 layers).

Ans. cf. `src/mlqp.m`.

3. Run your program for pattern classification on the two-spiral dataset, which is the same as in homework one. You can choose 10 hidden units in this problem.

Ans. We first discuss the process of convergence, with the learning rate set to 0.1 and the initial values chosen randomly. We use the MSE (mean squared error) to measure the learning quality of our network. However, the final results are class
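The author's actual implementation is the MATLAB file `src/mlqp.m`, which is not included in this preview. As an illustration only, the derivation above can be sketched in Python/NumPy; the class name `MLQP`, the weight-initialization range, and the per-sample update loop are assumptions, not the author's code. The sketch implements the forward pass of eq. (2), the steepest-descent updates of eqs. (4)-(6), the sensitivity recurrence of eq. (8), and the starting point of eq. (9):

```python
import numpy as np


def sigmoid(n):
    """Sigmoidal activation f(n) = 1 / (1 + exp(-n))."""
    return 1.0 / (1.0 + np.exp(-n))


class MLQP:
    """Multilayer quadratic perceptron trained by per-sample steepest
    descent, following equations (2), (4)-(6), (8) and (9) above.
    (Hypothetical sketch; the author's version is src/mlqp.m.)"""

    def __init__(self, sizes, alpha=0.1, seed=0):
        # sizes = [N_1, ..., N_M]; alpha is the learning rate.
        # Uniform(-0.5, 0.5) initialization is an assumption.
        rng = np.random.default_rng(seed)
        self.alpha = alpha
        self.u = [rng.uniform(-0.5, 0.5, (n, m))
                  for m, n in zip(sizes[:-1], sizes[1:])]
        self.v = [rng.uniform(-0.5, 0.5, (n, m))
                  for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]

    def forward(self, x1):
        # Eq. (2): x_k = f(u_k x_{k-1}^2 + v_k x_{k-1} + b_k)
        xs = [np.asarray(x1, dtype=float)]
        for u, v, b in zip(self.u, self.v, self.b):
            xs.append(sigmoid(u @ xs[-1] ** 2 + v @ xs[-1] + b))
        return xs  # activations x_1, ..., x_M

    def train_step(self, x1, t):
        xs = self.forward(x1)
        # Eq. (9): s_M = -2 F'(n_M)(t - x_M), with f'(n) = (1 - x) x
        s = -2.0 * xs[-1] * (1.0 - xs[-1]) * (t - xs[-1])
        for k in reversed(range(len(self.u))):
            xp = xs[k]  # activation feeding this weight layer
            if k > 0:
                # Eq. (8): propagate the sensitivity with the *old*
                # weights; entry (j, i) of jac is 2 u_{ji} x_i + v_{ji}.
                jac = 2.0 * self.u[k] * xp + self.v[k]
                s_prev = xp * (1.0 - xp) * (jac.T @ s)
            # Eqs. (4)-(6): steepest-descent updates
            self.u[k] -= self.alpha * np.outer(s, xp ** 2)
            self.v[k] -= self.alpha * np.outer(s, xp)
            self.b[k] -= self.alpha * s
            if k > 0:
                s = s_prev
        return float(np.sum((t - xs[-1]) ** 2))  # this sample's squared error
```

A 3-layer network with 10 hidden units, as in problem 3, would be `MLQP([2, 10, 1], alpha=0.1)` for the 2-D two-spiral inputs.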