Performance Analysis of PPFNN: Probabilistic Potential Function Neural Network
  1. PERFORMANCE ANALYSIS OF PPFNN
     PROBABILISTIC POTENTIAL FUNCTION NEURAL NETWORK

     Gursel Serpen, Hong Jiang, Lloyd G. Allred*

     Electrical Engineering & Computer Science Department
     The University of Toledo, Toledo, OH 43606

     *Software Engineering Division
     Ogden Air Logistics Center, Hill AFB, UT 84056

  2. PROPERTIES OF NEURO-CLASSIFIER

     1. Offers fast (real-time) training and classification cycles even if implemented in software.
     2. Does not require an initial guess for the network topology; instead, it adapts topologically to the particular classification problem at hand, dynamically, as training progresses.
     3. Discovers clustering properties of the training data and adapts to a minimal network topology in terms of required computational resources.
     4. Implements an incremental learning procedure: it does not disturb the previous state of the network, but simply adds new computational resources to the existing network topology.
     5. Requires only a small number of heuristically specified parameters, with network performance insensitive to large variations in the values of those parameters.
     6. Forms classification boundaries that optimally separate the classes, which are likely to be composed of disconnected subclasses in the pattern space; the joint probability density function of a particular class is likely to have many modes.

  3. SIGNIFICANT NEURAL CLASSIFICATION PARADIGMS

     ♦ The Multi-Layer Feedforward Network
       Initial network topology needs guessing; slow training speed makes it unsuitable for real-time implementations.

     ♦ The Radial Basis Function Network
       Network initialization depends on clustering properties of the training data (k-means); the hidden-layer node count must be chosen.

     ♦ The Probabilistic Neural Network
       A pattern-layer node for each training pattern; potentially large node counts.

     ♦ Learning Vector Quantization Networks
       No well-defined procedure exists for codebook vector initialization.

  4. TOPOLOGY OF PPFNN

     [Topology diagram: inputs feed a pattern layer, which feeds a hidden layer; hidden-layer outputs are weighted by w_ij and summed at the output layer, followed by a MAXNET stage producing the network outputs.]

     Pattern layer nodes distribute the incoming signal values to hidden layer nodes.

     Hidden layer nodes map the incoming signals as follows:

         f_k(x) = exp(-α ‖x - x_k‖²)

     where α (Alpha) is a spread parameter of the exponential function centered at x_k.

     Output layer nodes sum the incoming weighted signals and pass the weighted sum through:

         output = 0               if weighted_sum < 0
         output = weighted_sum    if weighted_sum ∈ [0, 1]
         output = 1               if weighted_sum > 1
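The two node types described above can be sketched directly. This is a minimal illustration under the assumptions that the exponent uses the squared Euclidean distance and that the output node clips its weighted sum to [0, 1]; the function names are our own, not from the original work:

```python
import numpy as np

def hidden_node_output(x, x_k, alpha):
    """Potential function of a hidden node centered at training vector x_k.

    alpha is the spread parameter of the exponential; larger alpha means
    a narrower potential function around x_k.
    """
    x, x_k = np.asarray(x, dtype=float), np.asarray(x_k, dtype=float)
    return np.exp(-alpha * np.sum((x - x_k) ** 2))

def output_node(weighted_sum):
    """Output node: clip the weighted sum of hidden-node signals to [0, 1]."""
    return min(max(weighted_sum, 0.0), 1.0)
```

At the center itself the potential is exactly 1 and decays exponentially with squared distance, so each hidden node acts as a local evidence detector for the training vector it was created from.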

  5. NETWORK CREATION PROCESS

     2-CLASS PROBLEM

  6. 1. Initialize the PPFNN and assume a value for the parameter Alpha.
     2. Present a new feature vector (using k as the index for feature vectors) and compute the network output.
     3. If the network classifies the vector correctly for each class, no action is needed.
     4. Else:
        A. Add a new hidden layer node (using i as the index for nodes),
        B. Center the potential function represented by the new hidden layer node at this vector, and
        C. Repeat for each class (using j as the index for classes):
           * If the pattern belongs to the class and the function f_i(x_k) is positive, no action.
           * Else if the pattern does not belong to the class and f_i(x_k) is negative, no action.
           * Else if the pattern belongs to the class and f_i(x_k) is negative, connect the output of hidden node i to the output node for class j through a weight of +γ_kij.
           * Else if the pattern does not belong to the class and f_i(x_k) is positive, connect the output of hidden node i to the output node for class j through a weight of -γ_kij.
     5. Repeat the procedure until all training patterns are processed.
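The creation procedure above can be sketched in Python. This is a loose reconstruction, not the authors' implementation: the sign test on f_i(x_k) is interpreted here as the network's current (pre-update) response for class j, and the weight magnitude γ_kij is simplified to a fixed constant `gamma`:

```python
import numpy as np

def potential(x, center, alpha):
    """Hidden-node potential function exp(-alpha * ||x - center||^2)."""
    return np.exp(-alpha * np.sum((x - center) ** 2))

def train_ppfnn(patterns, labels, num_classes, alpha=1.0, gamma=1.0):
    """Incremental PPFNN creation (sketch).

    Returns (centers, weights): the hidden-node centers, and a
    (num_hidden x num_classes) weight matrix connecting hidden nodes
    to output nodes. The network grows only on misclassified patterns.
    """
    centers, weights = [], []
    for x, cls in zip(patterns, labels):
        x = np.asarray(x, dtype=float)
        if centers:
            f = np.array([potential(x, c, alpha) for c in centers])
            out = np.clip(f @ np.array(weights), 0.0, 1.0)
        else:
            out = np.zeros(num_classes)
        target = np.eye(num_classes)[cls]
        if np.array_equal(np.round(out), target):
            continue  # correctly classified: no action, network undisturbed
        # Else: add a hidden node whose potential is centered at this vector
        centers.append(x)
        w = np.zeros(num_classes)
        for j in range(num_classes):
            belongs = (j == cls)
            if belongs and out[j] <= 0:
                w[j] = +gamma   # should respond for class j but does not
            elif not belongs and out[j] > 0:
                w[j] = -gamma   # responds for class j but should not
        weights.append(w)
    return centers, np.array(weights)
```

Because existing centers and weights are never modified, training is a single incremental pass: each pattern is processed once, matching the real-time claim made for the algorithm.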

  7. SIMULATION RESULTS

     CLASSIFICATION PERFORMANCES OF NEURAL NETWORK ALGORITHMS
     (Classification Rate on Test Data, in %)

     Test Data     | MLP   | LVQ   | RBF   | PNN   | PPFNN
     --------------|-------|-------|-------|-------|------
     2-Spiral (1)  | 50.00 | 55.73 | 98.96 | 89.58 | 91.67
     IRIS          | 78.00 | 82.67 | 80.00 | -     | 96.00
     Sonar         | 53.85 | 62.98 | 71.15 | 74.04 | 73.08
     Vowel         | 36.57 | 11.11 | 56.67 | -     | 52.32
     Wisconsin     | 59.94 | 87.88 | 66.67 | 95.15 | 95.76
     Cleveland     | 55.17 | 57.93 | 65.86 | 55.86 | 58.28
     Thyroid       | 36.74 | 81.86 | 72.09 | -     | 78.14

     (1) Training and testing data sets are the same for this problem.

  8. [Figure slide; no recoverable text.]

  9. TRAINING TIME REQUIREMENTS (IN SECONDS) OF NEURAL NETWORK ALGORITHMS

           | 2-Spiral | IRIS  | Sonar | Vowel | Wisconsin | Cleveland | Thyroid
     ------|----------|-------|-------|-------|-----------|-----------|--------
     MLP   | 3556     | 12572 | 47400 | 52800 | 10317     | 8834      | 9746
     LVQ   | 1500     | 1320  | 7800  | 5237  | 1653      | 1835      | 450
     RBF   | 120      | 120   | 360   | 3600  | 137       | 495       | 65
     PNN   | 120      | -     | 886   | -     | 220       | 2532      | -
     PPFNN | 120      | 120   | 621   | 3777  | 98        | 1010      | 120

 10. SENSITIVITY OF CLASSIFICATION PERFORMANCE OF PPFNN AS ALPHA VARIES

     Benchmark        | Testing Interval | Max Variation in Classification | Max Variation in Classification
     Problem          | for Alpha        | Performance (Training Data)     | Performance (Test Data)
     -----------------|------------------|---------------------------------|--------------------------------
     Sonar            | [4.0, 15]        | 1.24%                           | 2.89%
     Vowel            | [3.5, 15]        | 3.18%                           | 3.03%
     Wisconsin        | [0.8, 6.8]       | 0.00%                           | 1.82%
     Cleveland        | [6.8, 18.8]      | 6.39%                           | 4.14%
     Thyroid Disease  | [0.0012, 12]     | 21.74%                          | 23.26%

 11. CONCLUSIONS

     Simulation results demonstrate that PPFNN performance is comparable to or better than that of MLP, RBF, LVQ, and PNN on the set of seven benchmark problems considered.

     Performance criteria employed in the simulation study included network training time and classification rates on test data.

     Simulation results indicated that PPFNN consistently performed in the leading group of classifiers across the set of problems tested, which was not the case for the other neuro-classifier algorithms.

     The training time requirements of PPFNN were generally minimal, leading to the conclusion that the PPFNN algorithm is a good choice for real-time implementation.

     PPFNN performance was not affected by large variations in the value of its only adjustable parameter, Alpha, which determines the spread of the potential functions.

     In conclusion, simulation results indicate that PPFNN is a robust neuro-classifier algorithm suitable for real-time environments.
