SLIDE 1

A Modified Fuzzy Min-Max Neural Network and Its Application to Fault Classification

Anas M. Quteishat and Chee Peng Lim

School of Electrical & Electronic Engineering University of Science Malaysia

SLIDE 2

Abstract

The objectives of this paper are:

To improve the Fuzzy Min-Max (FMM) classification performance in situations when large hyperboxes are formed by the network.

The Euclidean distance is computed after network training, and both the membership value of the hyperbox fuzzy sets and the Euclidean distance are used for classification.

To assess the effectiveness of the modified FMM network.

Benchmark pattern classification problems are first used, and the results from different methods are compared. In addition, a fault classification problem with real sensor measurements collected from a power generation plant is used to evaluate the applicability of the modified FMM network.

SLIDE 3

Introduction

Learning classifiers come in two types, supervised and unsupervised, which differ in the way they are trained. The FMM neural network is a pattern classification system that can be used to tackle clustering (unsupervised) or classification (supervised) problems. In this paper, FMM is used as a supervised classification system.

SLIDE 4

Introduction (contd.)

FMM is constructed from hyperbox fuzzy sets, each of which is an n-dimensional box defined by a pair of minimum and maximum points. Each input pattern is classified according to its degree of membership in the corresponding hyperboxes. A smaller hyperbox size means that each hyperbox can contain only a small number of patterns; this increases the network complexity but gives high accuracy. A larger hyperbox size means that each hyperbox can contain a larger number of patterns; this decreases the network complexity but leads to low classification accuracy.

SLIDE 5

FMM Neural Network

The FMM classification network is formed using hyperbox fuzzy sets. A hyperbox defines a region of the n-dimensional pattern space that has patterns with full class membership. The hyperbox is completely defined by its minimum and maximum points.

  • Fig. 1. A min-max hyperbox
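In code, such a hyperbox reduces to a pair of n-dimensional min/max vectors. A minimal sketch (ours, not from the paper; the class and method names are illustrative), assuming patterns are normalized to the unit cube:

```python
import numpy as np

class Hyperbox:
    """An n-dimensional box defined by its min point V and max point W."""

    def __init__(self, point):
        # A new hyperbox is initialized to a single pattern, so V = W.
        self.V = np.array(point, dtype=float)  # min point
        self.W = np.array(point, dtype=float)  # max point

    def contains(self, pattern):
        # A pattern has full class membership if it lies inside the box.
        a = np.asarray(pattern, dtype=float)
        return bool(np.all(self.V <= a) and np.all(a <= self.W))

# Example: a box spanning [0.2, 0.6] in both dimensions.
box = Hyperbox([0.2, 0.2])
box.W = np.array([0.6, 0.6])
print(box.contains([0.4, 0.5]))  # inside the box -> True
print(box.contains([0.7, 0.5]))  # outside the box -> False
```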
SLIDE 6

The definition of each hyperbox fuzzy set Bj is

    Bj = { X, Vj, Wj, f(X, Vj, Wj) },  ∀ X ∈ Iⁿ        (1)

where X is the input, Iⁿ is the n-dimensional unit cube, and Vj and Wj are the min and max points, respectively.

The size of the hyperboxes θ can have a value between 0 and 1. A small value of θ will produce small-size hyperboxes, and vice versa.

SLIDE 7

The input patterns are classified according to how much they are “contained” by a hyperbox; this is measured using the membership value of a pattern:

    bj(Ah) = (1/2n) Σ_{i=1..n} [ max(0, 1 − max(0, γ·min(1, ahi − wji)))
                               + max(0, 1 − max(0, γ·min(1, vji − ahi))) ]        (2)

where Ah = (ah1, ah2, …, ahn) ∈ Iⁿ is the hth input pattern, Vj = (vj1, vj2, …, vjn) is the min point of Bj, Wj = (wj1, wj2, …, wjn) is the max point of Bj, and γ is the sensitivity parameter.
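Equation (2) transcribes almost directly into code. A minimal NumPy sketch (our illustration; the function and variable names are assumptions, not the paper's):

```python
import numpy as np

def membership(a, v, w, gamma=1.0):
    """FMM membership of input pattern a in a hyperbox (Eq. 2).

    a: input pattern (n,), v: min point, w: max point,
    gamma: sensitivity parameter.
    """
    a, v, w = (np.asarray(x, dtype=float) for x in (a, v, w))
    # Penalty terms are zero inside the box and grow (clipped) outside it.
    upper = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, a - w)))
    lower = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, v - a)))
    return float(np.sum(upper + lower) / (2.0 * a.size))

# A pattern inside the box has full membership.
print(membership([0.4, 0.5], v=[0.2, 0.2], w=[0.6, 0.6]))  # 1.0
# Membership falls off for patterns outside the box.
print(membership([0.9, 0.5], v=[0.2, 0.2], w=[0.6, 0.6]))
```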

SLIDE 8

Modifications of the FMM Network

The modification to the FMM neural network is made at the prediction stage; the original FMM learning procedure is left untouched. In this paper we propose two methods for prediction using:

1. Euclidean distance 2. Both Euclidean distance and membership value

SLIDE 9

1) Prediction using the Euclidean distance

This prediction method is based on the Euclidean distance between the input pattern and the centroid of the hyperbox. In addition to the min and max points, the centroid of patterns falling in each hyperbox is computed, as follows:

    Cji = Cji + (ahi − Cji) / Nj        (3)

where Cji is the centroid of the jth hyperbox in the ith dimension, and Nj is the number of patterns included in the jth hyperbox.

SLIDE 10

The Euclidean distance between the centroid and the input pattern is calculated using:

    Ejh = √( Σ_{i=1..n} (Cji − ahi)² )        (4)

where Ejh is the Euclidean distance between the jth hyperbox and the hth input pattern.

In the classification process, the hyperbox with the smallest Euclidean distance to the input pattern is selected as the winner, and the input pattern is assigned to the class of that hyperbox.
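The incremental centroid update of Eq. (3) and the distance-based prediction of Eq. (4) can be sketched as follows (our illustration; the function names and data layout are assumptions):

```python
import numpy as np

def update_centroid(c, n_j, a):
    """Incremental centroid update for a hyperbox (Eq. 3).

    c: current centroid (n,), n_j: pattern count including a,
    a: newly included pattern. Returns the updated centroid.
    """
    c = np.asarray(c, dtype=float)
    a = np.asarray(a, dtype=float)
    return c + (a - c) / n_j

def predict_by_distance(a, centroids, classes):
    """Assign a to the class of the hyperbox with the nearest centroid (Eq. 4)."""
    a = np.asarray(a, dtype=float)
    dists = [np.sqrt(np.sum((c - a) ** 2)) for c in centroids]
    return classes[int(np.argmin(dists))]

# Centroid of patterns (0.2, 0.2) and (0.4, 0.4) built incrementally.
c = np.array([0.2, 0.2])               # centroid after the first pattern
c = update_centroid(c, 2, [0.4, 0.4])  # include the second pattern
print(c)                               # [0.3 0.3]

# Classify an input against two hyperbox centroids.
print(predict_by_distance([0.35, 0.3], [c, np.array([0.8, 0.8])],
                          ["class 1", "class 2"]))  # "class 1"
```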

SLIDE 11

Figure 2 shows the classification process of a two-dimensional input pattern using the described method.

  • Fig. 2. The classification process of an input pattern using the Euclidean distance (the distances E1 and E2 between the input and the centroids C1 and C2 of hyperboxes 1 and 2, belonging to class 1 and class 2, determine the winner)

SLIDE 12

2) Prediction using both the membership function and Euclidean distance

When θ is large, hyperbox sizes are large; as a consequence, when the membership value is calculated, more than one hyperbox can have a very large membership value, sometimes even unity. To solve this problem, we propose to use both the membership value and the Euclidean distance for classification. The hyperboxes with the highest membership value are selected, and then the Euclidean distance between the centroids of these hyperboxes and the input pattern is calculated. The hyperbox with the smallest distance is used to classify the input pattern.
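The two-stage rule above can be sketched as follows (our illustration; the tie tolerance `tol` and the data layout are assumptions, not from the paper):

```python
import numpy as np

def membership(a, v, w, gamma=1.0):
    """FMM membership of pattern a in a hyperbox with min v and max w (Eq. 2)."""
    a, v, w = (np.asarray(x, dtype=float) for x in (a, v, w))
    upper = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, a - w)))
    lower = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, v - a)))
    return float(np.sum(upper + lower) / (2.0 * a.size))

def predict_combined(a, boxes, tol=1e-9):
    """Break membership ties with the Euclidean distance to the centroid.

    boxes: list of (V, W, centroid, class_label) tuples.
    """
    a = np.asarray(a, dtype=float)
    ms = [membership(a, v, w) for v, w, _, _ in boxes]
    best = max(ms)
    # Keep every hyperbox whose membership equals the maximum...
    tied = [b for b, m in zip(boxes, ms) if best - m < tol]
    # ...then pick the one whose centroid is nearest to the input.
    winner = min(tied, key=lambda b: np.linalg.norm(b[2] - a))
    return winner[3]

# Two large overlapping boxes both give unity membership to the input;
# the centroid distance then decides the class.
boxes = [
    (np.array([0.0, 0.0]), np.array([0.9, 0.9]), np.array([0.2, 0.2]), "class 1"),
    (np.array([0.1, 0.1]), np.array([1.0, 1.0]), np.array([0.8, 0.8]), "class 2"),
]
print(predict_combined([0.7, 0.7], boxes))  # nearer to C2 -> "class 2"
```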

SLIDE 13

Experiments and Results

The proposed methods were tested using 4 data sets:

A) Benchmark data sets:

1) PID data set 2) IRIS data set

B) Fault diagnosis data sets:

1) The heat transfer conditions. 2) The tube blockage conditions.

SLIDE 14

A) Benchmark data sets: 1) PID data set

The Pima Indian Diabetes (PID) data set consists of 768 samples in a two-class problem. The set was divided into two subsets: 75% for training and 25% for testing. The experiment was repeated 5 times, and the average of the results is shown in Figure 3. A comparison between the results obtained by our method and other methods based on the same experimental criteria is shown in Table 1.
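The evaluation protocol (a random 75/25 split repeated 5 times, averaging the test accuracy) can be sketched as follows; the toy data and the majority-class placeholder classifier are illustrative stand-ins, not the paper's FMM:

```python
import numpy as np

def average_accuracy(X, y, classify, runs=5, train_frac=0.75, seed=0):
    """Repeat a random train/test split and average the test accuracy."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(runs):
        idx = rng.permutation(len(X))
        cut = int(train_frac * len(X))
        train, test = idx[:cut], idx[cut:]
        model = classify(X[train], y[train])   # returns a predict function
        accs.append(np.mean(model(X[test]) == y[test]))
    return float(np.mean(accs))

# Toy two-class data and a trivial majority-class "classifier" placeholder.
X = np.random.default_rng(1).random((40, 2))
y = np.array([0] * 30 + [1] * 10)

def majority(Xtr, ytr):
    label = np.bincount(ytr).argmax()
    return lambda Xte: np.full(len(Xte), label)

print(average_accuracy(X, y, majority))
```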

SLIDE 15

  • Fig. 3. The testing accuracy rates of the PID problem. Curve A shows the accuracy rate using the membership value only; curve B shows the accuracy rate using both the membership value and Euclidean distance; curve C shows the accuracy rate using the Euclidean distance only.


SLIDE 16

Table 1. Classification accuracy from various methods for the PID data set

Methods                   Accuracy (%)
LDA                       77.5
C4.5                      73.0
CART                      72.8
k-NN                      71.9
Curve A (best result)     76.56
Curve B (best result)     72.4
Curve C (best result)     74.9

SLIDE 17

A) Benchmark data sets: 2) IRIS data set

The IRIS data set consists of 150 samples in a 3-class classification problem. The data set was divided into two sets: a training set consisting of 80% of each class, and a test set with the remaining samples. The experiment was conducted for the data set, and the results of the proposed methods along with the original FMM are shown in Figure 4. Table 2 shows the maximum accuracy of various methods in comparison with the proposed methods (all experiments were conducted using the same training and test data sets).

SLIDE 18

  • Fig. 4. The testing accuracy rates of the IRIS data set. Curve A shows the accuracy rate using the membership value only; curve B shows the accuracy rate using both the membership value and Euclidean distance; curve C shows the accuracy rate using the Euclidean distance only.


SLIDE 19

Table 2. Classification accuracy from various methods for the IRIS data set

Methods                   Accuracy (%)
C4.5                      91.60
OC1                       93.89
LMDT                      95.45
LVQ                       92.55
Curve A (best result)     94.00
Curve B (best result)     93.33
Curve C (best result)     94.00

SLIDE 20

B) Fault Classification

A fault detection and classification system predicts failures and, when a failure occurs, identifies the reason(s) for the failure. In this study, we investigate the applicability of the modified FMM network using a set of sensor measurements collected from a power generation plant in Malaysia. The system under consideration is a circulating water (CW) system, as shown in Figure 5.

SLIDE 21

  • Fig. 5. The circulating water system (low pressure turbines, condenser, CW pumps, primary bar screen, strainer, and common discharge header; seawater is returned to sea, and condensate is reused in the steam cycle process)
SLIDE 22

A data set of 2439 samples was collected. Each data sample consisted of 12 features comprising temperature and pressure measurements at various inlet and outlet points of the condenser, as well as other important parameters. Two case studies were conducted:

  • 1. The heat transfer conditions.
  • 2. The tube blockage conditions.
SLIDE 23

B) Fault Classification 1) Heat Transfer Conditions

The heat transfer conditions were classified into two categories: efficient or inefficient. From the data set, 1224 data samples (50.18%) showed an inefficient heat transfer condition, whereas 1215 data samples (49.82%) showed an efficient heat transfer condition in the condenser. The data set (excluding one sample) was divided into two equal sets, each containing 1219 samples, one for training and the other for testing. Both data sets contained 50% of the data samples belonging to each class.

SLIDE 24

Figure 6 shows the testing accuracy of the proposed methods along with the original FMM. Table 3 shows the testing accuracy along with the number of hyperboxes used in the classification.

SLIDE 25

  • Fig. 6. The testing accuracy rates of the heat transfer conditions. Curves A, B, and C show the testing accuracy rates using the membership value only, both the membership value and Euclidean distance, and the Euclidean distance only, respectively.


SLIDE 26

Table 3. Testing accuracy for the heat transfer data set

θ      Number of    Membership    Euclidean      Membership value and
       hyperboxes   value (%)     distance (%)   Euclidean distance (%)
0.95   2            87.08         90.43          92.23
0.90   2            87.08         90.43          92.23
0.85   2            87.08         90.43          92.23
0.80   2            87.08         90.43          92.23
0.75   2            87.08         90.43          92.23
0.70   38           90.41         79.19          92.57
0.65   81           90.53         72.05          90.63
0.60   101          88.56         77.32          89.94
0.55   187          90.53         85.56          92.03
0.50   226          90.90         92.82          93.36

SLIDE 27

B) Fault Classification 2) Tube Blockage Conditions

In this experiment, the objective was to predict the occurrence of tube blockage in the CW system. The conditions of the condenser tubes were categorized into two classes: significant blockage and insignificant blockage. The data set used in the previous experiment was again employed. A total of 1313 samples (53.83%) showed significant blockage, and the remaining samples showed insignificant blockage in the condenser tubes. Again, the data set (excluding one sample) was divided into two equal sets, each containing 1219 samples, one for training and the other for testing.

SLIDE 28

Figure 7 shows the testing accuracy of the proposed methods along with the original FMM. Table 4 shows the testing accuracy along with the number of hyperboxes used in the classification.

SLIDE 29

  • Fig. 7. The testing accuracy rates of the tube blockage conditions. Curves A, B, and C show the testing accuracy rates using the membership value only, both the membership value and Euclidean distance, and the Euclidean distance only, respectively.


SLIDE 30

Table 4. Testing accuracy for the tube blockage data set

θ      Number of    Membership    Euclidean      Membership value and
       hyperboxes   value (%)     distance (%)   Euclidean distance (%)
0.95   2            92            93.75          96.75
0.90   2            92            93.75          96.75
0.85   6            92            88.98          96.41
0.80   42           92            86.35          89.25
0.75   21           91.88         85.36          93.01
0.70   81           92            85.85          92.13
0.65   62           91.88         83.39          93.75
0.60   78           100           86.79          95.28
0.55   155          100           85.56          99.68
0.50   264          99.88         96.61          100.00

SLIDE 31

Conclusions

The results obtained reveal the usefulness of the proposed modifications in improving the performance of FMM when large hyperboxes are formed by the network.