ARTIFICIAL INTELLIGENCE - JNTUK R19 - UNIT 6 - Uncertainty Measure: Probability Theory

 

6. UNCERTAINTY MEASURE: PROBABILITY THEORY

 

PROBABILISTIC REASONING:

Probabilistic reasoning is a way of knowledge representation where we apply the concept of probability to indicate the uncertainty in knowledge. In probabilistic reasoning, we combine probability theory with logic to handle uncertainty. We use probability because it provides a way to handle the uncertainty that results from laziness and ignorance.

In the real world there are many scenarios where the certainty of something is not confirmed, such as "It will rain today," "the behavior of someone in some situation," or "a match between two teams or two players." These are probable sentences: we can assume they will happen but cannot be sure, so here we use probabilistic reasoning.

Need of probabilistic reasoning in AI:

·         When there are unpredictable outcomes.

·         When specifications or possibilities of predicates become too large to handle.

·         When an unknown error occurs during an experiment.

In probabilistic reasoning, there are two ways to solve problems with uncertain knowledge:

·         Bayes Rule

·         Bayesian Statistics

 

Probability: Probability can be defined as the chance that an uncertain event will occur. It is the numerical measure of the likelihood that an event will occur. The value of a probability always lies between 0 and 1, the two ideal extremes of uncertainty.

·         0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.

·         P(A) = 0, indicates total uncertainty in an event A.

·         P(A) =1, indicates total certainty in an event A.

We can find the probability of an uncertain event by using the formula:

P(A) = (Number of favourable outcomes) / (Total number of outcomes)

·         P(¬A) = probability of event A not happening.

·         P(¬A) + P(A) = 1.

Event: Each possible outcome of a variable is called an event.

Sample space: The collection of all possible events is called sample space.

Random variables: Random variables are used to represent the events and objects in the real world.


 
Prior probability: The prior probability of an event is the probability computed before observing new information.

Posterior probability: The probability that is calculated after all evidence or information has been taken into account. It is a combination of the prior probability and the new information.

 


Conditional probability: Conditional probability is the probability of an event occurring given that another event has already happened. Suppose we want to calculate event A when event B has already occurred, "the probability of A under the condition B". It can be written as:

P(A|B) = P(A ∧ B) / P(B)

Where, P(A ∧ B) = joint probability of A and B, and P(B) = marginal probability of B.

If the probability of A is given and we need to find the probability of B, then it will be given as:

P(B|A) = P(A ∧ B) / P(A)

It can be explained using a Venn diagram: once B has occurred, the sample space is reduced to the set B, and we can only calculate event A given that B has already occurred, by dividing P(A ∧ B) by P(B).

Example: In a class, 70% of the students like English and 40% of the students like both English and Mathematics. What percentage of the students who like English also like Mathematics?

Solution:

Let A be the event that a student likes Mathematics, and B the event that a student likes English.

P(A|B) = P(A ∧ B) / P(B) = 0.4 / 0.7 = 0.57 (approximately)

Hence, 57% of the students who like English also like Mathematics.
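As a quick check, here is a minimal Python sketch of the same calculation:

```python
p_english = 0.70       # P(B): a student likes English
p_both = 0.40          # P(A ∧ B): a student likes both English and Mathematics

# Conditional probability: P(A|B) = P(A ∧ B) / P(B)
p_math_given_english = p_both / p_english
print(round(p_math_given_english, 2))   # 0.57, i.e. about 57%
```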

 


BAYES THEOREM:

Bayes' theorem is also known as Bayes' rule, Bayes' law, or Bayesian reasoning, which determines the probability of an event with uncertain knowledge. In probability theory, it relates the conditional probability and marginal probabilities of two random events. Bayes' theorem was named after the British mathematician Thomas Bayes. The Bayesian inference is an application of Bayes' theorem, which is fundamental to Bayesian statistics.

In monotonic reasoning, at any given moment a statement is either believed to be true, believed to be false, or not believed to be either. In statistical reasoning, it is useful to be able to describe beliefs that are not certain but for which there is some supporting evidence.

When we have uncertain knowledge, one way to express confidence about an event is through probability, which expresses the chance of the event happening or not happening. The general characteristics of probability theory are:

1.  The probability of a statement is always between 0 (total uncertainty) and 1 (total certainty): 0 ≤ P(A) ≤ 1.

2.  The probability of a sure proposition is unity (1).

3.  P(A ∪ B) = P(A) + P(B), if A and B are mutually exclusive.

4.  P(~A) = 1 − P(A).

The fundamental notion of Bayesian statistics is that of conditional probability P(H/E), i.e., the probability of hypothesis H given that we have observed evidence E.

P(Hi/E) = the probability that hypothesis Hi is true given evidence E.

P(E/Hi) = the probability that we will observe evidence E given that hypothesis Hi is true.

P(Hi) = the a priori probability that hypothesis Hi is true in the absence of any specific evidence. These probabilities are called prior probabilities or priors.

k = the number of possible hypotheses. Bayes' theorem states that:

P(Hi/E) = P(E/Hi) · P(Hi) / Σ(n = 1 to k) P(E/Hn) · P(Hn)
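As an illustrative sketch, the theorem can be applied directly by normalizing likelihood times prior over all hypotheses (the numbers below are assumed for the example):

```python
priors = [0.3, 0.5, 0.2]         # P(Hi): assumed prior probabilities
likelihoods = [0.9, 0.2, 0.5]    # P(E|Hi): assumed likelihoods

p_e = sum(l * p for l, p in zip(likelihoods, priors))   # P(E) = Σ P(E|Hn)·P(Hn)
posteriors = [l * p / p_e for l, p in zip(likelihoods, priors)]
print(posteriors, sum(posteriors))   # P(Hi|E); the posteriors sum to 1.0
```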

BAYESIAN BELIEF NETWORKS:

Rule 1: If the sprinkler was on last night, then there is suggestive evidence (0.9) that the grass will be wet this morning.

Rule 2: If the grass is wet this morning, then there is suggestive evidence (0.8) that it rained last night.

We can draw a network from the above two rules. Such networks represent interactions among events. The main idea is that, to describe the real world, it is not necessary to use a huge joint probability table in which we list the probabilities of all conceivable combinations of events.

But here the constraints would flow incorrectly from "sprinkler on" to "rain last night". There are two different ways that propositions can influence the likelihood of each other:

1.    Causes influence the likelihood of their symptoms.

2.    Observing a symptom affects the likelihood of all of its possible causes.

Bayesian network structure makes a clear distinction between these two kinds of influence. For this, we construct a directed acyclic graph that represents causality relationships among variables. The variables may be propositional or take on values of some other type. A causality graph for the wet-grass example adds a new node that tells whether it is currently the rainy season or not. The graph shows the causality relationships that occur among its nodes. In order to use this as a basis for probabilistic reasoning, we also need conditional probabilities.

For example, to represent the causal relationships among the propositional variables x1, x2, …, x6 of such a graph, one can write the joint probability P(x1, x2, …, x6) by inspection as a product of conditional probabilities, with one factor per variable, each conditioned on that variable's parents in the graph.
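A minimal sketch of this factorization, assuming a small wet-grass style chain of parents and illustrative CPT numbers (none of these values come from the figure):

```python
# One factor per node, each conditioned on its parents (assumed structure):
# P(season, rain, sprinkler, wet) =
#   P(season) * P(rain | season) * P(sprinkler | season) * P(wet | rain, sprinkler)
p_season = 0.4        # P(rainy season), assumed
p_rain = 0.7          # P(rain | rainy season), assumed
p_sprinkler = 0.1     # P(sprinkler on | rainy season), assumed
p_wet = 0.95          # P(wet grass | rain, sprinkler on), assumed

joint = p_season * p_rain * p_sprinkler * p_wet
print(joint)          # probability of this one complete assignment
```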

CERTAINTY FACTOR THEORY:

This approach was adopted in the MYCIN system. MYCIN is an expert system that recommends appropriate therapies for patients with bacterial infections. It interacts with the physician to acquire the clinical data it needs.

MYCIN represents its diagnostic knowledge as a set of rules. Each rule has an associated certainty factor (CF), which is a measure of the extent to which the evidence described by the antecedent of the rule supports the conclusion given in the rule's consequent. For example: If

1.    The stain of the organism is gram-positive and

2.    The Morphology of the organism is coccus and

3.    The growth conformation of the organism is clumps, then there is suggestive evidence (0.7) that the identity of the organism is staphylococcus.

This means that even if antecedents 1, 2 and 3 are 100% certain, the identity of the organism is only 70% certain to be staphylococcus.

A certainty factor CF[h, e] is defined in terms of two components:

1.   MB[h, e]: a measure of belief in hypothesis h given the evidence e. MB measures the extent to which the evidence supports the hypothesis; it is 0 if the evidence fails to support the hypothesis.

2.   MD[h, e]: a measure of disbelief in hypothesis h given the evidence e. MD measures the extent to which the evidence supports the negation of the hypothesis; it is 0 if the evidence fails to support the negation.

From these two we can define the certainty factor: CF[h, e] = MB[h, e] − MD[h, e]
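A one-line sketch of this definition, with illustrative values:

```python
def certainty_factor(mb: float, md: float) -> float:
    """CF[h, e] = MB[h, e] - MD[h, e]; the result lies in [-1, 1]."""
    return mb - md

print(certainty_factor(0.7, 0.0))    # evidence supports h: CF = 0.7
print(certainty_factor(0.0, 0.4))    # evidence supports ~h: CF = -0.4
```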

When multiple pieces of evidence and multiple rules apply to a problem, these certainty factors need to be combined. Three cases arise:

1.    Several rules (say A and B) provide evidence for a single hypothesis C.

2.    A belief must be computed for several propositions taken together (a combination of hypotheses).

3.    The output of one rule provides the input to another.

Several rules provide evidence for a single hypothesis:

The measures of belief and disbelief of a hypothesis h given two observations s1 and s2 are computed from:

MB[h, s1 ∧ s2] = 0, if MD[h, s1 ∧ s2] = 1; otherwise MB[h, s1] + MB[h, s2] · (1 − MB[h, s1])

MD[h, s1 ∧ s2] is computed analogously, with the roles of MB and MD exchanged.
Certainty factor of a combination of hypotheses:

The formulas MYCIN uses for the MB of the conjunction and the disjunction of two Hypotheses are:

MB[h1 ∧ h2, e] = min( MB[h1, e], MB[h2, e] )

MB[h1 ∨ h2, e] = max( MB[h1, e], MB[h2, e] )

MD can be computed analogously.

Ex: Consider the production rule: if there is enough fuel in the vehicle, and the ignition system is working correctly, and the vehicle does not start, then the fault lies in the fuel flow (CF = 0.75).

If every 'if' part is known with 100% certainty, then the CF of the consequent is 0.75. If the 'if' part is not known with 100% certainty, then the following rules are used to estimate the values of MB and MD:

MB[h, e] = MB′[h, e] × max(0, CF[e])

MD[h, e] = MD′[h, e] × max(0, CF[e])

where MB′[h, e] and MD′[h, e] are the measures that would apply if the evidence e were known with complete certainty, and CF[e] is the certainty of the evidence itself.

Whenever a problem is solved using the third case (the output of one rule feeding another), the following conventions are also adopted:

1.  The CF of the conjunction of several facts is taken to be the minimum of CF’s of the individual facts.

2.  The CF for the conclusion is obtained by multiplying the CF of the rule with the minimum CF of the ‘if’ part.

3.  The CF for a fact produced as the conclusion of one or more rules is the maximum of the CFs produced.

Example:

Rule 1: If p & q & r then z (CF = 0.65)

Rule 2: If u & v & w then z (CF = 0.7)

Fact CFs: p = 0.6, q = 0.45, r = 0.3, u = 0.7, v = 0.5, w = 0.6

I. From rule 1: min[p, q, r] = min[0.6, 0.45, 0.3] = 0.3

II. From rule 1: CF(Z) = 0.3 × 0.65 = 0.195

I. From rule 2: min[u, v, w] = min[0.7, 0.5, 0.6] = 0.5

II. From rule 2: CF(Z) = 0.5 × 0.7 = 0.35

III. Combining the two rules: CF(Z) = max(0.195, 0.35) = 0.35.
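A minimal Python sketch of the three combination conventions, applied to this example:

```python
facts = {"p": 0.6, "q": 0.45, "r": 0.3, "u": 0.7, "v": 0.5, "w": 0.6}

def rule_cf(antecedents, rule_strength):
    # Convention 1: CF of a conjunction = minimum of the facts' CFs.
    antecedent_cf = min(facts[a] for a in antecedents)
    # Convention 2: conclusion CF = rule CF x minimum antecedent CF.
    return antecedent_cf * rule_strength

cf1 = rule_cf(["p", "q", "r"], 0.65)   # 0.3 * 0.65 = 0.195
cf2 = rule_cf(["u", "v", "w"], 0.70)   # 0.5 * 0.70 = 0.35
# Convention 3: several rules concluding z combine by taking the maximum.
print(round(max(cf1, cf2), 3))         # CF(Z) = 0.35
```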

 

 

DEMPSTER-SHAFER THEORY:

Drawbacks of Bayesian theory as a model of uncertain reasoning:

1.   Probabilities are described as a single numeric value; this can be a distortion of the precision that is actually available in the supporting evidence.

Ex: Suppose we say there is a probability of 0.7 that the dollar will rise against the Japanese yen over the next 6 months. Is the chance really 0.7, or might it just as well be 0.6 or 0.8? A single number cannot convey this imprecision.

2.   There is no way to differentiate between ignorance and uncertainty; these are distinct concepts, but Bayesian theory is forced to treat them in the same way.


3.  Belief and disbelief are functional opposites with respect to classical probability theory: if P(A) = 0.3 then P(~A) = 0.7. But in the certainty factor framework, a measure of belief of 0.3 can go with a measure of disbelief of 0, an assignment that conflicts with classical probability.

To avoid these problems, Arthur Dempster and his student Glenn Shafer proposed the Dempster-Shafer theory. Here, considering sets of propositions, we assign to each an interval

[Belief, Plausibility]

Belief, denoted Bel, measures the strength of the evidence in favor of a set of propositions S; it ranges from 0 to 1. Plausibility, denoted Pl, measures the extent to which evidence in favor of ~S leaves room for belief in S:

Pl(S) = 1 − Bel(~S)

This also ranges from 0 to 1. If we have certain evidence in favor of ~S, then Bel(~S) will be 1 and Pl(S) will be 0. Bel is a measure of our belief in some hypothesis given some evidence. We need to start with an exhaustive universe of mutually exclusive hypotheses, which we call the frame of discernment and write as Θ. For example, in a diagnosis problem Θ might consist of the set {All, Flu, Cold, Pneu}:

All: allergy; Flu: flu; Cold: cold; Pneu: pneumonia.

1.  Our goal is to attach some measure of belief to the elements of Θ.

2.  Not all evidence directly supports individual elements.

3.  Often it supports sets of elements (i.e., subsets of Θ).

Here, fever might support {Flu, Cold, Pneu}. Since the elements are mutually exclusive, evidence in favor of some elements affects our belief in the others.

In a purely Bayesian system we can handle both of these phenomena by considering all combinations of conditional probabilities. But Dempster-Shafer theory handles such interactions by manipulating sets of hypotheses directly. We use m, a probability density function defined over all subsets of Θ, including singleton subsets, i.e., individual elements.

m(P) measures the amount of belief that is currently assigned to exactly the set P of hypotheses.

If Θ contains n elements, then there are 2^n subsets of Θ. We must assign m so that the sum of all the m values assigned to the subsets of Θ is 1.

1.  Initially, suppose we have no evidence for any of the four hypotheses. We define m as:

m(Θ) = 1.0; all other values of m are 0.

2.  Now suppose evidence of fever arrives (at a level of 0.6), so the correct diagnosis lies in the set {Flu, Cold, Pneu}:

m({Flu, Cold, Pneu}) = 0.6

m(Θ) = 0.4

Now Bel(P) is the sum of the values of m for the set P and for all of its subsets. Thus Bel(P) is our overall belief that the correct answer lies somewhere in the set P.

This theory can combine any two belief functions, whether they represent multiple sources of evidence for a single hypothesis or multiple sources of evidence for different hypotheses. Suppose m1 and m2 are two belief functions.

X = the set of subsets of Θ to which m1 assigns a non-zero value.

Y = the corresponding set for m2. The combination m3 of m1 and m2 can then be applied to any set Z ⊆ Θ:

m3(Z) = [sum of m1(X) · m2(Y) over all X, Y with X ∩ Y = Z] / [1 − sum of m1(X) · m2(Y) over all X, Y with X ∩ Y = ∅]


Case 1: The intersections of X and Y generate only non-empty sets. Suppose m1 is our belief after observing fever and m2 is our belief after observing a runny nose. We can compute m3 from the combination of m1 and m2; since no empty intersections are produced, the scaling factor is 1.

Case 2: Empty sets are generated. In this case, after producing the numerator of m3, we must rescale.


Now suppose that m4 is our belief that the patient has recently gone on a trip. Applying the numerator of the combination rule to m3 and m4 assigns some belief to empty intersections. The scaling factor is 1 − (0.432 + 0.108) = 1 − 0.540 = 0.46: a total belief of 0.54 was associated with outcomes that are in fact impossible, so each surviving m value is divided by 0.46 to renormalize.
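A minimal sketch of Dempster's rule of combination over Python frozensets; the mass assignments below are illustrative stand-ins in the style of the fever and runny-nose example:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: sum m1(X)*m2(Y) over X ∩ Y = Z, then renormalize."""
    raw, conflict = {}, 0.0
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        z = x & y
        if z:
            raw[z] = raw.get(z, 0.0) + mx * my
        else:
            conflict += mx * my            # mass falling on the empty set
    scale = 1.0 - conflict                 # the scaling factor described above
    return {z: v / scale for z, v in raw.items()}

# Illustrative masses (stand-ins for the figure tables):
THETA = frozenset({"All", "Flu", "Cold", "Pneu"})
m_fever = {frozenset({"Flu", "Cold", "Pneu"}): 0.6, THETA: 0.4}
m_nose = {frozenset({"All", "Flu", "Cold"}): 0.8, THETA: 0.2}
print(combine(m_fever, m_nose))
```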

 

FUZZY SETS AND FUZZY LOGIC

 

 

FUZZY LOGIC:

The word fuzzy refers to things that are not clear or are vague. Any event, process, or function that changes continuously cannot always be defined as either true or false, which means that we need to be able to describe such activities in a fuzzy manner.

 

What is Fuzzy Logic?

Fuzzy logic resembles the human decision-making methodology and deals with vague and imprecise information. Crisp true/false categories are a gross oversimplification of real-world problems; fuzzy logic is instead based on degrees of truth rather than the usual true/false (1/0) of Boolean logic.

Take a look at the following diagram. It shows that in fuzzy systems the values are indicated by a number in the range from 0 to 1, where 1.0 represents absolute truth and 0.0 represents absolute falseness. The number which indicates the value in a fuzzy system is called the truth value.

In other words, we can say that fuzzy logic is not logic that is fuzzy, but logic that is used to describe fuzziness. There can be numerous other examples like this with the help of which we can understand the concept of fuzzy logic. Fuzzy logic was introduced in 1965 by Lotfi A. Zadeh in his research paper "Fuzzy Sets"; he is considered the father of fuzzy logic.

 

Fuzzy Logic - Classical Set Theory:

A set is an unordered collection of different elements. It can be written explicitly by listing its elements using the set bracket. If the order of the elements is changed or any element of a set is repeated, it does not make any changes in the set.

Example:

·         A set of all positive integers.

·         A set of all the planets in the solar system.

·         A set of all the states in India.

·         A set of all the lowercase letters of the alphabet.

 

Mathematical Representation of a Set: Sets can be represented in two ways

·         Roster or Tabular Form

·         Set Builder Notation

1.        Roster (or) Tabular Form: In this form, a set is represented by listing all the elements comprising it. The elements are enclosed within braces and separated by commas. Following are the examples of set in Roster or Tabular Form:

·         Set of vowels in English alphabet, A = {a,e,i,o,u}

·         Set of odd numbers less than 10, B = {1,3,5,7,9}

 

2.        Set Builder Notation: In this form, the set is defined by specifying a property that elements of the set have in common. The set is described as A = {x:p(x)}

Example 1: The set {a,e,i,o,u} is written as A = {x : x is a vowel in the English alphabet}

Example 2: The set {1,3,5,7,9} is written as B = {x : 1 ≤ x < 10 and (x % 2) ≠ 0}

If an element x is a member of any set S, it is denoted by x ∈ S, and if an element y is not a member of set S, it is denoted by y ∉ S.

Example: If S = {1, 1.2, 1.7, 2}, then 1 ∈ S but 1.5 ∉ S.

 

Cardinality of a Set:

The cardinality of a set S, denoted by |S|, is the number of elements of the set. This number is also referred to as the cardinal number. If a set has an infinite number of elements, its cardinality is ∞.

Example: |{1,4,3,5}| = 4, |{1,2,3,4,5,…}| = ∞

If there are two sets X and Y, |X| = |Y| denotes two sets X and Y having same cardinality. It occurs when the number of elements in X is exactly equal to the number of elements in Y. In this case, there exists a bijective function ‘f’ from X to Y.

·         |X| ≤ |Y| denotes that set X’s cardinality is less than or equal to set Y’s cardinality. It occurs when the number of elements in X is less than or equal to that of Y. Here, there exists an injective function ‘f’ from X to Y.

·         |X| < |Y| denotes that set X’s cardinality is less than set Y’s cardinality. It occurs when the number of elements in X is less than that of Y. Here, the function ‘f’ from X to Y is injective function but not bijective.

·         If |X| ≤ |Y| and |Y| ≤ |X| then |X| = |Y|. The sets X and Y are commonly referred to as equivalent sets.

 

TYPES OF SETS:

Sets can be classified into many types, some of which are: finite, infinite, subset, universal set, proper subset, singleton (or unit) set, empty (or null) set, equal, equivalent, overlapping, and disjoint sets.

Finite Set: A set which contains a definite number of elements is called a finite set.

Example: S = {x | x ∈ N and 70 > x > 50}

Infinite Set: A set which contains an infinite number of elements is called an infinite set.

Example: S = {x | x ∈ N and x > 10}

Subset: A set X is a subset of set Y (written as X ⊆ Y) if every element of X is an element of set Y.

Example 1: Let X = {1,2,3,4,5,6} and Y = {1,2}. Here set Y is a subset of set X, as all the elements of set Y are in set X. Hence, we can write Y ⊆ X.

Example 2: Let X = {1,2,3} and Y = {1,2,3}. Here set Y is a subset (but not a proper subset) of set X, as all the elements of set Y are in set X. Hence, we can write Y ⊆ X.

Proper Subset: The term "proper subset" can be defined as "subset of but not equal to". A set X is a proper subset of set Y (written as X ⊂ Y) if every element of X is an element of set Y and |X| < |Y|.

Example: Let X = {1,2,3,4,5,6} and Y = {1,2}. Here Y ⊂ X, since all elements in Y are contained in X too and X has at least one element more than set Y.

Universal Set: It is a collection of all elements in a particular context or application. All the sets in that context or application are essentially subsets of this universal set. Universal sets are represented as U.

Example: We may define U as the set of all animals on earth. In this case, a set of all mammals is a subset of U, a set of all fishes is a subset of U, a set of all insects is a subset of U, and so on.

Empty (or) Null Set: An empty set contains no elements. It is denoted by Φ. As the number of elements in an empty set is finite, empty set is a finite set. The cardinality of empty set or null set is zero.

Example: S = {x | x ∈ N and 7 < x < 8} = Φ

Singleton Set (or) Unit Set: A Singleton set or Unit set contains only one element. A singleton set is denoted by {s}.

Example: S = {x | x ∈ N and 7 < x < 9} = {8}

Equal Set: If two sets contain the same elements, they are said to be equal.

Example: If A = {1,2,6} and B = {6,1,2}, they are equal, as every element of set A is an element of set B and every element of set B is an element of set A.

Equivalent Set: If the cardinalities of two sets are same, they are called equivalent sets.

Example: If A = {1,2,6} and B = {16,17,22}, they are equivalent as cardinality of A is equal to the cardinality of B. i.e. |A| = |B| = 3.

Overlapping Set: Two sets that have at least one common element are called overlapping sets. In case of overlapping sets:

n (A ∪ B) = n (A) + n (B) − n (A ∩ B)

n (A ∪ B) = n (A − B) + n (B − A) + n (A ∩ B)

n (A) = n (A − B) + n (A ∩ B)

n (B) = n (B − A) + n (A ∩ B)

Example: Let, A = {1, 2, 6} and B = {6, 12, 42}. There is a common element ‘6’, hence these sets are overlapping sets.

Disjoint Sets: Two sets A and B are called disjoint sets if they do not have even one element in common. Therefore, disjoint sets have the following properties:

n (A ∩ B) = ϕ, so the formulas simplify to:

n (A ∪ B) = n (A) + n (B)

n (A) = n (A − B)

n (B) = n (B − A)

Example: Let, A = {1, 2, 6} and B = {7, 9, 14}, there is not a single common element, hence these sets are Disjoint sets.

 

OPERATIONS ON CLASSICAL SETS:

Set Operations include Set Union, Set Intersection, Set Difference, Complement of Set, and Cartesian Product.

1.        Union: The union of sets A and B (denoted by A ∪ B) is the set of elements which are in A, in B, or in both A and B. Hence, A ∪ B = {x | x ∈ A OR x ∈ B}.

Example: If A = {10,11,12,13} and B = {13,14,15}, then A ∪ B = {10,11,12,13,14,15}. The common element occurs only once.


2.        Intersection: The intersection of sets A and B (denoted by A ∩ B) is the set of elements which are in both A and B. Hence, A ∩ B = {x | x ∈ A AND x ∈ B}.

Example: If A = {10,11,12,13} and B = {13,14,15}, then A ∩ B = {13}.


3.        Difference / Relative Complement: The set difference of sets A and B (denoted by A − B) is the set of elements which are only in A but not in B. Hence, A − B = {x | x ∈ A AND x ∉ B}.

Example: If A = {10, 11, 12, 13} and B = {13, 14, 15}, then (A − B) = {10,11,12} and (B − A) = {14,15}. Here, we can see (A − B) ≠ (B − A).

 


4.        Complement of a Set: The complement of a set A (denoted by A′) is the set of elements which are not in set A. Hence, A′ = {x | x ∉ A}. More specifically, A′ = (U − A), where U is the universal set, which contains all objects.

Example: If A = {1, 2, 3, 4} and the universal set U = {1, 2, 3, 4, 5, 6, 7, 8}, then the complement of set A contains the elements present in the universal set but not in set A, i.e., A′ = {5, 6, 7, 8}. Likewise, if A = {x | x belongs to the set of odd integers} then A′ = {y | y does not belong to the set of odd integers}.

5.        Cartesian Product / Cross Product: The Cartesian product of n sets A1, A2, …, An, denoted as A1 × A2 × … × An, can be defined as the set of all possible ordered n-tuples (x1, x2, …, xn) where x1 ∈ A1, x2 ∈ A2, …, xn ∈ An.

Example: If we take two sets A = {a,b} and B = {1,2}, the Cartesian product of A and B is written as A × B = {(a,1),(a,2),(b,1),(b,2)}, and the Cartesian product of B and A is written as B × A = {(1,a),(1,b),(2,a),(2,b)}.
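These operations map directly onto Python's built-in set type, as this short sketch shows (the universal set U is assumed for the complement):

```python
A = {10, 11, 12, 13}
B = {13, 14, 15}
U = set(range(10, 20))      # an assumed universal set, needed for the complement

print(A | B)                # union: {10, 11, 12, 13, 14, 15}
print(A & B)                # intersection: {13}
print(A - B, B - A)         # differences: {10, 11, 12} and {14, 15}
print(U - A)                # complement A' relative to U
print({(x, y) for x in {"a", "b"} for y in {1, 2}})   # Cartesian product
```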

 

PROPERTIES OF CLASSICAL SETS:

Properties of sets play an important role in obtaining solutions. Following are the different properties of classical sets:

1.        Commutative Property: Having two sets A and B, this property states:

A ∪ B = B ∪ A

A ∩ B = B ∩ A

2.        Associative Property: Having three sets A, B and C, this property states:

A ∪ (B ∪ C) = (A ∪ B) ∪ C

A ∩ (B ∩ C) = (A ∩ B) ∩ C

3.        Distributive Property: Having three sets A, B and C, this property states:

A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)

A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

4.        Idempotency Property: For any set A, this property states:

A ∪ A = A

A ∩ A = A

 

 

5.        Identity Property: For set A and universal set X, this property states:

A ∪ φ = A

A ∩ X = A

A ∩ φ = φ

A ∪ X = X

6.        Transitive Property: Having three sets A, B and C, the property states: if A ⊆ B ⊆ C, then A ⊆ C.

7.        Involution Property: For any set A, this property states:

(A′)′ = A
8.        De Morgan's Law: This is a very important law, and it supports proving tautologies and contradictions. The law states:

(A ∪ B)′ = A′ ∩ B′

(A ∩ B)′ = A′ ∪ B′
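A quick numeric check of both laws on small example sets:

```python
U = set(range(1, 11))                     # assumed universal set
A, B = {1, 2, 6}, {2, 4, 6, 8}

assert U - (A | B) == (U - A) & (U - B)   # (A ∪ B)' = A' ∩ B'
assert U - (A & B) == (U - A) | (U - B)   # (A ∩ B)' = A' ∪ B'
print("De Morgan's laws hold for this example")
```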

 

 

TYPES OF MEMBERSHIP FUNCTIONS

·         Triangular Membership Function

·         Trapezoidal Membership Function

·         Gaussian Membership Function

·         Generalized Bell Membership Function

·         Sigmoid Membership Function

 


Fig: Various Types of Fuzzy Membership Functions

 

 

 

 

1.   Triangular Membership Function: A triangular membership function is specified by three parameters {a, b, c} as follows:

triangle(x; a, b, c) = 0 if x ≤ a; (x − a)/(b − a) if a ≤ x ≤ b; (c − x)/(c − b) if b ≤ x ≤ c; 0 if c ≤ x.

By using min and max, we have an alternative expression for the preceding equation:

triangle(x; a, b, c) = max( min( (x − a)/(b − a), (c − x)/(c − b) ), 0 )

The parameters {a, b, c} (with a < b < c) determine the x coordinates of the three corners of the underlying triangular membership function. The figure below illustrates a triangular membership function defined by triangle(x; 20, 60, 80).

Fig: Triangular Membership Function

 

 

2.   Trapezoidal Membership Function: A trapezoidal membership function is specified by four parameters {a, b, c, d} as follows:

trapezoid(x; a, b, c, d) = 0 if x ≤ a; (x − a)/(b − a) if a ≤ x ≤ b; 1 if b ≤ x ≤ c; (d − x)/(d − c) if c ≤ x ≤ d; 0 if d ≤ x.

An alternative concise expression using min and max is:

trapezoid(x; a, b, c, d) = max( min( (x − a)/(b − a), 1, (d − x)/(d − c) ), 0 )

The parameters {a, b, c, d} (with a < b ≤ c < d) determine the x coordinates of the four corners of the underlying trapezoidal membership function. The figure below illustrates a trapezoidal membership function defined by trapezoid(x; 10, 20, 60, 95). Note that a trapezoidal membership function with parameters {a, b, c, d} reduces to a triangular membership function when b is equal to c.

Fig: Trapezoidal Membership Function

Due to their simple formulas and computational efficiency, both triangular MFs and trapezoidal MFs have been used extensively, especially in real-time implementations. However, since the MFs are composed of straight line segments, they are not smooth at the corner points specified by the parameters. In the following we introduce other types of MFs defined by smooth and nonlinear functions.

 

3.  Gaussian Membership Function: A Gaussian membership function is specified by two parameters {c, σ}:

gaussian(x; c, σ) = exp( −(1/2) · ((x − c)/σ)² )

A Gaussian membership function is completely determined by c and σ: c represents the MF's centre and σ determines the MF's width. The figure below plots a Gaussian membership function defined by gaussian(x; 50, 20).

Fig: Gaussian Membership Function

4.   Generalized Bell Membership Function: A generalized bell membership function (or bell-shaped function) is specified by three parameters {a, b, c}:

bell(x; a, b, c) = 1 / ( 1 + |(x − c)/a|^(2b) )

where the parameter b is usually positive. (If b is negative, the shape of this MF becomes an upside-down bell.) Note that this MF is a direct generalization of the Cauchy distribution used in probability theory, so it is also referred to as the Cauchy MF.

Fig: Generalized Bell Membership Function

Because of their smoothness and concise notation, Gaussian and Bell MFs are becoming increasingly popular for specifying fuzzy sets. Gaussian functions are well known in probability and statistics, and they possess useful properties such as invariance under multiplication (the product of two Gaussians is a Gaussian with a scaling factor) and Fourier transform (the Fourier transform of a Gaussian is still a Gaussian). The bell MF has one more parameter than the Gaussian MF, so it has one more degree of freedom to adjust the steepness at the crossover points. Although the Gaussian MFs and Bell MFs achieve smoothness, they are unable to specify asymmetric MFs, which are important in certain applications.

 

5.   Sigmoid Membership Function: A sigmoid membership function has two parameters {a, c}, with a responsible for the slope at the crossover point x = c. The membership function can be represented as sigmf(x; a, c):

sigmf(x; a, c) = 1 / ( 1 + exp( −a(x − c) ) )

Fig: Sigmoid Membership Function

A sigmoid membership function is inherently open to the right or to the left and is thus appropriate for representing concepts such as "very large" or "very negative". Sigmoid functions are also widely used as activation functions in artificial neural networks (NNs); a NN should synthesize a closed MF in order to simulate the behavior of a fuzzy inference system.
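All five membership functions translate directly into short Python definitions; a sketch, with the printed values following the parameter examples in the figures above:

```python
import math

def triangle(x, a, b, c):
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoid(x, a, b, c, d):
    return max(min((x - a) / (b - a), 1.0, (d - x) / (d - c)), 0.0)

def gaussian(x, c, sigma):
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

def bell(x, a, b, c):
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def sigmoid(x, a, c):
    return 1.0 / (1.0 + math.exp(-a * (x - c)))

print(triangle(40, 20, 60, 80))        # 0.5, halfway up the left slope
print(trapezoid(40, 10, 20, 60, 95))   # 1.0, on the flat top
print(gaussian(50, 50, 20))            # 1.0, at the centre c
print(bell(50, 20, 4, 50))             # 1.0, at the centre c
print(sigmoid(50, 0.5, 50))            # 0.5, at the crossover point x = c
```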

 

TYPES OF FUZZY PROPOSITIONS:

1.  Unconditional and Unqualified propositions: The canonical form of this type of fuzzy proposition is p: V is F, where V is a variable which takes a value v from a universal set U, and F is a fuzzy set on U that represents an imprecise predicate such as fast, low, or tall. For example:

p: Speed (V) is high (F)

T(p) = 0.8 if p is partly true; T(p) = 1 if p is absolutely true; T(p) = 0 if p is totally false.

Here T(p) = µF(v), the membership grade function, indicates the degree to which the value v belongs to F; it ranges from 0 to 1.

2.   Unconditional and Qualified propositions: The canonical form of this type of fuzzy proposition is p:V is F is S. Where, V and F have the same meaning and S is a fuzzy truth qualifier. For example

P: Speed is high is very true

3.  Conditional and Unqualified propositions: The canonical form of this type of fuzzy proposition is p: if X is A, then Y is B. Where, X, Y are variables in universes U1 and U2. A, B are fuzzy sets on X, Y. For example:

p: if speed is High, then risk is Low

4.   Conditional and Qualified Propositions: The canonical form of this type of fuzzy proposition is p: (if X is A, then Y is B) is S, where all variables have the same meaning as declared previously. For example:

p: (if speed is high, then risk is low) is true.
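For instance, the truth value of the unqualified proposition "speed is high" is simply the membership grade of the observed value; the ramp-shaped MF below is an assumed example:

```python
def mu_high(speed):
    # Assumed ramp MF for the fuzzy set "high speed" (illustrative shape).
    return max(min((speed - 60) / 30, 1.0), 0.0)

v = 84                 # the variable V takes the value v
print(mu_high(v))      # T(p) = µF(v) = 0.8, so the proposition is partly true
```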

 

 

FUZZY INFERENCE SYSTEM:

Fuzzy Inference System is the key unit of a fuzzy logic system having decision making as its primary work. It uses the “IF…THEN” rules along with connectors “OR” or “AND” for drawing essential decision rules.

Characteristics of Fuzzy Inference System:

·         The output from FIS is always a fuzzy set irrespective of its input which can be fuzzy or crisp.

·         It is necessary to have crisp output when the FIS is used as a controller.

·         A defuzzification unit is included with the FIS to convert fuzzy variables into crisp variables.

Functional Blocks of FIS: The following are the functional blocks of FIS:

·         Rule Base: It contains fuzzy IF-THEN rules.

·         Database: It defines the membership functions of fuzzy sets used in fuzzy rules.

·         Decision-making Unit: It performs operation on rules.

·         Fuzzification Interface Unit: It converts the crisp quantities into fuzzy quantities.

·         Defuzzification Interface Unit: It converts the fuzzy quantities into crisp quantities.

Following is a block diagram of the fuzzy inference system.


Working of FIS:

·         A fuzzification unit supports the application of numerous fuzzification methods and converts the crisp input into fuzzy input.

·         A knowledge base, i.e., the collection of the rule base and the database, is consulted once the crisp input has been converted into fuzzy input.

·         Finally, the defuzzification unit converts the fuzzy output into crisp output.

 

Methods of FIS: Following are the two important methods of FIS, having different consequents of their fuzzy rules:

·         Mamdani Fuzzy Inference System

·         Takagi-Sugeno Fuzzy Model (TS Method)

 

Mamdani Fuzzy Inference System: This system was proposed in 1975 by Ebrahim Mamdani. Basically, it was designed to control a steam engine and boiler combination by synthesizing a set of fuzzy rules obtained from people working on the system. The following steps need to be followed to compute the output from this FIS:

1.        Determine the set of fuzzy rules.

2.        Fuzzify the inputs using the input membership functions.

3.        Establish the rule strength by combining the fuzzified inputs according to the fuzzy rules.

4.        Determine the consequent of each rule by combining the rule strength and the output membership function.

5.        Combine all the consequents to get an output distribution.

6.        Finally, defuzzify the output distribution to obtain a crisp output.

Fig: Block Diagram of Mamdani Fuzzy Inference System
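The six steps can be compressed into a minimal single-input sketch. The two rules ("IF speed is slow THEN risk is low", "IF speed is fast THEN risk is high"), the triangular MFs, and all parameter values are assumed for illustration; min gives the rule strength, max aggregates the clipped consequents, and the centroid defuzzifies:

```python
def tri(x, a, b, c):
    # Triangular MF via the min/max form given earlier.
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_risk(speed):
    # Step 2: fuzzify the crisp input against the assumed "slow"/"fast" MFs.
    w_slow = tri(speed, -1, 20, 60)     # rule strength of "speed is slow"
    w_fast = tri(speed, 40, 80, 121)    # rule strength of "speed is fast"

    # Steps 3-5: clip each output MF by its rule strength (min) and
    # aggregate the consequents pointwise (max).
    def aggregated(r):
        low = min(w_slow, tri(r, -1, 20, 60))     # IF slow THEN risk is low
        high = min(w_fast, tri(r, 40, 80, 121))   # IF fast THEN risk is high
        return max(low, high)

    # Step 6: defuzzify by taking the centroid of the output distribution.
    xs = [i / 10 for i in range(0, 1001)]         # risk axis 0..100
    num = sum(x * aggregated(x) for x in xs)
    den = sum(aggregated(x) for x in xs)
    return num / den

print(mamdani_risk(70.0))   # a crisp risk value, here biased toward "high"
```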

 

 

 

Takagi-Sugeno Fuzzy Model (TS Method): This model was proposed by Takagi, Sugeno and Kang in 1985. Format of this rule is given as

IF x is A and y is B THEN Z = f(x,y)

Here, A and B are fuzzy sets in the antecedent, and z = f(x, y) is a crisp function in the consequent. The fuzzy inference process under the Takagi-Sugeno fuzzy model (TS method) works in the following way:

1.      Fuzzifying the inputs: Here, the inputs of the system are made fuzzy.

2.      Applying the fuzzy operator: In this step, the fuzzy operators are applied to get the output (a minimal sketch appears after the comparison below).

Comparison between the two methods: The comparison between the Mamdani system and the Sugeno model is as follows:

·         Output Membership Function: The main difference between them is on the basis of output membership function. The Sugeno output membership functions are either linear or constant.

·         Aggregation and Defuzzification Procedure: The difference between them also lies in the consequence of fuzzy rules and due to the same their aggregation and defuzzification procedure also differs.

·         Mathematical Rules: More mathematical rules exist for the Sugeno rule than the Mamdani rule.

·         Adjustable Parameters: The Sugeno controller has more adjustable parameters than the Mamdani controller.
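For contrast, a minimal zero-order Sugeno sketch with the same assumed input MFs: each rule's consequent is a constant z = f(x), and the crisp output is the rule-strength-weighted average, so no separate defuzzification step is needed:

```python
def tri(x, a, b, c):
    # Same assumed triangular MF as in the Mamdani sketch.
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def sugeno_risk(speed):
    # Rule strengths from fuzzifying the input.
    w_slow = tri(speed, -1, 20, 60)
    w_fast = tri(speed, 40, 80, 121)
    # Zero-order consequents: constant risk levels (assumed values).
    z_low, z_high = 10.0, 90.0
    # Weighted average replaces aggregation + defuzzification.
    return (w_slow * z_low + w_fast * z_high) / (w_slow + w_fast)

print(sugeno_risk(50.0))    # both rules fire equally here, so the output is 50.0
```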
