
PARTICLE SWARM OPTIMIZATION ALGORITHM NUMERICAL EXAMPLE


PSO is a computational method that optimizes a problem. It is a population-based stochastic search algorithm inspired by the social behavior of bird flocking. In Particle Swarm Optimization, candidate solutions are represented as particles (the flocking birds are replaced with particles for algorithmic simplicity). An objective function evaluates the performance of each particle/agent in the current population. PSO solves problems by maintaining a population (called a swarm) of candidate solutions (particles). The personal (local) best and global best solutions are used to update each particle's position in every iteration.
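As a quick illustration of these update rules, here is a minimal Python sketch of a single 1-D particle update (w, c1, c2 are the usual inertia weight and acceleration coefficients; this is an illustrative sketch, not the exact code used in the video):

import random

def update_particle(x, v, p_best, g_best, w=1.0, c1=1.0, c2=1.0):
    """One PSO update step for a single 1-D particle (illustrative sketch)."""
    r1, r2 = random.random(), random.random()        # random factors in [0, 1)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # velocity update
    x_new = x + v_new                                 # position update
    return x_new, v_new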

Particle Swarm Optimization (PSO) Algorithm step-by-step explanation with Numerical Example and source code implementation. - PART 2 [Example 2]

1.) Initialize the Population [Current Iteration t = 0]
Population Size = 4;
x_i, i = 1, 2, 3, 4 (t = 0):
x_1 = 1.3;
x_2 = 4.3;
x_3 = 0.4;
x_4 = -1.2;
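
A minimal way to set up this population in Python, using the values above:

# Step 1: initial particle positions at iteration t = 0
x = [1.3, 4.3, 0.4, -1.2]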

2.) Fitness Function used: f(x) = x^2 (minimization).

Compute the fitness value of each particle using the fitness function:
f_1 = 1.69;
f_2 = 18.49;
f_3 = 0.16;
f_4 = 1.44;
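
These values can be reproduced with a short Python sketch (rounded to two decimals for display):

# Step 2: evaluate fitness f(x) = x^2 for each particle
x = [1.3, 4.3, 0.4, -1.2]
f = [xi ** 2 for xi in x]
print([round(fi, 2) for fi in f])   # [1.69, 18.49, 0.16, 1.44]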

3.) Initialize the Velocity of each particle in the current population.
v_1 = 0;
v_2 = 0;
v_3 = 0;
v_4 = 0;
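
In code this is simply a list of zeros:

# Step 3: initial velocities are all zero
v = [0.0, 0.0, 0.0, 0.0]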

4.) Find the Personal Best of each particle and the Global Best.
Since this is the first iteration, each particle's personal best is its current position: P_Best_i = x_i.
The Global Best is the position with the lowest fitness value:
G_Best = 0.4;
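
A small sketch of this selection (minimization, so the lowest fitness wins):

# Step 4: personal bests and global best
x = [1.3, 4.3, 0.4, -1.2]
f = [xi ** 2 for xi in x]
p_best = list(x)                 # at t = 0, P_Best_i = x_i
g_best = x[f.index(min(f))]      # position with the lowest fitness
print(g_best)                    # 0.4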

5.) Calculate the new Velocity of each particle.
Velocity update rule: v_i^(t+1) = w*v_i^t + c1*r1*(P_Best_i - x_i^t) + c2*r2*(G_Best - x_i^t),
with w = 1, c1 = c2 = 1, r1 = 0.233, and r2 = 0.801.

v_1^(0+1) = 1*0 + 1*0.233*(1.3 - 1.3) + 1*0.801*(0.4 - 1.3) = -0.7209;
v_1^1 = -0.7209;
v_2^1 = -3.1239;
v_3^1 = 0;
v_4^1 = 1.2816;
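
A sketch of this update, assuming the same fixed random factors r1 = 0.233 and r2 = 0.801 used in the hand calculation so the result is reproducible:

# Step 5: velocity update v_i = w*v_i + c1*r1*(p_best_i - x_i) + c2*r2*(g_best - x_i)
w, c1, c2 = 1.0, 1.0, 1.0
r1, r2 = 0.233, 0.801
x = [1.3, 4.3, 0.4, -1.2]
v = [0.0, 0.0, 0.0, 0.0]
p_best = list(x)
g_best = 0.4
v = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (g_best - xi)
     for xi, vi, pb in zip(x, v, p_best)]
print([round(vi, 4) for vi in v])   # [-0.7209, -3.1239, 0.0, 1.2816]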

6.) Calculate the new Position of each particle.
Position update rule: x_i^(t+1) = x_i^t + v_i^(t+1)

x_1^(0+1) = 1.3 - 0.7209 = 0.5791;
x_2^(0+1) = 4.3 - 3.1239 = 1.1761;
x_3^(0+1) = 0.4 + 0 = 0.4;
x_4^(0+1) = -1.2 + 1.2816 = 0.0816;
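
The same position update in a self-contained sketch (velocities taken from step 5):

# Step 6: position update x_i = x_i + v_i
x = [1.3, 4.3, 0.4, -1.2]              # positions from iteration t = 0
v = [-0.7209, -3.1239, 0.0, 1.2816]    # velocities computed in step 5
x = [xi + vi for xi, vi in zip(x, v)]
print([round(xi, 4) for xi in x])      # [0.5791, 1.1761, 0.4, 0.0816]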

7.) Calculate the Fitness Value of each particle at t = 1.
f_1^1 = 0.3354;
f_2^1 = 1.3832;
f_3^1 = 0.16;
f_4^1 = 0.0067;
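
Re-evaluating f(x) = x^2 on the updated positions:

# Step 7: fitness of the updated positions
x = [0.5791, 1.1761, 0.4, 0.0816]   # positions after iteration 1
f = [xi ** 2 for xi in x]
print([round(fi, 4) for fi in f])   # [0.3354, 1.3832, 0.16, 0.0067]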

8.) Update each particle's Personal Best and the Global Best from the new fitness values, then repeat steps 5-7 until the stopping criterion (e.g., a maximum number of iterations) is met.

(Output after 100 iterations)
For more details, watch this video:
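
For reference, below is a compact end-to-end Python sketch of the same procedure run for 100 iterations. It is an illustrative reimplementation, not the source code from the video: r1 and r2 are redrawn at every step, and the inertia weight defaults to w = 0.7 for stable convergence (the hand calculation above uses w = 1 for simplicity).

import random

def pso(fitness, x0, iterations=100, w=0.7, c1=1.0, c2=1.0):
    """Minimal 1-D PSO for minimization (illustrative sketch)."""
    x = list(x0)                       # particle positions
    v = [0.0] * len(x)                 # particle velocities
    p_best = list(x)                   # personal best positions
    g_best = min(p_best, key=fitness)  # global best position
    for _ in range(iterations):
        for i in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[i] = (w * v[i]
                    + c1 * r1 * (p_best[i] - x[i])
                    + c2 * r2 * (g_best - x[i]))
            x[i] += v[i]
            if fitness(x[i]) < fitness(p_best[i]):  # keep the best position seen
                p_best[i] = x[i]
        g_best = min(p_best, key=fitness)           # update the global best
    return g_best

best = pso(lambda x: x ** 2, [1.3, 4.3, 0.4, -1.2])
print(best)   # a value close to the optimum x = 0 of f(x) = x^2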
