
Nash Equilibrium In Game Theory ~xRay Pixy

Learn Nash Equilibrium in game theory step by step using examples.

Video Chapters: Nash Equilibrium
00:00 Introduction
00:19 Topics Covered
00:33 Nash Equilibrium
01:55 Example 1
02:30 Example 2
04:46 Game Core Elements
06:41 Types of Game Strategies
06:55 Prisoner's Dilemma
07:17 Prisoner's Dilemma Example 3
09:16 Dominated Strategy
10:56 Applications
11:34 Conclusion

The Nash Equilibrium is a concept in game theory describing a situation in which no player can increase their payoff by changing their strategy alone while the other players keep their strategies unchanged.

Example: If Chrysler, Ford, and GM each choose their production levels so that no company can make more money by changing its level alone, the outcome is a Nash Equilibrium.

Prisoner's Dilemma: Two criminals are arrested and interrogated separately. Each has two ...
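To make the definition concrete, here is a minimal MATLAB sketch that finds the pure-strategy Nash equilibria of a 2x2 game by brute force. The payoff numbers are the usual illustrative Prisoner's Dilemma sentences (negated years of prison, so higher is better), not values taken from the video:

% Brute-force pure-strategy Nash equilibrium check for a 2-player, 2-action game.
% Rows = Player 1's actions, columns = Player 2's actions.
% Payoffs are negated prison years, so higher is better; numbers are illustrative.
P1 = [-1 -3;    % Player 1's payoffs: (Silent,Silent) (Silent,Confess)
      0 -2];    %                     (Confess,Silent) (Confess,Confess)
P2 = [-1  0;    % Player 2's payoffs (the game is symmetric)
     -3 -2];
actions = {'Silent', 'Confess'};
for i = 1:2
    for j = 1:2
        % (i,j) is a Nash equilibrium if neither player can gain
        % by deviating unilaterally from their current action.
        p1CannotImprove = P1(i,j) >= max(P1(:,j));
        p2CannotImprove = P2(i,j) >= max(P2(i,:));
        if p1CannotImprove && p2CannotImprove
            fprintf('Nash Equilibrium: (%s, %s)\n', actions{i}, actions{j});
        end
    end
end

Running it prints (Confess, Confess): both prisoners confessing is the unique Nash Equilibrium, even though both staying silent would leave each of them better off.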

Particle Swarm Optimization (PSO) |Part - 2| with Numerical Example and ...

Particle Swarm Optimization (PSO) Algorithm


A step-by-step explanation of the Particle Swarm Optimization (PSO) algorithm with a numerical example and source-code implementation. 🌞 Particle Swarm Optimization (PSO) algorithm MATLAB code.
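Each particle i keeps a position x_i, a velocity v_i, and its personal best position pbest_i; the swarm keeps the global best gbest. Every iteration applies the two standard PSO update rules, which the code below implements directly:

v_i = w*v_i + c1*r1.*(pbest_i - x_i) + c2*r2.*(gbest - x_i)
x_i = x_i + v_i

where r1 and r2 are uniform random numbers in [0, 1], w is the inertia weight, and c1, c2 are the personal and global learning coefficients.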
Particle Swarm Optimization Main File: main.m
pso;   % runs the PSO script saved as pso.m

Particle Swarm Optimization Function File, save as: Sphere.m

function F1 = Sphere(x)
    % Sphere benchmark function: sum of squared decision variables.
    % Global minimum F1 = 0 at x = 0.
    F1 = sum(x.^2);
end
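As a quick sanity check, the fitness function can be called directly from the MATLAB command window (the input vector here is just an illustrative value):

>> Sphere([1 2 3])
ans =
    14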

Particle Swarm Optimization main script, save as: pso.m
clear; close all;

%% Fitness Function Calling
FitnessFunction = @(x) Sphere(x);        % Fitness function handle
nVar = 10;                               % Total number of decision variables
VarSize = [1 nVar];                      % Size of the decision-variables matrix
LowerBound = -10;                        % Lower bound of the search space
UpperBound = 10;                         % Upper bound of the search space

%% Parameters Initialization Phase
MaxT = 100;                              % Maximum number of iterations
PopulationSize = 10;                     % Total number of search agents
w = 1;                                   % Inertia weight
wdamp = 0.99;                            % Inertia-weight damping ratio
c1 = 1.5;                                % Personal learning coefficient
c2 = 2.0;                                % Global learning coefficient
VelMax = 0.1*(UpperBound - LowerBound);  % Velocity limits
VelMin = -VelMax;

%% Initialization: Position, Cost, Velocity, Best_Position, Best_Cost
empty_particle.Position = [];
empty_particle.Cost = [];
empty_particle.Velocity = [];
empty_particle.Best.Position = [];
empty_particle.Best.Cost = [];
particle = repmat(empty_particle, PopulationSize, 1);
GlobalBest.Cost = inf;

for i = 1:PopulationSize
    % Initialize position for each search agent in the search space
    particle(i).Position = unifrnd(LowerBound, UpperBound, VarSize);
    % Initialize velocity for each search agent
    particle(i).Velocity = zeros(VarSize);
    % Fitness value calculation for each search agent
    particle(i).Cost = FitnessFunction(particle(i).Position);
    % Initialize personal best position and cost
    particle(i).Best.Position = particle(i).Position;
    particle(i).Best.Cost = particle(i).Cost;
    % Update global best
    if particle(i).Best.Cost < GlobalBest.Cost
        GlobalBest = particle(i).Best;
    end
end

BestCost = zeros(MaxT, 1);

%% PSO Main Loop
for CurrentIteration = 1:MaxT
    for i = 1:PopulationSize
        % Update velocity for each search agent
        particle(i).Velocity = w*particle(i).Velocity ...
            + c1*rand(VarSize).*(particle(i).Best.Position - particle(i).Position) ...
            + c2*rand(VarSize).*(GlobalBest.Position - particle(i).Position);
        % Apply velocity limits
        particle(i).Velocity = max(particle(i).Velocity, VelMin);
        particle(i).Velocity = min(particle(i).Velocity, VelMax);
        % Update position for each particle
        particle(i).Position = particle(i).Position + particle(i).Velocity;
        % Check boundaries [-10, 10]: reflect velocity, then clamp position
        Outside = (particle(i).Position < LowerBound | particle(i).Position > UpperBound);
        particle(i).Velocity(Outside) = -particle(i).Velocity(Outside);
        particle(i).Position = max(particle(i).Position, LowerBound);
        particle(i).Position = min(particle(i).Position, UpperBound);
        % Fitness value calculation
        particle(i).Cost = FitnessFunction(particle(i).Position);
        % Update personal best
        if particle(i).Cost < particle(i).Best.Cost
            particle(i).Best.Position = particle(i).Position;
            particle(i).Best.Cost = particle(i).Cost;
            % Update global best
            if particle(i).Best.Cost < GlobalBest.Cost
                GlobalBest = particle(i).Best;
            end
        end
    end
    BestCost(CurrentIteration) = GlobalBest.Cost;
    disp(['Current Iteration Number = ' num2str(CurrentIteration) ...
        ': Best Cost Found = ' num2str(BestCost(CurrentIteration))]);
    w = w*wdamp;   % Damp the inertia weight each iteration
end

BestSol = GlobalBest;

%% Results
figure;
% plot(BestCost, 'LineWidth', 2);
semilogy(BestCost, 'LineWidth', 2);
xlabel('Iteration Numbers');
ylabel('Best Cost Found');
grid on;
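To connect the code to the numerical-example theme, here is a small hand-checkable sketch of a single velocity and position update for one particle in one dimension. The starting values are made up, and the two rand draws are replaced by fixed numbers so the arithmetic can be verified by hand:

% One PSO update step in 1-D with hand-picked numbers (illustrative only).
w = 1; c1 = 1.5; c2 = 2.0;        % same coefficients as in pso.m
VelMax = 0.1*(10 - (-10));        % = 2, as in pso.m
x     = 2;      % current position
v     = 0.5;    % current velocity
pbest = 1;      % personal best position
gbest = -1;     % global best position
r1 = 0.4; r2 = 0.6;               % fixed stand-ins for the rand draws
v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);
% v = 0.5 + 1.5*0.4*(1-2) + 2.0*0.6*(-1-2) = 0.5 - 0.6 - 3.6 = -3.7
v = max(min(v, VelMax), -VelMax); % clamped to -2 by the velocity limits
x = x + v;                        % new position: 2 + (-2) = 0

The particle overshoots toward the global best, the velocity clamp reins it in, and the new position 0 happens to land on the Sphere function's minimum; over many iterations the damped inertia weight shrinks these steps so the swarm settles.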

#Metaheuristic #Algorithms
Meta-heuristic Algorithms


