
Poplar Optimization Algorithm || Step-By-Step || ~xRay Pixy

The Poplar Optimization Algorithm (POA) is a nature-inspired optimization method based on how poplar trees reproduce. It uses sexual propagation (seed dispersal by wind) for exploration and asexual reproduction (cutting and regrowth) for exploitation. Mutation and chaos factors help maintain diversity and prevent premature convergence, making POA well suited to complex optimization problems. Learn the Poplar Optimization Algorithm step by step using examples; a simplified code sketch follows the main points below.

Video Chapters: Poplar Optimization Algorithm (POA)
00:00 Introduction
02:12 POA Applications
03:32 POA Steps
05:50 Execute Algorithm 1
13:45 Execute Algorithm 2
16:38 Execute Algorithm 3
18:15 Conclusion

Main Points of the Poplar Optimization Algorithm (POA)
Nature-Inspired Algorithm – Based on the reproductive mechanisms of poplar trees.
Two Key Processes:
Sexual Propagation (Seed Dispersal) – Uses wind to spread seeds, allowing broad exploration.
Asexual Reproduction (Cuttings) – Strong branches grow ...
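As a rough companion to the video walkthrough, here is a minimal MATLAB sketch of the two-phase structure described above. The specific update rules, the 0.5 phase-switch probability, the logistic chaos map, and the step sizes are illustrative assumptions for this sketch, not the published POA equations.

% Minimal illustrative sketch of the POA structure (update rules are assumptions)
clear; close all;
FitnessFunction = @(x) sum(x.^2);     % Sphere test function
nVar = 5; LB = -10; UB = 10;          % problem dimension and bounds
PopSize = 20; MaxT = 50;              % population size and iterations
Pop = unifrnd(LB, UB, PopSize, nVar); % initial poplar population
Cost = zeros(PopSize, 1);
for i = 1:PopSize
    Cost(i) = FitnessFunction(Pop(i,:));
end
[BestCost, idx] = min(Cost);
Best = Pop(idx,:);
chaos = 0.7;                          % chaotic variable (logistic map; assumption)
for t = 1:MaxT
    chaos = 4*chaos*(1 - chaos);      % chaos factor helps maintain diversity
    for i = 1:PopSize
        if rand < 0.5
            % Sexual propagation: wind disperses seeds widely (exploration)
            New = Pop(i,:) + chaos*unifrnd(-1, 1, 1, nVar)*(UB - LB)*0.1;
        else
            % Asexual reproduction: cuttings regrow near the best tree (exploitation)
            New = Best + 0.1*randn(1, nVar).*(Best - Pop(i,:));
        end
        New = max(min(New, UB), LB);  % keep seedlings inside the search space
        NewCost = FitnessFunction(New);
        if NewCost < Cost(i)          % greedy selection: keep the fitter seedling
            Pop(i,:) = New; Cost(i) = NewCost;
            if NewCost < BestCost
                Best = New; BestCost = NewCost;
            end
        end
    end
end
disp(['Best cost found: ' num2str(BestCost)]);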

Particle Swarm Optimization (PSO) |Part - 2| with Numerical Example and ...

Particle Swarm Optimization (PSO) Algorithm


Particle Swarm Optimization (PSO) Algorithm: step-by-step explanation with a numerical example and source code implementation. 🌞 Particle Swarm Optimization (PSO) Algorithm MATLAB code.
Particle Swarm Optimization Main File: main.m
pso;

Particle Swarm Optimization Function File, save as: Sphere.m
function F1 = Sphere(x)
    F1 = sum(x.^2);
end
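A quick sanity check: the Sphere function sums the squares of the decision variables, so its global minimum is 0 at the origin. For example, in the MATLAB command window:

>> Sphere([1 2 3])
ans =
    14
>> Sphere(zeros(1, 10))
ans =
     0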

Particle Swarm Optimization File Name Save as: pso.m
clear; close all;

%% Fitness Function Calling
FitnessFunction = @(x) Sphere(x);   % Fitness function handle
nVar = 10;                          % Total number of decision variables used
VarSize = [1 nVar];                 % Size of decision variables matrix
LowerBound = -10;                   % Lower bound
UpperBound = 10;                    % Upper bound

%% Parameters Initialization Phase
MaxT = 100;                         % Maximum number of iterations used
PopulationSize = 10;                % Total number of search agents used

% Initialize PSO parameters
w = 1;                              % Inertia weight
wdamp = 0.99;                       % Inertia weight damping ratio
c1 = 1.5;                           % Personal learning coefficient
c2 = 2.0;                           % Global learning coefficient

% Velocity limits
VelMax = 0.1*(UpperBound - LowerBound);
VelMin = -VelMax;

%% Initialization: Position, Cost, Velocity, Best.Position, Best.Cost
empty_particle.Position = [];
empty_particle.Cost = [];
empty_particle.Velocity = [];
empty_particle.Best.Position = [];
empty_particle.Best.Cost = [];
particle = repmat(empty_particle, PopulationSize, 1);
GlobalBest.Cost = inf;

for i = 1:PopulationSize
    % Initialize position for each search agent in the search space
    particle(i).Position = unifrnd(LowerBound, UpperBound, VarSize);
    % Initialize velocity for each search agent
    particle(i).Velocity = zeros(VarSize);
    % Fitness value calculation for each search agent
    particle(i).Cost = FitnessFunction(particle(i).Position);
    % Update personal best position for the particle
    particle(i).Best.Position = particle(i).Position;
    particle(i).Best.Cost = particle(i).Cost;
    % Update global best position
    if particle(i).Best.Cost < GlobalBest.Cost
        GlobalBest = particle(i).Best;
    end
end

BestCost = zeros(MaxT, 1);

%% PSO Main Loop
for CurrentIteration = 1:MaxT
    for i = 1:PopulationSize
        % Update velocity for each search agent
        particle(i).Velocity = w*particle(i).Velocity ...
            + c1*rand(VarSize).*(particle(i).Best.Position - particle(i).Position) ...
            + c2*rand(VarSize).*(GlobalBest.Position - particle(i).Position);
        % Apply velocity limits
        particle(i).Velocity = max(particle(i).Velocity, VelMin);
        particle(i).Velocity = min(particle(i).Velocity, VelMax);
        % Update position for each particle
        particle(i).Position = particle(i).Position + particle(i).Velocity;
        % Check boundaries [-10, 10]: reflect velocity, clamp position
        Outside = (particle(i).Position < LowerBound | particle(i).Position > UpperBound);
        particle(i).Velocity(Outside) = -particle(i).Velocity(Outside);
        particle(i).Position = max(particle(i).Position, LowerBound);
        particle(i).Position = min(particle(i).Position, UpperBound);
        % Fitness value calculation
        particle(i).Cost = FitnessFunction(particle(i).Position);
        % Update personal best
        if particle(i).Cost < particle(i).Best.Cost
            particle(i).Best.Position = particle(i).Position;
            particle(i).Best.Cost = particle(i).Cost;
            % Update global best
            if particle(i).Best.Cost < GlobalBest.Cost
                GlobalBest = particle(i).Best;
            end
        end
    end
    BestCost(CurrentIteration) = GlobalBest.Cost;
    disp(['Current Iteration Number = ' num2str(CurrentIteration) ...
        ': Best Cost Found = ' num2str(BestCost(CurrentIteration))]);
    w = w*wdamp;
end

BestSol = GlobalBest;

%% Results
figure;
% plot(BestCost, 'LineWidth', 2);
semilogy(BestCost, 'LineWidth', 2);
xlabel('Iteration Numbers');
ylabel('Best Cost Found');
grid on;
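To connect the code to a hand calculation, here is one velocity and position update for a single one-dimensional particle. The values are arbitrary illustrative choices, and the two random numbers are fixed at 0.5 in place of rand so the example is reproducible:

% One hand-worked update step (illustrative values, not taken from the run above)
w = 1; c1 = 1.5; c2 = 2.0;
x = 2; v = 0.5; pbest = 1; gbest = -1;
r1 = 0.5; r2 = 0.5;           % fixed in place of rand for reproducibility
v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)  % 0.5 - 0.75 - 3 = -3.25
v = max(min(v, 2), -2)        % clamp to VelMax = 0.1*(10 - (-10)) = 2, so v = -2
x = x + v                     % new position: 2 + (-2) = 0

The velocity limit matters here: without the clamp the particle would overshoot to x = -1.25, while the limited step moves it to x = 0, which happens to be the Sphere optimum.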

#Metaheuristic #Algorithms
Meta-heuristic Algorithms


