New Post

Confusion Matrix with Real-Life Examples || Artificial Intelligence || ~...

Learn about the Confusion Matrix with Real-Life Examples. A confusion matrix is a table that shows how well an AI model makes predictions: it compares the actual results with the predicted ones and shows which predictions are right or wrong. Its entries are True Positive (TP), False Positive (FP), False Negative (FN), and True Negative (TN).

Video Chapters: Confusion Matrix in Artificial Intelligence
00:00 Introduction
00:12 Confusion Matrix
03:48 Metrics Derived from the Confusion Matrix
04:26 Confusion Matrix Example 1
05:44 Confusion Matrix Example 2
08:10 Confusion Matrix Real-Life Uses

#artificialintelligence #machinelearning #confusionmatrix #algorithm #optimization #research #happylearning #algorithms #meta #optimizationtechniques #swarmintelligence #swarm
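As a quick illustration (a minimal sketch, not taken from the video; the counts below are made-up placeholder values), the usual metrics derived from a confusion matrix can be computed in MATLAB from the four counts:

% Minimal sketch: metrics derived from a confusion matrix.
% TP, FP, FN, TN below are hypothetical placeholder counts.
TP = 40; FP = 10; FN = 5; TN = 45;
Accuracy  = (TP + TN) / (TP + TN + FP + FN);
Precision = TP / (TP + FP);
Recall    = TP / (TP + FN);    % also called Sensitivity
F1        = 2*(Precision*Recall) / (Precision + Recall);
fprintf('Accuracy = %.2f, Precision = %.2f, Recall = %.2f, F1 = %.2f\n', Accuracy, Precision, Recall, F1);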

Particle Swarm Optimization (PSO) |Part - 2| with Numerical Example and ...

Particle Swarm Optimization (PSO) Algorithm


Step-by-step explanation of the Particle Swarm Optimization (PSO) algorithm with a numerical example and source code implementation. 🌞 Particle Swarm Optimization (PSO) algorithm MATLAB code.
Particle Swarm Optimization Main File: main.m
pso;

Particle Swarm Optimization Fitness Function File, Save as: Sphere.m
function F1 = Sphere(x)
    F1 = sum(x.^2);
end
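For a quick sanity check (illustrative only): the Sphere benchmark simply sums the squares of the decision variables, so its global minimum is 0 at x = 0. From the MATLAB command window:

Sphere([1 2 3])    % returns 1^2 + 2^2 + 3^2 = 14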

Particle Swarm Optimization File Name Save as: pso.m
clear; close all;

%% Fitness Function Calling
FitnessFunction = @(x) Sphere(x);   % Fitness Function Calling
% Total Number of Decision Variables Used
nVar = 10;
% Size of Decision Variables Matrix
VarSize = [1 nVar];
% Lower Bound
LowerBound = -10;
% Upper Bound
UpperBound = 10;

%% Parameters Initialization Phase
% Maximum Number of Iterations used
MaxT = 100;
% Total Number of Search Agents used
PopulationSize = 10;
% Initialize PSO Parameters
% Inertia Weight
w = 1;
% Inertia Weight Damping Ratio
wdamp = 0.99;
% Personal Learning Coefficient
c1 = 1.5;
% Global Learning Coefficient
c2 = 2.0;
% Velocity Limits
VelMax = 0.1*(UpperBound - LowerBound);
VelMin = -VelMax;

%% Initialization: Position, Cost, Velocity, Best_Position, Best_Cost
empty_particle.Position = [];
empty_particle.Cost = [];
empty_particle.Velocity = [];
empty_particle.Best.Position = [];
empty_particle.Best.Cost = [];
particle = repmat(empty_particle, PopulationSize, 1);
GlobalBest.Cost = inf;

for i = 1:PopulationSize
    % Initialize Position for each search agent in the search space
    particle(i).Position = unifrnd(LowerBound, UpperBound, VarSize);
    % Initialize Velocity for each search agent in the search space
    particle(i).Velocity = zeros(VarSize);
    % Fitness value calculation for each search agent
    particle(i).Cost = FitnessFunction(particle(i).Position);
    % Update Personal Best position for the particle
    particle(i).Best.Position = particle(i).Position;
    particle(i).Best.Cost = particle(i).Cost;
    % Update Global Best position
    if particle(i).Best.Cost < GlobalBest.Cost
        GlobalBest = particle(i).Best;
    end
end

BestCost = zeros(MaxT, 1);

%% PSO Main Loop
for CurrentIteration = 1:MaxT
    for i = 1:PopulationSize
        % Update Velocity for each search agent in the search space
        particle(i).Velocity = w*particle(i).Velocity ...
            + c1*rand(VarSize).*(particle(i).Best.Position - particle(i).Position) ...
            + c2*rand(VarSize).*(GlobalBest.Position - particle(i).Position);
        % Apply Velocity Limits
        particle(i).Velocity = max(particle(i).Velocity, VelMin);
        particle(i).Velocity = min(particle(i).Velocity, VelMax);
        % Update Position for each particle
        particle(i).Position = particle(i).Position + particle(i).Velocity;
        % Check Boundaries [-10, 10]: reflect velocity of out-of-range components
        Outside = (particle(i).Position < LowerBound | particle(i).Position > UpperBound);
        particle(i).Velocity(Outside) = -particle(i).Velocity(Outside);
        particle(i).Position = max(particle(i).Position, LowerBound);
        particle(i).Position = min(particle(i).Position, UpperBound);
        % Fitness value calculation
        particle(i).Cost = FitnessFunction(particle(i).Position);
        % Update Personal Best
        if particle(i).Cost < particle(i).Best.Cost
            particle(i).Best.Position = particle(i).Position;
            particle(i).Best.Cost = particle(i).Cost;
            % Update Global Best
            if particle(i).Best.Cost < GlobalBest.Cost
                GlobalBest = particle(i).Best;
            end
        end
    end
    BestCost(CurrentIteration) = GlobalBest.Cost;
    disp(['Current Iteration Number = ' num2str(CurrentIteration) ': Best Cost Found = ' num2str(BestCost(CurrentIteration))]);
    w = w*wdamp;
end

BestSol = GlobalBest;

%% Results
figure;
% plot(BestCost, 'LineWidth', 2);
semilogy(BestCost, 'LineWidth', 2);
xlabel('Iteration Numbers');
ylabel('Best Cost Found');
grid on;
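To run the optimizer, save main.m, pso.m, and Sphere.m in the same folder and run main.m. As a minimal sketch of how the fitness-function handle can be swapped for a different benchmark (the Rastrigin function below is an illustration, not part of the original post), you could add a new function file and repoint the handle in pso.m:

% Hypothetical alternative benchmark (illustration only): Rastrigin function.
% Save as Rastrigin.m in the same folder as pso.m.
function F1 = Rastrigin(x)
    F1 = sum(x.^2 - 10*cos(2*pi*x) + 10);
end

% Then, in pso.m, replace the fitness-function handle:
% FitnessFunction = @(x) Rastrigin(x);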

#Metaheuristic #Algorithms
Meta-heuristic Algorithms
Link - Click Here

