Hidden Markov Model (HMM)
VIDEO LINK: https://youtu.be/YIGCWNG8BIA
A Hidden Markov Model (HMM) is a statistical model in which the system has hidden states that cannot be directly observed but produce observable outputs. It is based on the Markov property, meaning the next state depends only on the current state.
Video Chapters: HMM in Artificial Intelligence
00:00 Introduction
00:31 Statistical Model
00:54 HMM Examples
02:30 HMM
03:10 HMM Components
05:23 Viterbi Algorithm
06:23 HMM Applications
06:38 HMM Problems
07:28 HMM in Handwriting Recognition
11:20 Conclusion
HMM COMPONENTS
An HMM consists of states, observations, transition probabilities, emission probabilities, and initial probabilities. It is commonly used in a...
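Since the chapter list highlights HMM components and the Viterbi algorithm, here is a minimal Python sketch of both: the five components as plain dictionaries, and a Viterbi decode over them. The two-state model and all probability values are illustrative assumptions, not taken from the video.

# Minimal HMM + Viterbi sketch. The states, observations, and all
# probability values below are illustrative assumptions.
states = ["Hot", "Cold"]                      # hidden states
observations = ["walk", "shop", "walk"]       # observed output sequence
initial = {"Hot": 0.6, "Cold": 0.4}           # initial probabilities
transition = {"Hot": {"Hot": 0.7, "Cold": 0.3},
              "Cold": {"Hot": 0.4, "Cold": 0.6}}
emission = {"Hot": {"walk": 0.6, "shop": 0.4},
            "Cold": {"walk": 0.3, "shop": 0.7}}

def viterbi(obs):
    # best[t][s] = (probability of best path ending in state s at time t,
    #               predecessor state at time t-1)
    best = [{s: (initial[s] * emission[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        best.append({})
        for s in states:
            prob, prev = max((best[t-1][p][0] * transition[p][s] * emission[s][obs[t]], p)
                             for p in states)
            best[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    path = [max(states, key=lambda s: best[-1][s][0])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(best[t][path[-1]][1])
    return list(reversed(path))

print(viterbi(observations))   # e.g. ['Hot', 'Hot', 'Hot']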
Markov Chains || Step-By-Step || ~xRay Pixy
Learn Markov Chains step-by-step using real-life examples.
Video Chapters:
03:54 States, State Space, Transition Probabilities
06:17 Transition Matrix
08:17 Example 02
09:17 Example 03
10:26 Example 04
12:25 Example 05
14:16 Example 06
16:49 Example 07
18:11 Example 08
24:56 Conclusion
In computer science, Markov problems are typically associated with Markov processes and Markov models: stochastic, probabilistic systems in which the future state depends only on the current state, not on the sequence of states that preceded it. Typical areas include:
Artificial Intelligence (AI):
Markov Decision Processes (MDP): Used in decision-making problems, especially in reinforcement learning.
Hidden Markov Models (HMM): Widely used in speech recognition, handwriting recognition, and natural language processing.
Machine Learning:
Probabilistic graphical models like HMM.
Stochastic optimization techniques.
Data Science and Statistics:
Statistical analysis using Markov Chains.
Time series forecasting and data modeling.
Networking and Distributed Systems:
Queueing theory and performance modeling using Markov chains.
Reliability analysis.
Game Theory:
Modeling strategies and decision-making under uncertainty.
Simulation and Modeling:
Using Markov Chains to simulate random systems like traffic, communication networks, or biological processes.
Operations Research:
Optimization problems using Markov Decision Processes.
Applications in logistics and supply chain management.
Markov Chains are memoryless because of the Markov Property. In a Markov Chain, the future only depends on the current state, not on the entire history of how you got there.
Q. Why Does This Make Markov Chains Simpler to Analyze?
Less Data to Track: You only need to focus on the current state and the transition probabilities (how likely it is to move from one state to another). There’s no need to track or calculate the entire history.
Simplified Mathematics: Instead of dealing with complicated probabilities that depend on past steps, you can use simple equations based on the current state.
Efficient Predictions: For example, to predict where a robot will be after 10 steps, you don’t have to trace every possible path it could have taken before step 10. You just calculate step by step from the current state, as the sketch below illustrates.
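A minimal sketch of that idea in Python: a robot moves between two rooms, and we push its probability distribution forward one step at a time, never looking at the history. The two-room layout and the transition probabilities are assumptions made up for illustration.

# Step-by-step prediction for a two-state Markov chain.
# The rooms and probabilities are illustrative assumptions: a robot in
# Room A stays with probability 0.8, moves to Room B with 0.2, etc.
P = {"A": {"A": 0.8, "B": 0.2},
     "B": {"A": 0.5, "B": 0.5}}

dist = {"A": 1.0, "B": 0.0}   # the robot starts in Room A

for step in range(10):
    # The next distribution depends only on the current one (Markov property).
    dist = {s: sum(dist[p] * P[p][s] for p in P) for s in P}

print(dist)   # probability of each room after 10 steps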
Key Components of the Markov Property:
Current State:
The future depends only on the current state of the system, and not on how the system arrived at that state.
Transition Probability:
The probability of moving from one state to another depends only on the current state, not on the sequence of past states.
Example: Weather Model
Imagine you are tracking the weather, which can either be Sunny (S) or Rainy (R) each day. The Markov Chain model assumes that the weather tomorrow depends only on today’s weather, not on any previous days.
Let’s say the probabilities are as follows:
If it’s Sunny today, there is a 70% chance it will be Sunny tomorrow and a 30% chance it will be Rainy.
If it’s Rainy today, there is a 40% chance it will be Sunny tomorrow and a 60% chance it will be Rainy.
Transition Matrix
We can represent this as a transition matrix, where each entry is the probability of moving from one state (today’s weather) to another (tomorrow’s weather). With rows for today’s state and columns for tomorrow’s, in the order (S, R):

P = | 0.7  0.3 |
    | 0.4  0.6 |
How the Markov Chain Works:
Current State: Let's say today it’s Sunny. According to the transition matrix, the probability of tomorrow being Sunny is 70% and the probability of tomorrow being Rainy is 30%.
Memoryless Property: It doesn’t matter if the past few days have been sunny or rainy. Tomorrow’s weather depends only on today’s weather.
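To make this concrete, here is a short Python sketch that simulates the weather chain day by day. The transition probabilities come straight from the example above; the number of simulated days is arbitrary.

import random

# Weather Markov chain from the example: keys are today's state,
# inner values are the probabilities for tomorrow's state.
P = {"S": {"S": 0.7, "R": 0.3},
     "R": {"S": 0.4, "R": 0.6}}

def next_day(today):
    # Sample tomorrow's weather using only today's state (memoryless).
    return random.choices(list(P[today]), weights=P[today].values())[0]

day = "S"                      # say today is Sunny
forecast = [day]
for _ in range(6):             # simulate the next 6 days
    day = next_day(day)
    forecast.append(day)
print(forecast)                # e.g. ['S', 'S', 'R', 'R', 'S', 'S', 'S']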
t-step Transition Probabilities:
To compute the t-step transition probabilities, you need to calculate the t-th power of the transition matrix P. This gives the probability of transitioning from state i to state j after t steps.
Let’s denote the t-step transition matrix as P^t; its entry (P^t)_ij is the probability of transitioning from state i to state j after t steps.
For example, for t = 2, calculate P^2 = P · P, and for t = 3, calculate P^3 = P^2 · P.
The t-step transition matrix gives the probability of moving between states after t transitions.
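A short Python sketch of this computation for the weather matrix above, using NumPy's matrix_power for the repeated multiplication:

import numpy as np

# Weather transition matrix (rows: today S/R, columns: tomorrow S/R).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

P2 = np.linalg.matrix_power(P, 2)   # 2-step transition probabilities
P3 = np.linalg.matrix_power(P, 3)   # 3-step transition probabilities

print(P2)          # entry (i, j): probability of going from i to j in 2 steps
print(P3[0, 1])    # probability Sunny -> Rainy after exactly 3 steps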
PARTICLE SWARM OPTIMIZATION ALGORITHM NUMERICAL EXAMPLE
PSO is a computational method that optimizes a problem. It is a population-based stochastic search algorithm inspired by the social behavior of bird flocking. In Particle Swarm Optimization the solution of the problem is represented using particles [flocking birds are replaced with particles for algorithm simplicity]. An objective function is used to evaluate the performance of each particle/agent in the current population. PSO solves problems by having a population (called a swarm) of candidate solutions (particles). Local and global optimal solutions are used to update particle positions in each iteration. Particle Swarm Optimization (PSO) Algorithm step-by-step explanation with Numerical Example and source code implementation - PART 2 [Example 2]
1.) Initialize Population [Current Iteration (t) = 0]
Population Size = 4; 𝑥𝑖 : (i = 1, 2, 3, 4) and (t = 0)
𝑥1 = 1.3; 𝑥2 = 4.3; 𝑥3 = 0.4; 𝑥4 = −1.2
2.) Fitness Function u...
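The fitness function in this preview is cut off, so here is a minimal PSO sketch in Python that uses the four initial positions above with an assumed objective of minimizing f(x) = x^2; the inertia and acceleration coefficients are also assumptions.

import random

def fitness(x):               # assumed objective: minimize f(x) = x^2
    return x * x

# Initial population from the numerical example above.
positions = [1.3, 4.3, 0.4, -1.2]
velocities = [0.0] * len(positions)
pbest = positions[:]                          # personal best positions
gbest = min(positions, key=fitness)           # global best position

w, c1, c2 = 0.7, 1.5, 1.5    # assumed inertia and acceleration coefficients

for t in range(20):
    for i in range(len(positions)):
        r1, r2 = random.random(), random.random()
        # Velocity update: inertia + cognitive (pbest) + social (gbest) terms.
        velocities[i] = (w * velocities[i]
                         + c1 * r1 * (pbest[i] - positions[i])
                         + c2 * r2 * (gbest - positions[i]))
        positions[i] += velocities[i]
        if fitness(positions[i]) < fitness(pbest[i]):
            pbest[i] = positions[i]
    gbest = min(pbest, key=fitness)

print(gbest)                 # should approach the minimum at x = 0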
Cuckoo Search Algorithm - Metaheuristic Optimization Algorithm What is the Cuckoo Search Algorithm? The Cuckoo Search Algorithm is a meta-heuristic algorithm inspired by some cuckoo species laying their eggs in the nests of other bird species. In this algorithm, we have 2 bird species. 1.) Cuckoo birds 2.) Host birds (other species). What if the host bird discovers the cuckoo eggs? Cuckoo eggs can be found by the host bird with a certain probability of discovering alien eggs. If the host bird discovers cuckoo eggs, it can throw the egg away, or abandon the nest and build a completely new nest. Mathematically, each egg represents a solution and it is stored in the host bird's nest. In this algorithm artificial cuckoo birds are used. An artificial cuckoo can lay one egg at a time. We replace less fit solutions with new and better solutions. It means eggs that are more similar to the host bird's have the opportunity to de...
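A minimal Python sketch of the loop those rules describe: each nest holds one solution, a new egg comes from a random step (a simple Gaussian stand-in for the Levy flight used in the published algorithm), and a fraction p_a of the worst nests is abandoned and rebuilt. The objective and all parameter values are assumptions.

import random

def fitness(x):                 # assumed objective: minimize f(x) = x^2
    return x * x

n_nests, p_a = 10, 0.25         # assumed nest count and discovery probability
nests = [random.uniform(-5, 5) for _ in range(n_nests)]

for t in range(100):
    # A cuckoo lays an egg: a new solution generated by a random step
    # (Gaussian stand-in for the Levy flight of the published algorithm).
    i = random.randrange(n_nests)
    egg = nests[i] + random.gauss(0, 1)
    # The egg replaces a randomly chosen nest only if it is fitter.
    j = random.randrange(n_nests)
    if fitness(egg) < fitness(nests[j]):
        nests[j] = egg
    # Host birds discover alien eggs: abandon the worst fraction p_a of
    # nests and build new ones at random positions.
    nests.sort(key=fitness)
    for k in range(int(p_a * n_nests)):
        nests[-(k + 1)] = random.uniform(-5, 5)

print(min(nests, key=fitness))  # best solution found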
Particle swarm optimization (PSO) What is meant by PSO? PSO is a computational method that optimizes a problem. It is a population-based stochastic search algorithm inspired by the social behavior of bird flocking. In Particle Swarm Optimization the solution of the problem is represented using particles [flocking birds are replaced with particles for algorithm simplicity]. An objective function is used to evaluate the performance of each particle/agent in the current population. PSO solves problems by having a population (called a swarm) of candidate solutions (particles). Local and global optimal solutions are used to update particle positions in each iteration. How does PSO optimize? By improving a candidate solution. How does PSO solve problems? By maintaining a population (swarm) of candidate solutions (particles). What is the search space in PSO? It is the range in which the algorithm computes th...
Particle Swarm Optimization (PSO) is a population-based stochastic search algorithm inspired by the social behavior of bird flocking. PSO is a computational method that optimizes a problem, searching for optima by updating generations of candidate solutions. It is a popular, intelligent metaheuristic algorithm. In Particle Swarm Optimization the solution of the problem is represented using particles [flocking birds are replaced with particles for algorithm simplicity]. An objective function is used to evaluate the performance of each particle/agent in the current population. After a number of iterations, the agents/particles find the optimal solution in the search space. Q. What is PSO? A. PSO is a computational method that optimizes a problem. Q. How does PSO optimize? A. By improving a candidate solution. Q. How does PSO solve problems? A. PSO solves problems by having a population (called a swarm) of candidate solutions (particles). Local and global optimal solutions are used to ...
Local Binary Pattern Introduction to Local Binary Pattern (LBP) Q. What is a Digital Image? A. Digital images are collections of pixels, i.e., numbers ranging from 0 to 255. Q. What is a Pixel? A. A pixel is the smallest element of any digital image. Pixels can be categorized as dark pixels and bright pixels. Dark pixels contain low pixel values and bright pixels contain high pixel values. Q. Explain Local Binary Pattern (LBP)? A. Local binary pattern is a popular technique used for image processing. We can use the local binary pattern for face detection and face recognition. Q. What is the LBP Operator? A. The LBP operator is an image operator. We can transform images into arrays using the LBP operator. Q. How are LBP values computed? A. LBP works on a 3x3 window (it contains 9 pixel values); the local binary pattern looks at nine pixels at a time. Using each 3x3 window in the digital image, we can extract an LBP code. Q. How to obtain the LBP operator value? A. LBP operator values can be obtained by ...
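Since the preview cuts off before the computation, here is a minimal Python sketch of the standard 3x3 LBP step: each of the 8 neighbors is compared with the center pixel to produce a bit, and the 8 bits are read as a binary number, the LBP code. The sample window values are made up.

# Minimal 3x3 LBP sketch; the sample pixel values are made up.
window = [[6, 5, 2],
          [7, 6, 1],
          [9, 8, 7]]

center = window[1][1]
# The 8 neighbors, read clockwise from the top-left corner.
neighbors = [window[0][0], window[0][1], window[0][2],
             window[1][2], window[2][2], window[2][1],
             window[2][0], window[1][0]]

# Threshold against the center: neighbor >= center -> bit 1, else bit 0.
bits = [1 if n >= center else 0 for n in neighbors]

# Interpret the 8 bits as a binary number: that is the LBP code.
lbp_code = int("".join(map(str, bits)), 2)
print(bits, lbp_code)   # [1, 0, 0, 0, 1, 1, 1, 1] -> 143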
Grey Wolf Optimization Algorithm (GWO) Grey Wolf Optimization The Grey Wolf Optimization Algorithm is a metaheuristic proposed by Mirjalili, Mirjalili, and Lewis (2014). The Grey Wolf Optimizer is inspired by the social hierarchy and the hunting technique of grey wolves. What is a Metaheuristic? A metaheuristic is a high-level, problem-independent algorithmic framework used to develop optimization algorithms. Metaheuristic algorithms search for the best solution among all possible solutions of an optimization problem. Who are the Grey Wolves? Wolf (animal): wolves live in highly organized packs. Also known as the gray wolf or grey wolf, it is a large canine. Wolf speed is 50-60 km/h. Their lifespan is 6-8 years (in the wild). Scientific name: Canis lupus. Family: Canidae (the biological family of dog-like carnivorans). Grey wolves live in highly organized packs. The average pack size ranges from 5-12. There are 4 different ranks of wolves in a pack: Alpha Wolf, Beta Wolf, Delta Wolf, and Omega Wolf. How Grey Wolf Optimiza...
There are about 1000 species of bats. The Bat Algorithm is based on the echolocation behavior of microbats with varying pulse rates of emission and loudness. All bats use echolocation to sense distance and background barriers. Microbats are small to medium-sized flying mammals. Microbats use a sonar known as echolocation to detect their prey. Bats fly randomly with a velocity at a position with a fixed frequency and loudness, searching for prey. Q. What is Frequency? A. Frequency is the number of waves that pass a fixed point in unit time. Wavelength is the minimum distance between two nearest particles which are in the same phase. Here, sound waves are used by microbats to detect prey. Q. What is Position? A. A place where something or someone is located. Q. What is Velocity? A. The speed of something in a given direction. Q. What is Loudness? A. Loudness refers to how soft or loud a sound seems to listeners. Q. What is Pulse Rate? ...
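A stripped-down Python sketch of the frequency/velocity/position updates those questions refer to, omitting the loudness and pulse-rate adaptation of the full bat algorithm. The frequency range and objective are assumptions, and the velocity term is written so it pulls each bat toward the current best (sign conventions vary across write-ups).

import random

def fitness(x):                  # assumed objective: minimize f(x) = x^2
    return x * x

n_bats = 10
f_min, f_max = 0.0, 2.0          # assumed frequency range
positions = [random.uniform(-5, 5) for _ in range(n_bats)]
velocities = [0.0] * n_bats
best = min(positions, key=fitness)

for t in range(100):
    for i in range(n_bats):
        # Each bat tunes a random frequency, then updates velocity and
        # position; here the velocity pulls toward the current best.
        f = f_min + (f_max - f_min) * random.random()
        velocities[i] += (best - positions[i]) * f
        positions[i] += velocities[i]
        if fitness(positions[i]) < fitness(best):
            best = positions[i]

print(best)                      # best solution found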
Grey Wolf Optimization Algorithm Numerical Example
Grey Wolf Optimization Algorithm Steps
1.) Initialize the grey wolf population.
2.) Initialize a, A, and C.
3.) Calculate the fitness of each search agent.
4.) 𝑿_𝜶 = best search agent
5.) 𝑿_𝜷 = second-best search agent
6.) 𝑿_𝜹 = third-best search agent
7.) while (t < max number of iterations)
8.) for each search agent: update the position of the current search agent by the above equations; end for
9.) update a, A, and C
10.) Calculate the fitness of all search agents.
11.) update 𝑿_𝜶, 𝑿_𝜷, 𝑿_𝜹
12.) t = t + 1; end while
13.) return 𝑿_𝜶
Grey Wolf Optimization Algorithm Numerical Example
STEP 1. Initialize the grey wolf population [initial position for each search agent]
𝒙_𝒊 (i = 1, 2, 3, …, n); n = 6 // number of search agents; range [-100, 100]
Initial wolf positions: 3.2228, 4.1553, -3.8197, 4.2330, ...
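For step 8, the position update uses the coefficient equations from the GWO paper (A = 2a·r1 - a, C = 2·r2, D = |C·X_leader - X|), pulling each wolf toward the alpha, beta, and delta. Below is a minimal one-dimensional Python sketch with 6 search agents and the example's range [-100, 100]; the objective f(x) = x^2 is an assumption.

import random

def fitness(x):                  # assumed objective: minimize f(x) = x^2
    return x * x

n, max_iter = 6, 100             # 6 search agents, as in the example above
wolves = [random.uniform(-100, 100) for _ in range(n)]

for t in range(max_iter):
    a = 2 - 2 * t / max_iter                    # a decreases linearly 2 -> 0
    alpha, beta, delta = sorted(wolves, key=fitness)[:3]   # three best wolves
    for i in range(n):
        x_new = 0.0
        for leader in (alpha, beta, delta):
            r1, r2 = random.random(), random.random()
            A = 2 * a * r1 - a                  # coefficient A
            C = 2 * r2                          # coefficient C
            D = abs(C * leader - wolves[i])     # distance to this leader
            x_new += leader - A * D             # pull toward the leader
        wolves[i] = x_new / 3                   # average of the three pulls
        wolves[i] = max(-100.0, min(100.0, wolves[i]))  # keep inside range

print(min(wolves, key=fitness))                 # alpha: best solution found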
Whale Optimization Algorithm Code Implementation
Whale Optimization Algorithm Code Files

function obj_fun(test_fun)
    switch test_fun
        case 'F1'
            x = -100:2:100; y = x;
        case 'F2'
            x = -10:2:10; y = x;
    end
end

function [LB,UB,D,FitFun] = test_fun_info(C)
    switch C
        case 'F1'
            FitFun = @F1; LB = -100; UB = 100; D = 30;
        case 'F2'
            FitFun = @F2; LB = -10; UB = 10; D = 30;
    end
    % F1 Test Function
    function r = F1(x)
        r = sum(x.^2);
    end
    % F2 Test Function
    function r = F2(x)
        r = sum(abs(x)) + prod(abs(x));
    end
end

function Position = initialize(Pop_Size,D,UB,LB)
    SS_Bo...