How can I get the symbolic steady state vector of a Markov Chain?
Hello, does anyone know how to obtain the symbolic steady state vector (i.e. the long-term probability of each state) of this Markov Chain example in MATLAB?
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1035775/image.png)
The demonstration ends without showing how to obtain the steady-state vector. Any help with this problem would be much appreciated.
Answers (1)
John D'Errico
7 Aug 2022
Edited: John D'Errico, 7 Aug 2022
Easy, peasy. For example, given a simple Markov process, described by the 3x3 transition matrix T.
T = [.5 .2 .3;.1 .4 .5;.1 .1 .8]
There are no absorbing states, and we can verify that T is indeed a valid transition matrix: every row sums to 1, and every element lies between 0 and 1.
sum(T,2)
What are the steady-state probabilities?
[V,D] = eig(T')
Take the eigenvector that corresponds to the unit eigenvalue; in this case, it is the first eigenvector.
P = V(:,1)';
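Note that `eig` does not guarantee any particular ordering of the eigenvalues, so in general the unit eigenvalue need not come first. A sketch of a more robust selection (not part of the original answer) picks the eigenvalue numerically closest to 1:

```matlab
% Pick the eigenvector whose eigenvalue is closest to 1, rather than
% assuming it appears first in the output of eig.
[V,D] = eig(T');
[~,idx] = min(abs(diag(D) - 1));  % index of the (near-)unit eigenvalue
P = V(:,idx)';                    % corresponding left eigenvector of T
```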
Normalize so the elements sum to 1.
format long g
P = P/sum(P)
Those are the steady-state probabilities for this system. We can verify that multiplying by T does not change P.
P*T
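As an additional sanity check (not in the original answer): for an irreducible, aperiodic chain like this one, every row of a high power of T converges to the steady-state vector, since repeated application of T drives any starting distribution toward P.

```matlab
% Every row of T^n approaches the steady-state vector P for large n.
T50 = T^50;
disp(T50)                  % all three rows should be nearly identical
disp(norm(T50(1,:) - P))   % residual should be tiny
```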
I won't do your homework for you, but you can easily enough see how to proceed from here.
Comments (4)
Walter Roberson
8 Aug 2022
John, with symbolic coefficients, is it going to be possible to find the entry with eigenvalue 1?
Bruno Luong
8 Aug 2022
You don't need to compute eigenvalues; you can compute this instead, which may be easier symbolically:
ss = null(T.'-eye(size(T))).';
ss = ss/sum(ss)
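This null-space approach carries over directly to symbolic matrices with the Symbolic Math Toolbox. A minimal sketch on a generic 2-state chain, where the parameter names p and q are illustrative:

```matlab
% Symbolic steady state of a generic 2-state chain
% (requires the Symbolic Math Toolbox).
syms p q
assume(0 < p & p < 1); assume(0 < q & q < 1);
T = [1-p, p; q, 1-q];             % symbolic transition matrix
ss = null(T.' - eye(size(T))).';  % left null space of (T - I)
ss = simplify(ss/sum(ss))         % normalizes to [q/(p+q), p/(p+q)]
```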