librosa.sequence.viterbi
- librosa.sequence.viterbi(prob, transition, p_init=None, return_logp=False)
Viterbi decoding from observation likelihoods.
Given a sequence of observation likelihoods prob[s, t], indicating the conditional likelihood of seeing the observation at time t from state s, and a transition matrix transition[i, j] which encodes the conditional probability of moving from state i to state j, the Viterbi algorithm [1] computes the most likely sequence of states from the observations.
- [1] Viterbi, Andrew. "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm." IEEE Transactions on Information Theory 13.2 (1967): 260-269.
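Internally, Viterbi decoding is a dynamic program over states and time steps. The following is a minimal NumPy sketch of that log-space recursion, for illustration only: the name viterbi_sketch and its internal variable names are assumptions, not librosa's actual implementation, which performs additional input validation not shown here.

import numpy as np

def viterbi_sketch(prob, transition, p_init=None):
    # prob:        (n_states, n_steps) observation likelihoods
    # transition:  (n_states, n_states) row-stochastic transition matrix
    # p_init:      (n_states,) initial distribution (uniform if None)
    n_states, n_steps = prob.shape
    if p_init is None:
        p_init = np.full(n_states, 1.0 / n_states)

    # Work in log-space to avoid underflow on long sequences.
    # (np.log(0) gives -inf, which is acceptable for this sketch.)
    log_p = np.log(prob)
    log_A = np.log(transition)

    value = np.zeros((n_states, n_steps))           # best log-prob of a path ending in state s at time t
    ptr = np.zeros((n_states, n_steps), dtype=int)  # back-pointers for the best predecessor

    value[:, 0] = np.log(p_init) + log_p[:, 0]
    for t in range(1, n_steps):
        # score[i, j] = best path ending in state i at t-1, then transitioning i -> j
        score = value[:, t - 1][:, None] + log_A
        ptr[:, t] = np.argmax(score, axis=0)
        value[:, t] = score[ptr[:, t], np.arange(n_states)] + log_p[:, t]

    # Backtrack from the best final state
    states = np.zeros(n_steps, dtype=int)
    states[-1] = np.argmax(value[:, -1])
    for t in range(n_steps - 2, -1, -1):
        states[t] = ptr[states[t + 1], t + 1]

    return states, value[states[-1], -1]

On well-formed inputs, this recursion should agree with the state path and log-probability returned by librosa.sequence.viterbi.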
- Parameters
- prob : np.ndarray [shape=(n_states, n_steps), non-negative]
prob[s, t] is the probability of observation at time t being generated by state s.
- transition : np.ndarray [shape=(n_states, n_states), non-negative]
transition[i, j] is the probability of a transition from i->j. Each row must sum to 1.
- p_init : np.ndarray [shape=(n_states,)]
Optional: initial state distribution. If not provided, a uniform distribution is assumed.
- return_logp : bool
If True, return the log-likelihood of the state sequence.
- Returns
- Either states or (states, logp):
- states : np.ndarray [shape=(n_steps,)]
The most likely state sequence.
- logp : scalar [float]
If return_logp=True, the log probability of states given the observations.
See also
viterbi_discriminative
Viterbi decoding from state likelihoods
Examples
Example from https://en.wikipedia.org/wiki/Viterbi_algorithm#Example
In this example, we have two states, healthy and fever, with initial probabilities 60% and 40%.

We have three observation possibilities: normal, cold, and dizzy, whose probabilities given each state are:

healthy => {normal: 50%, cold: 40%, dizzy: 10%}
fever => {normal: 10%, cold: 30%, dizzy: 60%}

Finally, we have transition probabilities: healthy => healthy (70%) and fever => fever (60%).

Over three days, we observe the sequence [normal, cold, dizzy], and wish to know the maximum likelihood assignment of states for the corresponding days, which we compute with the Viterbi algorithm below.

>>> import numpy as np
>>> import librosa
>>> p_init = np.array([0.6, 0.4])
>>> p_emit = np.array([[0.5, 0.4, 0.1],
...                    [0.1, 0.3, 0.6]])
>>> p_trans = np.array([[0.7, 0.3], [0.4, 0.6]])
>>> path, logp = librosa.sequence.viterbi(p_emit, p_trans, p_init,
...                                        return_logp=True)
>>> print(logp, path)
-4.19173690823075 [0 0 1]
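As a small follow-up (not part of the original example), the integer state indices in path can be mapped back to the label names used above:

>>> labels = ['healthy', 'fever']
>>> [labels[s] for s in path]
['healthy', 'healthy', 'fever']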