Title


High-order Hidden Markov Model for trend prediction in financial time series


contribution

application of a high-order HMM for better stock trend prediction

strength

showed how pre-existing HMM algorithms, mostly focused on first-order HMMs, could be applied to higher-order models

validated on CSI 300 data as well as S&P data for robustness of the proposed algorithm.

presented how the HMM output can be transformed into the final decision: the trading algorithm.

weakness

would be better if the meaning of the model hyper-parameters were elaborated

continuous HMM emission probability: modeled as a Gaussian mixture model (GMM)

• W: the size of the sliding window
• N: the number of underlying hidden states
• d: the dimension of the observation state, namely, the number of time-series features
• ω: the threshold on the long-position win rate in the trading algorithm
• µ: the threshold on the short-position win rate in the trading algorithm
• n: the order of the Markov chain

is it appropriate to use the same hyper-parameters on the CSI 300 data set as on the S&P data?

data

CSI 300 and S&P daily data: open, close, lowest, and highest prices, plus volume

model

new hidden state

introduced a state-transformation approach for the high-order HMM to facilitate the training process and improve the computational efficiency of parameter estimation: introduce a new hidden state variable $\hat{q}_t$, giving the new transition probability $\hat{a}_{ij} = P(\hat{q}_t = j \mid \hat{q}_{t-1} = i)$ (see the sketch below)
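A minimal sketch of what this transformation implies for the transition matrix: a composite state can only follow another when their windows of original states overlap consistently. N, n, and the random probabilities below are illustrative assumptions, not the paper's estimates.

```python
# Structured sparsity of the composite transition matrix A_hat: a transition
# (i_{t-n}, ..., i_{t-1}) -> (i_{t-n+1}, ..., i_t) is only possible when the
# last n-1 components of the source match the first n-1 of the target.
import numpy as np
from itertools import product

N, n = 2, 2                                        # assumed small for display
states = list(product(range(N), repeat=n))         # N**n composite states
A_hat = np.zeros((len(states), len(states)))
rng = np.random.default_rng(0)
for i, s in enumerate(states):
    for j, t in enumerate(states):
        if s[1:] == t[:-1]:                        # overlap-consistency check
            A_hat[i, j] = rng.random()             # placeholder probability
A_hat /= A_hat.sum(axis=1, keepdims=True)          # normalize rows to sum to 1
print(A_hat.round(2))
```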

GMM – Gaussianity testing

Gaussianity check: to determine an appropriate distribution for the CSI 300 daily return, four indicators are used: AIC, BIC, log-likelihood, and the Kolmogorov–Smirnov test p-value, each measuring goodness of fit. AIC penalizes the number of parameters; BIC penalizes the number of parameters and the sample size. The normal distribution has the lowest log-likelihood and the highest AIC and BIC -> non-Gaussian -> therefore a GMM is used (a sketch of this check follows).
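A minimal sketch of this distribution check, assuming daily returns are available as a 1-D NumPy array; the sample data, the component count of 3, and the seeds are placeholders, not the paper's settings.

```python
# Compare a normal fit against a GMM fit on (placeholder) daily returns
# using log-likelihood, AIC, BIC, and the Kolmogorov-Smirnov p-value.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=2000) * 0.01   # heavy-tailed placeholder

# Normal distribution: fit by moments, then score with all four indicators.
mu, sigma = returns.mean(), returns.std(ddof=1)
loglik_norm = stats.norm.logpdf(returns, mu, sigma).sum()
k_norm = 2                                          # parameters: mu, sigma
aic_norm = 2 * k_norm - 2 * loglik_norm
bic_norm = k_norm * np.log(len(returns)) - 2 * loglik_norm
ks_p = stats.kstest(returns, "norm", args=(mu, sigma)).pvalue

# Gaussian mixture: sklearn provides AIC/BIC directly.
X = returns.reshape(-1, 1)
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
loglik_gmm = gmm.score(X) * len(returns)            # score() is mean loglik

print(f"normal: loglik={loglik_norm:.1f} AIC={aic_norm:.1f} "
      f"BIC={bic_norm:.1f} KS p={ks_p:.3g}")
print(f"GMM   : loglik={loglik_gmm:.1f} AIC={gmm.aic(X):.1f} BIC={gmm.bic(X):.1f}")
```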

$K$ is the number of Gaussian mixture components, $c_{ik}$ is the mixture coefficient for the $k$th mixture in state $i$, and $g(o_t, \mu_{ik}, \Sigma_{ik})$ is the multivariate Gaussian probability density function:

$$b_{i}(o_{t}) = \sum_{k=1}^{K} c_{ik}\, g(o_{t}, \mu_{ik}, \Sigma_{ik})$$

The posterior likelihood function is maximized by maximizing Baum's auxiliary function with the Expectation–Maximization algorithm (Baum–Welch). The observation sequence $o_t$ is set to the normalized daily logarithmic return series $g_t$, where $g_t = \log(p_t) - \log(p_{t-1})$.
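A minimal sketch of evaluating this emission density for a single hidden state. The mixture parameters and price series below are hypothetical; the paper estimates the parameters via Baum–Welch rather than fixing them by hand.

```python
# Evaluate b_i(o_t) = sum_k c_ik * g(o_t, mu_ik, Sigma_ik) for one state i.
import numpy as np
from scipy.stats import multivariate_normal

def emission_prob(o_t, weights, means, covs):
    """GMM emission density for one state: weighted sum of Gaussian pdfs."""
    return sum(c * multivariate_normal.pdf(o_t, mean=m, cov=S)
               for c, m, S in zip(weights, means, covs))

# Observations: normalized daily log returns g_t = log(p_t) - log(p_{t-1}).
prices = np.array([100.0, 101.2, 100.5, 102.0])     # placeholder prices
g = np.diff(np.log(prices))
g = (g - g.mean()) / g.std()

# Hypothetical K=2 mixture for a single state, d=1 observation dimension.
c_i = [0.6, 0.4]
mu_i = [np.array([0.1]), np.array([-0.2])]
Sigma_i = [np.array([[0.5]]), np.array([[1.5]])]
print(emission_prob(g[-1], c_i, mu_i, Sigma_i))
```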

describes the process by which a high-order HMM can be represented as a first-order HMM: the high-order HMM $\{o_t, i_t\}$ is represented as the first-order HMM $\{o_t, \hat{q}_t\}$; the sequences $\{\hat{q}_t\}$ and $\{o_t\}$ constitute a first-order HMM that is equivalent to the original high-order one, as the round trip below illustrates.
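A minimal sketch of this representation, using the same composite-state encoding as in the transition-matrix sketch above; the invertible round trip shows that the two forms carry the same information. The state path is hypothetical.

```python
# Map an order-n hidden-state path to composite first-order states and back:
# each q_hat_t packs the n most recent original states into one symbol.
from itertools import product

N, n = 2, 2
states = list(product(range(N), repeat=n))          # N**n composite states
index = {s: k for k, s in enumerate(states)}

def to_first_order(i_seq):
    """Map original states i_1..i_T to q_hat_t = (i_{t-n+1}, ..., i_t)."""
    return [index[tuple(i_seq[t - n + 1:t + 1])] for t in range(n - 1, len(i_seq))]

def to_high_order(q_seq):
    """Recover i_1..i_T from the composite path (inverse of the above)."""
    first = states[q_seq[0]]
    return list(first) + [states[q][-1] for q in q_seq[1:]]

path = [0, 1, 1, 0, 1]                              # hypothetical state path
assert to_high_order(to_first_order(path)) == path  # round trip is lossless
```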

decision

• if the trading signal $y_{t+1} = 1$: buy the stock index at the opening price of the next day
• if $y_{t+1} = -1$: sell the stock index future at the closing price of the day
• if $y_{t+1} = 0$: stay out of the market
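A hedged sketch of such a decision rule, assuming the signal comes from thresholding predicted win rates against ω and µ from the hyper-parameter list above; the exact rule, the win-rate estimates, and the threshold values here are assumptions, not the paper's verbatim algorithm.

```python
# Map predicted long/short win rates to a trading signal y_{t+1}.
def trading_signal(p_long_win, p_short_win, omega=0.6, mu=0.6):
    if p_long_win > omega:        # long win rate clears its threshold -> buy
        return 1
    if p_short_win > mu:          # short win rate clears its threshold -> sell
        return -1
    return 0                      # otherwise stay out of the market

# y = 1: buy at next day's open; y = -1: sell the index future at the close.
print(trading_signal(0.72, 0.10))   # -> 1
```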

additional

• showed that the high-order HMM has higher risk-avoidance ability by comparing its Sharpe ratio with that of a low-order HMM (a sketch of the comparison follows)
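A minimal sketch of the Sharpe-ratio comparison, with hypothetical daily strategy returns and 252-trading-day annualization; the annualization convention and the return series are assumptions, since the notes do not record the paper's exact setup.

```python
# Annualized Sharpe ratio of two (placeholder) daily strategy-return series.
import numpy as np

def sharpe(daily_returns, risk_free_daily=0.0, periods=252):
    excess = np.asarray(daily_returns) - risk_free_daily
    return np.sqrt(periods) * excess.mean() / excess.std(ddof=1)

rng = np.random.default_rng(1)
high_order = rng.normal(0.0006, 0.008, 500)   # placeholder: high-order HMM P&L
low_order = rng.normal(0.0004, 0.012, 500)    # placeholder: low-order HMM P&L
print(sharpe(high_order), sharpe(low_order))
```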
