### Forecasting different indices: RUT, SPX, QQQ, DJI, HSI

In this post, we still stick to the most basic case: input CurrDayChange, output nextDayChange. So far we have backtested our methods on the RUT (Russell 2000 index) only. Our justification is:

– RUT is less popular than SPX or DJI in the traders' community, so it is less likely that clever traders or computer bots have already optimized away the inconsistencies of that market.

– RUT has a higher beta (higher variance) than the other indices (comparable to the beta of the HSI), so if our method can produce alpha (profit), that alpha is more pronounced in RUT than in the other, slower-moving indices.

In this study we tested the following 5 indices: RUT, SPX, QQQ, DJI, and HSI.

Luckily, all of them are available from 1987-09-10, so the comparison is fair, because all of them cover the same period.

However, note that the number of days is not perfectly equal. For example, HSI has fewer days, because the Hong Kong market has different bank holidays than the USA market.

There are also slight differences in the number of days even among the USA indices, but they are hardly worth mentioning.

We tested different strategies:

– Buy&Hold,

– daily Mean Reversion,

– daily Follow Through,

– Naive Learner with 2 bins and 4 bins, and

– continuous NN prediction.

For geeks, here is the code we used for the different strategies.
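The original listing is not reproduced here, but the rule-based daily strategies could be sketched along these lines (a minimal sketch with hypothetical helper names, not the original code; input is a list of daily percentage changes):

```python
# Sketch of the rule-based daily strategies (hypothetical; not the original code).
# Each strategy maps a series of daily % changes to a final portfolio value,
# starting from $1 and holding either +1 (long) or -1 (short) each day.

def buy_and_hold(daily_changes):
    """Stay long every day."""
    value = 1.0
    for change in daily_changes:
        value *= 1.0 + change
    return value

def mean_reversion(daily_changes):
    """Bet that tomorrow reverses today: short after an up day, long after a down day."""
    value = 1.0
    for today, tomorrow in zip(daily_changes, daily_changes[1:]):
        position = -1.0 if today > 0 else 1.0
        value *= 1.0 + position * tomorrow
    return value

def follow_through(daily_changes):
    """Bet that tomorrow continues today: long after an up day, short after a down day."""
    value = 1.0
    for today, tomorrow in zip(daily_changes, daily_changes[1:]):
        position = 1.0 if today > 0 else -1.0
        value *= 1.0 + position * tomorrow
    return value
```

Note that MR and FT are exact mirrors of each other: on any given day, one gains exactly what the other loses (before costs).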

And some performance charts.

Portfolio Value at the end, assuming $1 invested:

Geometric Cumulative Annual Growth Rate %:
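For reference, the CAGR figures can be computed from a final portfolio value like so (a minimal sketch; the 252 trading-days-per-year figure is our assumption, not from the tables):

```python
# CAGR in percent from a final portfolio value, assuming $1 invested.
# TRADING_DAYS_PER_YEAR = 252 is an assumed convention, not from the original post.

TRADING_DAYS_PER_YEAR = 252

def cagr_percent(final_value, n_days):
    """Geometric annual growth rate in percent for $1 growing to final_value over n_days."""
    years = n_days / TRADING_DAYS_PER_YEAR
    return ((final_value ** (1.0 / years)) - 1.0) * 100.0
```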

We highlighted the cells that were discussed in our previous posts. Note that they are not exactly equal to our previous measurements. The reason is that another month has passed: in this study we used price quotes until 2011-03-29, while the previous studies used quotes until 23 February.

Notes:

– The most important conclusion is that the **RUT performance is better in all the strategies. We are not surprised.**

– Looking at **Buy&Hold** in the CAGR table, we read that **the Hong Kong HSI gave the highest return: 10.27% annual**; QQQ: 9.28%, DJI: 8.31%, RUT: 8.1%, SPX: 7.47%. To be frank, we are surprised by the Buy&Hold performance of the DJI; we expected it to be the least profitable. However, note that in 2011 we are still in the aftershock of the 2008 financial crisis, and we assume the DJI fell less in those years than the other stocks. That gives the DJI a relative advantage, but we don't expect this advantage to persist in the near future.

– We made the Naive Learner (2 bins) strategy results bold. We would like to emphasize its importance: this is what is very easily achievable by almost any adaptive (trained) strategy. Instead of playing rigid, pre-determined rules (like MR or FT), we should adapt our rules to the last X days (200 days in our backtests).
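The post does not spell out the 2-bin rule exactly, but one natural reading is: bin each past day as "up" or "down", and within today's bin, bet the sign of the average next-day change over the lookback window. A sketch of that interpretation (the original rule may differ):

```python
# Sketch of a 2-bin naive learner (our interpretation; the original rule may differ).
# Bin each past day as "up" or "down"; within today's bin, look at the average
# next-day change over the lookback window, and bet the sign of that average.

def naive_learner_2bin_signal(past_changes, lookback=200):
    """Return +1 (long) or -1 (short) for tomorrow, given past daily changes.

    The last element of past_changes is today's change; the decision adapts
    to the most recent `lookback` days instead of a fixed MR/FT rule.
    """
    window = past_changes[-(lookback + 1):]
    today_bin = window[-1] > 0
    # Average next-day change following past days that fall in the same bin as today.
    followers = [nxt for prev, nxt in zip(window, window[1:]) if (prev > 0) == today_bin]
    if not followers:
        return 1  # no history for this bin yet: default to long
    avg = sum(followers) / len(followers)
    return 1 if avg > 0 else -1
```

In a perfectly mean-reverting window (up days always followed by down days), this signal automatically behaves like MR; in a trending window it behaves like FT. That adaptivity is the whole point.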

– In the CAGR table, **compare the Buy&Hold annual profits against the Naive Learner profits. The adaptive NL was always better than the B&H** strategy. It was only slightly better on low-beta indices:

DJI: 8.31% B&H, 8.55% NL2bin

SPX: 7.47% B&H, 9.5% NL2bin

but inspect how significant the gain was in high beta indices:

HSI: 10.27% B&H, 17.81% NL2bin

QQQ: 9.28% B&H, 25.03% NL2bin

RUT: 8.1% B&H, 27.89% NL2bin

Note, however, that playing this in real life requires daily rebalancing, with significant commission and bid-ask spread losses if zero-cost funds are not used (but we suggest using them).
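To see why the daily rebalancing cost matters, a rough compounding estimate (the cost figures below are illustrative assumptions, not from the post):

```python
# Rough annual drag of daily rebalancing (illustrative numbers only).
# cost_per_trade is the fraction lost per rebalance (commission + bid-ask spread).

def annual_cost_drag_percent(cost_per_trade, trades_per_year=252):
    """Approximate annual return lost to per-trade frictions, in percent."""
    return (1.0 - (1.0 - cost_per_trade) ** trades_per_year) * 100.0

# Example: losing just 0.05% (5 basis points) per daily rebalance
# compounds to roughly an 11.8% annual drag -- enough to erase the
# NL2bin edge on the low-beta indices.
```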

– In the CAGR table, **compare the NN learning strategy against B&H: NN is better everywhere** (albeit on the DJI the difference is negligible).

Note that the backtest used 51 NN ensembleMembers for voting in the NN strategy case.
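Majority voting over an odd-sized ensemble could be combined along these lines (a sketch with trivial stand-in predictors; the original NN internals are not shown here):

```python
# Sketch of majority voting across an ensemble of direction predictors.
# Each member votes +1 (up) or -1 (down); an odd ensemble size
# (51 in the backtest) guarantees there are no ties.

def ensemble_vote(predictors, today_change):
    """Return the majority direction (+1 or -1) voted by the ensemble."""
    votes = sum(p(today_change) for p in predictors)
    return 1 if votes > 0 else -1

# Usage with trivial stand-in predictors (real members would be trained NNs):
members = [lambda x: 1] * 30 + [lambda x: -1] * 21   # 51 members
# ensemble_vote(members, 0.004) -> 1 (30 up votes vs 21 down)
```

Voting across many independently trained members smooths out the randomness of any single NN's weight initialization, which is why an ensemble is used rather than one network.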

– In the CAGR table, **compare NN learning against the Naive Learner with 2 bins: 3 out of 5 times, the NN was better than the deterministic Naive Learner.** So we can say that **the complex and difficult NN strategy is better than the simple Naive Learner. This is something we have strived for: without it, there would be no point in doing complex NN learning, and our efforts would not be well rewarded.**

– **The directional accuracy is highest in the RUT case, probably because of our optimization.**

– **We are not concerned that these backtest performances with the other indices (SPX, QQQ, DJI, HSI) are lower or slightly lower than in the RUT case.** There are **two reasons for it:**

1. Among the indices, the RUT gives the highest return. No wonder: **RUT has the highest volatility among all of them**. Note that the more volatile the index, the more profit we gain with the training algorithms (DJI is the least volatile; that is where we have hardly any extra profit compared to B&H).

2. Don't expect as good a performance as for the RUT, because **we optimized our method based on RUT**:

– optimal outlierThreshold: 4% (for SPX, another value would be better),

– optimal inputBoost and outputBoost,

– optimal lookbackdays for learning: 200.

– So, **it is not bad that we got poorer results on non-RUT indices.** However, **it would have been a big warning sign if we had got negative profit (a loss) on the other indices.**

Indices have different characteristics; we would optimize the parameters to different values according to the character of the underlying index.

Note that parameters optimized on the past don't guarantee the same performance in the future.

In fact, they almost guarantee that the performance will be worse, since optimizing on the past always overfits it to some degree.

However, we cannot do better than using the past, using recency, to optimize the parameters. By betting that parameters similar to the past optimum will be optimal in the future, we assume the optimal parameters will not change too much. That is the correct strategy.

We don't expect the stellar past performance to be repeated, but we expect a somewhat smaller, yet similar, performance in the future.

– The whole study **suggests another strategy: a self-optimizing NN. If the main cause of the underperformance on (SPX, DJI, HSI, QQQ) is the parameter optimization (and not the low beta), we have a solution: do the parameter optimization 'on-the-fly'**, based on the last 10 years of data (or all the available past data). For example, this is how to determine the optimal lookback days. Currently, on every day, we train 1 NN with the past 200 days' values. Instead of this, on every day, do a backtest with a fixed set of 50, 60, 70, 80, … 380, 390, 400 lookback days. That means training 36 different NNs for EVERY day. Do a running backtest in which you keep track of the performance of all 36 NNs. On a given day, the self-optimizing NN strategy would select the winner among the 36 and play that. You can imagine the computational requirements of this backtest.
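The selection layer on top of the per-lookback NNs could look like this (a sketch of the bookkeeping only; the per-lookback NN training itself is left as a hypothetical stand-in):

```python
# Sketch of the 'self-optimizing' layer: keep a running score for each
# candidate lookback, and on each day trade with the current winner.
# The per-lookback NN training/prediction is a hypothetical stand-in here;
# only the selection bookkeeping is shown.

def self_optimizing_signal(scores, predictions):
    """Pick the prediction of the lookback with the best running score.

    scores: dict lookback -> cumulative performance so far
    predictions: dict lookback -> today's +1/-1 direction prediction
    """
    best_lookback = max(scores, key=scores.get)
    return predictions[best_lookback]

def update_scores(scores, predictions, realized_change):
    """After the day closes, credit each lookback with its would-be return."""
    for lookback, direction in predictions.items():
        scores[lookback] += direction * realized_change
    return scores

# Candidate lookbacks 50, 60, ..., 400: 36 candidates, so 36 NNs trained per day.
lookbacks = list(range(50, 401, 10))
```

Every candidate lookback must be scored on every day, whether or not it was selected, which is exactly why the computational cost of this backtest multiplies by the number of candidates.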

