### First backtests using currDayChange input

As described in the previous post, **representing the dayOfTheWeek input as 5 dimensional data performed worse than the best 1 dimensional representation, but better than the average 1 dimensional representation.** Therefore we forgo some performance for stability and prefer the 5 dimensional input. We mention this so that the reader isn't surprised that the 5 dimensional dayOfTheWeek performance is worse than the best 1 dimensional case. The best 1 dimensional representation was the modulo 0 case, in which Monday means 1 and Friday means 5.

Let’s do some quick backtests for the different dayOfTheWeek representations.

We use 10 ensemble members and apply the FF(%) method: FF is the FeedForward network (not the GRNN), and the % means that we predict the next day's %change, not sign(change). We train for 5 epochs and use 2 neurons unless stated otherwise. The lookback window size is 200 days, and we use a fixed 4% output outlier elimination threshold.

However, note that as we increase the input dimension with the new currDayChange input, it would be sensible to add at least one more neuron to the network. But the number of neurons has to be determined by a rigorous optimization algorithm, not by this 'sensible' hunch.
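The backtest parameters above, and the 4% output outlier elimination in particular, can be sketched roughly as follows. This is a minimal illustration, not the actual NeuralSniffer code; all names (`CONFIG`, `eliminate_output_outliers`) are hypothetical:

```python
import numpy as np

# Hypothetical names for the hyperparameters described in the text.
CONFIG = {
    "nEnsembleMembers": 10,
    "nEpoch": 5,
    "nNeurons": 2,
    "lookbackDays": 200,
    "outlierThreshold": 0.04,  # fixed 4% output outlier elimination
}

def eliminate_output_outliers(inputs, targets, threshold=0.04):
    """Drop training samples whose next-day %change target
    exceeds the threshold in absolute value."""
    inputs = np.asarray(inputs, dtype=float)
    targets = np.asarray(targets, dtype=float)
    mask = np.abs(targets) <= threshold
    return inputs[mask], targets[mask]

# Example: a +6% target day is removed from the training set.
x = np.array([[0.01], [0.02], [-0.03]])
y = np.array([0.005, 0.06, -0.01])
x2, y2 = eliminate_output_outliers(x, y, CONFIG["outlierThreshold"])
print(len(y2))  # 2 samples remain
```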

**Backtest 1: dayOfTheWeek is 5 dimensional.**

**Without currDayChange input:**

D_stat: 51.56%, projectedCAGR: 11.25%, TR: 133.81%

**With currDayChange input:** (2 different experiments. The results are random, of course.)

D_stat: 51.59%, projectedCAGR: 11.54%, TR: 147.10%

D_stat: 52.09%, projectedCAGR: 15.81%, TR: 283.79%

On average, every measurement improves. Some notes:

– The 2 experiments give very different results, which suggests that this is a very volatile method. This is expected: as we increased the input dimension space, the ANN training algorithm can vary more between experiments. To decrease volatility we can increase the number of epochs and/or the number of ensemble members.

– It is a beautiful result, considering we used only 2 neurons while the input dimension is 5+1=6.
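To illustrate why more ensemble members tame the volatility, here is a toy sketch; the `member_prediction` stand-in and its noise level are invented for illustration. The ensemble mean's standard error shrinks roughly with the square root of the member count:

```python
import numpy as np

rng = np.random.default_rng(42)

def member_prediction(rng, true_signal, noise_sd=0.02):
    """Stand-in for one randomly initialized/trained FF member:
    the true next-day %change plus training noise."""
    return true_signal + rng.normal(0.0, noise_sd)

true_signal = 0.003  # hypothetical next-day %change
members_10 = [member_prediction(rng, true_signal) for _ in range(10)]
members_40 = [member_prediction(rng, true_signal) for _ in range(40)]

# The ensemble forecast is the mean of the members; its standard
# error is roughly noise_sd / sqrt(n_members), so quadrupling the
# ensemble halves the run-to-run variation of the forecast.
print(np.mean(members_10), np.mean(members_40))
```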

**Backtest 2: dayOfTheWeek is 1 dimensional.**

**Without currDayChange input:**

D_stat: 52.45%, projectedCAGR: 11.77%, TR: 148.93%

**With currDayChange input:** (3 different experiments.)

D_stat: 52.68%, projectedCAGR: 18.67%, TR: 410.62%

D_stat: 52.06%, projectedCAGR: 11.30%, TR: 139.74%

D_stat: 52.32%, projectedCAGR: 11.10%, TR: 134.43%

**With currDayChange input, but with 3 neurons instead of 2:** (2 different experiments.)

D_stat: 53.09%, projectedCAGR: 18.72%, TR: 413.51%

D_stat: 52.32%, projectedCAGR: 15.93%, TR: 289.30%

A TR% chart for the new method can be seen here.

We like the chart. It is quite monotonic (except during the 2008 earthquake, but that is acceptable).

Overall, **it seems promising to use the new currDayChange input. For the 1 dimensional case, it can improve the CAGR from 12% to 16% and the TR from 150% to 300%. These are our best results so far!** However, we shouldn't rush to conclusions yet. We need to learn and optimize many things; we want to understand the behaviour of the components. Overall, I reckon we have to spend at least 1 full month before concluding that this is really a promising strategy and before announcing real backtested results. The issues we want to solve don't take much time to code, but they take a long time to backtest, to run the actual experiments. **A backtest for an ensemble of 10 members over 3000 days** (12 years) takes about 2 hours. Because of the randomness, we have to repeat every test at least 2-4 times. **That is 8 hours to try even a simple thing.** And if we make a mistake in the code, or when we fine-tune parameters (nNeurons, nEpoch, lookbackDays), we have to repeat these 4 experiments many times, even if there are no surprises on the road ahead.

However, there is one thing that is very important to learn. With this new currDayChange input, we introduced a continuous, non-discrete and very random (almost Gaussian) input. We don't fully understand yet how ANN predictions work for this kind of input: when it is successful and when it is not. Can it predict at all? Note that because of the Gaussian randomness, the samples at the edges are thinly represented. Can the ANN learn this kind of data well? These questions have to be studied. And exactly these very important questions are the ones neglected by other ANN practitioners (even in academic articles). Newbie ANN users hope that it is enough to feed the ANN whatever input we have and it will predict well. Ouch. The truth is very far from that, as we have proved on this blog many times.
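The thin-tails concern can be made concrete with a quick synthetic experiment (the 1% daily volatility is an assumed, illustrative figure, not measured from our data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a near-Gaussian daily %change input with ~1% sd,
# standing in for the currDayChange input.
changes = rng.normal(0.0, 0.01, size=3000)

# With equal-width bins, the tail bins receive far fewer samples
# than the central ones, so the ANN rarely sees the edge cases.
counts, edges = np.histogram(changes, bins=6)
print(counts)  # the center bins dominate; the edge bins are thin
```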

This is the **roadmap** for how we imagine making headway:

– **study the 1 dimensional case with only the currDayChange input.** **Test daily MR (Mean Reversion) and FT (Follow Through) strategies.** They didn't work in our previous tests 6 months ago. Why should they work now? We hope that many things have changed since then: we now have output normalization, input normalization and outlier elimination, to mention a few.

– Write a deterministic predictor with different input data representation (continuous, 2 bins, 6 bins)

– test a continuous input representation

– test with discrete input representation: 2 bins

– test with discrete input representation: 6 bins with equal number of samples

– train it, optimizing (nNeurons, nEpoch)

– optimize lookbackDays

– **study the 2 dimensional input**

– Write a deterministic predictor with different input data representation

– continuous case

– test with discrete input representation: 2 bins

– test with discrete input representation: 6 bins with equal number of samples

– train it, optimizing (nNeurons, nEpoch)

– optimize lookbackDays

– **study the 6 dimensional input**

– Write a deterministic predictor

– continuous case

– test with discrete input representation: 2 bins

– test with discrete input representation: 6 bins with equal number of samples

– train it, optimizing (nNeurons, nEpoch)

– optimize lookbackDays

– heterogeneous ensemble

– make the prediction live on the Internet as NeuralSniffer Predictor version 2.
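The "6 bins with equal number of samples" representation in the roadmap is equal-frequency (quantile) binning. A minimal sketch, assuming the input is a plain array of %changes (the function name and synthetic data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
changes = rng.normal(0.0, 0.01, size=3000)  # synthetic %changes

def equal_frequency_bins(values, n_bins=6):
    """Map each value to a bin index 0..n_bins-1 such that every
    bin holds (almost) the same number of samples. The bin edges
    are the empirical quantiles of the data."""
    quantile_levels = np.linspace(0, 1, n_bins + 1)[1:-1]
    edges = np.quantile(values, quantile_levels)
    return np.searchsorted(edges, values, side="right")

bins = equal_frequency_bins(changes, 6)
counts = np.bincount(bins, minlength=6)
print(counts)  # roughly 500 samples in every bin
```

Unlike equal-width bins, this guarantees the tail bins are as well represented in training as the central ones.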

Overall, the premature backtests of this post show that **we can hope the new currDayChange input improves the prediction performance by at least 50%. Instead of 12% CAGR, we target 16% CAGR.**

