### We nailed it down!!! We report the first successful prediction.

But don’t be too euphoric. The results are quite mediocre.

See previous posts for further details.

The NN (Neural Network) has 2 inputs:

– previous week VIX %change

– previous week RUT closePrice %change

The predicted output is:

– next week RUT closePrice %change

We have 618 samples, i.e., 618 weeks (about 11 years).

We train the network on the next-week closePrice %change (a float value), but we use the NN only for directional prediction.

So, we predict only the direction of the move (up or down next week).
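As a minimal sketch of this trick (with made-up numbers, not the actual RUT data or the post's NN), a regression forecast becomes a directional call by taking the sign of the predicted %change:

```python
import numpy as np

# Hypothetical regression outputs (predicted next-week %changes) and
# the actual %changes they try to forecast -- illustrative numbers only.
predicted_pct_change = np.array([0.8, -1.2, 0.3, -0.5, 1.1])
actual_pct_change = np.array([0.5, -0.7, -0.2, -0.9, 0.4])

# Only the sign (direction) of the forecast is used.
predicted_direction = np.sign(predicted_pct_change)
actual_direction = np.sign(actual_pct_change)

directional_accuracy = np.mean(predicted_direction == actual_direction)
print(directional_accuracy)  # 4 of 5 directions match -> 0.8
```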

We used the first 60% of the samples for training and validation (48% for training and 12% for validation, chosen randomly).

However, the test set was not picked randomly but deterministically: the last 40% of the samples (roughly the last 4 years).
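The split could be sketched like this (a minimal illustration under the percentages stated above; the actual code and random seed are not from the post):

```python
import numpy as np

n_samples = 618
rng = np.random.default_rng(0)  # hypothetical seed

indices = np.arange(n_samples)
split = int(n_samples * 0.60)        # first 60% -> training + validation
test_idx = indices[split:]           # last 40%, kept in time order

# Shuffle only the first 60%, then carve out 48%/12% train/validation.
head = rng.permutation(indices[:split])
n_train = int(n_samples * 0.48)
train_idx, val_idx = head[:n_train], head[n_train:]

print(len(train_idx), len(val_idx), len(test_idx))  # -> 296 74 248
```

Note that the test block stays in chronological order, so the out-of-sample period is never mixed into training.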

1.

Naive Forecaster:

The naive forecaster forecasts for next week exactly the same value as last week's.

Quite naive, isn’t it?

It achieves 48.71% directional accuracy: 299 good forecasts vs. 319 bad forecasts out of 618 weekly samples.
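A small sketch of how that baseline accuracy is counted (illustrative weekly %changes, not the real RUT series): the naive forecast for week t+1 is simply the value at week t, so the direction is "right" whenever two consecutive weeks share a sign.

```python
import numpy as np

# Illustrative weekly %changes (not the real RUT data).
weekly_pct_change = np.array([1.0, 0.5, -0.3, -0.8, 0.2, 0.6, -0.1])

forecast = weekly_pct_change[:-1]   # naive: next week = this week
actual = weekly_pct_change[1:]
hits = np.sign(forecast) == np.sign(actual)

print(hits.sum(), len(hits) - hits.sum())  # good vs. bad forecasts -> 3 3
```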

2.

NN Forecaster:

As the number of neurons is a parameter of the algorithm, I varied it across different values.

**After running about 2600 different tests** we have this result:

| Number of Neurons | winLoseRatio Mean | StdDev |
| --- | --- | --- |
| 3 | 51.03% | 4.43% |
| 4 | 51.35% | 4.35% |
| 8 | 51.65% | 4.01% |
| 10 | 51.59% | 3.94% |
| 15 | 51.40% | 3.81% |
| 20 | 51.49% | 3.62% |
| 40 | 51.07% | 3.73% |
| 80 | 50.45% | 3.64% |

For those who like charts, here is the chart.

Notes:

1.

I cannot stress enough that the **NN learning process is a random process**, because it randomly divides the ‘independent’ samples into training and validation sets. Therefore, anyone who trains a random NN should be aware that testing the NN performance once is not enough. If you train the NN 100 different times, then due to this random nature you will get 100 different NNs, with 100 different predictions for next week.

Therefore, I can only laugh at articles, Master's theses, and PhDs that show off good NN prediction power without running the learning process many times.

Note that I ran 2600 different tests and averaged the results.

After 2500 tests, the mean of the prediction ratio no longer changes much.
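The averaging idea can be simulated in a few lines (purely hypothetical numbers: each training run is modeled as a noisy accuracy draw around the post's ~51.6% level; the real per-run results are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)  # hypothetical seed

# Simulate 2600 training runs whose directional accuracies scatter
# around 51.6% with roughly 4% run-to-run noise, as in the table above.
run_accuracies = rng.normal(loc=0.516, scale=0.04, size=2600)

# With 2600 runs, the mean is pinned down to about +/-0.08%
# (0.04 / sqrt(2600)), even though individual runs vary wildly.
print(round(run_accuracies.mean(), 4), round(run_accuracies.std(), 4))
```

This is exactly why a single lucky run proves nothing: one draw can easily land at 55% by chance, while the averaged mean is stable.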

2.

Note also that as we increase the number of neurons, the prediction power increases up to 8 neurons, but decreases after that.

Also note that the StdDev keeps decreasing even beyond that point.

3.

Note that there is a theoretical limit to the number of neurons used.

One limit is: 2n+1, where n is the input vector dimension.

We have 2 dimensional input vectors (VIX change, RUT change), therefore

the nNeurons should be less than or equal to 2*2 + 1 = 5.

However, we see here that **the optimal nNeurons is 8**.

Quite a close match. I promise we will study this later.

Summary:

So far, I am pleased with the results.

After 2-3 months, this is the first time we have evidence that the NN can be used for weekly RUT %change prediction.

The **edge is quite small: 1.6%, but it is significant.**

And it is not a random fluke, but a stable result.
