Scrutinize every effect for historical stability


In this post, we don’t train the NN. Not yet. It takes time to get there. Yes, good work requires a lot of time. (It recalls the famous adage that says

The good work requires a lot of time; but bad work requires even more.

🙂 )
So, instead of training the NN, we remain cautious and study the weekly data a little bit more.
Let’s concentrate on the 66% UpWeek probability in the case when the input is in the top-right quadrant (see the previous post for the details). Is this 66% UpWeek probability a lasting effect, or did it appear only in the last 3 years? Is this effect (the tendency that VIX-up and RUT-up weeks are followed by RUT-up weeks) persistent over the 13 years of our testing period?

In general, when you discover an effect by data-mining, you should always measure the strength of the effect over the testing period.

Why are we interested in this?
We need this historical-stability information partly because when we train the NN,
we have to split the input data into 3 sets:

  • training set: used for training, i.e. for the backpropagation algorithm.
  • validation set: used to check that we are not overtraining (overfitting) on the input.
    After each epoch (1 epoch = 1 training cycle in which we show all the training set samples to the NN), we check whether the Error on the validation set is larger than the Error on the validation set in the previous epoch (previous iteration).
    Usually, the Error on the validation set decreases.
    However, when it keeps increasing for more than 6 iterations (you can change this parameter), it means that the training has overtrained on the training samples.
    The NN loses its generalization capability if it overtrains. In this case, we terminate the training.
  • test set: not used in the training process at all. It may be used to obtain an unbiased estimate of the prediction power of the NN. These test samples will be the out-of-sample data.
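The early-stopping rule described above can be sketched in a few lines. This is a minimal sketch, not the actual NN tool used here: `train_step` and `validation_error` are hypothetical callbacks standing in for one training epoch and one validation pass.

```python
def train_with_early_stopping(train_step, validation_error,
                              max_epochs=1000, patience=6):
    """Stop training once the validation Error has failed to improve
    for `patience` consecutive epochs (a sign of overtraining)."""
    best_error = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step()              # 1 epoch: show all training samples to the NN
        err = validation_error()  # Error measured on the validation set
        if err < best_error:
            best_error = err
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break             # terminate: the NN is losing generalization
    return best_error
```

The `patience=6` default mirrors the "more than 6 iterations" threshold mentioned above; in practice this parameter is tunable.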

Note that our period is from 1997 to 2010, weekly data. We have 620 weeks.

  • NN Training Method A:
    Usually, when we train the NN, we randomly divide the 620 samples into 60% training, 20% validation, and 20% test samples. Precisely because of these random picks, we need to know that the random training data behaves the same way as the random validation or the random test data.
  • NN Training Method B:
    On the other hand, we can override the default split if we want. Instead of randomness, we can deterministically set the chronologically first 70% to be the training data, the next 15% to be the validation data, and the last 15% to be the out-of-sample test data. However, if the effect exists only in the first 50% of the period, the effect samples will appear in the training data only (first 70%), but not in the validation (next 15%) or the test samples (last 15%). That is a problem. All the training/validation/test samples should show similar characteristics.
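The two splitting methods can be sketched as follows. This is a minimal sketch assuming the 620 weekly samples are addressed by index 0..619; the function names are mine, not part of any NN library.

```python
import numpy as np

def random_split(n, train=0.60, validation=0.20, seed=0):
    """Method A: shuffle the sample indices, then cut 60/20/20."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * train)
    n_val = int(n * validation)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def chronological_split(n, train=0.70, validation=0.15):
    """Method B: chronologically first 70% trains, next 15% validates,
    last 15% is the out-of-sample test data."""
    idx = np.arange(n)
    n_train = int(n * train)
    n_val = int(n * validation)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

With Method B the test indices are always the most recent weeks, which is exactly why an effect confined to the early years would never reach the validation or test sets.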

We will decide later which training method (A or B) to use. For now, let’s see whether
‘the tendency that VIX up and RUT up weeks are followed by RUT up weeks’ is a stable effect over the 13 years or not.

In the first image, I plot the 620 weeks and
– draw a green bar if that week was in the ‘VIX up and RUT up’ quadrant and the next week was Up
– draw a half red bar if that week was in the ‘VIX up and RUT up’ quadrant and the next week was Down

I used a half bar for the red instead of a full bar as a visual aid (besides the colour):
if two weeks from different sets are very close to each other, a full red bar and a full green bar would overlap.
I rotated the chart because the HTML theme of this blog only allows 500-pixel-wide images. An image wider than 500 pixels gets squashed by the CSS style and becomes unreadable.
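The bar construction can be sketched like this. It is a sketch with hypothetical inputs: `quadrant_weeks` is a 0/1 sequence flagging the ‘VIX up and RUT up’ weeks, and `next_week_up` flags whether the following week was an Up week.

```python
def quadrant_bars(quadrant_weeks, next_week_up):
    """For each week in the 'VIX up and RUT up' quadrant, emit a bar spec:
    a full-height green bar if the next week was Up,
    a half-height red bar if it was Down."""
    bars = []
    for week, (in_quadrant, up) in enumerate(zip(quadrant_weeks, next_week_up)):
        if in_quadrant:
            bars.append((week, "green", 1.0) if up else (week, "red", 0.5))
    return bars
```

Feeding these (position, colour, height) triples to any bar-chart routine reproduces the plot; the half-height red bars stay distinguishable even where bars nearly overlap.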

We don’t really see any recognizable pattern in the distribution. That is good news.
If the green bars clumped together only in the first 5 years and there were very few green bars in the last 8 years, we would conclude that this effect was very strong only in the first 5 years, but diminished, or even reversed, in the last 8 years.
What I see with my own eyes in the green and red bars is a more or less ‘uniform’ distribution.
But we should never trust our eyes when we can check with numbers.
Let’s calculate for every week the
AccumulatedOccurenceOfUp / (AccumulatedOccurenceOfUp + AccumulatedOccurenceOfDown) ratio.
From the previous post, we know that on the last week, accumulated over all 620 weeks, this ratio is 66%: Up weeks occurred with 66% probability over the 13-year period. In an ideal world, if we ran this calculation over the whole history, we would get about 66% for every week.
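This running ratio takes only a few lines to compute. A sketch, where `next_week_up` is a hypothetical 0/1 array: one entry per quadrant week, 1 if the following week was an Up week.

```python
import numpy as np

def running_up_ratio(next_week_up):
    """For each week, the cumulative
    AccumulatedOccurenceOfUp / (Up + Down) ratio up to that week."""
    up = np.cumsum(next_week_up)          # AccumulatedOccurenceOfUp
    total = np.arange(1, len(next_week_up) + 1)  # Up + Down so far
    return up / total
```

In an ideal, perfectly stable world this series would hover near 0.66 for the whole period; large early swings are expected simply because the denominator is small.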
Here is what we got:

I would say that looks quite good. The variance was a little higher than usual at the beginning, but that is attributable to the low sample count: there were only about 4 samples in the first year. My conclusion is that, by and large, this effect, this tendency, was persistent during the last 13 years.
Note that this doesn’t mean it will be persistent in the near, practical future (the next 3-5 years).

