Smoother MA inputs instead of the ‘random’ VIX


This post is a continuation of the previous one.
The previous week's performance and the VIX as inputs didn't have much predictive power, but at least they had some. We have to find other inputs that have more. For instance, weekly interest rates may have an effect on the stock market. For now, however, we try the SMA. SMA stands for Simple Moving Average.

Let’s try 2 other inputs:

  • Input 1: the percentage distance of the close price from the SMA(199) (you can think of it as SMA(200)), a long-term average. The input is positive if the close price is above the SMA, and negative if it is below it.
    ("indexFromLongMA = indexCloses ./ longMA - 1;")
  • Input 2: the percentage distance of the close price from the SMA(50), a short-term (10-week) average.
  • The reason for using these 2 values is that they are the famous golden-cross indicators.
    A long-term downtrend usually means: SMA(50) < SMA(200).

    The reason for the percentage input is that it gives us non-binary inputs: the percent above/below the SMA. That carries more information than binary -1/+1 signals.
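The inputs above can be sketched in a few lines. This is an illustrative Python translation of the blog's one-line Matlab expression (the function names are mine, not from the original code):

```python
import numpy as np

def sma(closes, window):
    """Trailing simple moving average; the first window-1 entries are NaN."""
    out = np.full(len(closes), np.nan)
    for i in range(window - 1, len(closes)):
        out[i] = closes[i - window + 1 : i + 1].mean()
    return out

def pct_from_sma(closes, window):
    """Percentage distance of the close from its SMA: positive above
    the average, negative below it (indexCloses ./ longMA - 1)."""
    return closes / sma(closes, window) - 1.0

closes = np.array([100.0, 102.0, 101.0, 104.0, 106.0])
x = pct_from_sma(closes, 3)
# x[2] = 101 / mean(100, 102, 101) - 1 = 0.0 (close exactly on the SMA)
```

In a real run, `window` would be 199 for Input 1 and 50 for Input 2.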

    Just for a little data torturing, let's plot these inputs separately.
    So, instead of
    z = F(x,y), we now visualize z = F(x) and z = F(y).
    Let's do it with the SMA(199), so we can see the pattern the NN should learn.
    We plot the average weekly price change vs. the previous week's percentage distance of the close price from the SMA(199).
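The (x, y) pairs behind that plot can be built like this. A minimal Python sketch, assuming a series of weekly closes (the function name `make_xy` is hypothetical):

```python
import numpy as np

def make_xy(weekly_closes, window):
    """Pair each week's %distance-from-SMA (x) with the NEXT week's
    %price change (y) -- the two axes of the scatter plot."""
    closes = np.asarray(weekly_closes, dtype=float)
    ma = np.array([closes[i - window + 1 : i + 1].mean() if i >= window - 1
                   else np.nan for i in range(len(closes))])
    pct_from_ma = closes / ma - 1.0
    next_week_change = closes[1:] / closes[:-1] - 1.0
    x = pct_from_ma[window - 1 : -1]    # this week's distance from the MA
    y = next_week_change[window - 1 :]  # next week's price change
    return x, y

x, y = make_xy([100.0, 102.0, 101.0, 104.0, 106.0], window=3)
```

Each `(x[i], y[i])` pair is one point of the scatter; in the post the window is 199.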

    The Matlab plot is here:

    and the same Excel plot:

    What we can read from this:

  • When the Russell is above the MA(199) (X axis > 0), we tend to have stable, positive weeks.
  • When the Russell is below the MA(199) (X axis < 0), it is very random whether the week is positive or negative, and the swings are very high. Big uncertainty.
  • When the Russell is more than 10% (0.1 on the X axis) above the MA(199), the market is too bullish and tends to revert. It is like a spring: if it is stretched too far, it snaps back toward its normal state (the MA).
  • These explanations are quite intuitive. (I am sure it is not so simple, but let's accept it for now.)
    Let’s see whether the NN can learn this or not.

    Results after averaging 2500 tests.

    Test 1: very large out of sample test set. Few training samples.
    The test set is picked randomly from the samples over the total period. Test ratio: 40%, validation ratio: 12%, train: 48%.
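The random split can be sketched as follows (a Python illustration with the Test 1 ratios; the original experiments presumably used Matlab's NN toolbox facilities for this):

```python
import numpy as np

def random_split(n_samples, test_ratio=0.40, val_ratio=0.12, seed=0):
    """Shuffle sample indices and cut them into test/validation/train
    sets (Test 1 ratios: 40% test, 12% validation, 48% train)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_test = int(round(n_samples * test_ratio))
    n_val = int(round(n_samples * val_ratio))
    test = idx[:n_test]
    val = idx[n_test : n_test + n_val]
    train = idx[n_test + n_val :]
    return train, val, test

train, val, test = random_split(1000)
# 480 training, 120 validation, 400 test indices, disjoint by construction
```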

    Number of neurons / winLoseRatio mean / stddev
    1 52.28% 3.67%
    2 52.41% 3.75%
    3 52.63% 3.52%
    4 51.43% 3.58%
    5 51.41% 3.65%
    7 51.09% 3.43%
    10 50.98% 3.31%
    20 50.54% 3.09%
    40 49.61% 2.97%

    The same with chart

    Test 2: small test set. Many training samples.
    The test set is picked randomly from the samples over the total period. TestRatio: 5%, Validation ratio: 18%. Train: 77%
    nNeurons = 3, winLoseRatios arithmetic mean: 53.83%, stdev: 9.10%

    So, if we increase the training set, we can achieve 53-54% directional accuracy.
    That is good as a start, and it is my best result so far.
    But this is only directional accuracy; it doesn't tell us how profitable the strategy would be.
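To see why directional accuracy alone is not enough, here is a toy Python example (my own illustration, not from the post): a strategy can be right 75% of the time and still lose money if the wrong weeks are the big ones.

```python
import numpy as np

def win_lose_ratio(predicted_sign, actual_change):
    """Fraction of weeks where the predicted direction matched the move."""
    return float(np.mean(np.sign(predicted_sign) == np.sign(actual_change)))

def strategy_return(predicted_sign, actual_change):
    """Total return of going long/short by the predicted sign each week."""
    return float(np.prod(1.0 + np.sign(predicted_sign) * actual_change) - 1.0)

# Three small wins, one big loss: 75% accuracy, negative total return.
pred = np.array([1, 1, 1, 1])
chg = np.array([0.01, 0.01, 0.01, -0.10])
acc = win_lose_ratio(pred, chg)   # 0.75
ret = strategy_return(pred, chg)  # about -7.3%
```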

    Suggested future tests:

  • just for fun, try the NN learning with only 1 input (e.g. the MA(200));
  • try changing the number-of-days parameter of both inputs: for instance to MA(20) vs. MA(100);
  • as weekly data, we can use Monday/next-Monday pairs, Tuesday/next-Tuesday pairs, etc.
    We should use all of them in our training data. That would effectively increase the number of training samples 5 times.
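The weekday-offset idea in the last bullet can be sketched like this. A hypothetical Python helper that builds weekly (close, next close) pairs for every weekday offset from daily closes, assuming an idealized 5-day trading week with no holidays:

```python
import numpy as np

def weekly_samples_all_offsets(daily_closes, week_len=5):
    """Build weekly (close, next_close) pairs for every weekday offset
    (Mon/Mon, Tue/Tue, ...), giving roughly 5x the samples of a single
    fixed weekday."""
    daily_closes = np.asarray(daily_closes, dtype=float)
    pairs = []
    for offset in range(week_len):
        series = daily_closes[offset::week_len]  # one close per week, same weekday
        pairs.extend(zip(series[:-1], series[1:]))
    return pairs

daily = np.arange(1.0, 26.0)  # 25 trading days = 5 full weeks
pairs = weekly_samples_all_offsets(daily)
# 20 pairs instead of the 4 a single fixed weekday would give
```

Real data would need holiday handling, so the 5x factor is an upper bound.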
