NN for weekly forecast: there is success (somewhere and sometime: 17 years ago)


Recently I have become pessimistic about daily stock market return prediction with Neural Networks.
There is so much randomness in daily returns that the problem becomes very complex.
Giving simple raw daily open/high/low/close/volume data to a human technical analyst may give some results,
because the human mind can synthesize patterns from the data after drawing candle-stick charts.
However, feeding the same simple raw daily open/high/low/close/volume data to a NN may not be fruitful.
The NN should be fed with less random and more expressive signals.
So, these raw signals must be preprocessed into ‘derived signals’. One example is the ‘%delta of the closePrice from the MA(200)’.
Or maybe a much simpler derived signal: 1 if closePrice is over the MA(200), -1 otherwise.
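As a rough sketch (my own code, not from any of the papers below), these two derived signals could be computed like this:

```python
# Sketch: turning raw closes into 'derived signals' for a NN.
# Both the function names and the layout are mine, purely illustrative.

def moving_average(closes, window):
    """Simple moving average over the last `window` closes (current close included)."""
    return sum(closes[-window:]) / window

def derived_signals(closes, window=200):
    """Return (%delta of close from MA(window), +1/-1 above/below signal)."""
    ma = moving_average(closes, window)
    pct_delta = (closes[-1] - ma) / ma       # '%delta of the closePrice from the MA'
    sign = 1 if closes[-1] > ma else -1      # the simpler binary signal
    return pct_delta, sign
```

Either of these is far less noisy as an input than the raw close itself.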

I still hold the opinion that the NN approach can be useful for prediction.
This is supported by these articles, for example:

[1] J. Roman and A. Jameel, “Backpropagation and recurrent neural networks in financial analysis of multiple stock market returns,” Proceedings of the Twenty-Ninth Hawaii International Conference on System Sciences, Vol. 2, 1996, pp. 454–460.
[2] K. Kamijo and T. Tanigawa, “Stock price pattern recognition: A recurrent neural network approach,” in Neural Networks in Finance and Investing, chapter 21, pp. 357–370, Probus Publishing Company.
[3] P. G. McCluskey, “Feedforward and recurrent neural networks and genetic programs for stock market and time series forecasting,” Technical Report CS-93-36, Brown University, September 1993.

Focus now on the third article, by McCluskey.
Download this Technical Report from this link (first link on the page).
It is a very good resource for us: it contains downloadable studies in PDF form for many approaches (NN, heuristic, Genetic Alg., Fuzzy logic, Chaos, etc.).

This McCluskey technical report is the basis of his PhD, which compares NNs.
He says recurrent networks achieved 30%. (This was the best among all tested; a single-layer network returned 21%.)
Quote from him:

The networks were designed to predict S&P 500 index prices several weeks in advance.
A large performance increase resulted by adding “windowing”. Windowing is the process of remembering previous inputs of the time series and using them as inputs to calculate the current prediction. Recurrent networks returned 50% using windows, and cascade networks returned 51%. Thus, the use of recurrence and remembering past inputs appears to be useful in forecasting
the stock market.

In addition, I did tests where the input consisted only of windowed S&P values
and of the raw input. In all cases, these approaches produce a buy and hold strategy, indicating that they were unable to see any pattern other than the long term
trend of rising stock prices.

While it was fairly easy to produce profits that exceeded a buy and hold strategy, it took substantial effort to exceed the hand coded results.


I modeled transaction costs by a commission that is a fraction of any change in the level of investment. I measured the profits using 0%, 0.1% and 1.0% commission rates.

All the results listed in these tables ignore commission costs.

These two comments are confusing. My note: he trained the NN with the commission!
Great idea. So, the NN was penalized when it changed from short to long, because of the commission. He not only predicted the next week's closePrice of the S&P 500, he also predicted (outputted) the next week's gain. We need more study here.
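His idea of modeling the commission as a fraction of any change in the level of investment can be sketched as a penalized gain; the function below is my own illustration of that comment, not his code:

```python
def net_gain(position_prev, position_new, weekly_return, commission=0.001):
    """Weekly gain after a commission proportional to the change in investment level.

    Positions range from -1.0 (fully short) to +1.0 (fully long), so a flip
    from fully short to fully long pays commission on a change of 2.0.
    Default commission is the 0.1% rate he mentions.
    """
    turnover = abs(position_new - position_prev)
    return position_new * weekly_return - commission * turnover
```

Scoring the NN on this net gain (instead of raw gain) is exactly what penalizes it for flipping from short to long too often.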

Cascade 2 and recurrent cascade correlation produced the best results of the algorithms studied.

My notes after reading his paper:

  • he forecasts weekly prices; he goes short or long for the next week with a small fraction (not 100% long or 100% short, but with a level of conviction). Actually, this is what I wanted to do;
  • he achieves good results only with ‘derived inputs’, complex inputs; he cannot achieve any prediction with only raw S&P values and raw inputs.
  • he says that his ‘hand-coded’ version was better than the NN; however, take strong note that his ‘hand-coded’ version is entirely an in-sample test. Of course he cannot do any out-of-sample test there.
    On the contrary, the NN versions all show their performance on out-of-sample tests.
    So, even if in his test the hand-coded version was better than the NN, I bet that in real life, in an out-of-sample environment, his hand-coding cannot beat the NN.
  • he windows the inputs: instead of giving only the last week's data, he feeds the last 12 weeks of data to the network. (I have to do this.)
  • he averages the outputs. He runs 64 different NNs at the same time and averages their results to get the final estimate. (I have to do this.)
  • in the 1960–1993 period all his SRN recurrent networks returned 20% vs. the buy-and-hold's 10%. That is double the performance of the index, without even using leverage. (The C2 cascading networks achieve 50% annually. Worth investigating.)
  • we have to use those ‘derived’ inputs. Most of them are fundamental data.
    See his table of raw and derived data on pages 34–38. (He used: sp500, Dow, Producer Price Index, M2, M3 values, Industrial Production, Federal Reserve Discount Interest Rate.)
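The two mechanics from the notes above that I want to replicate, windowing the inputs and averaging the outputs of many networks, can be sketched roughly like this (the 12-week window and the ensemble idea are his; the code and names are mine):

```python
def windowed_inputs(series, window=12):
    """One input vector per week, each holding the last `window` weekly values,
    instead of feeding only the single latest value."""
    return [series[i - window:i] for i in range(window, len(series) + 1)]

def ensemble_estimate(nets, input_vector):
    """Average the predictions of many independently trained networks
    (he runs 64) to get the final estimate for one input vector."""
    predictions = [net(input_vector) for net in nets]
    return sum(predictions) / len(predictions)
```

The networks themselves are not sketched here; any callable that maps an input vector to a prediction would slot into `ensemble_estimate`.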

So, I have to gather all or part of these inputs to carry out the tests he performed 17 years ago.

