### Exploiting the weekend anomaly

We made a separate study that uses the ARMA (Auto-Regressive Moving Average) time-series technique for predicting daily and weekly SPX and RUT values.

We found that the overall gain was worse in the weekly case than in the daily case.

It seems that with weekly rebalancing we can be smart only 52 times per year, while with daily rebalancing we can be smart 260 times per year. It is like the fractal nature of a coastline: if we measure the coast with a 1-meter stick, we measure more coastal length than with a 1-kilometer stick. Daily rebalancing lets us exploit the small fractal details.

My argument for weekly rebalancing was that weekly prices fluctuate less and are less random, but it seems that is not enough for a strategy that produces more annual gain.

So, after 2 months of weekly forecasting, we go back to daily forecasting.

In the literature, you can find a couple of anomalies that worked in the past. One is the Day of the Week anomaly, or more concretely, the weekend effect.

More here:

http://marketsci.wordpress.com/2010/03/22/a-curious-case-of-the-mondays/

Note that MarketSci tested it on the SPY and found that the anomaly disappeared after 1990. However, we made our own study, and according to our statistics it is still present in the RUT. Encouraged by this, we plan here a very basic ANN with a 1-dimensional input.

Input: DayOfTheWeek from 1..5, where 1 is Monday, 2 is Tuesday, … 5 is Friday.

Output: nextDay%Gain.

Let’s see the distribution of the nextDay%Gains as a function of the dayOfTheWeek since 1998.

It is very clear that on Fridays it is worth forecasting that the next day (the next Monday, or Tuesday if Monday is a holiday) is a down day.
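As a sketch of how such a distribution chart can be computed, here is a Python version; the close series below is synthetic and the weekday labeling is simplified (a plain 5-day cycle), standing in for the real RUT data since 1998:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily close series with weekday labels (0=Mon .. 4=Fri);
# a stand-in for the RUT closes actually used in the study.
n = 1000
weekday = np.arange(n) % 5                       # 0=Mon, ..., 4=Fri
closes = 500 * np.cumprod(1 + rng.normal(0, 0.01, n))

# nextDay%Gain, aligned to the weekday on which the forecast is made
next_gain_pct = (closes[1:] / closes[:-1] - 1) * 100
forecast_day = weekday[:-1]

# Mean next-day gain per day of the week (the distribution we chart)
for d, name in enumerate(["Mon", "Tue", "Wed", "Thu", "Fri"]):
    mask = forecast_day == d
    print(name, round(next_gain_pct[mask].mean(), 3))
```

On the real data, the Friday row of this table is the one that stands out as negative.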

We feed this to an NN.

We show the NN surfaces for different numbers of neurons.

In theory, these surfaces should approximate the previous chart, the nextDay%Gains distribution chart. (Note that every run of the algorithm produces different NN weights, due to the random split of the validation set from the training set.)

nNeurons = 1;

nNeurons = 2;

Generally, they don’t seem to be right. Why, after learning all the past data, is the first NN surface chart (the nNeurons = 1 case) so bad? It seems it could not learn the function.

As we move to the 4-neuron case, the function starts to twist and wiggle. That is bad for generalization; it overfits more.

Based on this, we select the number of neurons to be 1 or 2; increasing it further is dangerous.
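For illustration, a 1-input feed-forward net of this kind can be sketched in plain Python/NumPy. The per-weekday target gains below are hypothetical numbers (only the Friday dip mirrors what the data showed), and the real system uses its own NN implementation, so treat this as a minimal sketch of the idea:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy targets: mean nextDay%Gain per weekday (hypothetical values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # 1=Mon .. 5=Fri
y = np.array([0.05, 0.04, 0.06, 0.03, -0.08])    # next-day %gain

n_neurons = 2                                     # the nNeurons parameter
W1 = rng.normal(0, 0.5, (n_neurons, 1)); b1 = np.zeros(n_neurons)
W2 = rng.normal(0, 0.5, n_neurons);      b2 = 0.0
lr = 0.05

# Full-batch gradient descent on the MSE loss
for _ in range(20000):
    h = np.tanh(x[:, None] @ W1.T + b1)           # hidden layer, (5, n_neurons)
    pred = h @ W2 + b2                            # linear output
    err = pred - y
    gW2 = h.T @ err / len(x); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h**2)           # backprop through tanh
    gW1 = dh.T @ x[:, None] / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# The learned "NN surface" over the 5 possible inputs
surface = np.tanh(x[:, None] @ W1.T + b1) @ W2 + b2
print(np.round(surface, 3))
```

With 1 or 2 tanh neurons the surface stays smooth; adding more neurons lets it twist between the five input points, which is exactly the overfitting seen in the 4-neuron chart.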

To decrease the randomness, we can use an ensemble method: we run several NNs, and each of them casts a vote. The votes of the members are aggregated as the average vote. The parameter that controls this is nRepeatLearning.
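A minimal sketch of the aggregation step (Python); the member forecasts are simulated with random noise standing in for the run-to-run training randomness, and the signal value is a made-up example:

```python
import numpy as np

rng = np.random.default_rng(2)

n_repeat_learning = 7    # number of ensemble members (nRepeatLearning)

# Stand-ins for independently trained NNs: each member's forecast of
# tomorrow's %gain is the true signal plus training noise (in the real
# system the noise comes from the random validation split).
true_signal = -0.08                                # e.g. a Friday forecast
member_votes = true_signal + rng.normal(0, 0.10, n_repeat_learning)

# Aggregate the members' votes as their average
ensemble_vote = member_votes.mean()
position = 1 if ensemble_vote > 0 else -1          # long on up-vote, short on down

print(round(ensemble_vote, 4), position)
```

Averaging n independent members shrinks the standard deviation of the vote by roughly a factor of sqrt(n), which is why the ensemble backtests vary less than the lonely voter.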

Let’s run the backtest. The parameters are

nNeurons = 2;

lookbackWindowSize = -1; % look back as far as it can

testSamplesRatio = 0.50;

nRebalanceDays = 1;

Because we use 50% of the samples for the initial training, the real forecast starts from sample 1461. So our backtest tests the last 2922 − 1461 = 1461 days, which is about the last 5.5 years.

In that period, the RUT went from 577 to 682, and in this period the buy & hold strategy would have given an 18% gain.

indexCloses(2922)/indexCloses(1461) = 1.18; (that is 577.9100->682.2500)
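The bookkeeping can be checked in a few lines; this is a Python restatement of the figures quoted above:

```python
# Sample bookkeeping for the backtest, using the figures from the text.
n_samples = 2922                       # daily RUT bars in the data set
test_samples_ratio = 0.50              # the testSamplesRatio parameter

n_train = int(n_samples * test_samples_ratio)   # 1461 bars for initial training
n_test = n_samples - n_train                    # 1461 bars actually forecast

# Buy & hold benchmark over the tested period (RUT 577.91 -> 682.25)
period_gain = 682.25 / 577.91 - 1
print(n_train, n_test, round(period_gain * 100, 2))   # -> 1461 1461 18.05
```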

This is the period from January 2005 to 2010. The 18% overall gain is not too good, because the stock market crash of 2008–2009 falls in the tested period.

1. Blind every day Up Voter:

winLose:, 52.84%, avgDailyGainP:, 0.03%, **projectedCAGR: 1.44%, periodGain: 18.05%**

That is exactly what we expected: 18% gain, the same as buy & hold.

2. Blind every day Down Voter:

winLose:, 47.16%, avgDailyGainP:, -0.03%, projectedCAGR: -1.42%, periodGain: -46.97%

It is interesting that the 18% gain can be transformed into a -46% loss if we short with FULL 100% exposure and every day is a rebalance day.

We know from separate studies that the problem is the rebalancing.

Note that if we did not rebalance, but just used a short & hold strategy, we would have lost only the -18% (that the buy & hold users gained).
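A two-day toy example (Python, with made-up prices) shows why a daily-rebalanced short loses money even on a round-trip path where short & hold ends flat:

```python
import numpy as np

# Toy path: the index rises 10%, then falls back to its starting level.
closes = np.array([100.0, 110.0, 100.0])
daily = closes[1:] / closes[:-1] - 1              # [+10%, -9.09%]

# Short & hold: enter short at the start, cover at the end.
short_and_hold = 1 - closes[-1] / closes[0]       # 0.0 on a round trip

# 100% short, rebalanced every day: each day you earn -1x the daily move.
daily_rebalanced_short = np.prod(1 - daily) - 1   # about -1.8%

print(round(short_and_hold, 4), round(daily_rebalanced_short, 4))
```

The compounding of -1x daily returns is what bleeds the rebalanced short (the same volatility drag that hurts leveraged inverse ETFs), which is how +18% for the index becomes -47% for the blind down voter over 5.5 years.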

The moral is that we should be very careful if we short, and we should not rebalance frequently when shorting. Usually, you are better off not shorting at all!

3. Lonely voter

nRepeatLearning = 1;

Note that because of the random daily decisions, every backtest run of the lonely voter gives different results.

Some backtest runs:

winLose: 54.07%, avgDailyGainP:, 0.07%, projectedCAGR: 3.65%, periodGain: 116.92%

winLose: 52.29%, avgBarGainP:, 0.00%, projectedCAGR: 0.08%, periodGain: -19.10%

winLose: 52.84%, avgBarGainP:, 0.02%, projectedCAGR: 0.88%, periodGain: 1.34%

That is too much variability; therefore, it is advisable to use the ensemble method.

4. Ensemble vote with 3 members

nRepeatLearning = 3;

winLose: 52.22%, avgBarGainP:, 0.04%, **projectedCAGR: 2.24%, periodGain: 47.57%**

5. Ensemble vote with 7 members

nRepeatLearning = 7;

winLose: 52.91%, avgBarGainP:, 0.04%, **projectedCAGR: 2.15%, periodGain: 43.92%**

The %gain chart of this run:

Note that the 7-member ensemble NN successfully **increased the 5.5-year cumulative gain from 18% to 44% (a 2.5x gain)**.

That is equivalent to increasing the CAGR from 1.44% to 2.15% (a 1.5x gain).

So, we **increased the CAGR by 50%**.

Also note that because we go both short and long, our %gain chart has a smaller drawdown than the buy & hold strategy.

One might ask: so what now?

The pattern (down Mondays) is present in the samples. After one recognizes the pattern, one can write a simple deterministic algorithm that shorts the market at the end of every Friday. However, the main point is: what if we (humans) cannot recognize the pattern in the data? The NN can recognize it automatically. The NN cannot ‘explain’ to us why it forecasts down days for Mondays; it gives us no reasoning. It is a black box.

So, in this 1-dimensional case it is easy to give some human reasoning; however, try to deduce logical reasoning when the output depends on a 100-dimensional input vector. Humans have no chance.

Verdict: it is dandy that it is so simple; with only a 1-dimensional input, it is a profitable application of the NN.

Our main goal in the future is to find more of these anomalies and add them to the input space.
