Conceptual Framework: “Machine learning based, non-objective probability-model for regular and extreme values of stock returns”

11Nov12

Now and then everybody needs a conceptual framework.

Our conceptual framework is different from the link above.

Here is the outline of a system that can help in stock market decisions. One of the best ways to illustrate a concept is a flow chart, which shows the viewer the flow of information or the sequential steps.

Here is our future conceptual framework:

 

The system is general enough that it can be used to predict the SPX, the RUT, the VIX, the EUR/USD exchange rate, or anything else.

 

Let’s describe the bubbles in the flow chart in a little more detail:

 

1. Learning probability distributions from historical data. Here we use the term Machine Learning in a general sense: we want to extract useful predictions from historical data using the machine, the computer.

Let’s imagine a simple system that we usually don’t regard as a Machine Learning system.

Imagine that today the SPX is above its 200-day Simple Moving Average, SMA(200). A technical trader wants to place a bet on tomorrow’s market direction. How does he decide whether to go short or long?

He looks back at the last 20 or 100 years of history and calculates that whenever the spot SPX was above the SMA(200), its next-day return was 0.1% on average, and when it was below the SMA(200), its next-day return was -0.2%. (These numbers are for illustration only.)

So our technical trader ‘learns from the past samples’ that the expected profit for tomorrow is positive, so he goes long the next day and buys SPX futures.

This is a very simple ‘machine learning’ system that anybody can ‘calculate’ in Excel in about an hour.
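For readers who would rather try it in code than in Excel, here is a minimal Python sketch of the same idea; the data source, file name, and column name are assumptions, not part of the original method.

```python
import pandas as pd

# A minimal sketch of the Excel-style SMA(200) 'learning' step described above.
# Assumes `spx` is a pandas Series of daily SPX closing prices indexed by date.
def sma200_next_day_means(spx: pd.Series) -> pd.Series:
    sma200 = spx.rolling(200).mean()           # 200-day simple moving average
    next_day_ret = spx.pct_change().shift(-1)  # tomorrow's return, aligned to today
    above = spx > sma200                       # regime flag: spot above SMA(200)?
    # Average next-day return in each regime: this is the 'learned' expectation
    return next_day_ret.groupby(above).mean()

# Usage (hypothetical CSV of closing prices):
# spx = pd.read_csv("spx.csv", index_col=0, parse_dates=True)["Close"]
# print(sma200_next_day_means(spx))   # e.g. True: +0.001, False: -0.002
```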

 

Our conceptual framework is general enough that we didn’t fix the machine learning method. It can be the simple Excel calculation mentioned before, or it can be a Linear Regression, a Neural Network, an SVM (Support Vector Machine), a genetic algorithm, anything.

The different learning algorithms create different mathematical models.

We try to use a machine learning method that is deterministic, so the same result can be reproduced across different backtests, but determinism is not a must.

We prefer machine learning algorithms that learn a probability distribution function (PDF), and the Excel approach is not like that. The reason is that in one of the next steps we need not only the mean return (we actually prefer the median return), but also the expected volatility, the standard deviation.

Therefore we prefer to work with probabilities.

 

We think the biggest mistake researchers generally make is that they study only the mean return, the expected profit or loss, and fail to determine the expected volatility. Take the previously mentioned SMA(200) crossover method: we found that the expected profit is positive above the SMA(200), but we know nothing about the volatility. We contend that forcing volatility down can even increase the profit in the long term, which contradicts the general efficient market theory, which says that the more profit we expect, the more volatility we should suffer.
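A quick, hypothetical simulation illustrates the point: two return streams with the same 0.1% daily arithmetic mean but different volatilities compound to very different long-term results (all numbers below are made up for illustration).

```python
import numpy as np

# Toy illustration of volatility drag: same arithmetic mean, different StDev.
rng = np.random.default_rng(0)
mean, n = 0.001, 100_000                        # 0.1% daily mean, many years of days

for sigma in (0.01, 0.045):                     # low-vol vs. high-vol strategy
    rets = rng.normal(mean, sigma, n)
    cagr = np.exp(np.log1p(rets).sum() * 252 / n) - 1   # compounded annual growth
    print(f"sigma={sigma:.3f}  approx CAGR={cagr:+.1%}")

# The high-volatility stream grows roughly as mean - sigma**2/2 per day,
# so its CAGR is far lower (near zero or negative) despite the same mean.
```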

 

 

2.

The ‘new’ classical probability approach (associated with Ronald Fisher) uses only objective information coming from historical observations. That is the model we built in step 1. However, long before Fisher’s method of probability, others, like Bayes more than a century earlier, held that probabilities should be defined subjectively, based on prior beliefs.

Consider Nassim Taleb’s turkey example. A farmer feeds the turkey nicely every day for 1000 days. The turkey is very happy with its friend, the farmer. However, on day 1001 it is Christmas time, and the farmer comes with a knife instead of food. A Fisher-following mathematician would build up the turkey’s probability model using only the historical data, the last 1000 days of observations.

Fisher wouldn’t use other ‘fundamental information’, like the ‘general knowledge’ (belief) that

– farmers’ turkeys are eaten in the end anyway (almost without exception), as that is the purpose of raising turkeys;

– as Christmas day approaches, there is a higher and higher probability that the farmer will bring the knife instead of the food.

This fundamental knowledge doesn’t fit into the Fisher approach, but Bayes and Pascal would happily use such information as well to build up the PDF (Probability Distribution Function).

Plato had no idea about PDFs, but even to him this would have seemed the better approach. Probability distributions are eternal objects. They exist irrespective of the observations. No matter how many observations we make, we cannot fully know the probability distribution that way.

This is especially true for extreme values, outliers, and power-law distributions, in which extreme events occur very rarely, so observing them is quite difficult or impossible.

We can call this ‘new’ (rather old) approach belief-based, subjective, or non-objective probability. We tend to prefer the term ‘non-objective’, but that is only personal taste; they mean the same thing.

 

 

What do we mean by subjectivity (non-objectivity) in the context of stock market prediction?

Things that affect the PDF but cannot be observed in the historical samples of the last 3 years. Examples:

– events like a USA presidential election in the coming week (because the last 3 years of data doesn’t contain the previous one);

– our general belief (from reading news and media) that iPhone sales are ‘probably’ very good, because there were long queues in front of Apple stores;

– Mario Draghi gives a speech saying that he is willing to do everything to save the euro, and we believe he will;

– our belief that cloud computing will be a big success in the future, so cloud companies will perform better than other technology companies.

 

Because these fundamental factors cannot be captured by the historical observations in step 1, we include their effects in our mathematical model here in step 2.

But how? It is not easy.

In step 1 we synthesized a probability distribution based on historical samples. We can use Gaussian distributions, log-normal distributions, Lévy stable distributions, etc. If we use a Gaussian distribution, it is described by 2 parameters, Mean and StDev, so our belief in step 2 can modify these parameter values. For example, a bullish (bearish) belief can increase (decrease) the Mean. If we expect higher volatility (say, because of a coming USA election), our belief increases the StDev. If we expect lower volatility (because the ECB starts to buy Southern European bonds), we decrease the StDev. Unfortunately, we prefer to work with log-normal and Lévy stable distributions. Those have more obscure parameters, so it is not as easy to express our belief as in the Gaussian case.
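As a sketch of what this belief injection could look like for the simple Gaussian case (the function names and tilt sizes below are our own, purely hypothetical choices):

```python
import numpy as np

def fit_gaussian(returns: np.ndarray) -> tuple[float, float]:
    """Step 1: 'objective' Mean and StDev learned from historical samples."""
    return float(np.mean(returns)), float(np.std(returns))

def apply_belief(mean: float, stdev: float,
                 mean_tilt: float = 0.0, vol_mult: float = 1.0) -> tuple[float, float]:
    """Step 2: bullish belief -> mean_tilt > 0; expected calm -> vol_mult < 1."""
    return mean + mean_tilt, stdev * vol_mult

# Example: slightly bullish view, but an election next week raises expected volatility.
# mu, sigma = fit_gaussian(hist_returns)
# mu, sigma = apply_belief(mu, sigma, mean_tilt=+0.0005, vol_mult=1.3)
```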

There is another question too: how much should we change these parameters?

There is no general, formalized answer for this.

We suggest modifying them a little bit, running 100K simulations based on the new parameters, and calculating the CAGR, maxDD, and StDev of the PV (portfolio value) to see the effect of those modifications.
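A rough sketch of that sensitivity check, assuming Gaussian daily returns for simplicity (the parameter values in the usage comment are hypothetical):

```python
import numpy as np

def simulate_stats(mu: float, sigma: float, n_days: int = 100_000, seed: int = 42):
    """Simulate a PV path with the modified parameters and report CAGR, maxDD, StDev."""
    rng = np.random.default_rng(seed)
    rets = rng.normal(mu, sigma, n_days)            # swap in any other sampler here
    pv = np.cumprod(1.0 + rets)                     # portfolio value path
    cagr = pv[-1] ** (252 / n_days) - 1
    max_dd = np.max(1.0 - pv / np.maximum.accumulate(pv))   # maximum drawdown
    return cagr, max_dd, rets.std()

# print(simulate_stats(0.0005, 0.012))   # e.g. after a small bullish mean tilt
```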

 

 

3.

Using our non-objectively modified probability distribution, we generate 1 million samples of the next-day return.
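As an illustrative sketch, a Student-t distribution stands in here for the heavier-tailed families mentioned above; the location, scale, and tail parameters are made-up values.

```python
from scipy import stats

# Step 3 sketch: draw 1 million next-day return samples from the
# belief-modified distribution (hypothetical parameters).
mu, sigma, dof = 0.0005, 0.012, 3.0
samples = stats.t.rvs(dof, loc=mu, scale=sigma, size=1_000_000, random_state=7)
```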

 

 

4.

We determine the Mean, Median, StDev and other statistics of the next-day return from the simulated samples. This is unnecessary for a Gaussian distribution (whose statistics are known analytically), but it is necessary for general probability distributions.
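Continuing the sketch above with the `samples` array drawn in step 3:

```python
import numpy as np

# Step 4 sketch: summary statistics of the simulated next-day returns.
mean_ret   = samples.mean()
median_ret = float(np.median(samples))
stdev_ret  = samples.std()
p01, p99   = np.percentile(samples, [1, 99])    # a quick look at both tails
print(f"mean={mean_ret:+.4%}  median={median_ret:+.4%}  stdev={stdev_ret:.4%}")
```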

 

 

5.

Because we work with time series and place bets every day, a positive expected next-day return does not by itself mean we should place a bet.

See the volatility drag discussion in the previous post for more explanation.

In short, if the volatility is high, it is better to stay in cash, even if the expected profit is positive.

In step 5, based on the StDev we determine a minimum threshold for the Mean. If the simulated Mean/Median is smaller in magnitude than this threshold, we stay in cash.

For example: with a 4.5% daily StDev, the volatility drag is roughly StDev²/2 ≈ 0.1% per day, about 26% annualized, so the threshold is 0.1% daily.

It means that if the simulated Mean/Median is positive but less than 0.1%, we stay in cash and don’t go Long. Similarly, if the simulated Mean/Median is negative but greater than -0.1%, we stay in cash and don’t go Short.
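A minimal sketch of this decision rule (the function name and arguments are our own; the threshold follows the StDev²/2 approximation used above):

```python
def decide(median_ret: float, stdev: float) -> str:
    """Stay in cash unless the simulated Median beats the volatility-drag threshold."""
    threshold = stdev ** 2 / 2            # approx. daily volatility drag
    if median_ret > threshold:
        return "LONG"
    if median_ret < -threshold:
        return "SHORT"
    return "CASH"

# With a 4.5% daily StDev the threshold is 0.045**2 / 2, about 0.1% per day,
# so a simulated median of +0.05% keeps us in cash: decide(0.0005, 0.045) -> 'CASH'
```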

 

To determine these thresholds we can use the already generated 1M simulation samples, treating them as one long time series.

 

 

 

 

This system can be called a Conceptual Framework, but we prefer to call it the “Machine learning based, non-objective probability-model for regular and extreme values of stock returns”.

We build up and use a probability model that can employ non-Gaussian, heavy-tailed distributions. By generating 1 million simulations or more (the system is simulation based), we can model not only Gaussian moves but also extreme, long-tailed stock market moves.

 
