Archive for March, 2011

In the previous post, we studied the money management idea with an adaptive algorithm, the Naive Learner (2 bins). That algorithm used the previous 200 days as a lookback period to determine what happened on Up days and on Down days in the past, and made a forecast according to those statistics. See the details there; for […]
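The "2 bins" idea from that post can be sketched roughly as follows. This is a minimal illustration, not the post's actual code: the function name, the choice of conditioning on the previous day's sign, and the use of the bin's mean return as the forecast are all assumptions for demonstration.

```python
import numpy as np

def naive_learner_forecast(returns, lookback=200):
    """Sketch of the Naive Learner (2 bins): over the last `lookback`
    daily returns, bin each day by whether the previous day was Up or
    Down, record what happened next, and forecast tomorrow from the
    statistics of the bin that matches today's direction."""
    window = list(returns[-lookback:])
    up_next, down_next = [], []
    for prev, nxt in zip(window[:-1], window[1:]):
        # Bin the next-day return by the sign of the previous day.
        (up_next if prev > 0 else down_next).append(nxt)
    # Today's direction selects which bin supplies the forecast.
    bin_ = up_next if window[-1] > 0 else down_next
    return float(np.mean(bin_)) if bin_ else 0.0
```

With a 200-day window, the forecast is simply the historical mean next-day return conditioned on the current day's direction.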


This is an unusual post, because it is not about training any Neural Network (NN). It is not about training anything at all. It is not a post on Artificial Intelligence. It is more about investment and position sizing. That is good news for some, because this post will be easily understandable by non-neural […]


About half a year ago, we experimented with the normalization of the inputs and outputs in the Matlab version. For example, we discovered that Matlab uses automatic normalization, but it is not adequate, and Matlab's target normalization was even buggy. We learned our lesson that time. Instead of relying on the normalization mechanism of the […]
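The lesson of doing normalization by hand instead of trusting the toolbox can be illustrated with a small sketch. This is not the code from those posts; the min-max range of [-1, 1] and the per-column scaling are assumptions chosen for the example.

```python
import numpy as np

def fit_minmax(data, lo=-1.0, hi=1.0):
    """Fit a per-column min-max scaler on `data` and return a pair of
    functions: `transform` maps values into [lo, hi], and `inverse`
    maps scaled values (e.g. network outputs) back to original units."""
    dmin = data.min(axis=0)
    dmax = data.max(axis=0)
    scale = (hi - lo) / (dmax - dmin)
    def transform(x):
        return lo + (x - dmin) * scale
    def inverse(y):
        return dmin + (y - lo) / scale
    return transform, inverse
```

Keeping the fitted `inverse` alongside `transform` is the point: the same parameters that normalized the targets must denormalize the predictions, which is exactly the step a buggy built-in can silently get wrong.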


In neural network training, one of the most difficult problems is finding the best network structure. This includes the optimal number of layers, the number of neurons in each layer, the bias per layer, and the activation functions. These are usually determined by trial and error. We do the same here by perturbing the activation […]
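A trial-and-error structure search of this kind can be sketched as a plain loop over candidate settings. The candidate lists and the scoring interface below are hypothetical, not the grids used in the post; `evaluate` stands in for training a network with the given structure and returning its validation error.

```python
import itertools

# Hypothetical candidate grids; the post's actual ranges may differ.
HIDDEN_SIZES = [4, 8, 16]
ACTIVATIONS = ["tanh", "sigmoid", "linear"]

def trial_and_error_search(evaluate):
    """Enumerate (hidden size, activation) candidates, score each with
    the caller-supplied `evaluate(size, activation)` function (lower is
    better), and return the best configuration and its score."""
    best_cfg, best_score = None, float("inf")
    for size, act in itertools.product(HIDDEN_SIZES, ACTIVATIONS):
        score = evaluate(size, act)
        if score < best_score:
            best_cfg, best_score = (size, act), score
    return best_cfg, best_score
```

Perturbing only the activation function, as the post does, corresponds to holding the other grids fixed and varying just `ACTIVATIONS`.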


Encog quality

06Mar11

The previous post compared the speed of Encog and the Matlab Neural Network (NN) Toolbox. This post compares Encog and Matlab qualitatively: can Encog predict with the same power as Matlab did for us in the past? Before answering that question, we amend our previous post. In the previous post, we contended that a test […]