Our project:
Alzheimer's Disease: Early Detection & Entrainment

Time series analysis – Explanation of Toki’s algorithm

It is common knowledge that turtles have a good memory. Toki takes advantage of this fact and memorizes your 30 latest clicks. So, Toki always has a sequence of 30 clicks in her mind that goes like right, right, left, left, left, right, right, right, ... After each click she forgets the oldest element of that sequence and adds the newest one right at the end.

This sequence is converted into a series of zeros and ones for the following analysis. Zero corresponds to a left-click and one corresponds to a right-click. Thus, the sequence above becomes 1, 1, 0, 0, 0, 1, 1, 1, ....
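The sliding memory described above can be sketched in a few lines of Python. This is an assumed illustration, not the project's actual code; the names `MEMORY_SIZE` and `record_click` are made up for the example.

```python
from collections import deque

# Keep the 30 most recent clicks in a fixed-size buffer,
# encoded as 0 (left-click) and 1 (right-click).
MEMORY_SIZE = 30

clicks = deque(maxlen=MEMORY_SIZE)  # the oldest element is dropped automatically

def record_click(button):
    """Append the newest click; 'left' -> 0, 'right' -> 1."""
    clicks.append(0 if button == "left" else 1)

for button in ["right", "right", "left", "left", "left", "right", "right", "right"]:
    record_click(button)

print(list(clicks))  # [1, 1, 0, 0, 0, 1, 1, 1]
```

Using `deque(maxlen=30)` means forgetting the oldest click happens automatically once the buffer is full.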

Neural networks are a standard method for recognizing patterns in data sets. To predict from such a sequence which button the player will push next, we use the neural net shown below.

[Figure: interactive diagram of the neural network used for prediction.]

The neural network employed consists of three layers: an input layer with three neurons I₁, I₂, I₃; a hidden layer in the middle with six neurons; and an output layer with a single neuron O₁. The data for the prediction are fed into the input layer, and the output layer then delivers the prediction.

Each neuron of one layer is connected to every neuron in the next layer, i.e., I₁ has six connections, one to each neuron in the hidden layer. The same applies to I₂ and I₃, giving 18 connections between the input layer and the hidden layer. The six neurons in the hidden layer are connected to the neuron in the output layer in the same fashion, giving another six connections. Summing up, we get a total of 24 connections.
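The connection count above follows directly from the layer sizes, as this small sketch shows (the variable names are assumptions for illustration):

```python
# Layer sizes as described in the text: 3 input, 6 hidden, 1 output neuron.
layer_sizes = [3, 6, 1]

# Fully connected layers: multiply the sizes of each adjacent pair of layers.
connections = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(connections)  # 3*6 + 6*1 = 24
```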

We decided to use this structure for the neural network because it showed good results against a wide range of players. To determine the structure we used an evolutionary algorithm, i.e., an algorithm that repeatedly tries out slightly modified structures and keeps the ones that would have predicted past clicks best.
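The evolutionary idea can be illustrated with a toy example. This is not the project's actual algorithm: the fitness function below is a made-up stand-in that simply pretends structures with six hidden neurons performed best on past data.

```python
import random

random.seed(0)

def fitness(hidden_size, best=6):
    # Placeholder fitness: in reality this would measure how well a net
    # with this many hidden neurons would have predicted past clicks.
    return -abs(hidden_size - best)

hidden = 2  # start with a small candidate structure
for _ in range(50):
    candidate = max(1, hidden + random.choice([-1, 1]))  # small mutation
    if fitness(candidate) >= fitness(hidden):            # keep improvements
        hidden = candidate

print(hidden)  # drifts towards 6 in this toy setup
```

Mutating the structure and keeping only improvements is the core loop of such an algorithm; real implementations evaluate many candidates in parallel.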

Finding the appropriate structure of a neural net, in combination with an optimal training method, is the actual challenge one has to meet to produce reliable predictions. Too big a neural net needs a very long training process and may overfit the problem: instead of extracting the relevant information, it will mainly interpret the noise in the data.

The strength of each connection is represented by a number (its weight) which indicates how much of a signal is passed from one neuron to the next. Briefly speaking, the strength of a connection tells you how important that connection is for the neural network.

Each neuron adds up the signals coming into it and passes the sum to an activation function. The result of the activation function, which is typically a logistic or hyperbolic tangent function, is then passed on to the neurons in the next layer via the subsequent connections.
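A single neuron of this kind can be sketched as follows. The function names and weight values are assumptions for illustration, not taken from the project:

```python
import math

def logistic(x):
    """Logistic activation: squashes any sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias=0.0):
    """Weight each incoming signal, sum, then apply the activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return logistic(total)

# Example: inputs 0, 1, 1 with some arbitrary connection strengths.
print(neuron_output([0, 1, 1], [0.5, -0.3, 0.8]))  # logistic(0.5) ≈ 0.62
```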

A concrete example

Before the neural network is trained on the data, the strengths of the connections are set to random values. At this stage the neural net is, so to speak, in a state of ignorance and first needs to learn to understand the data. This can be compared with the brain of a newborn, whose neurons are not yet connected in a way that lets it, for example, comprehend a language. Just as it takes a long time until a child can easily speak a language, the learning process for a neural network takes a relatively long time. Once the strengths of the connections have been optimized for the problem, however, new data can be processed very quickly.

To explain the concrete training process, we look at a training data set generated from the sequence of 30 clicks that Toki keeps in her mind:

Let us say that you clicked the following sequence during the last 30 clicks: 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0. As you remember, 0 stands for a left-click and 1 stands for a right-click.

Above we already revealed the structure of the neural net we are going to use: three input neurons and one output neuron. Consequently, we always pass three consecutive clicks from the long sequence in Toki's memory to the input neurons and compare the resulting value of the output neuron with the click that follows those three, as that is the one we want to predict.

The first data set the randomly initialized net receives consists of the first three clicks of the sequence, i.e., 0, 1, 1 for I₁, I₂, and I₃. With this input the net might produce a result for O₁ of, say, 0.84. The reference value for O₁ is the fourth click of the sequence, that is, 0. Next, the strengths of the individual connections are slightly modified so that the result of the net gets closer to 0. After that the net receives the next series of three clicks, viz., clicks No. 2, 3, and 4, as input data: 1, 1, 0 for I₁, I₂, and I₃. Again, the net produces a result for O₁ which deviates from the reference value 1 of click No. 5, and the strengths of the connections are modified again to reduce the deviation. This process goes on until a specific number of iterations is reached or the average deviation falls below a fixed minimum. The training process is then over, and the neural net can be used to predict your next click by passing your three latest clicks, i.e., 0, 1, 0, to the input neurons.
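The way the training pairs are built from Toki's memory can be sketched like this (assumed code; only the sequence itself is taken from the text above):

```python
# The 30 clicks from the example above.
sequence = [0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0,
            1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Each window of three consecutive clicks is an input for I1, I2, I3;
# the click that follows the window is the reference value for O1.
pairs = [(sequence[i:i + 3], sequence[i + 3]) for i in range(len(sequence) - 3)]

print(pairs[0])       # ([0, 1, 1], 0) -> first input, reference value 0
print(pairs[1])       # ([1, 1, 0], 1) -> clicks No. 2-4, reference click No. 5
print(sequence[-3:])  # [0, 1, 0]      -> input for predicting your next click
```

A 30-click memory thus yields 27 training pairs, and the three latest clicks are what the trained net finally receives to make its prediction.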

We hope that this short introduction gave you some insight into the world of time series analysis and has aroused your interest in modern methods of data analysis.

If you have any questions, don't hesitate to contact us.