Anyone can use a neural network as a black box, but that rarely provides meaningful results. Getting them requires many steps of optimisation and careful thought. In this post I talk about data and network optimisation.
In general, it would be best to give the network all your raw data, but there are two problems. First, network training is difficult and works better on simple problems; second, given too much data the network tends to overfit and mistake noise for signal.
The second task is deciding on the network architecture. Here, too, more neurons and more layers would in theory be better, but training large networks becomes expensive and is prone to problems.
So the guidelines are: give the network data that is preprocessed and meaningful, and keep the network as simple as possible but as complex as necessary. At the same time, these choices ought to be made such that you do not introduce a preference for the outcome.
Network structure is a very difficult topic. In a quick, very simplified summary: more layers make the network smarter, and more neurons give it more memory. Too many layers make the network too difficult to train, and too many neurons are a recipe for overfitting (the network may then simply "remember" the features of the training data).
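To make this concrete, here is a minimal sketch of such a network in Keras. The layer sizes, input width, and single price output are just for illustration, not my actual configuration:

```python
import tensorflow as tf

# A small fully connected network: depth (layers) adds abstraction,
# width (neurons per layer) adds memory. All sizes here are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # predict a single value, e.g. tomorrow's price
])
model.compile(optimizer="adam", loss="mae")
model.summary()
```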
It can sometimes be useful to squeeze a network, i.e. to make a middle layer smaller than the outer layers. This forces the data through a bottleneck, and the network performs an autonomous reduction to the most relevant components.
This helps against overfitting, since at the narrow layer the network has to forget everything that is not essential, and it helps training by reducing the number of neurons. In this case, however, with the rather small amount of available data and the small number of neurons, it turned out not to be necessary. I might make a detailed post on this later.
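A squeezed network would look something like this (again with made-up sizes):

```python
import tensorflow as tf

# A "squeezed" network: the 4-neuron middle layer is a bottleneck that
# forces an autonomous reduction of the input to its most relevant parts.
squeezed = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(4, activation="relu"),   # bottleneck layer
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
squeezed.compile(optimizer="adam", loss="mae")
```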
For the data pre-processing I decompose the data into different timescales. Then I search for the most significant features and combine them into the full input, as sketched below.
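One way such a decomposition can look is rolling means at several window lengths. The windows here (long trend, roughly 1.5 months, 2 days) are illustrative stand-ins for the features discussed below, and the price series is synthetic:

```python
import numpy as np
import pandas as pd

# Synthetic daily prices as a stand-in for the real series.
rng = np.random.default_rng(0)
prices = pd.Series(8000 * np.cumprod(1 + rng.normal(0.0005, 0.025, 1000)))

# Split the series into timescales with rolling means.
feats = pd.DataFrame({
    "trend_long": prices.rolling(180).mean(),         # overall growth
    "trend_6w":   prices.rolling(45).mean(),          # ~1.5-month structure
    "resid_2d":   prices - prices.rolling(2).mean(),  # short-term wiggle
}).dropna()
```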
What I find is the following.
The data is mostly a random walk with no strong preference. Just using today's price as the prediction for tomorrow gives an average daily error of 2.5068%. This is the baseline the neural network has to beat.
When you take only very long trends into account, you find an overall growth, and your prediction becomes today's value multiplied by the average growth. This reaches an error of 2.4893%, or 0.0174% better than doing nothing. For this you really do not need a neural network: if you invest in bitcoin and expect that past growth will continue in the future, this is your strategy.
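Both baselines are a few lines of code. A sketch on synthetic data (the numbers above of course come from the real price series):

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 8000 * np.cumprod(1 + rng.normal(0.0005, 0.025, 1000))

def mean_abs_pct_error(pred, actual):
    return np.mean(np.abs(pred - actual) / actual) * 100

today, tomorrow = prices[:-1], prices[1:]

# Baseline 1: persistence -- predict tomorrow = today.
err_persistence = mean_abs_pct_error(today, tomorrow)

# Baseline 2: drift -- today's value times the average daily growth.
growth = np.mean(prices[1:] / prices[:-1])
err_drift = mean_abs_pct_error(today * growth, tomorrow)

print(f"persistence: {err_persistence:.4f}%  drift: {err_drift:.4f}%")
```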
Now let's really start to use the network. I find a significant 2-day trend in the data: when today was a good day, tomorrow tends to be a bad day. Many people will naively have expected this, but the signal is in reality incredibly weak. Using the network on this effect improves the error by an additional 0.0017% over just expecting the average growth. Personally, I had felt that the previous-day pushback should be in the 0.1% region, but it is much smaller than that.
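A simple way to see this pushback, independently of the network, is the lag-1 autocorrelation of daily returns; a negative value means good days tend to be followed by bad ones:

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 8000 * np.cumprod(1 + rng.normal(0.0005, 0.025, 1000))

# Daily returns, then correlation of each day's return with the next day's.
returns = prices[1:] / prices[:-1] - 1
lag1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"lag-1 autocorrelation of daily returns: {lag1:+.4f}")
```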
The second relevant structure is a 1.5-month feature. It can be thought of as a support/suppression line, but in reality it is more complex than that. Added to the analysis, this contributes another 0.0094% and is a much bigger effect than the 2-day feature.
All other features were too small to be included in the network given the limited training data. Using all the data at once (enabling correlations), I obtain a total error of 2.4737%, an improvement of 0.0157% over simply expecting past growth. That may not sound like much, but remember that buying and holding only provides an expected 0.0174% daily edge.
This means that the neural network analysis is almost twice as good: 0.0331% total improvement over doing nothing, versus 0.0174% for buy and hold. These numbers were tested on data that was not used for training the network, but note that all of this assumes no new information entering the markets.
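The held-out test works roughly like this sketch: a chronological split, so the network is judged only on days it has never seen. The model and input window are the illustrative ones from above, not my actual setup:

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
prices = 8000 * np.cumprod(1 + rng.normal(0.0005, 0.025, 1000))

# Inputs: the last `lookback` days; target: the next day's price.
lookback = 16
X = np.array([prices[i:i + lookback] for i in range(len(prices) - lookback)])
y = prices[lookback:]

# Chronological split -- never shuffle a time series before splitting.
split = int(0.8 * len(X))
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(lookback,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)

pred = model.predict(X_test, verbose=0).ravel()
err = np.mean(np.abs(pred - y_test) / y_test) * 100
print(f"held-out mean error: {err:.4f}%")
```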
Exploiting it also requires work, plus potentially transaction and conversion fees, and thus may not be worthwhile for small amounts of money.
Finally, some plots.
This is the repeated one-day forecast. The network expects a minor correction down to 8839$ tomorrow and then a slowing relaxation towards 10600$ over the following days. Note, however, that repeated application of 1-day predictions diverges quickly, as the error grows exponentially; I will post more reliable long-term forecasts later. Also note that the prediction is smooth: the network filters out the noise and does not generate fake noise of its own.
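The repeated forecast simply feeds each 1-day prediction back in as the newest input. A sketch, reusing the illustrative model from above:

```python
import numpy as np

def iterate_forecast(model, history, n_days, lookback=16):
    """Feed each 1-day prediction back in as the newest input.
    Errors compound, so only the first few steps are trustworthy."""
    window = list(history[-lookback:])
    preds = []
    for _ in range(n_days):
        x = np.array(window[-lookback:], dtype=float).reshape(1, -1)
        p = float(model.predict(x, verbose=0)[0, 0])
        preds.append(p)
        window.append(p)  # the prediction becomes the newest "observation"
    return preds

# e.g. forecast = iterate_forecast(model, prices, n_days=14)
```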
This is the best fit. We see that the network rejects most of the data as noise and finds a small 2-day wiggle plus a longer, roughly 1.5-month feature.
This post is in no way financial advice.