I've run several tests with different time frames and indicators, and I seem to be getting mostly decent performance on the In Sample data (almost too good compared to what I've been able to create manually), but the Out of Sample portion falls off considerably. Using 3 years of data, the last 6 months' profits are mostly flat or slightly down in almost every case where I achieved consistent results in the In Sample period. I can manually build an algorithm with similar Profit Targets and Stop Losses that performs much better in Out of Sample testing.
Should it be weighting Out of Sample results more aggressively when training the model? That is what matters most when building an algorithm that actually works on the live market: the Out of Sample and forward-testing results need to be broadly consistent with the overall results. Otherwise, you're reliant on an auto-optimization process that curve-fits the model into something useless on the live market/OOS data.
Is there some way of testing and creating models with this software that reduces curve fitting, which I may have missed?
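To illustrate the kind of weighting I mean, here is a minimal sketch (not from any particular software; the function name, weights, and penalty are my own hypothetical choices) of a fitness function that blends In Sample and Out of Sample profit, weights OOS more heavily, and penalizes candidates whose OOS results fall short of IS, which is the classic symptom of curve fitting:

```python
# Hypothetical fitness function for an auto-optimizer: weight OOS
# performance above IS and penalize IS/OOS divergence. All weights
# and names here are illustrative assumptions.

def weighted_fitness(is_profit, oos_profit,
                     oos_weight=0.7, divergence_penalty=0.5):
    """Blend IS and OOS profit, then subtract a penalty when OOS
    falls short of IS (a sign of overfitting to the IS window)."""
    blended = (1 - oos_weight) * is_profit + oos_weight * oos_profit
    shortfall = max(0.0, is_profit - oos_profit)
    return blended - divergence_penalty * shortfall

# A curve-fit candidate: great In Sample, flat Out of Sample.
overfit = weighted_fitness(is_profit=100.0, oos_profit=0.0)

# A robust candidate: modest but consistent IS and OOS profit.
robust = weighted_fitness(is_profit=40.0, oos_profit=35.0)

# The robust candidate scores higher despite lower IS profit.
print(overfit, robust)
```

Under this scoring, the optimizer would prefer the consistent strategy over the one with spectacular In Sample numbers, which is the behavior I'd expect from a tool that is trying to avoid curve fitting.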