Data mining applications using non-linear scientific methods
Author
Ramakrishnan Arun
Supervisor
Khoo, Guan Seng
Abstract
This thesis describes in detail two novel applications of the back-propagation neural network. In the first application, the neural network is viewed as a component extractor. Here, the network attempts to dynamically find the best indicators (through non-linear weighted averaging) that give a trading signal matching the perfect-foresight signal as closely as possible. The results obtained after training the network on the Kuala Lumpur Stock Exchange Composite Index are presented and discussed. In the second application, the network is used to forecast the modified regressed slopes of price returns. Thus, the ideas of both regression and neural networks are fruitfully combined. The forecasted value is used in a trading strategy that is reasonable and intuitive. The assumptions involved in using the regressed slope are inspected critically. Attention is given to performance measurement; this is accomplished by means of the Sharpe Ratio, which avoids the ambiguity that may result from benchmarking a strategy against other indicators.
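For concreteness, the short C sketch below shows one way the Sharpe Ratio of a series of strategy returns could be computed as mean excess return divided by the standard deviation of excess returns; the return series and risk-free rate are hypothetical illustrations and are not data or code from the thesis.

#include <stdio.h>
#include <math.h>

/* Sharpe Ratio = (mean excess return) / (standard deviation of excess returns).
   The return series and risk-free rate used below are hypothetical. */
double sharpe_ratio(const double *returns, int n, double risk_free)
{
    double mean = 0.0, var = 0.0;
    int i;
    for (i = 0; i < n; i++)
        mean += returns[i] - risk_free;
    mean /= n;
    for (i = 0; i < n; i++) {
        double d = (returns[i] - risk_free) - mean;
        var += d * d;
    }
    var /= (n - 1);                     /* sample variance */
    return mean / sqrt(var);
}

int main(void)
{
    double strategy[] = { 0.012, -0.004, 0.021, 0.007, -0.010, 0.015 };
    int n = sizeof(strategy) / sizeof(strategy[0]);
    printf("Sharpe ratio: %f\n", sharpe_ratio(strategy, n, 0.002));
    return 0;
}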
In the early chapters, we start by considering the basic neural network unit, the linear threshold gate, and investigate its computational capability. The ideas are then generalized to polynomial threshold gates. Having seen how Boolean functions can be realized, some fundamental results on the capability of neural networks to approximate continuous functions are summarized. Back-propagation is introduced as an intuitive yet powerful learning rule for neural networks. Issues pertaining to the convergence, and speed of convergence, of back-propagation networks are discussed in detail; these include weight initialization, tuning the learning rate and momentum parameters, and the effects of the nature of the activation function. The generalization capability of back-propagation networks is also investigated, and a widely used generalization technique, cross-validation, is presented.
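As an illustration of the learning-rate and momentum parameters mentioned above, the following minimal C sketch applies the standard gradient-descent weight update with momentum, delta_w(t) = -eta * dE/dw + alpha * delta_w(t-1). The weights and gradients are placeholder values, not the network studied in the thesis.

#include <stdio.h>

#define N_WEIGHTS 4

/* Minimal sketch of the back-propagation weight update with momentum:
   delta_w(t) = -eta * dE/dw + alpha * delta_w(t-1).
   The gradient values supplied here are hypothetical placeholders. */
void update_weights(double *w, double *prev_delta, const double *grad,
                    int n, double eta, double alpha)
{
    int i;
    for (i = 0; i < n; i++) {
        double delta = -eta * grad[i] + alpha * prev_delta[i];
        w[i] += delta;
        prev_delta[i] = delta;          /* remembered for the next step */
    }
}

int main(void)
{
    double w[N_WEIGHTS]    = { 0.10, -0.20, 0.05, 0.30 };
    double prev[N_WEIGHTS] = { 0.0, 0.0, 0.0, 0.0 };
    double grad[N_WEIGHTS] = { 0.02, -0.01, 0.03, 0.005 };  /* hypothetical gradients */
    int i;
    update_weights(w, prev, grad, N_WEIGHTS, 0.1, 0.9);
    for (i = 0; i < N_WEIGHTS; i++)
        printf("w[%d] = %f\n", i, w[i]);
    return 0;
}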
We then proceed with the background for the applications, first giving an overview of fundamental and technical analysis and then illustrating why technical analysis is useful in gauging the market. The indicators used for the first application, the portfolio selection problem, are discussed. Some aspects of the theory of least squares regression are reviewed, and the motivation for using the regressed slope of returns in our second application is presented. The properties of regression parameter estimates are then explored; these properties are found to be attractive in view of our applications. The Gauss-Markov conditions that must be satisfied for the desirable properties of unbiasedness and "bestness" are stated.
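To make the regressed slope concrete, the following minimal C sketch computes the ordinary least-squares slope of a window of returns against their time indices, b = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2). It is an illustration of the standard estimator under assumed inputs, not the modified regressed slope actually used in the thesis.

#include <stdio.h>

/* Ordinary least-squares slope of a return window against time indices
   0, 1, ..., n-1.  The return window below is a hypothetical example. */
double ols_slope(const double *y, int n)
{
    double xbar = (n - 1) / 2.0;        /* mean of 0, 1, ..., n-1 */
    double ybar = 0.0, num = 0.0, den = 0.0;
    int i;
    for (i = 0; i < n; i++)
        ybar += y[i];
    ybar /= n;
    for (i = 0; i < n; i++) {
        num += (i - xbar) * (y[i] - ybar);
        den += (i - xbar) * (i - xbar);
    }
    return num / den;
}

int main(void)
{
    double returns[] = { 0.004, 0.006, 0.002, 0.009, 0.011 };
    int n = sizeof(returns) / sizeof(returns[0]);
    printf("regressed slope: %f\n", ols_slope(returns, n));
    return 0;
}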
The applications are then discussed, the essentials of which have been described in the first paragraph. In conclusion, we summarize all the work done and explore some related areas where further research can be conducted. The appendix includes the complete source code of the back-propagation neural network with the additional enhancements of adaptive learning rates and momentum. The program, which was used for all the simulations, is designed to work on any platform with a standard C compiler, since minimal reliance is placed on platform-specific functions (such as the random number generator).
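As a hedged illustration of the portability point above, the sketch below uses a simple, self-contained linear congruential generator in place of a platform's rand() to draw small initial weights; it is only an example of how such a dependency can be avoided and is not the program listed in the appendix.

#include <stdio.h>

/* Hypothetical portable pseudo-random number generator (a simple linear
   congruential generator), used instead of the platform's rand() so that
   weight initialization behaves the same on any C compiler. */
static unsigned long lcg_state = 12345UL;

double lcg_uniform(void)                /* uniform value in [0, 1) */
{
    lcg_state = lcg_state * 1103515245UL + 12345UL;
    return (double)((lcg_state >> 16) & 0x7FFFUL) / 32768.0;
}

int main(void)
{
    int i;
    for (i = 0; i < 3; i++)             /* small initial weights in [-0.1, 0.1) */
        printf("initial weight: %f\n", lcg_uniform() * 0.2 - 0.1);
    return 0;
}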
Date Issued
2000
Call Number
QA276.4 Ram
Date Submitted
2000