Artificial intelligence techniques in science research & education
Author
Zhang, Shouwen
Supervisor
Khoo, Guan Seng
Abstract
This thesis is organized into seven chapters:
Chapter 1 outlines what was done in this thesis and how it was implemented.
Chapter 2 introduces the typical architecture, general features and characteristics of multilayer neural nets (MLNNs) studied in this research.
Chapter 3 focuses on the most popular learning algorithm, the standard Backpropagation (BP) method. The standard steepest descent algorithm is studied in detail and a fast adaptive steepest descent (FASD) algorithm is proposed; the objective of FASD is to maximize the step size at every step of the steepest descent algorithm. The standard conjugate gradient algorithm and its extension to nonquadratic problems are also studied, the Scaled Conjugate Gradient Algorithm (SCG) is examined in detail, and a Modified Scaled Conjugate Gradient Algorithm (MSCG) is proposed and implemented. The MSCG is an effective and feasible way to apply the principle of the standard conjugate gradient method to nonquadratic problems; it speeds up learning while reducing the computational complexity and memory usage involved in training the neural networks.
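As a rough illustration of the adaptive step-size idea on a toy quadratic, here is a minimal Python/NumPy sketch using a generic accept-and-grow, reject-and-shrink heuristic; the objective, update rule, and constants are illustrative assumptions, not the FASD rule as specified in the thesis.

import numpy as np

# Illustrative quadratic objective E(w) = 0.5 * w^T A w - b^T w and its gradient.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])

def error(w):
    return 0.5 * w @ A @ w - b @ w

def grad(w):
    return A @ w - b

# Generic adaptive steepest descent: grow the step size while the error keeps
# falling, shrink it when a step would increase the error (a common heuristic;
# the thesis's FASD rule for maximizing the step size may differ).
w = np.zeros(2)
eta = 0.1
for _ in range(200):
    g = grad(w)
    trial = w - eta * g
    if error(trial) < error(w):
        w, eta = trial, eta * 1.1   # accept the step and enlarge the step size
    else:
        eta *= 0.5                  # reject the step and shrink the step size

print("minimiser ~", w, "  true solution:", np.linalg.solve(A, b))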
Chapter 4 presents a heterogeneous model of ANNs, an alternative method that trains multilayer neural nets faster by using different activation functions for different hidden neurons.
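A minimal sketch of the idea of mixing activation functions within one hidden layer, assuming an illustrative sigmoid/tanh split; the thesis's own choice and arrangement of activation functions is not stated in this abstract.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward pass through one hidden layer whose neurons do not all share the same
# activation: here the first half use the logistic sigmoid and the second half
# use tanh (an illustrative split, not necessarily the mixture used in the thesis).
def heterogeneous_forward(x, W_hidden, W_out):
    z = W_hidden @ x                      # pre-activations of the hidden layer
    h = np.empty_like(z)
    half = len(z) // 2
    h[:half] = sigmoid(z[:half])          # sigmoid neurons
    h[half:] = np.tanh(z[half:])          # tanh neurons
    return W_out @ h                      # linear output layer

x = rng.normal(size=3)
W_hidden = rng.normal(size=(4, 3))
W_out = rng.normal(size=(1, 4))
print(heterogeneous_forward(x, W_hidden, W_out))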
Chapter 5 discusses a global minimum algorithm, the simulated annealing algorithm, in detail.
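Simulated annealing itself is a standard technique; the following is a minimal sketch of the general algorithm on a toy one-dimensional objective, not the thesis's particular implementation, move proposal, or cooling schedule.

import math
import random

random.seed(0)

# Toy multimodal objective with many local minima; its global minimum is at x = 0.
def energy(x):
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

# Generic simulated annealing: propose a random move, always accept downhill
# moves, accept uphill moves with probability exp(-dE / T), and cool T slowly
# so the search can escape local minima early on but settles as T -> 0.
x = 4.0
T = 10.0
for step in range(20000):
    candidate = x + random.uniform(-0.5, 0.5)
    dE = energy(candidate) - energy(x)
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = candidate
    T *= 0.9995          # geometric cooling schedule

print("found x ~", round(x, 3), "  energy:", round(energy(x), 3))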
Chapter 6 discusses a novel application of artificial intelligence: forecasting force-field parameters and lattice energies using neural networks. This approach provides a complementary alternative to traditional methods and demonstrates the innate strength of neural networks and their potential as forecasting techniques in solid-state physics and condensed matter physics.
Chapter 7 summarizes the student's research work and recommends a Multiple Conjugate Gradient Algorithm.
Date Issued
1997
Call Number
Q335 Zha
Date Submitted
1997