Abstract
Here, we present two new neuron model architectures and one modified form of the existing standard feedforward architecture (MSTD). Both new models use the self-scaling scaled conjugate gradient algorithm (SSCGA) and the lambda–gamma (L–G) algorithm, and combine the properties of basic as well as higher-order neurons (i.e., multiplicative and aggregation functions). Of the two, the compensatory neural network architecture (CNNA) requires a relatively small number of inter-neuronal connections, cuts the computational budget by almost 50%, speeds up convergence, and gives better training and prediction accuracy. The second model, sigma–pi–sigma (SPS), ensures faster convergence and better training and prediction accuracy. The third model (MSTD) performs much better than the standard feedforward architecture (STD). The effect of normalizing the outputs for training, also studied here, shows virtually no improvement at low iteration counts (∼500) as the scaling range increases. Increasing the number of neurons beyond a point likewise has little effect in the case of higher-order neurons. Numerous simulation runs on the satellite orbit determination problem and on complex XOR problems establish the robustness of the proposed neuron model architectures.
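The abstract names the sigma–pi–sigma (SPS) structure, in which weighted sums (sigma) feed multiplicative units (pi) whose products are then aggregated by an outer sigma. The following is a minimal illustrative sketch of such a unit; the grouping of inner sums, the sigmoid output nonlinearity, and all names (`sps_unit`, `groups`, `v`) are assumptions for illustration, not the paper's actual formulation.

```python
import math

def sigmoid(x):
    # Standard logistic activation, assumed here as the outer nonlinearity
    return 1.0 / (1.0 + math.exp(-x))

def sps_unit(x, groups, v):
    """Illustrative sigma-pi-sigma unit (hypothetical formulation).

    x      : input vector
    groups : list of groups; each group is a list of weight vectors,
             and each weight vector defines one inner sigma (weighted sum)
    v      : outer weights, one per group (i.e., per product term)
    """
    products = []
    for ws in groups:
        # Inner sigma layer: one weighted sum per weight vector
        inner = [sum(wi * xi for wi, xi in zip(w, x)) for w in ws]
        # Pi layer: multiply the inner sums within the group
        products.append(math.prod(inner))
    # Outer sigma: weighted sum of the product terms, then activation
    return sigmoid(sum(vi * pi for vi, pi in zip(v, products)))
```

With one group of two inner units picking out each input, `sps_unit([1.0, 2.0], [[[1.0, 0.0], [0.0, 1.0]]], [1.0])` forms the product 1·2 = 2 and returns sigmoid(2).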
About this article
Cite this article
Sinha, M., Kumar, K. & Kalra, P. Some new neural network architectures with improved learning schemes. Soft Computing 4, 214–223 (2000). https://doi.org/10.1007/s005000000057