A Theoretical Framework for Parallel Implementation of Deep Higher Order Neural Networks
Abstract
This chapter proposes a theoretical framework for the parallel implementation of Deep Higher Order Neural Networks (HONNs). First, we develop a new partitioning approach for mapping a HONN onto the individual computers of a master-slave distributed system (a local area network). This allows a network of computers, rather than a single computer, to train the HONN, drastically increasing its learning speed: all of the computers run the HONN simultaneously (parallel implementation). Next, we develop a new learning algorithm suited to HONN learning in a distributed-system environment. Finally, we propose improvements to the generalisation ability of the new learning algorithm as used in that environment. A thorough theoretical analysis is conducted to verify the soundness of the new approach; experiments to test the new algorithm are planned as future work.
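The partitioning idea in the abstract can be illustrated with a minimal sketch. This is not the chapter's algorithm: it is an assumed toy in plain Python, where the second-order product terms of a HONN output, y = Σ_{i≤j} w[i][j]·x[i]·x[j], are split round-robin across hypothetical "workers", each worker computes a partial sum, and the master adds the partials. Real LAN parallelism (sockets, MPI) and the learning algorithm itself are omitted.

```python
# Illustrative sketch only: partitioning a second-order HONN's terms
# across workers in a master-slave style. Names and the round-robin
# scheme are assumptions for illustration, not the chapter's method.
from itertools import combinations_with_replacement

def partition_terms(n_inputs, n_workers):
    """Round-robin assignment of second-order index pairs (i, j) to workers."""
    pairs = list(combinations_with_replacement(range(n_inputs), 2))
    return [pairs[k::n_workers] for k in range(n_workers)]

def worker_partial(x, w, pairs):
    """Each slave computes the partial sum over its assigned product terms."""
    return sum(w[(i, j)] * x[i] * x[j] for i, j in pairs)

def master_output(x, w, n_workers):
    """Master gathers the partial sums from all workers and adds them."""
    parts = partition_terms(len(x), n_workers)
    return sum(worker_partial(x, w, p) for p in parts)
```

Because the partial sums are independent, the same total is obtained for any worker count, which is the property that lets the workers run simultaneously.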