Distributed Reduced Convolution Neural Networks
Abstract
A convolutional neural network (CNN) is a popular tool in pattern recognition and machine learning, and the kernel-based convolutional neural network (KCNN) generally outperforms the standard CNN. Although a KCNN can solve challenging nonlinear problems, doing so with a large kernel matrix is time-consuming and memory-intensive. A reduced kernel strategy can drastically decrease the computational load and memory usage. However, as the total amount of training data grows at an exponential pace, it becomes difficult for a single worker to store the kernel matrix efficiently, so effective centralised data mining is no longer feasible. In this research, we propose a distributed reduced kernel convolutional neural network (DRCNN) to train a CNN on data stored at several locations. In the DRCNN, the data are spread randomly amongst the nodes, and the communication between nodes is static: it is determined by the network's architecture rather than by the quantity of training data kept on each node. Unlike the standard reduced kernel CNN, the DRCNN is trained with a distributed technique based on the alternating direction method of multipliers (ADMM). Experiments on a large data set show that the distributed method yields nearly the same results as the centralised algorithm while requiring significantly less computing time.
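To illustrate the general idea of ADMM-based distributed training over a reduced kernel, the sketch below solves a least-squares output layer on reduced kernel features with global-consensus ADMM. This is not the paper's DRCNN formulation; the RBF kernel, synthetic data, node count, and parameters (rho, lam, reduced-set size m) are illustrative assumptions, chosen only to show why each node exchanges messages whose size is fixed by the reduced set rather than by its local data volume.

```python
# Minimal sketch of consensus-ADMM training over a reduced kernel feature map.
# Assumptions (not taken from the paper): a least-squares output layer, an RBF
# kernel, synthetic data, and illustrative values for K, m, rho, and lam.
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(X, X_reduced, gamma=0.5):
    """Reduced kernel map: K(X, X_reduced) instead of the full N x N kernel."""
    d2 = ((X[:, None, :] - X_reduced[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic data scattered at random across K nodes.
N, d, K, m = 1200, 5, 4, 50          # samples, features, nodes, reduced-set size
X = rng.normal(size=(N, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=N)
X_reduced = X[rng.choice(N, m, replace=False)]   # random reduced kernel set
parts = np.array_split(rng.permutation(N), K)
Phi = [rbf_features(X[idx], X_reduced) for idx in parts]
ys = [y[idx] for idx in parts]

# Global-consensus ADMM: each node solves a small m x m system per iteration,
# and only m-dimensional vectors (w_k + u_k) are communicated, so the message
# size is fixed by the reduced-set size, not by the local data volume.
rho, lam = 1.0, 1e-2
w = [np.zeros(m) for _ in range(K)]
u = [np.zeros(m) for _ in range(K)]
z = np.zeros(m)
for _ in range(100):
    for k in range(K):
        A = Phi[k].T @ Phi[k] + rho * np.eye(m)
        b = Phi[k].T @ ys[k] + rho * (z - u[k])
        w[k] = np.linalg.solve(A, b)             # local primal update
    z = rho * sum(w[k] + u[k] for k in range(K)) / (lam + K * rho)  # consensus
    for k in range(K):
        u[k] = u[k] + w[k] - z                   # dual update

print("training MSE:", np.mean((rbf_features(X, X_reduced) @ z - y) ** 2))
```

In this toy setting the consensus variable z converges to essentially the same solution a centralised solver would produce on the pooled data, which mirrors the abstract's claim that the distributed algorithm matches the centralised one while distributing the computation.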
This work is licensed under a Creative Commons Attribution 4.0 International License.