6 | CONCLUSION AND FUTURE SCOPE
This paper provides a detailed review of the arguments and factors that affect filter design in the most promising architectures based on the convolutional neural network. The paper introduces the convolutional neural network, its layers, and signal flow, followed by a brief overview of hyperparameters such as filter size, number of filters, and activation function.
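To make these hyperparameters concrete, the following minimal sketch shows where filter size, number of filters, stride, and activation function enter a convolutional layer's signal flow. The function name and shapes are illustrative, not drawn from any specific framework.

```python
import numpy as np

def conv2d(image, filters, stride=1):
    """Valid-mode 2D convolution of one single-channel image with a filter bank.

    image:   (H, W) input array
    filters: (n_filters, k, k) filter bank -- the filter size k and the
             number of filters are the hyperparameters discussed above
    """
    n_filters, k, _ = filters.shape
    H, W = image.shape
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    out = np.zeros((n_filters, out_h, out_w))
    for f in range(n_filters):
        for i in range(out_h):
            for j in range(out_w):
                patch = image[i*stride:i*stride+k, j*stride:j*stride+k]
                out[f, i, j] = np.sum(patch * filters[f])
    return np.maximum(out, 0)  # ReLU activation, another hyperparameter choice

image = np.random.rand(8, 8)
filters = np.random.randn(4, 3, 3)  # 4 filters of size 3x3
fmap = conv2d(image, filters)
print(fmap.shape)  # (4, 6, 6)
```

Each hyperparameter directly shapes the output: the filter size and stride set the spatial resolution of the feature maps, while the number of filters sets their depth.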
The primary purpose of this paper is to shed light on the factors, and the supporting arguments made in promising studies, for designing filters. The review starts with filter initialization and its importance, and describes its main types, each briefly explained with its strengths and weaknesses. Owing to the different nature of learning, the filter design studies are divided into supervised and unsupervised groups. The filter design parameters are discussed in detail for promising supervised methods such as AlexNet, ResNet, VGG, and their subsequent variants. The dependence of these parameters on input data, objective functions, application types, computational power, and other factors is noted and critically compared. We have surveyed and reviewed studies on unsupervised methods such as AE, K-means, SOM, and SSL with the same objective, and their arguments are summarized. Lacking rigorous mathematical backing, filter design is mainly characterized as a data-dependent process. Deep learning is still in its infancy, and open questions such as optimal filter design present a tremendous opportunity for current algorithms.
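As a concrete illustration of the filter-initialization schemes reviewed above, the sketch below implements two widely used variance-scaling rules: Glorot (Xavier) uniform, which draws from U(-a, a) with a = sqrt(6 / (fan_in + fan_out)), and He normal, which draws from N(0, sqrt(2 / fan_in)) and is suited to ReLU layers. The function names and the channel/size values are illustrative assumptions, not from any particular library.

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, shape, rng):
    """Glorot/Xavier uniform init: U(-a, a), a = sqrt(6 / (fan_in + fan_out))."""
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=shape)

def he_normal(fan_in, shape, rng):
    """He/Kaiming normal init: N(0, sqrt(2 / fan_in)), suited to ReLU layers."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape)

rng = np.random.default_rng(0)
k, c_in, c_out = 3, 16, 32          # 3x3 filters, 16 input channels, 32 filters
fan_in = k * k * c_in               # connections feeding each output unit
fan_out = k * k * c_out             # connections leaving each input unit
w_glorot = glorot_uniform(fan_in, fan_out, (c_out, c_in, k, k), rng)
w_he = he_normal(fan_in, (c_out, c_in, k, k), rng)
print(w_glorot.shape)  # (32, 16, 3, 3)
```

Both rules scale the initial filter weights to the layer's fan-in/fan-out so that activation variance is roughly preserved across depth, which is one of the arguments for principled initialization surveyed in this review.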