This paper conducts thorough experiments comparing the performance of finite- and infinite-width networks. The architectures investigated are fully connected networks (FCNs) and CNNs with and without global average pooling (GAP), each under two parameterizations (standard and NTK). Several techniques, such as regularization and ensembling, are applied to these models. From experiments under a variety of settings, the authors derive conclusions from multiple viewpoints and develop best practices for using non-trainable kernels on the CIFAR-10 classification task. The experiments are deep and comprehensive, and the resulting insights should be quite useful for practitioners and should promote further investigation of infinite-width networks. This study is therefore beneficial to the community.
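For context, the two parameterizations compared in the paper differ mainly in where the width-dependent scaling enters: the standard parameterization folds it into the weight initialization, while the NTK parameterization keeps unit-variance weights and applies the scaling in the forward pass. A minimal sketch of a dense layer under each (function names and signatures are my own illustration, not the authors' code):

```python
import numpy as np

def dense_standard(x, rng, width, sigma_w=1.0, sigma_b=0.0):
    """Standard parameterization: the 1/sqrt(fan_in) scale is baked
    into the weight initialization variance."""
    n_in = x.shape[-1]
    W = rng.normal(0.0, sigma_w / np.sqrt(n_in), size=(n_in, width))
    b = rng.normal(0.0, sigma_b, size=(width,))
    return x @ W + b

def dense_ntk(x, rng, width, sigma_w=1.0, sigma_b=0.0):
    """NTK parameterization: weights and biases are standard normal;
    the 1/sqrt(fan_in) scale is applied in the forward computation."""
    n_in = x.shape[-1]
    W = rng.normal(0.0, 1.0, size=(n_in, width))
    b = rng.normal(0.0, 1.0, size=(width,))
    return (sigma_w / np.sqrt(n_in)) * (x @ W) + sigma_b * b
```

The two give identical output distributions at initialization, but the gradients with respect to the raw parameters scale differently with width, which is what drives the training-dynamics differences the paper studies.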