SYMBOLIC ANALYSIS OF CLASSICAL NEURAL NETWORKS FOR DEEP LEARNING


Vladimir Milićević, Igor Franc, Maja Lutovac Banduka, Nemanja Zdravković, Nikola Dimitrijević

Abstract: Deep learning is usually based on matrix computation with a large number of hidden parameters that are not visible outside the computing module. A deep learning algorithm can be implemented in hardware or software as a non-linear system. It is common for researchers to visualize a computing module and monitor its hidden parameters. In this paper, we propose, as a proof of concept, to start the system design by drawing a single neuron. A more complex neural network scheme is then obtained by applying the copy, move, and paste commands to this simplest unit. The number of neurons and layers can be chosen arbitrarily. When the scheme is complete, the implementation code is executed automatically using symbolic inputs, symbolic system parameters, and symbolic activation functions. This cannot be done manually because the system response is extremely complex. With the output expressed symbolically in terms of the inputs and parameters, including purely symbolic activation functions, many other properties can be derived in closed form, such as classification with respect to a single system parameter, an activation function, or the inputs. This original method can help scientists and programmers design complex machine learning algorithms and understand how deep learning algorithms work. The paper presents several examples with new results. The proposed algorithm can be implemented in any programming language that supports symbolic computing. Although it was developed for a classical neural network, the same methodology applies to any type of neural network.
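The core idea of the abstract, deriving a closed-form symbolic expression for a network's output from symbolic inputs, weights, and a purely symbolic activation function, can be sketched as follows. This is a minimal illustration, not the authors' implementation; it assumes SymPy as the symbolic engine and uses a small 2-2-1 network with hypothetical weight names w1..w6:

```python
# Minimal sketch (assumed setup, not the paper's code): a 2-2-1 network
# with symbolic inputs, symbolic weights, and a pure symbolic activation f.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')        # symbolic inputs
w = sp.symbols('w1:7')              # symbolic weights w1..w6
f = sp.Function('f')                # purely symbolic activation function

# Hidden layer: two neurons, each applying f to a weighted sum of inputs.
h1 = f(w[0]*x1 + w[1]*x2)
h2 = f(w[2]*x1 + w[3]*x2)

# Output neuron: f applied to a weighted sum of the hidden outputs.
y = f(w[4]*h1 + w[5]*h2)

# The closed-form system response, obtained automatically.
print(y)

# Properties can then be derived symbolically, e.g. the sensitivity of the
# output with respect to a single system parameter.
print(sp.diff(y, w[0]))
```

Because `f` remains undefined, the printed output is a nested symbolic expression valid for any activation function; substituting a concrete function (e.g. `sp.tanh`) afterwards specializes the same closed form.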

Keywords: artificial neural networks, closed-form expression, feature extraction, machine learning

DOI: 10.24874/IJQR19.01-06

Received: 13.03.2024  Accepted: 10.09.2024  UDC:
