Distributed memory of neural networks and the problem of the intelligence's essence

Ibragim E. Suleimenov, Dinara K. Matrassulova, Inabat Moldakhan, Yelizaveta S. Vitulyova, Sherniyaz B. Kabdushev, Akhat S. Bakirov


The question of the nature of the distributed memory of neural networks is considered. Since the memory capacity of a neural network depends on the presence of feedback in its structure, this question requires further study. It is shown that neural networks without feedback can be exhaustively described by analogy with the algorithms of noise-resistant coding. For such networks, the use of the term "memory" is not justified at all. Moreover, the functioning of such networks obeys an analog of the Shannon formula, first obtained in this paper. This formula makes it possible to specify in advance the number of images that a neural network can recognize for a given code distance between them. It is shown that in the case of artificial neural networks with negative feedback, it is indeed justified to speak of the distributed memory of the network. It is also shown that in this case the boundary between the distributed memory of a neural network and the information storage mechanisms of such elements as RS-triggers is blurred. For the given example, a specific formula is obtained that connects the number of possible states of the network's outputs (and, hence, the capacity of its memory) with the number of its elements.
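The paper's exact formula is not reproduced in this abstract. As an illustration of the underlying counting idea — how a required code distance limits the number of distinguishable images — the classical sphere-packing (Hamming) bound from error-correcting coding theory can be sketched; the function name below is an assumption for illustration only, not the authors' formula:

```python
from math import comb

def hamming_bound(n: int, d: int) -> int:
    """Sphere-packing upper bound on the number of binary codewords
    (here: distinguishable images) of length n with minimum pairwise
    Hamming (code) distance d."""
    t = (d - 1) // 2  # number of bit errors guaranteed correctable
    # volume of a Hamming ball of radius t in the n-dimensional binary cube
    ball = sum(comb(n, r) for r in range(t + 1))
    return 2**n // ball

# For length-7 patterns with code distance 3 the bound gives 16,
# which the Hamming(7,4) code actually attains:
print(hamming_bound(7, 3))  # → 16
```

The larger the required code distance d between stored images, the fewer images fit into the same n-element state space — the same trade-off the abstract's Shannon-like formula quantifies for feedback-free networks.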


Distributed memory; Essence of intelligence; Methodology; Neural networks; RS-trigger


DOI: https://doi.org/10.11591/eei.v11i1.3463


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
