Contact: Amir Hesam Salavati
Room: BC 160
Tel: 021 - 693 81 37
Email: hesam [dot] salavati [at] epfl [dot] ch
Memorizing patterns and correctly recalling them later is an essential ingredient of neural activity. In the past 25 years, a number of neural networks have been invented to memorize and recall patterns. Interestingly, some of these networks are able to recall the correct pattern even if the input pattern contains errors and is partially corrupted by noise. In this regard, these artificial networks resemble error-correcting codes, i.e. they are able to recognize the correct pattern in the presence of noise.
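To make the analogy concrete, here is a minimal sketch of a classical Hopfield-style associative memory: patterns of ±1 entries are stored in a weight matrix via Hebbian learning, and iterated thresholding recovers a stored pattern from a noisy probe. This is an illustrative toy example only, not the pre-coding scheme studied in this project; all names and the example patterns are chosen for illustration.

```python
# Toy Hopfield associative memory (illustrative sketch, not the paper's scheme).
# Patterns are lists of +1/-1 entries.

def train(patterns):
    """Hebbian learning: W[i][j] = (1/n) * sum over patterns of p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, probe, sweeps=10):
    """Iteratively threshold each neuron's input until (hopefully) a fixed point."""
    s = list(probe)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

# Store two patterns, then corrupt one bit of the first and try to recall it.
patterns = [
    [1, 1, 1, -1, -1, -1, 1, -1],
    [-1, -1, 1, 1, -1, 1, -1, 1],
]
W = train(patterns)
noisy = list(patterns[0])
noisy[0] = -noisy[0]  # flip one entry (noise)
print(recall(W, noisy) == patterns[0])  # → True
```

The noisy probe is pulled back to the stored pattern, which is exactly the error-correction behavior described above. The catch, and the motivation for this project, is that the number of patterns such a network can reliably store grows only modestly with the network size.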
However, the storage capacity of these networks is quite small compared to that of their counterparts in coding theory. Using low-correlation sequences and some structured neural networks, we have been able to increase this capacity somewhat (see the paper below). Nevertheless, we have not yet found a tight bound on the actual number of errors that the suggested scheme can correct. Finding such a bound is the main objective of this project.
This project has two phases:
The first phase requires reading some background literature. The second requires some background in discrete mathematics and finite fields.
This project is suitable for students interested in neural networks, coding theory, and mathematics who prefer theoretical work.
The prerequisites are:
To read further, the following paper is recommended: [1] A. H. Salavati, K. R. Kumar, A. Shokrollahi and W. Gerstner, "Neural Pre-coding Increases the Pattern Retrieval Capacity of Hopfield and Bidirectional Associative Memories", IEEE International Symposium on Information Theory (ISIT), 2011.