Contact: Amir Hesam Salavati
Room: BC 160
Tel: 021 - 693 81 37
Email: hesam [dot] salavati [at] epfl [dot] ch
Memorizing patterns and correctly recalling them later is an essential ingredient of neural activity. Over the past 25 years, a number of neural networks have been invented to memorize and recall patterns. Interestingly, some of these networks are able to recall the correct pattern even if the input pattern is partially corrupted by noise. In this regard, these artificial networks resemble error-correcting codes, i.e., they are able to recognize the correct pattern in the presence of noise.
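As a concrete illustration of this recall-under-noise behavior, below is a minimal sketch of a Hopfield-style associative memory in Python with NumPy (the project itself prefers MATLAB; the Hebbian rule, pattern sizes, and noise level here are illustrative assumptions, not the project's prescribed method):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products of the stored +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, x, steps=20):
    """Synchronous update: repeatedly apply sign(W x) until the state is stable."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1   # break ties toward +1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))  # store 3 random +/-1 patterns
W = train_hopfield(patterns)

noisy = patterns[0].copy()
flip = rng.choice(64, size=6, replace=False)  # corrupt 6 of the 64 bits
noisy[flip] *= -1

recovered = recall(W, noisy)
print("recovered original:", np.array_equal(recovered, patterns[0]))
```

At this low load (3 patterns in 64 neurons) the network typically recovers the stored pattern from the corrupted input, which is exactly the error-correcting behavior described above.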
However, the storage capacity of these networks is quite small compared to that of their counterparts in coding theory. Given that modern codes use the same basic structure for error correction as neural networks, namely a bipartite graph with local message passing, it seems interesting to apply ideas from modern coding theory to increase the storage capacity of neural networks by finding appropriate weights as well as proper update rules for the neural graph.
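To make the shared structure concrete, here is a hedged sketch of Gallager-style bit flipping, one of the simplest local message-passing schemes on a bipartite (Tanner) graph: check nodes report whether their parity is satisfied, and variable nodes flip based on those reports. The (7,4) Hamming parity-check matrix and the single-bit error are illustrative only; this naive rule stands in for, and is much simpler than, the iterative decoders used by modern codes:

```python
import numpy as np

# Illustrative parity-check matrix of a (7,4) Hamming code; rows are
# check nodes and columns are variable nodes of the bipartite Tanner graph.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(H, y, max_iters=20):
    """Bit flipping: each check reports satisfied/unsatisfied; the bit
    involved in the most failed checks is flipped, then repeat."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ y) % 2     # 1 marks a violated check
        if not syndrome.any():
            break                  # all checks satisfied: codeword reached
        fails = syndrome @ H       # failed checks touching each bit
        y[np.argmax(fails)] ^= 1   # flip the most suspect bit
    return y

codeword = np.array([1, 0, 1, 1, 0, 1, 0])  # satisfies H @ c = 0 (mod 2)
noisy = codeword.copy()
noisy[3] ^= 1                               # corrupt one bit
print(np.array_equal(bit_flip_decode(H, noisy), codeword))  # True
```

The structural parallel with the recall sketch above is the point: in both cases a node updates its state from locally exchanged messages on a bipartite graph, which is what motivates transferring coding-theoretic techniques to the neural setting.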
This project has three phases:
The first phase requires reading some background literature. The second needs programming skills (preferably in MATLAB) to implement the considered algorithms. Finally, the third phase is a combination of theoretical work (doing research) and implementing new ideas to find the one that works best in practice.
This project is suitable for students interested in neural networks, coding theory, and mathematics who prefer a combination of theoretical and empirical work.
The prerequisites are: 1) Basic knowledge of coding theory and linear algebra (for the last phase). 2) Familiarity with a suitable programming language (MATLAB is preferred, but C/C++ is acceptable).