EPFL

Algo+LMA

**3-19 January**

In the last two weeks or so, I was mainly busy with writing progress reports for previous months and documenting the MATLAB codes that we have. Besides that, Mr. Amin Karbasi and I were able to pursue some ideas on learning a sparse vector that is orthogonal to a set of given training vectors. The algorithm is a combination of the message passing algorithm for compressed sensing by Donoho et al. and that of Xu et al. for learning a vector orthogonal to a set of patterns in the training set.
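The actual algorithm is the message-passing combination described in the linked report; as a rough illustration of the underlying objective only, the toy loop below (my own simplified stand-in, not the Donoho et al. / Xu et al. scheme) alternates a gradient step that pushes `w` toward the null space of the training matrix with a soft-threshold step that promotes sparsity:

```python
import numpy as np

def learn_sparse_orthogonal(X, n_iter=2000, shrink=1e-3, seed=0):
    """Sketch: find a sparse unit vector w approximately orthogonal to all
    training vectors (the rows of X), i.e. with X @ w close to zero.

    Gradient descent on 0.5 * ||X w||^2, followed by a soft-threshold step to
    promote sparsity and a renormalization to rule out the trivial w = 0.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    lr = 0.9 / np.linalg.norm(X, 2) ** 2   # step size below 1 / lambda_max
    for _ in range(n_iter):
        w -= lr * (X.T @ (X @ w))                             # descend on ||Xw||^2
        w = np.sign(w) * np.maximum(np.abs(w) - shrink, 0.0)  # soft threshold
        norm = np.linalg.norm(w)
        if norm > 0:
            w /= norm                      # stay away from the all-zero solution
    return w
```

The function names and the projected-gradient formulation are illustrative assumptions; the report describes the message-passing version we actually use.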

In this report, you can find more details about this idea.

Biocoding project progress report: 3-19 January 2012

**20 January - 3 February**

In the last two weeks, Mr. Amin Karbasi and I were busy writing a paper for ISIT 2012. The paper is about a two-level neural network capable of memorizing an exponential number of patterns while being able to tolerate a fair amount of noise. Furthermore, the proposed approach comes with a gradual learning algorithm which makes it suitable for neural applications.

The final draft of the paper can be downloaded from the link below.

Multi-Level Error-Resilient Neural Networks

**5-10 February**

I was mainly busy with the French classes in this period.

**13 February**

In the last few days, Mr. Amin Karbasi and I worked on the technical report corresponding to the ISIT 2012 paper. This is a more complete version of the paper and contains the proofs of the theorems as well as more background reviews.

This report can be downloaded from the following link:

Multi-Level Error-Resilient Neural Networks with Learning

**14-29 February**

In the last two weeks, my primary focus was on extending the proof of the learning algorithm mentioned in the November 2011 reports to more general cases, so that it becomes ready for the journal paper. In connection with the proof, I read two elegant papers on proving the convergence of some iterative neural learning algorithms. I also started writing the text for the journal version of our ITW paper. Finally, I spent some time organizing the MATLAB codes for simulating the learning algorithm and started implementing the idea of a neural network which adaptively learns the constraints, i.e., starting with a few constraints and learning more if needed, based on performance.
In this report, I am going to explain the steps I have taken to prove the convergence of the
sparse learning algorithm and the summary of the two papers I have read.
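The adaptive idea above can be sketched in a few lines. The snippet below is a toy illustration of my own (it uses SVD null directions as the "constraints" rather than the project's iterative learning rule, and the detection criterion is a simplifying assumption): start with one constraint and keep adding constraints only while performance stays below a target.

```python
import numpy as np

def learn_adaptively(X, noise_std=0.5, target=0.95, max_constraints=10, seed=0):
    """Sketch: add constraints one at a time until noisy patterns are
    detected reliably enough, then stop.

    Each "constraint" here is a direction orthogonal to the training patterns
    (a least-significant right singular vector of X); a pattern is flagged as
    noisy when its projection onto the learned constraints is large.
    """
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(X)
    null_first = Vt[::-1]                      # least-significant directions first
    for k in range(1, max_constraints + 1):
        W = null_first[:k]                     # current set of constraints
        noisy = X + noise_std * rng.standard_normal(X.shape)
        clean_scores = np.linalg.norm(W @ X.T, axis=0)
        noisy_scores = np.linalg.norm(W @ noisy.T, axis=0)
        tau = clean_scores.max() + 1e-6        # threshold separating clean/noisy
        detection_rate = (noisy_scores > tau).mean()
        if detection_rate >= target:           # good enough: stop adding
            break
    return W, detection_rate
```

The point of the sketch is only the control flow: constraints are added on demand instead of being fixed in advance.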

This report can be downloaded from the following link:

Biocoding project progress report: 14-29 February 2012

The journal paper draft is also available below:

A Non-Binary Associative Memory with Exponential Pattern Retrieval Capacity and Iterative Learning - DRAFT (15 Feb 2012)

**1-16 March**

In the last two weeks, I was mainly busy with the proof for the journal paper, though without much success. I had a discussion with Amin and will explore the ideas he suggested in the coming weeks.
I also completed the first version of the code for making our neural network adaptive, i.e., able to learn more constraints if needed. I also discussed with Mr. Amin Karbasi how to exploit this adaptiveness in a two-level neural network with improved error correction and speed.

The first version of the adaptive neural associative memory is accessible from the link below.

MATLAB Code for: Adaptive Neural Associative Memory - V1

**17-30 March**

In the last two weeks, I was mainly busy working on the journal paper. I have made some progress both in terms of proving the convergence of the learning algorithm and analyzing the probability of recall error for the proposed neural associative memory.
Regarding the multi-level neural associative memory, Mr. Amin Karbasi and I worked on finding a proper model in order to be able to analyze the algorithm we submitted to ISIT 2012. We are first going to do a sanity check on the neural aspect of the model with the help of experts on neural networks. Having done that, we will work on analyzing the performance of the proposed method. I also worked on the MATLAB code for the adaptive single-level neural associative memory. The code now works nicely for the single level. I am going to extend the code to multiple levels in the upcoming weeks so that we will have a very nice framework.

In what follows, I am going to explain the theoretical progress for the journal paper.

Biocoding project progress report: 17-30 March 2012

The MATLAB code for the adaptive neural network as well as the verification of the performance analysis can be found in the accompanying files.

MATLAB Code for: Adaptive Neural Associative Memory - V2

Analysis Verification - Journal Paper - March 2012

**2-13 April**

In the last two weeks, I was mainly busy writing the second draft of the journal paper, completing the proof for the convergence of the learning algorithm, and analyzing the performance of the error correcting algorithms. I also conducted some simulations to verify the correctness of the analysis as well as of the proposed algorithms. The PDF file of the second draft as well as all other files (including the LaTeX source) can be found via the following links:

A Non-Binary Associative Memory with Exponential Pattern Retrieval Capacity and Iterative Learning - SECOND DRAFT (13 April 2012)

The code for simulating the proposed algorithms can also be found via the following link:

MATLAB Codes for Simulations in the Journal Paper

Another topic which I worked on together with Mr. Amin Karbasi is the model for our multi-level neural associative memory. We have arranged a meeting with the group of Prof. Gerstner to present our model to them and do a sanity check from a neuroscientific point of view. The details of the model are accessible below.

Overlapping Clustered Neural Associative Memories

Finally, I read the paper “Verification-based decoding for packet-based low-density parity-check codes” by Luby et al., since it seems relevant to the denoising algorithm we are going to use in the multi-level model described above. However, as it turned out, although their algorithm is very simple and elegant, the one used in peeling decoders is much more similar to the algorithm we have in mind. A summary of the paper and my notes are accessible from this link.

**16-30 April**

In the last two weeks, we presented our new model of neural associative memories to the group of Prof. Gerstner and they really liked it, especially its modularity, i.e., the local connections among neurons. Having received the green light from their group on the soundness of the model, Mr. Amin Karbasi and I have started analyzing the dynamic behavior of the network. Interestingly, it is very similar to the analysis of the \emph{peeling decoder} introduced by Luby et al. in “Efficient Erasure Correcting Codes”. Therefore, I read that paper again more carefully to understand the way they analyze the decoding procedure. Although our neural algorithm is very similar to the operations performed by the peeling decoder, they are not exactly the same. The preliminary analysis of our neural algorithm as well as the summary of “Efficient Erasure Correcting Codes” is explained in the following report.
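For reference, the peeling decoder itself is simple to state: repeatedly find a parity check with exactly one erased variable and recover that variable from the parity of the known ones. A minimal sketch (the function names are mine; this is the textbook erasure-channel version, not our neural variant):

```python
import numpy as np

def peeling_decode(H, x, erased):
    """Peeling decoder for the erasure channel.

    H: binary parity-check matrix (one row per check), x: received word
    (values at erased positions are arbitrary), erased: boolean mask.
    Repeatedly "peels off" any check with exactly one erased neighbor.
    """
    x = x.copy()
    erased = erased.copy()
    progress = True
    while progress and erased.any():
        progress = False
        for check in H:
            unknown = np.flatnonzero(check.astype(bool) & erased)
            if len(unknown) == 1:                 # degree-one check: peel it
                j = unknown[0]
                known = check.astype(bool) & ~erased
                x[j] = x[known].sum() % 2         # parity forces the erased bit
                erased[j] = False
                progress = True
    return x, erased
```

Decoding stops either when all erasures are resolved or when no degree-one check remains; the analysis of Luby et al. tracks exactly how the population of degree-one checks evolves.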

Biocoding project progress report: 16-30 April 2012

Finally, I finished writing the third draft of the journal paper. Although the simulation results are not complete yet, the text is finished. The draft as well as the simulation code can be found below:

A Non-Binary Associative Memory with Exponential Pattern Retrieval Capacity and Iterative Learning - THIRD DRAFT (25 April 2012)

MATLAB Codes for Simulations in the Journal Paper - 25 April 2012

**1-16 May**

In the last two weeks, Mr. Amin Karbasi and I were busy finding an alternative analysis for the proposed neural algorithm, based on the ideas Prof. Amin Shokrollahi had mentioned in our meeting. In the report below, you can find the list of ideas as well as the details of our investigation for some of them.

Biocoding project progress report: 1-16 May 2012

We also prepared the paper which we submitted to the NIPS 2012 conference. The paper is based on the idea of clustered neural associative memory, and the error correction results are very impressive compared to our previous approaches. The PDF of the paper can be found below.

Iterative Learning in Modular Associative Memories

Finally, the MATLAB codes for simulating the paper are accessible from the link below.

MATLAB Codes for Simulations in the NIPS 2012 Paper

**17-31 May**

Vacation

**4-15 June**

In the past two weeks, I was busy preparing for the upcoming ISIT, making the presentation and the additional technical materials to put on arXiv, both of which can be found separately on the ERC log.

I also prepared the poster for the upcoming North American School on Information Theory. The poster, which can also be found on the ERC log, focuses mainly on our recent progress in multi-level neural networks, which is going to be presented at ISIT 2012, and encourages new applications of coding theory in neuroscience.

Mr. Karbasi and I also had a meeting with Prof. Pfister from UT Austin to discuss his recent work on spatially coupled codes. This was triggered by his talk at SuRI, where we noticed a lot of similarity between their model and our clustered neural networks which we have submitted to NIPS 2012. To our benefit, they have proposed a very nice framework to analyze these models which we can easily adapt to our neural approach. I will describe their method in more detail in the following. We will use this framework to expand our research to spatially coupled neural networks, which interestingly seems very plausible according to neurophysiological data.

Biocoding project progress report: 4-15 June 2012

The presentation and the final version of the technical report for the ISIT are accessible from the following links:

Multi-Level Error-Resilient Neural Networks with Learning

ISIT 2012 Presentation: Multi-level Error Resilient Neural Networks

The IT School poster can be accessed via the following link:

Multi-Level Neural Associative Memory with Exponential Pattern Retrieval Capacity

**16-30 June**

In the past two weeks, we were busy applying the analysis proposed for spatially-coupled codes to our model. To this end, we read another paper about this method and worked on designing proper simulation scenarios to test this approach on a database of natural images. This set of images is well suited to the spatially-coupled model since we can divide each picture into overlapping clusters and learn the constraints for each cluster. The clusters in one row of the image then act as component codes forming a Generalized LDPC (GLDPC) code, and the different rows form a spatially coupled GLDPC code, much like the approach proposed in the paper “Approaching Capacity at High Rates with Iterative Hard-Decision Decoding”. This work will hopefully form the basis for a paper which we would like to submit to Allerton 2012.
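The clustering step above is easy to picture in code. A minimal sketch (my own illustration; the cluster length and overlap are arbitrary example parameters) that splits one row of an image into overlapping clusters, each of which would serve as a component code in the GLDPC-style construction:

```python
import numpy as np

def overlapping_clusters(image_row, cluster_len, overlap):
    """Split one row of an image into overlapping clusters.

    Consecutive clusters share `overlap` pixels, which is what couples the
    component codes along the row.
    """
    step = cluster_len - overlap
    clusters = []
    start = 0
    while start + cluster_len <= len(image_row):
        clusters.append(image_row[start:start + cluster_len])
        start += step
    return np.array(clusters)
```

Stacking the rows then gives the spatial coupling: clusters within a row form one GLDPC code, and the shared pixels between neighboring rows couple the codes of different rows.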

Biocoding project progress report: 16-30 June 2012

**1-13 July**

In the past two weeks, Mr. Amin Karbasi and I were busy finishing the paper on applying spatially-coupled codes to neural associative memories, which we submitted to the Allerton 2012 conference. The paper can be found at the link below.

Coupled Neural Associative Memories

The MATLAB code for simulating the synthetic dataset is also provided below.

MATLAB Code for: Coupled Neural Associative Memories

**16-31 July**

In the past two weeks, we were mainly busy writing a technical report on our learning algorithm for the Allerton paper (available below). We have also been working on simulations for the NIPS conference in order to answer some of the reviewers' comments.

Learning Algorithm for Coupled Neural Associative Memories

I have also written a new draft for the journal version of the ITW work, which is available from the following link.

A Non-Binary Associative Memory with Exponential Pattern Retrieval Capacity and Iterative Learning - DRAFT (27 Jul 2012)

**1-17 August**

In the past two weeks or so, our main focus was to address the comments provided by the reviewers of NIPS 2012. To this end, we continued simulating various approaches to memorize features from a dataset of spoken English words. Additionally, we studied a couple of papers mentioned by the reviewers, which turned out to be very helpful. A brief summary of two of these papers is provided in the following report.

Biocoding project progress report: 1-17 August 2012

**20-31 August**

In the past 10 days, I started doing research on some of the widely-used feature extraction methods, in particular Gabor-type wavelets. Gabor filters are of major interest for us because there is some evidence suggesting that similar filters are used at different stages of the human visual system to extract features from images.

Using Gabor-like filters in our simulations was another subject we worked on. We would like to see if utilizing such filters results in features that lie (approximately) on a subspace, which in turn would make it possible for our learning algorithm to memorize them.
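A Gabor filter is just a sinusoidal carrier modulated by a Gaussian envelope; a minimal sketch (the size, wavelength, and bandwidth values are arbitrary example parameters, not the ones from our simulations):

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=4.0):
    """A 2-D Gabor filter: a cosine carrier modulated by a Gaussian envelope.

    Filters of this type respond strongly to edge-like structure at
    orientation `theta` and spatial frequency 1 / `wavelength`.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate into filter frame
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier
```

Convolving an image with a bank of such kernels at several orientations and wavelengths yields the feature vectors whose (approximate) subspace structure we want to test.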

We also read some papers to continue our exploration of the field of pattern recognition and object classification. Among them, one of the most noteworthy approaches is \emph{convolutional neural networks}, the idea of which is very similar to that of spatially coupled LDPC codes, otherwise known as \emph{convolutional LDPC codes}. Interestingly, a variant of such models (called Convolutional Deep Belief Networks) seems to be capable of learning to detect not only edges in images, but also object parts and objects themselves (e.g., faces, cars, etc.). This result becomes even more remarkable when noting that these networks act in an unsupervised manner, i.e., they learn by themselves! Some of the key aspects of these models are explained in a very nice talk given by Prof. Andrew Ng at Google (see this link).
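The analogy between convolutional networks and convolutional (spatially coupled) LDPC codes boils down to weight sharing: the same small kernel is applied at every shift of the input, just as the same component code is repeated along a coupled chain. A one-dimensional toy sketch of this (my own illustration):

```python
import numpy as np

def conv1d_layer(signal, kernel):
    """Apply one shared kernel at every valid shift of the input signal.

    The same weights are reused at each position, which is the structural
    parallel to repeating a component code along a spatially coupled chain.
    """
    n, k = len(signal), len(kernel)
    return np.array([float(signal[i:i + k] @ kernel) for i in range(n - k + 1)])
```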

The summary of some of the other papers is provided in the rest of this report.

Biocoding project progress report: 20-31 August 2012

**1 - 14 September**

In the past two weeks, Mr. Amin Karbasi and I have been busy working on proper models to apply our method to a database of natural images. We have been experimenting with various combinations of feature learning stages as well as non-linear transformations. In this report, we explain two models, one that works and one that fails. However, the one that fails could be modified so that it also works. We explain this modification and discuss problems in which these models can be useful.

Biocoding project progress report: 1-14 September 2012

**18 - 30 September**

In the past two weeks, Mr. Amin Karbasi and I have been busy preparing our work for the ICML 2013 conference. For this purpose, we have written a draft (which can be found separately in the ERC log) and performed new simulations over real datasets. The paper is based on the (rejected) NIPS 2012 draft, with a few modifications, and it now contains simulations over a dataset of spoken English words. The results were very promising and have encouraged us to proceed in this direction.

On a related topic, we have been working to perform simulations over a dataset of natural images. We have played around with different models that suit our error correcting algorithm and, at the moment, have a very promising one, explained in more detail in what follows. In this report, we will also discuss some of the models that have failed and the reasons behind their failure.
Currently, we are interested in memorizing and denoising images, but we also have an eye on extending our algorithm to image classification, something which might require some radical changes in the recall (classification) algorithm.

Biocoding project progress report: 18-30 September 2012

The ICML 2013 draft is also available from the link below:

Iterative Learning and Denoising in Convolutional Neural Associative Memories

Finally, the MATLAB codes for simulating the ICML paper and the progress report are accessible from the following link. The simulation data is also stored separately on a local disk.

ICML 2013 MATLAB Codes

**1 - 18 October**

In the past two weeks, I was mainly working on applying our method to a database of natural images. I have played with different models and tested a few different ideas to find the best result. I will explain these approaches, most of which failed, as well as some ideas I am going to work on in the future.

Biocoding project progress report: 1-18 October 2012

I also worked on finalizing the journal version of the ITW paper. The newest version of this paper can be downloaded from the link below.

ITW Journal Paper Draft - 10 October 2012

The MATLAB code for this period is also available below.

MATLAB Codes for the 1-18 October Period

**19 - 31 October**

In the past days, we have continued working on applying our method to a dataset of natural images. We have managed to explore different approaches, some successful, some not, and some promising avenues to explore further. We will discuss these techniques in more detail in this report. To this end, we have also read the paper “Sparse Filtering”, which addresses a novel way of extracting features for image classification purposes. We will use their approach extensively in our simulations.
On another topic, we read a very nice survey on some connections between compressed sensing and brain activities. The paper, entitled “Compressed Sensing, Sparsity, and Dimensionality in Neuronal Information Processing and Data Analysis”, first summarizes compressed sensing achievements. Then, it discusses applications of compressed sensing in analyzing brain activities and improving imaging methods. Finally, the authors consider various compressed sensing techniques performed by the brain. This last section is very interesting and provides many nice ideas for future research on using compressed sensing ideas to design more efficient neural networks.

Biocoding project progress report: 19-31 October 2012

**1 - 15 November**

In the past two weeks, our main focus was to address the comments of the reviewers on our ICML paper. To this end, we had to implement and test various ideas for extracting features from a dataset of natural images. Furthermore, we unfortunately found out that our simulations on the dataset of spoken English words were also incorrect due to an error in pre-processing the dataset. Thus, we spent most of our time on feature extraction techniques. We have tested a few different approaches, the details of which are given in the following report.

Biocoding project progress report: 1-15 November 2012

**16 - 30 November**

In the past two weeks, I was mainly busy with my courses, but I also spent some time organizing the BibTeX files for the papers I have read during the course of the project.

In addition, together with Dr. Amin Karbasi and Dr. Lav Varshney, we have started to work on a new project on “noisy” neural networks. More specifically, we are considering a neural architecture (similar to our previous models) in which the neurons themselves are noisy. Therefore, not only do we have to deal with external noise, but also with internal noise in the decision making procedure of the neurons. At the moment, we are considering different models for the noise to find one that is realistic and tractable at the same time.

**1 - 18 December**

In the past two weeks, in addition to performing some more simulations for the ICML paper and doing some coursework, we spent some time developing our model for faulty neural networks. Together with Dr. Amin Karbasi and Dr. Lav Varshney, we have decided to consider a model in which neurons “suffer” from bounded internal noise. We will then try to adjust the corresponding neural update thresholds to overcome the effect of this type of noise while also eliminating a reasonable amount of external noise.
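The bounded-noise idea can be sketched as follows. This is a toy illustration of my own (the function names and the exact band condition are assumptions, not the project's final formulation): a neuron's decision is immune to any bounded internal noise as long as its clean weighted input stays outside an uncertainty band around the threshold.

```python
import numpy as np

def noisy_neuron(weights, inputs, threshold, internal_noise):
    """A faulty neuron: it adds a bounded internal noise term to its weighted
    input and fires iff the noisy sum crosses the firing threshold."""
    return int(np.dot(weights, inputs) + internal_noise >= threshold)

def decision_is_robust(clean_sum, threshold, noise_bound):
    """True iff no internal noise with |noise| <= noise_bound can flip the
    decision, i.e. the clean weighted input lies outside the uncertainty band
    [threshold - noise_bound, threshold + noise_bound)."""
    return clean_sum >= threshold + noise_bound or clean_sum < threshold - noise_bound
```

Adjusting the update thresholds then amounts to placing this band so that clean pattern states fall outside it, which is exactly the knob we plan to tune against the internal noise.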

On a related topic, I also read the paper “Performance of LDPC codes under faulty iterative decoding”, which considers a similar setting for decoding LDPC codes over faulty hardware. In the proposed model, both variable and check nodes are noisy, as well as the edges between them. However, the author shows that one can incorporate the noise of the nodes into the noise of the links, which results in a model that is actually very similar to the noisy communication model between variable and check nodes.

The authors show that when the noise is bounded with respect to the message values (e.g., in the BP algorithm, where messages can be very large), one can find settings in which the decoding error probability actually goes to zero, even though there is inherent noise in the decision making procedure. However, when the noise and the messages are of the same order (e.g., in the Gallager decoding algorithm for the BSC channel), we cannot achieve zero error probability. We could still find situations, however, in which the error rate is very small and acceptable in practice.

With our noisy neural model, we would like to see if we can make similar statements and find the “region of usability” for the neural architecture, in which it makes sense to use the network to correct errors rather than introduce more errors than the “channel” itself!

**19 - 31 December**

On Vacation
