January



Report 1 - 19 January 2011
In the past month, we considered new models for neural associative memory. More specifically, we investigated non-binary neural networks, i.e. neural networks in which neurons can have positive integer values as their output instead of being binary. In fact, continuous-valued neurons are also more biologically realistic.
Non-binary neural networks are especially interesting from a coding point of view, since we can use codes over the real field to investigate the behavior of such networks. A good example is the use of lattice codes, which we will consider in more detail in Section 3. We will show that using lattice code-like methods, we will be able to memorize a number of patterns that is exponential in the number of neurons when the noise power is small enough.
In this report, we start by briefly reviewing a number of papers regarding the storage capacity of Hopfield networks. We will then consider some papers that address non-binary Hopfield networks. We will then investigate applications of lattice codes in neural networks. Section 4 of the report provides some simulation results on the use of lattice codes as a means of associative memory. Finally, Section 5 concludes the report and gives a number of ideas for future work.


February



ISIT Paper
During the past month, Raj and I were mainly busy writing a paper for ISIT about applications of coding theory in neural associative memories. More specifically, the paper entails a summary of our work on Gold sequences and shows that neural pre-coding increases the storage capacity of Hopfield networks and certain Bidirectional Associative Memories (BAMs). The submitted version of the paper can be found via the link above.


March



Report 2 - 7 March 2011
This report is mainly about clearly introducing our new approach of using lattice codes as a means of increasing the pattern retrieval capacity of neural associative networks. The suggested method is based on non-binary neurons in a hetero-associative neural network with asymmetric weights. We will show that if we select the weights properly, we will be able to store a number of patterns that is exponential in the size of the network.
In the report, we first explain the model in more detail. A proper way to select the weights, based on lattice codes from coding theory, is discussed next. Some simulation results are provided as well to evaluate the proposed approach. Finally, we suggest some extensions to the current model which might improve its performance.
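
To make the retrieval idea concrete, here is a minimal MATLAB sketch of lattice-based retrieval via Babai's rounding method. The generator matrix G, the integer ranges and the noise level are illustrative assumptions, not the exact construction of the report:

    % Minimal sketch: patterns are lattice points x = G*z for an integer
    % vector z.  G is a hypothetical integer generator matrix, built
    % lower-triangular with nonzero diagonal so that it is invertible.
    n = 8;                                       % number of neurons
    G = tril(randi([-2 2], n, n), -1) + diag(randi([1 3], n, 1));
    z = randi([0 3], n, 1);                      % integer message
    x = G * z;                                   % stored pattern
    y = x + 0.01 * randn(n, 1);                  % query with small noise
    z_hat = round(G \ y);                        % Babai rounding step
    x_hat = G * z_hat;                           % retrieved pattern
    disp(isequal(x_hat, x));                     % 1 if retrieval succeeded

Since each neuron can take several integer values, the number of such lattice patterns grows exponentially in n, which is the regime of interest here.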


May



ITW Paper - 13 May 2011
In the past two months, Raj and I were busy developing algorithms to increase the pattern retrieval capacity of neural associative memories. We have proposed two simple algorithms for an associative memory with the ability to store a number of patterns that is exponential in the number of neurons. The suggested methods employ non-binary neurons to enforce a set of constraints on the patterns that the network aims to memorize. These constraints are later used to correct the errors in the input and return the closest pattern in response to a given corrupted query. It was analytically shown that the proposed scheme is guaranteed to correct any two erroneous nodes. Simulation results also show that the suggested algorithms are able to maintain a small probability of error when the number of erroneous nodes increases.
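
As a rough illustration of the recall phase, the following MATLAB sketch assumes the memorized patterns satisfy a set of linear constraints H*x = 0 and repeatedly adjusts the node implicated in the most violated constraints. The matrix H, the update rule and the stopping test are simplifying assumptions, not the exact algorithms of the paper:

    % Hedged sketch of bit-flipping-style recall with integer-valued nodes.
    function x = bitflip_recall(H, x, max_iter)
        for it = 1:max_iter
            s = H * x;                         % syndrome: zero iff x is valid
            if all(s == 0), return; end        % all constraints satisfied
            feedback = H' * sign(s);           % per-node violation score
            [~, j] = max(abs(feedback));       % most suspicious node
            x(j) = x(j) - sign(feedback(j));   % move it one level
        end
    end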

The code for simulating the results can be downloaded from the link below:
Neural Error Correction with Nonbinary Neurons and Expander Graphs


June



4-20 June
Expander codes are closely related to our neural constraint enforcing algorithm, as expander graphs and bit flipping algorithms are used in both cases. Therefore, we read a bit more about applications of expander graphs in coding methods. The papers we considered were interesting in the sense that they provide some nice mathematical background, which could be helpful in finding suitable expander graphs for our purposes and in obtaining theoretical bounds on the number of errors correctable by the suggested method.

The full report is available via the following link:
Biocoding project progress report: 4-20 June 2011

On a separate topic, we presented a poster in the I&C Research Day on June 15, 2011. The poster can be downloaded from the following link:
Neuroscience meets Coding theory: Neural Associative Memory with Exponential Pattern Retrieval Capacity

21-30 June
In the neural error correcting algorithm, so far we had assumed that the constraint matrix was given and all that is required is a message passing algorithm to correct the errors. Since we have found such an algorithm, the next step is to find a way to determine the constraint matrix from a given data set whose patterns satisfy some constraints. This problem is closely related to finding the null basis of a linear code (the parity check matrix). However, not every such matrix is of interest to us, since we look for a sparse, expander-like parity check matrix. To find a method that helps us obtain such matrices, we have started reading some papers on this topic, the details of which are given below. Furthermore, we gave a talk on our work (entitled “Exponential Pattern Retrieval Capacity with Non-Binary Associative Memory”) as an ALGO Seminar on June 29.
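
As a baseline, the dense version of this problem is immediate in MATLAB; the difficulty is precisely the extra sparsity and expansion requirements. A small sketch with hypothetical data:

    % Given training patterns as the rows of X, any basis of null(X)
    % yields constraints W with X*W' = 0.  MATLAB's null() returns an
    % orthonormal basis, which is typically dense -- hence not yet the
    % sparse, expander-like matrix we are after.
    X = randi([0 3], 40, 50);   % hypothetical data: 40 patterns of length 50
    W = null(X)';               % each row of W is a valid constraint
    disp(norm(X * W'));         % ~0: every constraint holds on the data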

The full report is available via the following link:
Biocoding project progress report: 21-30 June 2011

The slides for the ALGO seminar are also available via the following link:
Slides: Exponential Pattern Retrieval Capacity with Non-Binary Associative Memory


July



1-14 July
In the past two weeks, we were mainly busy extending the ISIT paper in order to obtain a more accurate bound on the number of errors that can be corrected by the proposed neural associative mechanism. The problem is closely related to the roots of an elliptic curve over a finite field.

We have also worked on preparing the powerpoint for the upcoming ISIT conference.

15-31 July
In the past week, we read some papers about higher order neural networks, since they have similar properties to our double layer constraint enforcing network. On the plus side, there are learning rules that enable us to find the hidden structure in a set of given training patterns. However, the downside is that the complexity of such learning mechanisms is prohibitive.

The other subject we read some papers about was matrix recovery and completion methods. In this problem, we are interested in inferring a low rank matrix from a set of available linear measurements. In a sense, this problem is closely related to the problem of learning the constraint matrix in our neural compressed sensing problem. However, the methods given in most of the papers in this field are not realizable by simple neurons. The papers mentioned below give some conditions on the desired matrix and the measurements that are required to ensure recoverability.

On a different topic, we prepared the powerpoint for the upcoming ISIT conference. We also had a pizza talk on July 29, 2011.

The full report is available via the following link:
Biocoding project progress report: 15-31 July 2011


August



ISIT Conference, Saint Petersburg, Russia
I took part in the IEEE International Symposium on Information Theory from 31 July to 5 August. I presented our paper with Raj, Amin and Prof. Gerstner, titled “Neural Pre-Coding Increases the Pattern Retrieval Capacity of Hopfield and Bidirectional Associative Memories”, which was about using low-correlation patterns to increase the storage capacity of binary neural associative memories. The final version of the paper as well as the slides are accessible via the links below:

ISIT 2011 Camera Ready Paper: Neural Pre-Coding Increases the Pattern Retrieval Capacity of Hopfield and Bidirectional Associative Memories

Slides: Neural Pre-Coding Increases the Pattern Retrieval Capacity of Hopfield and Bidirectional Associative Memories

8-15 August
In the past week, we continued reading some papers to help us find a way to solve the problem of learning the constraint matrix from a set of given patterns. Given that our approach to enforcing constraints is very similar to higher order neural networks, we read a bit about this type of neural network. Interestingly, it seems that one can use higher-order neurons to learn the constraint matrix. The only major drawback is that this approach has a very high computational cost.

Another interesting subject close to our interests is the problem of learning subspaces from sample data. In this problem, we are given a set of patterns that belong to a subspace and are asked to learn the basis vectors of that subspace from these patterns using a neural network. As we will see, this problem is closely related to our problem of enforcing constraints on neural patterns. However, ensuring a sparse connectivity matrix still remains a big issue.

Finally, as mentioned in previous reports, expander-based compressive sensing approaches are very similar to the method suggested in our ITW paper. In both cases, expander graphs are used to recover sparse signals from linear measurements. The only difference is that we have more restrictions on the algorithm in neural networks, as it must be simple enough to be implemented neurally. Because of these similarities, we have also been reading a few papers on compressive sensing methods and the restrictions required on the measurement matrix in order to ensure correct recovery. The report is accessible via the link below:
Biocoding project progress report: 8-15 August 2011


16-28 August
During the last week, we were mostly focused on reading some new papers that address ideas related to learning the constraint matrix of a constraint satisfaction problem from the patterns that satisfy those constraints. Although the following papers do not address the exact problem we are interested in, with slight modifications they could be made suitable for our purpose. The report is accessible via the link below:
Biocoding project progress report: 16-28 August 2011




September



20-30 September
In the past 10 days, after my vacation, I got back to reading some new papers to find a way to implement the learning phase in our neural associative memory with exponential capacity. These include a paper about learning the basis vectors of the null space of a set of patterns, one on compressive sensing, and two more on various topics related to associative memory. I also worked on implementing one of the ideas for the learning phase, which was partially successful. In addition, I defined two semester projects on our two neural projects, one of which was taken by a master's student.

In this report, you will find a summary of the papers I read as well as some details on the idea I have been working on.
Biocoding project progress report: 20-30 September 2011




October



1-15 October
In the last two weeks, I was mainly busy implementing an idea to solve the learning part of our neural constraint method, i.e. to find a sparse matrix W such that for all vectors x in the training data set we have Wx = 0. The idea is based on a method to identify the null basis vectors of a subspace from the vectors that belong to that subspace. The end result has been a partial success so far, in the sense that the MATLAB code finds a relatively sparse basis for the null space from the patterns in the data set. The performance in the error correcting phase is also not bad; however, since the end result is not an expander graph, it is still not comparable to what we expect. Furthermore, since the code is slow to run, I have not yet managed to run extensive simulations over large graphs to see if the performance improves for larger network sizes.

In this report, you will find the details of the algorithm as well as some ideas for future work.
Biocoding project progress report: 1-15 October 2011


16-20 October, The Information Theory Workshop (ITW)
I took part in the IEEE Information Theory Workshop from 16 to 20 October. I presented our paper with Raj and Amin, titled “Exponential Pattern Retrieval Capacity with Non-Binary Associative Memory”. In this paper, we suggest a method that employs non-binary neurons to enforce a set of constraints on the patterns that the network aims to memorize. These constraints are later used to correct the errors in the input and return the closest pattern in response to a given corrupted query. It was analytically shown that the proposed scheme is guaranteed to correct any two erroneous nodes. Simulation results also show that the suggested algorithms are able to maintain a small probability of error when the number of erroneous nodes increases.

The final version of the paper as well as the slides are accessible via the links below:

Exponential Pattern Retrieval Capacity with Non-Binary Associative Memory

Slides: Exponential Pattern Retrieval Capacity with Non-Binary Associative Memory


24-31 October
In the last week, I was mainly busy parallelizing the code that implements the null space learning problem. The motivation was the need for larger-scale simulations, which require a lot of computational time. Therefore, to speed things up, I modified the original algorithm so that it can be run in parallel on our cluster. Aside from that, I managed to read a new paper on an idea similar to the one I am implementing now.

The report below contains more details on the code implementation as well as the paper.
Biocoding project progress report: 24-31 October 2011

The MATLAB code for the parallel version of the algorithm is accessible via the following link:
Null Space Method Implementation - Parallel Version 1


November



1-15 November
In the last two weeks, I read a couple of papers on compressed sensing and L1-norm minimization techniques, because for the learning phase of our neural associative memory we are interested in finding some sparse weight vectors that are orthogonal to a set of given patterns. If we put all such patterns in a big M*n matrix X, where M is the number of patterns and n is their length, then our problem is to find a sparse weight vector w such that Xw = 0. That is the reason I have been working on L1-norm minimization techniques: we have exactly the same problem in compressed sensing, where, in order to find a sparse answer, L1-norm minimization is used since L0-norm minimization is computationally intractable. In this report, you can find a summary of the papers that I have read along with some comments and ideas. I have also tried to implement different ideas for learning sparse matrices from the training data set.
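
As a concrete example of the relaxation, the following MATLAB sketch casts the problem as a linear program: minimize ||w||_1 subject to Xw = 0, with one entry of w pinned to 1 to exclude the trivial all-zero solution. The random data and the pinned index k are assumptions for illustration; how sparse the minimizer actually is depends on the structure of the data:

    % l1-minimization for a sparse null vector via linprog.
    % Variables z = [w; u] with -u <= w <= u, so that sum(u) = ||w||_1.
    M = 30; n = 50;                  % M patterns of length n, M < n
    X = randn(M, n);                 % hypothetical training patterns
    k = 1;                           % pinned coordinate (assumed)
    f   = [zeros(n, 1); ones(n, 1)]; % objective: sum(u)
    A   = [ eye(n), -eye(n);         %  w - u <= 0
           -eye(n), -eye(n)];        % -w - u <= 0
    b   = zeros(2 * n, 1);
    ek  = zeros(1, n); ek(k) = 1;
    Aeq = [X, zeros(M, n); ek, zeros(1, n)];   % X*w = 0 and w(k) = 1
    beq = [zeros(M, 1); 1];
    zz  = linprog(f, A, b, Aeq, beq);
    w   = zz(1:n);                   % candidate sparse null vector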


Biocoding project progress report: 1-15 November 2011


16-30 November

In the last two weeks, I continued working on compressive sensing methods and their application to neural associative memories. I also worked on an iterative algorithm to learn a sparse vector that is orthogonal to the patterns belonging to a subspace. This algorithm corresponds to the learning phase of the neural network proposed in our ITW paper, which memorizes patterns that belong to a subspace. I was also able to provide a first sketch of a proof of convergence of the algorithm. Simulation results suggest that the proof is essentially correct, although some technical details are still missing.
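
The sketch below shows the kind of iterative rule in question, with assumed parameters: each update reduces the projection of w onto one training pattern, and a periodic soft-threshold step promotes sparsity. The step size, threshold and renormalization are illustrative choices, not necessarily the exact algorithm whose convergence is analyzed in the report:

    % Iteratively learn a sparse vector (approximately) orthogonal to all
    % training patterns (rows of X).  All parameters are assumptions.
    M = 30; n = 50;
    X = randn(M, n);                      % hypothetical training patterns
    w = randn(n, 1); w = w / norm(w);
    alpha = 0.1; theta = 0.01;            % step size, sparsity threshold
    for epoch = 1:200
        for i = 1:M
            x = X(i, :)';
            w = w - alpha * (x' * w) * x / (x' * x);  % shrink <x, w>
        end
        w = sign(w) .* max(abs(w) - theta, 0);        % soft threshold
        w = w / max(norm(w), 1e-12);                  % keep w away from zero
    end
    disp(norm(X * w));                                % small if it converged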

I also read a couple of papers and tested some of the approaches for sparse estimation proposed in the Approximate Message Passing (AMP) algorithm by Donoho et al. I explored a couple of new ideas about a different architecture to memorize neural patterns as well. However, these ideas have yet to mature. In what follows, one can find the details of the proposed learning algorithms and the first draft of the convergence proof.

In addition, I have summarized the important points of the papers on compressed sensing that are most relevant to our work. In particular, one can find some useful remarks on the paper “Message-Passing Algorithms for Compressed Sensing”, which proposes an elegant iterative algorithm for recovering sparse signals.
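
For reference, here is a compact MATLAB sketch of the AMP iteration from that paper, with a simple threshold schedule assumed in place of the paper's carefully tuned policy:

    % AMP for recovering a sparse x0 from y = A*x0: soft thresholding of
    % the matched-filter estimate plus the Onsager correction on the
    % residual, which distinguishes AMP from plain iterative thresholding.
    m = 100; n = 300; k = 10;            % measurements, dimension, sparsity
    A = randn(m, n) / sqrt(m);           % normalized Gaussian measurements
    x0 = zeros(n, 1); p = randperm(n); x0(p(1:k)) = randn(k, 1);
    y = A * x0;
    eta = @(u, t) sign(u) .* max(abs(u) - t, 0);      % soft threshold
    x = zeros(n, 1); z = y; delta = m / n;
    for t = 1:50
        u = x + A' * z;                  % matched-filter estimate
        theta = norm(z) / sqrt(m);       % crude threshold rule (assumed)
        x = eta(u, theta);
        z = y - A * x + (z / delta) * mean(abs(x) > 0);  % Onsager term
    end
    fprintf('relative error: %.3g\n', norm(x - x0) / norm(x0));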

The report below contains more details on the code implementation as well as the papers I read.
Biocoding project progress report: 16-30 November 2011

The zip file below contains all the necessary MATLAB codes for performing the learning phase in parallel as well as some pieces of code to analyze the results in a graphical way:
MATLAB Codes for Performing and Analyzing the Learning Phase


December



1-15 December
In collaboration with Mr. Amin Karbasi, during the last two weeks I was mainly working on two ideas: extending the neural error correcting algorithm in our ITW paper to multi-level networks, which gives it better error correction capabilities and makes it closer to real-world scenarios; and extending the message passing approach to compressive sensing proposed by Donoho et al. into a framework for the learning phase of our neural algorithms.

In this report, you can read how the first effort was rather successful and how our attempt at the second one failed.
Biocoding project progress report: 1-15 December 2011

The zip file below contains all the necessary MATLAB codes for performing the learning and recall phases as well as some pieces of code to analyze the results in a graphical way:
MATLAB Codes for Implementing and Analyzing the Multi-level Neural Associative Memory


16-31 December
In the last two weeks, Mr. Amin Karbasi and I were busy extending the approach introduced in the paper “Message-Passing Algorithms for Compressed Sensing” by Donoho et al. to the case of learning a sparse vector that is orthogonal to a set of S patterns of length n with non-negative integer entries, stored in an S*n matrix X.

In the following report, you can find the details of the proposed algorithm and its proof of convergence.
Biocoding project progress report: 16-31 December 2011

The zip file below contains all the necessary MATLAB codes for performing the learning phase in parallel as well as some pieces of code to analyze the results in a graphical way:
MATLAB Codes for Implementing and Analyzing the Neural AMP Approach

