EPFL

Algo+LMA

**1-6 January**

On Vacation

**7 January - 11 January**

This week I was mostly busy finishing the technical report for one of my courses, "Computational Game Theory and Applications". Although it is not directly related to my thesis and projects, it might have some nice connections to both coding theory and distributed systems.

The goal of the project, entitled "Evolutionary Game Theory and Development of Language", is to model the development of human languages in an evolutionary game-theoretic framework. The model is based on the famous work of Martin A. Nowak, and we try to extend it to situations in which communication takes place over a noisy channel. What we are interested in is whether the population chooses longer, but more costly, words to reduce the probability of error, or tolerates a high probability of error and sticks with shorter, cheaper words. From coding theory, we know that the first scenario is the more desirable one, and based on our simulations it also seems to be the case for a population of individuals who maximize their payoff (i.e., the number of successful communications minus the cost of communication), at least in some scenarios.
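As a rough illustration of the kind of dynamics involved (a toy Python sketch with invented payoff numbers, not the model from the report): two word strategies compete under replicator dynamics, where a strategy's payoff is the probability of a successful two-way exchange minus the speaker's own cost.

```python
import numpy as np

# Two word strategies: "short" (cheap, error-prone) and "long" (costly, reliable).
# These numbers are invented purely for illustration.
ERR = np.array([0.30, 0.05])   # channel error probability per strategy
COST = np.array([0.05, 0.20])  # cost of uttering a word per strategy

# Payoff of strategy i talking with strategy j:
# both directions of the exchange must succeed, minus the speaker's own cost.
A = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        A[i, j] = (1 - ERR[i]) * (1 - ERR[j]) - COST[i]

def replicator(x, steps=20000, dt=0.01):
    """Discrete-time replicator dynamics on the 2-strategy simplex."""
    for _ in range(steps):
        f = A @ x                      # fitness of each strategy
        x = x + dt * x * (f - x @ f)   # above-average strategies grow
        x = np.clip(x, 0.0, 1.0)
        x = x / x.sum()
    return x

x = replicator(np.array([0.9, 0.1]))   # start with mostly short-word users
```

For these particular numbers the longer, more reliable words take over the population, in line with the report's observation that the "coding-theoretic" outcome can emerge from payoff maximization alone.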

I am going to work further on this project as it seems really interesting and has a nice connection to coding theory. The technical report for the course can be found below.

Evolutionary Game Theory and Development of Language

**14 January - 31 January**

In the past two weeks, we continued working on the faulty neural networks model. We were able to formulate the problem in a neat fashion and, to some extent, analyze the model analytically. However, the most amazing part came from the simulations, which show (assuming we haven't made any mistakes) that a certain amount of internal noise actually improves the network's performance in correcting errors, compared to a faultless neural network! Although it sounds surprising, on further inspection it makes sense: a fair amount of noise acts much like mutation in a genetic algorithm and helps the algorithm escape local minima. We have submitted this model to ISIT 2013 and will hopefully continue working on it to further solidify our results.
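The escape-from-local-minima intuition can be shown with a deliberately simple toy, entirely separate from the network model itself: noisy gradient descent on an asymmetric double well (a Python sketch of our own, with made-up constants).

```python
import numpy as np

# Asymmetric double well: shallow local minimum near x = +0.96,
# deeper global minimum near x = -1.04.
f = lambda x: (x**2 - 1) ** 2 + 0.3 * x
grad = lambda x: 4 * x * (x**2 - 1) + 0.3

def descend(x, rng=None, steps=2000, lr=0.02):
    """Gradient descent; if an rng is given, add annealed Gaussian noise."""
    for t in range(steps):
        x -= lr * grad(x)
        if rng is not None:
            x += 0.5 * (1 - t / steps) * rng.normal()  # noise fades out
        x = np.clip(x, -2.0, 2.0)
    for _ in range(200):  # final noise-free polish into the nearest minimum
        x = np.clip(x - lr * grad(x), -2.0, 2.0)
    return x

x_det = descend(1.0)  # noiseless descent: stuck in the shallow well
rng = np.random.default_rng(0)
x_best = min((descend(1.0, rng) for _ in range(50)), key=f)
```

Started in the shallow basin, the noiseless run stays there, while the noisy restarts reliably discover the deeper well, which is the qualitative effect we observed in the simulations.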

The ISIT paper can be found in the following link.

Neural Networks Built from Unreliable Components

The technical report for the ISIT paper, which contains full proofs and extended results, can also be accessed from the following link.

Neural Networks Built from Unreliable Components - Technical Report

The MATLAB code for simulating the results is also given in the following zip file.

MATLAB Files for Neural Networks Built from Unreliable Components

Finally, a draft that contains some of our ideas and analysis that are not in the paper can be found below.

Neural Memories Based on Faulty Elements - V1

In addition to the faulty neural model, we also sent our *coupled neural model* to ISIT 2013. This paper is virtually the same as the one we submitted to Allerton 2012 (and got rejected!), except for some small corrections and a general tidying up of the paper.

The paper itself can be found from the following link.

Coupled Neural Associative Memories

And the technical report of the paper, which contains the proofs, can be found below.

Coupled Neural Associative Memories - Technical Report

**1-15 February**

In the past two weeks I was mainly busy organizing some of the stuff we had done in the past (the MATLAB codes, reports, etc.). We also finally finished writing the journal paper for our ITW work and submitted it to the IEEE Transactions on Neural Networks and Learning Systems.
We also re-initiated our simulations on the natural images dataset. We are testing some new ideas, some of which are explained in the following report. We are also going to host two master's students for their semester projects, to help us with feature extraction algorithms (both on the theoretical side and in the implementation).

The report can be found in the following link.

Biocoding project progress report: 1-15 February 2013

The MATLAB files for processing the image database, as well as preliminary code for classification, can be accessed from the following links.

MATLAB Files for Database Preparation - February 2013

MATLAB Files for Classification - February 2013

Finally, the journal paper and its extended version, which we have submitted to arXiv, can be found in the following links.

A Non-Binary Associative Memory with Exponential Pattern Retrieval Capacity and Iterative Learning

A Non-Binary Associative Memory with Exponential Pattern Retrieval Capacity and Iterative Learning: Extended Results

**18-28 February**

In the past 10 days, we continued our work on neural networks built from unreliable components and on constructing feature extraction methods suitable for our learning algorithm. Regarding the former, we studied some papers on the analysis of LDPC decoders built from faulty hardware, as well as some natural phenomena similar to our observation that a certain amount of noise actually improves a system's performance in correcting external errors.

As for the feature extraction methods, we started reading about Non-Linear PCA (NLPCA) methods, which extract non-linear structure from the input data. We then applied some ideas borrowed from this approach to our learning algorithm to see if it improves the learning results (it did). Furthermore, we tested different approaches to performing clustering in the recall phase, the details of which can be found in the following report.
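For reference, one standard route to non-linear PCA is kernel PCA; the sketch below (plain NumPy, an RBF kernel, and invented parameters — not necessarily the NLPCA variant discussed in the report) extracts non-linear components as the top eigenvectors of a centered kernel matrix.

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel, on data X of shape (n_samples, n_dims)."""
    # Pairwise squared distances and the RBF kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2)
    # Center the kernel matrix in feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Top eigenvectors of the centered kernel give the non-linear components
    w, v = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    return v[:, idx] * np.sqrt(np.abs(w[idx]))

rng = np.random.default_rng(0)
Z = rbf_kernel_pca(rng.normal(size=(50, 3)), n_components=2, gamma=0.5)
```

Because the kernel matrix is centered, the extracted component scores are zero-mean by construction.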

I have also met with the two students doing their master's semester projects; we are working on implementing various feature extraction methods, and possibly on building one of our own.

The report can be found in the following link.

Biocoding project progress report: 18-28 February 2013

The MATLAB files for processing the image database, as well as preliminary code for classification, can be accessed from the following link.

MATLAB Files for the Progress Report of 18-28 February 2013

**3-25 March**

In the past three weeks, we worked a bit more on the neural network built from unreliable components. We performed more simulations to assess the performance of the algorithm from different aspects, and did some theoretical work to obtain more accurate bounds on the final BER of the algorithm. The details are provided in the sequel.

We also worked on implementing a multi-layer neural network to perform Non-Linear Minor Component Analysis (NLMCA). The learning algorithm is finished and working, but the recall algorithm still needs some tinkering.

We also continued our simulations on the CIFAR-10 dataset, in line with our ICML paper. We tested some new ideas, but to no avail (for now), as we still lack a neural network capable of both learning and performing the recall operation to the required level at the same time. We will continue in this direction and test some other ideas with the help of the new master's students who have taken semester projects on this topic.
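As a point of reference for the NLMCA work, the linear version of minor component analysis is simply the eigen-direction of least variance; here is a minimal NumPy sketch of that baseline (our own illustration — the multi-layer network generalizes this to non-linear constraints).

```python
import numpy as np

def minor_components(X, k=1):
    """Return the k directions of least variance (minor components) of X."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    w, v = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return v[:, :k]              # eigenvectors of the smallest eigenvalues

# Example: points lying (almost) on the plane x + y + z = 0; the minor
# component should recover the plane's normal direction (1,1,1)/sqrt(3).
rng = np.random.default_rng(0)
b1 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
b2 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)
X = rng.normal(size=(200, 2)) @ np.stack([b1, b2])
X += 0.01 * rng.normal(size=(200, 3))
v = minor_components(X, 1)[:, 0]
```

The minor component is exactly the direction "orthogonal to the data", i.e., the linear constraint the data (approximately) satisfies.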

Biocoding project progress report: 3-25 March 2013

**26 March-7 April**

Vacation

**8-30 April**

In the past few weeks, we were mainly busy testing a new idea for learning non-linear constraints from the input data. Surprisingly, a simple trick does the job, as explained in the report below.

We also implemented the recall phase for the Non-Linear Minor Component Analysis (NLMCA) idea. It seems that, since we have a two-layer neural network, we should go for the "scheduling-based" recall process.

I was also working with the semester project students on their projects. In one of the projects, we are applying the Maximization of Mutual Information (MMI) idea to increase the classification rate of an SVM classifier on the CIFAR-10 dataset.
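For the curious, the core quantity behind MMI can be estimated very simply; below is a small NumPy sketch of a histogram-based mutual-information estimate between one real-valued feature and the class labels (a generic illustration of ours — the actual CIFAR-10 pipeline in the project is more involved).

```python
import numpy as np

def mutual_information(feature, labels, bins=10):
    """Plug-in estimate of I(feature; labels) in nats, with the feature
    discretized into equal-width histogram bins."""
    edges = np.histogram_bin_edges(feature, bins)
    f = np.digitize(feature, edges[1:-1])  # bin index 0 .. bins-1
    mi = 0.0
    for a in np.unique(f):
        for b in np.unique(labels):
            pab = np.mean((f == a) & (labels == b))   # joint probability
            pa, pb = np.mean(f == a), np.mean(labels == b)
            if pab > 0:
                mi += pab * np.log(pab / (pa * pb))
    return mi

# A feature whose mean depends on the label carries much more information
# about the label than pure noise does.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 200)
informative = 2.0 * labels + 0.5 * rng.normal(size=400)
noise = rng.normal(size=400)
```

Ranking features by such a score (and keeping the most informative ones) is one simple way to use mutual information when building a classifier.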

Biocoding project progress report: 8-30 April 2013

And here is the MATLAB code for this report.

MATLAB Files for the Progress Report of 8-30 April 2013

We also addressed the reviewers' comments from ISIT and re-submitted our manuscript "Coupled Neural Associative Memories" to ITW 2013. The paper can be accessed from the following link.

Coupled Neural Associative Memories

And here is the longer version of the paper which is put on arXiv.

Coupled Neural Associative Memories - Technical Report

**1-20 May**

In the past 20 days, we worked on extending the idea of learning polynomial non-linear constraints to the case of classifiers. More specifically, instead of learning the non-linear curve that is "orthogonal" to the data in one class, we learn the non-linear curve that separates two classes in the n-dimensional space. This is hardly a new topic, as there are numerous non-linear kernel SVMs. However, the proposed method seems easy to implement and rather fast. More extensive investigation is necessary, though.
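To make the two ingredients concrete, here is a hedged NumPy sketch as we would phrase them generically: a polynomial constraint that (approximately) vanishes on one class, found as the smallest singular direction of a lifted data matrix, and a least-squares polynomial separator between two classes. The helper names and the least-squares fit are illustrative choices of ours, not necessarily the exact method in the report.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree=2):
    """Lift X of shape (n, d) to all monomials up to the given degree."""
    n, d = X.shape
    cols = [np.ones(n)]  # constant term
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.stack(cols, axis=1)

def learn_constraint(X, degree=2):
    """Coefficients c with p(x) = c . phi(x) ~ 0 on the data: the right
    singular vector of the lifted data with the smallest singular value."""
    Phi = poly_features(X, degree)
    _, _, Vt = np.linalg.svd(Phi, full_matrices=False)
    return Vt[-1]

def learn_separator(X0, X1, degree=2):
    """Least-squares polynomial separator: sign(c . phi(x)) gives the class."""
    Phi = poly_features(np.vstack([X0, X1]), degree)
    y = np.concatenate([-np.ones(len(X0)), np.ones(len(X1))])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return c

# Example: points on the unit circle satisfy the degree-2 constraint
# x^2 + y^2 - 1 = 0, and two concentric circles are separated by radius.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
c = learn_constraint(circle, degree=2)
w = learn_separator(0.5 * circle, 2.0 * circle, degree=2)
```

Both operations reduce to linear algebra after the polynomial lifting, which is why this kind of approach is cheap compared with training a kernel SVM.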

We also made some progress with the master's semester projects. In one of the projects, the Maximization of Mutual Information (MMI) idea has been successfully applied to the CIFAR-10 dataset. Preliminary results look fascinating.

In the other project, a GUI is being built to apply different feature extraction techniques for compressing images while keeping the reconstruction quality at an acceptable level.

Biocoding project progress report: 1-20 May 2013

And here is the MATLAB code for this report.

MATLAB Files for the Progress Report of 1-20 May 2013

**21-31 May**

In addition to supervising the master's semester projects, in the past 10 days we were busy preparing our *fault-tolerant neural network idea* (which was rejected at ISIT) for submission to NIPS 2013. The submitted draft is accessible from the following link.

Noise-Enhanced Associative Memories

**3-14 June**

In the past 10 days or so, I was mainly busy with the semester projects. Thankfully, we got good results in the end. Both students passed with a grade of 6. The final reports can be found below.

Nonlinear Dimensionality Reduction Techniques and Their Application in Neural Networks

The MATLAB Code for the Project "Nonlinear Dimensionality Reduction Techniques and Their Application in Neural Networks"

Additional MATLAB Files for the Project "Nonlinear Dimensionality Reduction Techniques and Their Application in Neural Networks"

Implementing Some Feature Extracting Techniques to Model Human Visual System

The MATLAB Code for the Project "Implementing Some Feature Extracting Techniques to Model Human Visual System"

We also prepared a poster for the upcoming ICML 2013 and the I&C research day. The poster can be found in the following link.

Neuroscience meets Coding theory: Iterative Learning and Denoising in Convolutional Neural Associative Memories

Back to the previous page