This shows you the differences between two versions of the page.
en:projects:details:neuralsemproject [2011/09/23 15:33] amir
en:projects:details:neuralsemproject [2016/06/23 11:26] (current)
Line 4:
  The database entry:
  "type" is one of the following: phd theses, phd semester, master thesis, master semester, bachelor semester
- "status" is one of the following: available, taken, completed (please update it accordingly!)
+ "state" is one of the following: available, taken, completed (please update it accordingly!)
  "by" should be filled in as soon as the project is taken/completed
  "completed_dt" is the date when the project was completed (YYYY-MM-DD).
Line 11:
  */
  ---- dataentry project ----
- title : Artificial Neural networks with error correcting abilities and high storage capacities
+ title : Implementing Some Feature Extracting Techniques to Model Human Visual System
  contactname: Amir Hesam Salavati
  contactmail_mail: hesam.salavati@epfl.ch
Line 17:
  contactroom: BC 160
  type : master semester
- status : available
+ state : completed
  created_dt : 2010-11-15
- taken_dt : YYYY-MM-DD
+ taken_dt : 2013-01-19
- completed_dt : YYYY-MM-DD
+ completed_dt : 2013-06-12
- by : the full name of the student
+ by : Diego Marcos Gonzalez
- output_media : en:projects:neural_storage_capacity.pdf|Download Abstract in PDF Format
+ output_media : en:projects:master_semester:marcos_salavati_semester_project_report_2013.pdf|Download Project Report in PDF Format
  table : projects
  ======
Line 30:
  \\
  /* Description of the project */
- ===== Background =====
- \\
- Memorizing patterns and correctly recalling them later is an essential ingredient of neural activity. Over the past 25 years, a number of neural networks have been invented to memorize and recall patterns. Interestingly, some of these networks are able to recall the correct pattern even if the input pattern contains errors and is partially corrupted by noise. In this regard, these artificial networks resemble error correcting codes, i.e. they are able to recognize the correct pattern in the presence of noise.
-
- However, the storage capacity of these networks is quite small compared to their counterparts in coding theory. Given that modern codes use the same basic structure for error correction as the one used by neural networks, i.e. a bipartite graph with local message passing, it is interesting to consider applications of modern coding theory to increase the storage capacity of neural networks by finding appropriate weights as well as proper update rules for the neural graph.
- \\
- \\
  ===== Project Description =====
  \\
- This project has three phases:
-   - In the first phase, we become familiar with the main concepts of neural networks, associative memory and, to some extent, codes on graphs as well as compressed sensing.
-   - Next, we focus on implementing proper neural update rules for a specific neural network with error correcting capability and compare them based on their convergence speed and error correction ability. This phase is more
-   - Finally, if time permits, we concentrate on the process of learning the connectivity matrix of such neural networks from training data sets.
- The first phase requires reading some background literature. The second phase requires programming skills (preferably in MATLAB) in order to implement the considered algorithms. Finally, the third phase is a combination of theoretical work (doing research) and implementing some new ideas to find the best one in practice.
- This project is suitable for students interested in neural networks, coding theory and mathematics who prefer a combination of theoretical and empirical work.
+ In computer vision, there are various techniques for extracting important features from images. These features are later used in pattern recognition, image classification, etc. Some of these techniques are comparable to models of specific parts of the human visual system.
+ In this project, we are interested in implementing some of the widely used feature learning (extraction) techniques and applying them to a dataset of natural images. This usually corresponds to solving an optimization problem to find the features that represent the data as accurately as possible (a minimal sketch of one such technique is given below).
+ The implementation can be done in either C or MATLAB (MATLAB is preferred).
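For illustration only (this sketch is not part of either revision of the page): a minimal MATLAB example of one of the simplest "feature extraction by optimization" techniques, namely learning PCA features from random patches of a natural image. The file name, patch size, patch count and number of components are placeholder assumptions, and the techniques actually implemented in the project may differ.

<code matlab>
% Minimal sketch: learn PCA features from random patches of one natural
% image. The file name and all sizes below are placeholder assumptions.
img = double(imread('natural_image.png'));   % load an image (hypothetical file)
img = img(:, :, 1);                          % keep a single channel
patch_size = 8;                              % 8x8 pixel patches
num_patches = 10000;
num_features = 16;

% Collect random patches as columns of X.
X = zeros(patch_size^2, num_patches);
for k = 1:num_patches
    r = randi(size(img, 1) - patch_size + 1);
    c = randi(size(img, 2) - patch_size + 1);
    p = img(r:r+patch_size-1, c:c+patch_size-1);
    X(:, k) = p(:);
end

% Center the data and diagonalize its covariance matrix; the leading
% eigenvectors are the features that best represent the patches in the
% least-squares sense.
X = bsxfun(@minus, X, mean(X, 2));
C = (X * X') / num_patches;
[V, D] = eig(C);
[~, idx] = sort(diag(D), 'descend');
W = V(:, idx(1:num_features));               % feature (projection) matrix

features = W' * X;                           % extracted features per patch
</code>

PCA is used here only because its optimization problem (maximizing explained variance) has a closed-form eigen-decomposition solution; sparse coding or ICA would follow the same extract-patches-then-optimize pattern.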
+
+ Here are a few lines to give you an idea of why we are interested in this project:
+ Once the feature extraction techniques are implemented, the learned features will be used as inputs to a neural network that mimics some parts of human memory (a neural associative memory; a toy sketch is given below). The ultimate goal is to see whether one gets better information storage capacities in artificial neural memories when the inputs are natural stimuli (such as images) that are pre-processed before being stored. Here, pre-processing refers to the feature extraction procedure.
+ \\
+ \\
+ This project is suitable for students interested in computer vision, neural networks and mathematics who prefer a combination of theoretical and empirical work.
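Again purely as an illustration (not from the original page): a toy MATLAB sketch of a Hopfield-style binary associative memory that stores a few patterns and recalls one from a noisy probe. The Hopfield model, the pattern length, the number of stored patterns and the noise level are all stand-in assumptions; the neural associative memory referred to above is a different model, and this is only meant to make the storage/recall idea concrete.

<code matlab>
% Toy Hopfield-style associative memory: store a few +/-1 patterns (think
% of them as binarized feature vectors) and recall one from a noisy probe.
n = 64;                         % pattern length
M = 5;                          % number of stored patterns
P = sign(randn(n, M));          % random +/-1 patterns (placeholders)

W = (P * P') / n;               % Hebbian weight matrix
W(1:n+1:end) = 0;               % remove self-connections

x = P(:, 1);                    % pattern to recall
noisy = x;
flips = randperm(n);
noisy(flips(1:5)) = -noisy(flips(1:5));   % flip 5 entries to simulate noise

% Iterative recall: update all neurons until the state stops changing.
y = noisy;
for t = 1:20
    y_new = sign(W * y);
    y_new(y_new == 0) = 1;      % break ties
    if isequal(y_new, y)
        break;
    end
    y = y_new;
end

fprintf('Recovered %d of %d entries of the stored pattern.\n', sum(y == x), n);
</code>

In such a memory, the storage capacity is the number of patterns that can be stored and still recalled correctly from noisy inputs, which is the quantity the project proposes to compare for raw versus pre-processed (feature-extracted) inputs.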
  The prerequisites are:
- 1) Basic knowledge of coding theory and linear algebra (for the last phase).
+ 1) Basic knowledge of linear algebra.
  2) Familiarity with a suitable programming language (MATLAB is preferred, but C/C++ is acceptable).
+
+ Some knowledge of feature learning models in computer vision is not necessary but is strongly encouraged.
+
  \\
  \\
  \\
- To read further, the following paper is recommended:
+ ====== Report ======
+ The report is available via the following link:
  \\
- [1] K. R. Kumar, A. H. Salavati and A. Shokrollahi, "Exponential Pattern Retrieval Capacity with Non-Binary Associative Memory", //to appear in// IEEE Information Theory Workshop (ITW), 2011 ([[http://infoscience.epfl.ch/record/168882/files/ITW11_May13.pdf|PDF]])
+ {{:en:projects:master_semester:marcos_salavati_semester_project_report_2013.pdf|Implementing Some Feature Extracting Techniques to Model Human Visual System}}