Aivojen matematiikkaa, Mathematics of the Brain


Mathematical Models of Neural Networks (Feb. 2014)

Thanks to all the participants for a nice course!

 

How to pass

Every week I gave a set of exercises on Tuesday and the students did some of them. Most of the exercises were worth 1 point, but some were worth more. Students who collected 12 points have already passed. The rest, provided they have done at least 4 exercises, can collect the remaining points by doing exercises from Exercise 4 and submitting the solutions to me by e-mail. The number of points for each exercise is given in brackets. You can do them even if you have already passed: if you collect 18 points, you will get an extra ECTS credit.

The deadline for the last submissions is Thursday 20.03.2014 at 23:59 EEST.

Content


How does the brain learn? How do complicated face recognition, motor control and abstract thinking emerge from the simple workings of neurons? Even today, no one really understands how the brain works, but some idea of what is happening can be gained by looking at mathematical models of neural networks.

In this course we cover the basics of (theoretical) computational neuroscience and concentrate specifically on the Hopfield model (which is similar to the Ising model in physics), Hebbian learning and, if time permits, some models of unsupervised learning and/or backpropagation algorithms. See http://page.mi.fu-berlin.de/rojas/neural/index.html.html

The benefit of understanding these models is two-fold: on the one hand, to gain philosophical and cognitive insight into the workings of the brain, and on the other hand, to be able to actually implement neural networks and build artificial intelligences with human-like adaptive learning abilities.

Here is an example of a neural network which learns and recognises simple images.

Material

Everything is based on this book: Neural Networks - A Systematic Introduction by Raul Rojas. The book and individual chapters can also be downloaded from here. Also see the links in the Course Blog below.

Course blog:

Fourth and last week: Exercise 4 (deadline: Thu 20.03.2014 23:59 EEST)

  • On Tuesday we looked more closely at backpropagation: Sections 7.2.2 and 7.2.3 (Algorithm 7.2.1, Proposition 11), Section 7.2.4, and layered networks, Section 7.3. Here is a Wikipedia article on Neural backpropagation – the biological counterpart.
  • On Thursday we looked at synchronisation (material for synchronisation is here) and the Cybenko theorem (a nice exposition of the proof is found here). Here is the python code (requires python 2.7 and pygame) for the app I presented at the lecture which shows the synchronisation phenomenon. The code uses the continuous version of the Kuramoto model; in my material a discrete version is considered. A minimal text-only sketch of the continuous dynamics is given after this list. See below for all the other apps from the course.
  • Extra material concerning synchronisation: Kuramoto, Other mechanisms. Note: these articles are perfect material for Bachelor's theses.
  • Other related material: Hoffman et al. (body schemas in robotics), http://deeplearning.net/ , and a link suggested by a student: Deep learning.
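
The following is a minimal, text-only sketch of the continuous Kuramoto dynamics used by the app above (my lecture material treats a discrete version). It only prints the order parameter r, which approaches 1 as the oscillators synchronise; the number of oscillators, the coupling strength K, the spread of the natural frequencies and the Euler time step are illustrative choices, and numpy is used purely for brevity.

# Kuramoto model: d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
import numpy as np

rng = np.random.default_rng(0)

N = 50                                  # number of oscillators (illustrative)
K = 2.0                                 # coupling strength (illustrative)
dt = 0.05                               # Euler time step
theta = rng.uniform(0, 2 * np.pi, N)    # initial phases
omega = rng.normal(0.0, 0.5, N)         # natural frequencies

def order_parameter(theta):
    # r in [0, 1]: r close to 0 means incoherent phases, r close to 1 means
    # (nearly) full synchronisation.
    return np.abs(np.mean(np.exp(1j * theta)))

for step in range(400):
    # element (i, j) of the matrix below is sin(theta_j - theta_i)
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)
    if step % 100 == 0:
        print("step %4d  r = %.3f" % (step, order_parameter(theta)))
print("final     r = %.3f" % order_parameter(theta))

With the coupling set to K = 0 the same script typically keeps r small, i.e. no synchronisation occurs.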

Third week 18 and 20.02: Exercise 3

  • Tue 18.02: we went through examples of how to use Hopfield networks to find (approximate) solutions to problems such as the Travelling Salesman Problem (Section 13.5 of the book); a small sketch of the idea is given after this list.
  • Thu 20.02: We superficially went through basic concepts of Chapters 5, 7 and 15.
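
To make the idea of Section 13.5 concrete, here is a hedged sketch of encoding constraints into a Hopfield-style energy function and minimising it with asynchronous updates. Instead of the full TSP encoding it uses the simpler n-rooks problem of Section 13.5.3 (exactly one rook in every row and every column); the board size, the penalty weight A and the update schedule are illustrative, and, as with the TSP, the network only reaches a local minimum of the energy, which need not satisfy all the constraints.

import numpy as np

rng = np.random.default_rng(1)
n = 8            # board size: one rook wanted in every row and every column
A = 1.0          # penalty weight for violated row/column constraints

def energy(x):
    # E = (A/2) * [ sum_i (row_i - 1)^2 + sum_j (col_j - 1)^2 ]
    rows = (x.sum(axis=1) - 1.0) ** 2
    cols = (x.sum(axis=0) - 1.0) ** 2
    return 0.5 * A * (rows.sum() + cols.sum())

x = rng.integers(0, 2, size=(n, n))     # random initial 0/1 state of the units

for sweep in range(100):
    changed = False
    for i in range(n):
        for j in range(n):
            # Asynchronous update of unit (i, j): flip it only if the flip
            # strictly lowers the energy.
            e_before = energy(x)
            x[i, j] = 1 - x[i, j]
            if energy(x) < e_before:
                changed = True
            else:
                x[i, j] = 1 - x[i, j]   # undo the flip
    if not changed:
        break                           # no single flip helps: a local minimum

print("final energy:", energy(x))       # 0.0 means every constraint is satisfied
print(x)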

Second week, 11 and 13.02.2014: Exercises 2, file for the programming exercise. These exercises are due on Tue 18.02. We covered:

  • Hebbian learning in heteroassociative and autoassociative networks. (Chapter 12)
  • Pseudoinverse as a substitute for Hebbian learning (12.4)
  • Bidirectional Associative Memory (BAM) (Chapter 13, 13.1.2), energy function (13.1.3) and convergence (Proposition 19).
  • Hopfield models (13.2), energy function and convergence (13.3, Proposition 20). The proofs of Propositions 19 and 20 are similar. A minimal sketch of a Hopfield memory is given after this list.
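
Here is a minimal sketch of the week's central construction: a Hopfield autoassociative memory trained with the Hebb rule. A few random ±1 patterns are stored in a symmetric weight matrix with zero diagonal, and a noisy probe is cleaned up by repeated sign updates. The network size, the number of patterns and the noise level are illustrative, and synchronous sweeps are used only for brevity (the convergence result, Proposition 20, concerns asynchronous updates).

import numpy as np

rng = np.random.default_rng(2)
n = 100                                        # number of units
patterns = rng.choice([-1, 1], size=(3, n))    # three random +/-1 patterns to store

# Hebbian weight matrix W = sum_p p p^T, symmetric and with zero diagonal.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(x, sweeps=10):
    # Repeated sign updates; the energy E = -1/2 x^T W x never increases under
    # the asynchronous rule, and a few sweeps suffice in this toy example.
    for _ in range(sweeps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# Corrupt the first stored pattern in 15 positions and try to recover it.
probe = patterns[0].copy()
flipped = rng.choice(n, size=15, replace=False)
probe[flipped] *= -1

result = recall(probe)
print("overlap with the stored pattern:", int(result @ patterns[0]), "out of", n)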

First week, 04 and 06.02.2014: Slides, Exercises 1 (not all slides were covered). We covered:

  • The relevant basics of the molecular anatomy of biological neurons (Chapter 1)
  • The definition of a McCulloch-Pitts unit (with and without inhibitory inputs), how logical functions can be defined in terms of such units, and the geometric representation as linear separation (Chapter 2, 2.1-2.2).
  • Units with weighted inputs (2.3.1, 2.3.2) and perceptrons (Definition 1 of Chapter 3), 3.2.
  • Again the geometric representation as linear separation, 3.3
  • We went through Algorithm 4.2.1 and proved Proposition 8 (page 88, Chapter 4); a sketch of the algorithm is given after this list.
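
The following is a hedged sketch of perceptron learning in the spirit of Algorithm 4.2.1: whenever a training point is misclassified, the weight vector is corrected by adding or subtracting that point, and Proposition 8 guarantees that this terminates when the two classes are linearly separable. The AND data set, the 0/1 outputs and the use of a constant extra input in place of a separate threshold are illustrative choices.

import numpy as np

# Training set for logical AND with an extended input: the last component is
# always 1 and plays the role of the (negative) threshold. Labels are 0/1.
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(3)                          # extended weight vector, starts at 0
for epoch in range(100):
    errors = 0
    for x, target in zip(X, y):
        output = 1 if w @ x > 0 else 0
        if output != target:
            # Misclassified: move the weights towards x (if the target is 1)
            # or away from x (if the target is 0).
            w = w + (target - output) * x
            errors += 1
    if errors == 0:
        break                            # all training points classified correctly

print("learned weights:", w)
print("outputs:", [1 if w @ x > 0 else 0 for x in X])

For this data set the loop stops after a handful of epochs with weights that separate the input (1, 1) from the other three inputs.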

Python scripts

During the course I presented some scripts I wrote in python which demonstrate some of the neural phenomena. All of them can be downloaded here. They require python 2.7 and pygame.

The (10-) rooks problem (Section 13.5.3),

 

Prerequisites

Basics of linear algebra.

 

Lecturer

Vadim Kulikov

Post-doc at Kurt Gödel Research Center for Mathematical Logic,
Vienna, Austria
vadim.kulikov 'at' iki.fi

[Photo: the lecturer]

 


Questions?

Send me e-mail: vadim.kulikov 'at' iki.fi