Introduction to Neural Networks

Instructors: Nici Schraudolph and Fred Cummins
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale
Lugano, CH


Course content


Our goal is to introduce students to a powerful class of models, the neural network. In fact, this is a broad term which covers many diverse models and approaches. We will first motivate networks by analogy to the brain. The analogy is loose, but serves to introduce the idea of parallel and distributed computation.

We then introduce one kind of network in detail: the feedforward network trained by backpropagation of error. We discuss model architectures, training methods and data representation issues. We hope to cover everything you need to know to get backpropagation working for you. A range of applications and extensions to the basic model will be presented in the final section of the module.
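To give a concrete feel for the kind of model the module covers, here is a minimal sketch of a feedforward network trained by backpropagation of error, in plain Python. The network size, learning rate, and XOR task are our choices for illustration and are not taken from the lecture materials:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR training set: (input pattern, target) pairs -- a classic small example.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

N_IN, N_HID = 2, 2
# w_hid[j] holds the weights into hidden unit j; index N_IN is the bias.
w_hid = [[random.uniform(-1, 1) for _ in range(N_IN + 1)] for _ in range(N_HID)]
w_out = [random.uniform(-1, 1) for _ in range(N_HID + 1)]

def forward(x):
    """Forward pass: returns hidden activations and the network output."""
    h = [sigmoid(sum(w[i] * xi for i, xi in enumerate(x)) + w[N_IN])
         for w in w_hid]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(N_HID)) + w_out[N_HID])
    return h, y

def train_epoch(lr=0.5):
    """One pass over the data with per-pattern gradient descent.

    Returns the summed squared error before the updates of this pass."""
    total = 0.0
    for x, t in DATA:
        h, y = forward(x)
        total += 0.5 * (t - y) ** 2
        # Output delta: error times the derivative of the sigmoid.
        d_out = (y - t) * y * (1 - y)
        # Hidden deltas: backpropagate d_out through the output weights.
        d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(N_HID)]
        # Gradient-descent weight updates.
        for j in range(N_HID):
            w_out[j] -= lr * d_out * h[j]
        w_out[N_HID] -= lr * d_out
        for j in range(N_HID):
            for i in range(N_IN):
                w_hid[j][i] -= lr * d_hid[j] * x[i]
            w_hid[j][N_IN] -= lr * d_hid[j]
    return total

if __name__ == "__main__":
    errors = [train_epoch() for _ in range(2000)]
    print(f"error: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

The lectures develop each piece of this in detail: the forward pass, the error measure, and how the deltas fall out of the chain rule.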

Lecture 1: Introduction

Lecture 2: The Backprop Toolbox

By popular demand: Lectures 1 & 2 as a ZIP file.

Lecture 3: Advanced Topics

Lecture 3 as a ZIP file.

By popular demand, a list of English terms for the mathematical expressions we use.

A few suggestions for possible project topics.



This module will consist of three extended lectures of three hours each. Lectures will be on Fridays, as usual. There will be no set assignments. Material covered in this module may be used in course examinations, however.



The best way to reach us is either at the lectures (during the break or afterwards) or by email:




Simulators and code:

Brainwave: a Java-based simulator
tlearn: W*ndows, M*cintosh and Un*x implementations of backprop and variants. Written in C.
PDP++: C++ software with every conceivable bell and whistle. Un*x only. The manual also makes a good tutorial.

Related stuff of interest: