CEng 501 - Deep Learning -
Fall 2025

Images generated by the “Stable
Diffusion” method. From left to right: “a bunch of students learning
deep neural networks” by Picasso, Van Gogh and Da Vinci.
Instructor: Emre Akbas; e-mail: emre at ceng
dot metu.edu.tr; Office: A-403; Office hours: click + by appointment
Lectures: Thursday 9:40-12:30 at BMB-3
Online communication (forum, homework submissions): https://odtuclass.metu.edu.tr/
Syllabus: click here
Announcements
- Oct 6, 2025 - Students with these IDs
can add the class during add-drops this week (starting from Monday
afternoon). Those who are already registered do not need to do
anything.
- Sep 5, 2025 - I am receiving too many e-mails about taking this
course as a special student, as an undergrad, from other
departments/universities, etc. I am not able to respond to each of them. If
you want to take the course, please make sure you attend the first
lecture. And, please see my answers to frequently asked
questions.
Late submission policy
Any work, e.g. an assignment solution, that is submitted past its
deadline will lose 10 points per day of delay. For example, if the
deadline is Oct 14, 23:59 and a student submits their work anytime on
Oct 15, that work will be evaluated over 90 instead of 100.
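In code terms, the penalty arithmetic looks like this (a minimal sketch of the policy, not an official grading script; the function name is mine):

```python
from datetime import date

def late_score(raw_score, deadline, submitted):
    """Maximum attainable score after the 10-points-per-day late penalty."""
    days_late = max(0, (submitted - deadline).days)
    return max(0, raw_score - 10 * days_late)

# The policy's example: deadline Oct 14 (23:59), submitted anytime on Oct 15
late_score(100, date(2025, 10, 14), date(2025, 10, 15))  # -> 90
```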
Detailed syllabus
An important note: Lecture slides provided below are by no
means a complete resource for studying. I frequently use the board in
class to supplement the material in the slides.
Playlist of lecture recordings
Week 1
Week 2
- Lecture topics: A brief review of machine learning background (slides).
- Hands-on tutorial: developing a basic classifier using hinge loss:
Colab
notebook
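Independently of the notebook, the core idea of the tutorial can be sketched in a few lines of NumPy: a linear classifier trained by subgradient descent on the hinge loss (function names and hyperparameters below are my own, not necessarily those used in the notebook):

```python
import numpy as np

def hinge_loss(w, X, y):
    """Mean hinge loss max(0, 1 - y * <w, x>) for labels y in {-1, +1}."""
    return np.maximum(0.0, 1.0 - y * (X @ w)).mean()

def train_hinge(X, y, lr=0.1, epochs=200):
    """Subgradient descent on the hinge loss (no regularization)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        active = y * (X @ w) < 1.0                       # margin violations
        grad = -(X[active] * y[active, None]).sum(axis=0) / len(y)
        w -= lr * grad
    return w
```

On linearly separable data the learned weight vector ends up classifying all points correctly via `np.sign(X @ w)`.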
Week 3
- Lecture topics: Artificial neuron, perceptron learning rule,
multilayer perceptrons, activation functions, initialization,
backpropagation, stochastic gradient descent, momentum (slides).
- Colab
notebook on building, training and evaluating a basic MLP.
- Recommended reading: sections 8.1 and 8.3 from “Chapter
8: Optimization for training deep models” from the book “Deep
Learning.”
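As a complement to the notebook, the manual-backpropagation picture can be sketched in NumPy with a tiny two-layer MLP trained on XOR (the architecture and hyperparameters here are illustrative, not taken from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # XOR inputs
y = np.array([[0], [1], [1], [0]], float)               # XOR targets

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)       # hidden layer
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)       # output layer

lr = 0.5
for _ in range(3000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))          # sigmoid output
    # backward pass: cross-entropy + sigmoid gives a simple output delta
    d_out = (out - y) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)               # tanh derivative
    dW1, db1 = X.T @ d_h, d_h.sum(0)
    # full-batch gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

p = out.clip(1e-9, 1 - 1e-9)
loss = -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()
```

The backward pass is just the chain rule applied layer by layer; swapping in mini-batches and a momentum buffer turns the last step into SGD with momentum.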
Week 4
- Lecture topics: momentum, adaptive learning rate methods, modular
backpropagation, convolutional neural networks (CNNs), AlexNet (slides).
- Colab
notebook on building, training and evaluating a basic CNN.
- Recommended reading: LeCun,
Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature,
521(7553), 436-444.
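For the momentum part, a minimal heavy-ball sketch (the hyperparameter values are illustrative):

```python
import numpy as np

def sgd_momentum(grad_fn, w0, lr=0.05, beta=0.8, steps=200):
    """Gradient descent with heavy-ball momentum:
    v <- beta * v - lr * grad(w);  w <- w + v."""
    w = np.asarray(w0, float).copy()
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v - lr * grad_fn(w)
        w = w + v
    return w
```

On a simple quadratic f(w) = ||w||^2, whose gradient is 2w, the iterate is driven to the minimum at the origin; the velocity term smooths the descent direction across steps.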
Week 5
- Lecture topics: implementing a convolutional layer, normalization
layers, important CNN architectures, types of convolution, recurrent
neural networks (RNNs), backpropagation through time, gated RNNs, GRUs,
LSTMs (slides).
- Recommended reading: He, K., Zhang, X., Ren, S. and Sun, J. Deep residual learning for image
recognition. In CVPR 2016.
- Colab
notebook on implementing a bare-bones object detector.
- HW1 released on ODTUClass.
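A bare-loop sketch of the "implementing a convolutional layer" topic: single channel, valid padding, stride 1 (as in deep learning libraries, this actually computes cross-correlation):

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out
```

Real layers add input/output channels, padding, stride, and a vectorized (e.g. im2col) implementation, but the sliding dot product is the whole idea.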
Week 6
Week 7
Week 8
- Lecture topics: self-supervised learning, predictive modeling,
agreement-based learning, next word prediction, next sentence
discrimination, masked language modeling, masked image modeling,
contrastive, non-contrastive, clustering-based, and distillation-based SSL
methods (slides).
- Recommended reading: Caron et al. Emerging properties in
self-supervised vision transformers. In ICCV 2021.
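As an illustration of the contrastive family, here is a NumPy sketch of an InfoNCE-style loss over two views of a batch (the exact loss used in the lecture may differ; the temperature value is illustrative):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss: row i of z1 should match row i of z2;
    all other rows of z2 serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # positives on the diagonal
```

The loss is near zero when matching pairs are far more similar than mismatched ones, and large when the positives sit on the wrong rows.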
Week 9
Week 10
- Lecture topics: deep generative modeling, energy-based models,
(restricted) Boltzmann machines, autoregressive modeling, variational
autoencoders, generative adversarial networks, diffusion (slides).
- Recommended reading: Ho et al. Denoising diffusion
probabilistic models. In NeurIPS 2020.
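As a taste of the diffusion topic: the forward (noising) process of a DDPM can be sampled in closed form. Below is a sketch using the linear beta schedule of Ho et al. (2020); the function and variable names are mine:

```python
import numpy as np

# Linear noise schedule as in Ho et al. (2020): beta_t from 1e-4 to 0.02
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)             # abar_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
    abar = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * noise
```

By the final timestep almost no signal remains (abar_T is on the order of 1e-5), so x_T is essentially pure Gaussian noise; the generative model is trained to reverse this process step by step.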
Week 11
- Lecture topics: autoregressive modeling, decoder-only Transformers,
causal self-attention, the GPT family (1, 2, 3, InstructGPT, 3.5, ChatGPT),
low-rank adaptation (LoRA), retrieval-augmented generation (RAG), chain
of thought (CoT), open LLMs (Llama, Qwen) (slides).
- Recommended reading: Ouyang et al. Training language models to
follow instructions with human feedback. In NeurIPS 2022.
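To illustrate the causal self-attention topic, a single-head NumPy sketch (no batching; the projection matrices are passed in rather than learned here):

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head causal self-attention: token i may attend only to j <= i."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])              # scaled dot products
    scores[np.triu(np.ones(scores.shape, bool), k=1)] = -np.inf  # mask future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax per row
    return weights @ v, weights
```

The upper-triangular mask is what makes the model autoregressive: position 0 can only attend to itself, so the attention matrix is lower-triangular with rows summing to one.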
Week 12
- Lecture topics: foundation models, multimodal models, conditioning
diffusion on text, edge maps, semantic maps, human pose; ways of
combining modalities, linear projection in LLaVA, the Querying
Transformer (Q-Former) in BLIP-2, gated cross-attention in Flamingo,
Janus-Pro, large multimodal
models (slides).
- Recommended reading: Liu et al. Visual instruction tuning.
In NeurIPS 2023.
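The "linear projection" way of combining modalities is remarkably simple to sketch: in LLaVA-style models, vision-encoder features are mapped into the LM's token-embedding space and treated as extra tokens (the shapes and names below are illustrative, not from any specific checkpoint):

```python
import numpy as np

def fuse_vision_and_text(vision_feats, text_embeds, W, b):
    """Project vision-encoder features into the LM embedding space and
    prepend them to the text token embeddings (LLaVA-style, sketch)."""
    vision_tokens = vision_feats @ W + b        # (n_patches, d_model)
    return np.concatenate([vision_tokens, text_embeds], axis=0)
```

The Q-Former of BLIP-2 and the gated cross-attention of Flamingo replace this single matrix with learned modules, but the goal is the same: produce vision inputs the language model can consume.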
Week 13
Week 14
Frequently
Asked Questions about taking the course
I am receiving too many e-mails about taking the course.
Unfortunately, I cannot reply to them one by one. Below are answers to
common questions.
Q1: Can I take the course?
A1: There is huge demand for the course from students of all kinds of
backgrounds, which I appreciate. However, I have the responsibility to
evaluate your learning outcomes and grade you. Therefore, I need to
limit the number of seats. Based on my experience in previous years,
this limit will be around 35-40.
Since this is a graduate METU CENG course, I need to give priority to
the graduate students in our department. Here is the priority order that
I will use to accept students to the class. From high to low
priority:
- Graduate students from METU CENG, Robotics, AIX, DDS programs,
- A limited number of 4th year undergraduate students from METU
CENG,
- Graduate students from other METU departments,
- Special students (see http://oidb.metu.edu.tr/ozel-ogrenci;
you need to be a graduate student at another university to be
eligible).
I must note that the first two categories almost fill up the whole
capacity. So, unfortunately, there might not be much room for the
remaining two categories.
Also, precedence will be given to students who are actively doing
research in machine learning and related areas. This course is not a
PyTorch or Keras tutorial; we intend to go beyond the “user” level.
You might want to check out the other three DL courses offered at
METU: MMI727,
EE543,
and CENG403.
Machine learning background is required. If you have not taken a
machine learning course before, please do not take this course.
Fluency in Python is required.
This course assumes that the student has already taken a course on
the fundamentals of deep learning and is familiar with conventional
models such as Multi-Layer Perceptrons, Convolutional Neural Networks,
Recurrent Neural Networks, and Long Short-Term Memory networks.
Q2: How can I register for
the course?
A2: Come to the first lecture. There, I will collect information
from the participants and then decide (based on my answer A1 above)
who will be able to register. The enrollment list will be announced
within a couple of hours after the first lecture.
Students listed in this enrollment list will be able to add the course
during the add-drop period.
Q3:
I was able to take the course during the regular interactive
registration. Should I worry about not being accepted?
A3: No worries. You will stay.
Q4: Can I take
this course as a special student?
A4: Possible but unlikely. Please see my answer A1 above.
Q5:
Even if I don’t officially register for the class, can I audit it?
A5: Yes, definitely.