| Code | 16685 |
| Year | 3 |
| Semester | S1 |
| ECTS Credits | 6 |
| Workload | PL(30H)/T(30H) |
| Scientific area | Informatics |

Entry requirements
To successfully complete this course, students must possess the following prior knowledge:
Programming: Proficiency in Python is essential, as it is the primary language of instruction and of the main libraries used, such as PyTorch.
Mathematics: A strong foundation in Linear Algebra (matrix operations, vectors) is required for understanding tensor manipulations. Calculus, specifically derivatives and the chain rule, is necessary to understand gradient descent and backpropagation algorithms (illustrated in the short sketch after this list).
Machine Learning: Familiarity with basic Machine Learning concepts (supervised vs. unsupervised learning, regression vs. classification) is highly recommended to understand the transition to Deep Learning paradigms.
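The following minimal sketch (not part of the official course materials) assumes PyTorch is installed; all values are illustrative. It shows how autograd applies the chain rule from the calculus prerequisite to compute the gradients used by a single gradient-descent step.

```python
import torch

# y = (w * x + b)^2 : autograd applies the chain rule to obtain dy/dw and dy/db.
x = torch.tensor(2.0)
w = torch.tensor(0.5, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)

y = (w * x + b) ** 2
y.backward()  # populates w.grad and b.grad via the chain rule

# dy/dw = 2*(w*x + b)*x = 2*(0.5*2 + 1)*2 = 8 ; dy/db = 2*(w*x + b) = 4
print(w.grad, b.grad)

# One gradient-descent step with an illustrative learning rate of 0.1.
with torch.no_grad():
    w -= 0.1 * w.grad
    b -= 0.1 * b.grad
```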

Learning outcomes
The primary objective is to provide students with a deep theoretical and practical understanding of Deep Learning (DL). Students will be able to:
Understand Fundamentals: Grasp the mathematical foundations of neural networks, including perceptrons, activation functions (Sigmoid, ReLU), and the backpropagation algorithm for optimization.
Design Architectures: Engineer complex models such as Convolutional Neural Networks (CNNs) for computer vision, Recurrent Neural Networks (RNNs) and Transformers for sequence data, and Deep Q-Networks (DQNs) for reinforcement learning.
Diagnose and Optimize: Identify training issues such as overfitting/underfitting and apply regularization techniques (Dropout, Early Stopping) and appropriate optimizers (SGD, Adam).
Apply Technologies: Use Python and PyTorch to solve real-world problems (see the training-loop sketch after this list).
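As an illustration of the "Apply Technologies" outcome, the sketch below assumes PyTorch and synthetic data; the layer sizes, dropout rate, and hyperparameters are illustrative, not prescribed by the course. It trains a small multilayer perceptron with ReLU, Dropout, cross-entropy loss, and the Adam optimizer.

```python
import torch
from torch import nn

# Synthetic binary-classification data (illustrative only).
X = torch.randn(256, 20)
y = (X[:, 0] > 0).long()

# A small multilayer perceptron with a ReLU activation and Dropout regularization.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass + cross-entropy loss
    loss.backward()               # backpropagation
    optimizer.step()              # Adam parameter update
```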

Syllabus
Introduction: From Biological to Artificial Neurons, Perceptrons, and Activation Functions (ReLU, Sigmoid).
Training Neural Networks: Loss Functions (MSE, Cross-Entropy), Backpropagation, and Gradient Descent variations (Stochastic, Batch).
Optimization & Regularization: Learning Rates, Optimizers (Adam, RMSProp), Dropout, and Early Stopping.
Computer Vision: Convolutional Neural Networks (CNNs), Pooling, Feature Extraction, and architectures such as VGG, ResNet, and DenseNet.
Sequence Models: RNNs, LSTMs, and Introduction to Transformers (Attention mechanisms).
Reinforcement Learning (RL): Agents, Environments, Q-Learning, DQNs, and Policy Gradients (see the sketch after this list).
Advanced Topics: Neural Architecture Search (NAS) and Multimodal Fusion.
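As a pointer to the reinforcement learning block, the sketch below assumes PyTorch; the network size, mini-batch of transitions, and hyperparameters are made up for illustration. It computes the standard DQN temporal-difference target r + γ·max_a' Q(s', a') and the loss a Deep Q-Network minimises.

```python
import torch
from torch import nn

# A tiny Q-network: maps a 4-dimensional state to Q-values for 2 actions
# (sizes are illustrative, e.g. a CartPole-like task).
q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

gamma = 0.99  # discount factor

# A fictitious mini-batch of transitions (state, action, reward, next_state, done).
states = torch.randn(8, 4)
actions = torch.randint(0, 2, (8,))
rewards = torch.randn(8)
next_states = torch.randn(8, 4)
dones = torch.zeros(8)

# Q-learning target: r + gamma * max_a' Q(s', a'), with no bootstrapping at terminal states.
with torch.no_grad():
    targets = rewards + gamma * q_net(next_states).max(dim=1).values * (1 - dones)

# Predicted Q(s, a) for the actions actually taken, and the TD loss minimised by DQN.
predicted = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(predicted, targets)
loss.backward()
```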

Main Bibliography
Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. Cambridge, MA: MIT Press, 2016.
Course slides and lecture notes.
Documentation and tutorials from PyTorch.org (e.g., "Deep Learning with PyTorch").
Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction (implied for the RL section).
Vaswani, Ashish, et al. "Attention Is All You Need" (for the Transformers section).

Teaching Methodologies and Assessment Criteria
The assessment strategy is designed to evaluate both theoretical understanding and practical engineering skills. The final grade (Nota) is calculated using the following weighted formula:
Nota = 0.5 × F + 0.2 × TP1 + 0.3 × TP2
Detailed criteria:
F (50%): A written exam scheduled for December 9th, covering theoretical concepts such as the derivation of backpropagation, architecture design, and algorithmic logic.
TP1 (20%): The first practical project, due by November 7th. This assignment focuses on implementing basic neural networks (e.g., Perceptrons, shallow NNs) and optimization pipelines.
TP2 (30%): The second practical project, due by December 12th. This is a more advanced project involving Deep Learning architectures (CNNs/RNNs), Reinforcement Learning, or applied tasks such as defect detection.
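For illustration only, with hypothetical marks (not data from the course): F = 14, TP1 = 16, and TP2 = 12 give Nota = 0.5 × 14 + 0.2 × 16 + 0.3 × 12 = 7.0 + 3.2 + 3.6 = 13.8.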

Language
Portuguese. Tutorial support is available in English.