
Interaction with Large-Scale Models

Code 16676
Year 2
Semester S1
ECTS Credits 6
Workload PL(30H)/T(30H)
Scientific area Informatics
Entry requirements For success in this unit, students are recommended to have the following basic profile:
- Basic programming knowledge, preferably in Python. This foundation is essential for automating tests and integrating prompts programmatically through the APIs of Large-Scale Language Models (LLMs).
- An introductory understanding of Artificial Intelligence and Machine Learning, fundamental for grasping how models process context and generate text probabilistically.
- Strong logical reasoning and structured critical thinking skills to evaluate and iterate on outputs.
- Prior experience with Natural Language Processing (NLP) or Deep Learning frameworks (e.g., PyTorch) is highly beneficial for understanding the underlying architecture, but is not mandatory. The focus of the unit is on interaction, so training networks from scratch will not be required.
Learning outcomes By the end of the curricular unit, students will be able to design, evaluate, and improve prompts for Large-Scale Language Models (LLMs). They will understand prompt structure, context management, hallucinations, and biases, mastering techniques such as zero-/few-shot, chain-of-thought, and role-based prompting. They will develop the critical thinking needed to validate responses and iterate on prompts in real-world tasks.

These objectives align with the adopted teaching method, which follows a predominantly practical, learning-by-doing approach. The theory behind the mechanics of LLMs is consolidated in laboratory classes through direct interaction with state-of-the-art models. Through prompt engineering exercises with immediate feedback and applied projects, students continuously test and refine their interactions. This integration ensures a smooth transition from theoretical understanding to execution, enabling students to extract maximum value from complex AI systems.
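The prompting techniques named above can be illustrated with a short sketch. The example below is not from the course materials: it shows, under an invented example task, how a role instruction, a few-shot worked example, and a chain-of-thought cue are combined into a single prompt string in Python.

```python
# Illustrative sketch: assembling a role-based, few-shot, chain-of-thought
# prompt. The tutor role and the math task are hypothetical examples.

FEW_SHOT_EXAMPLES = [
    ("A shop sells pens at 2 EUR each. How much do 3 pens cost?",
     "Each pen costs 2 EUR. 3 pens cost 3 * 2 = 6 EUR. Answer: 6 EUR."),
]

def build_prompt(question: str) -> str:
    """Combine a role instruction, worked examples, and the new question."""
    parts = ["You are a careful math tutor. Reason step by step."]  # role + CoT cue
    for q, a in FEW_SHOT_EXAMPLES:  # few-shot: show a solved example
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")  # the model continues from here
    return "\n\n".join(parts)

print(build_prompt("How much do 5 pens cost?"))
```

Sending the same prompt with zero examples in `FEW_SHOT_EXAMPLES` would be the zero-shot variant; iterating on the role line and the worked examples is the kind of refinement practiced in the laboratory classes.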
Syllabus The curricular unit program is progressively organized into the following modules:
- Fundamental principles, basic mechanics of LLMs, overview of applications and common challenges (e.g., hallucinations).
- Components of an effective prompt, clear formulation of guidelines, task delimitation, and rigorous output formatting.
- Optimized context window management and role-playing strategies for domain adaptation.
- Zero-shot and Few-shot approaches. Implementation of Chain-of-Thought and iterative meta-instructions.
- Qualitative and quantitative evaluation metrics. Identification and mitigation of biases and ethical considerations in generative AI.
- Workflow automation, data extraction, code generation, creative stimulation, and decision support.

- Programmatic integration via LLM APIs (e.g., OpenAI, open-source models) and an introduction to fine-tuning strategies.
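As a minimal sketch of what programmatic integration looks like, the snippet below builds a chat-style message payload and, only when an API key is configured, sends it via the OpenAI Python SDK. The model name and prompts are placeholder assumptions, not part of the course plan.

```python
# Minimal sketch of programmatic LLM access. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable; the
# model name and prompt text are illustrative placeholders.
import os

def build_messages(system_role: str, user_prompt: str) -> list[dict]:
    """Chat-style payload shared by most LLM APIs."""
    return [
        {"role": "system", "content": system_role},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("You are a concise assistant.",
                          "Summarise prompt engineering in one sentence.")

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is set
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(model="gpt-4o-mini",
                                           messages=messages)
    print(reply.choices[0].message.content)
else:
    print(messages)  # offline: just show the payload that would be sent
```

The same system/user message structure is used by most open-source model servers as well, which is why the payload-building step is kept separate from the network call.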
Main Bibliography Brian Roemmele, The Art of Prompt Engineering
OpenAI, GPT Best Practices Guide (online resource)
Lilian Weng, Prompt Engineering Techniques and Applications (blog articles)
Scientific articles on NLP and prompting strategies from conferences such as ACL, NeurIPS, and other artificial intelligence venues.
Language Portuguese. Tutorial support is available in English.
Last updated on: 2026-03-25
