Retrieval-based dialogue systems choose from a fixed set of responses; generative models produce new replies. Sequence-to-sequence (seq2seq) models — encoder-decoder networks that map a source sequence (the user turn) to a target sequence (the system reply) — are a classic approach to generative dialogue.
In this lab you build an end-to-end seq2seq dialogue system: using the provided conversation data, you will implement or adapt an encoder-decoder architecture (e.g. an LSTM or a transformer), train it to generate responses, and test it both on general chat and on a more factual question-answering domain. You will see that generative models can produce more varied, context-sensitive replies than retrieval systems, at the cost of occasional incoherence or hallucination; this limitation motivates the retrieval-augmented generation (RAG) lab in Week 4.
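To make the encoder-decoder data flow concrete before the notebook's full implementation, here is a minimal sketch of a seq2seq forward pass: a simple tanh-RNN encoder folds the user turn into a hidden state, which initialises a decoder RNN that emits one token at a time. All sizes, weights, and token ids are made up for illustration; the weights are random rather than trained, so the "reply" is arbitrary — the point is the shape of the computation, not its output.

```python
import numpy as np

rng = np.random.default_rng(0)
V, E, H = 12, 8, 16  # toy vocab, embedding, and hidden sizes (assumed values)

# Randomly initialised parameters; in the lab these would be learned by training.
Emb = rng.normal(0, 0.1, (V, E))                                  # embeddings
W_xh, W_hh = rng.normal(0, 0.1, (E, H)), rng.normal(0, 0.1, (H, H))  # encoder
U_xh, U_hh = rng.normal(0, 0.1, (E, H)), rng.normal(0, 0.1, (H, H))  # decoder
W_out = rng.normal(0, 0.1, (H, V))                                # output layer

def encode(src_ids):
    """Simple tanh-RNN encoder: fold the source turn into one hidden state."""
    h = np.zeros(H)
    for t in src_ids:
        h = np.tanh(Emb[t] @ W_xh + h @ W_hh)
    return h

def decode_greedy(h, bos=1, eos=2, max_len=10):
    """Decoder RNN seeded with the encoder state; greedy argmax decoding."""
    out, tok = [], bos
    for _ in range(max_len):
        h = np.tanh(Emb[tok] @ U_xh + h @ U_hh)
        tok = int(np.argmax(h @ W_out))   # pick the highest-scoring next token
        if tok == eos:
            break
        out.append(tok)
    return out

reply = decode_greedy(encode([3, 5, 7]))  # token ids for a hypothetical user turn
print(reply)  # a short list of token ids; arbitrary, since the weights are random
```

An LSTM or transformer replaces these toy recurrences with learned, gated (or attention-based) computations, but the interface is the same: encode the source turn, then generate the reply token by token.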
The lab notebook (Lab3) walks through data loading, model definition, the training loop, and inference. Complete the exercises to compare different architectures and decoding strategies.
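One decoding comparison worth trying is greedy argmax versus temperature sampling. A minimal sketch over a single made-up next-token distribution (the toy vocabulary and logits are illustrative, not from the lab data):

```python
import numpy as np

rng = np.random.default_rng(42)

# A made-up next-token distribution over a 5-word toy vocabulary.
vocab = ["yes", "no", "maybe", "hello", "paris"]
logits = np.array([2.0, 1.5, 0.5, 0.2, -1.0])

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def greedy(logits):
    """Always pick the most probable token: deterministic but repetitive."""
    return vocab[int(np.argmax(logits))]

def sample(logits, temperature=1.0):
    """Sample from the softmax; higher temperature flattens the distribution."""
    p = softmax(logits / temperature)
    return vocab[rng.choice(len(vocab), p=p)]

print(greedy(logits))                            # "yes" every time
print({sample(logits) for _ in range(50)})       # a varied set of tokens
```

Greedy decoding tends to produce safe, generic replies; sampling adds variety but also raises the risk of incoherent or hallucinated outputs, which is exactly the trade-off the exercises ask you to observe.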