19th
10:00AM
Description:
High-level overview of LLMs, their applications, limitations, and implications.
19th
11:15AM
Description:
In this module we will go over the basic transformer architecture and examine the design decisions and factors that influence it.
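The core operation of that architecture, scaled dot-product attention, can be sketched for a single head in plain Python (toy 2-d vectors; illustrative, not the course's implementation):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Single-head scaled dot-product attention over lists of vectors."""
    d = len(Q[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        # Output is the attention-weighted average of the value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

With `Q = K = V` set to one-hot rows, each output row is simply that query's attention distribution, which makes the weighting easy to inspect.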
19th
2:00PM
Description:
In this module we will explore multi-turn conversational question answering with the CoQA dataset, and implement a BERT-based question answering system.
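The decoding step of such an extractive QA system can be sketched as follows: given per-token start and end scores from the model, pick the highest-scoring valid answer span. The scores below are illustrative stand-ins for BERT logits:

```python
def best_span(start_scores, end_scores, max_len=5):
    """Return (start, end) maximizing start_scores[s] + end_scores[e],
    subject to s <= e and a bounded span length."""
    best, best_score = (0, 0), float("-inf")
    for s, ss in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            score = ss + end_scores[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

tokens = ["The", "CoQA", "dataset", "tests", "conversational", "QA"]
start = [0.1, 2.0, 0.3, 0.0, 0.2, 0.1]  # illustrative start logits
end = [0.0, 0.5, 1.8, 0.1, 0.0, 0.3]    # illustrative end logits
s, e = best_span(start, end)
print(tokens[s:e + 1])  # -> ['CoQA', 'dataset']
```

The constraints (start before end, bounded length) are what rule out degenerate spans that a naive argmax over each score list could produce.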
20th
10:30AM
Description:
In this module we will explore the two dominant language-modelling strategies: autoregressive (AR) and masked language modelling (MLM) training. Additionally, we will go over other details involved in training transformers.
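The contrast between the two objectives can be sketched in a toy setup (assumed details, not the course's code): AR training predicts each token from the tokens before it, which a causal attention mask enforces, while MLM training corrupts random positions and predicts only those:

```python
import random

def causal_mask(n):
    """n x n lower-triangular mask: position i may attend only to j <= i (AR)."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def mlm_corrupt(tokens, mask_token="[MASK]", p=0.15, seed=0):
    """Replace roughly a fraction p of tokens with [MASK]; the targets
    are the original tokens at the masked positions (MLM)."""
    rng = random.Random(seed)
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < p:
            inputs.append(mask_token)
            targets.append(tok)   # model must recover this token
        else:
            inputs.append(tok)
            targets.append(None)  # no loss at unmasked positions
    return inputs, targets
```

The causal mask gives AR models their left-to-right generation ability; MLM models see both sides of each masked position, which suits encoding tasks rather than generation.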
20th
2:00PM
Description:
In this module we will learn the basics of reinforcement learning, followed by RLHF (reinforcement learning from human feedback), a powerful technique for incorporating human feedback into language models.
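One building block of RLHF, the pairwise preference loss used to train a reward model, can be sketched with toy numbers (illustrative values, not from the course materials): given scalar rewards for a human-preferred and a rejected response, minimize the negative log-sigmoid of their difference.

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry style loss: -log sigmoid(r_chosen - r_rejected).
    Small when the reward model already ranks the chosen response higher."""
    diff = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

print(round(preference_loss(2.0, 0.5), 4))  # -> 0.2014 (chosen ranked higher: low loss)
```

The trained reward model then supplies the reward signal that the RL step (e.g. PPO) optimizes the language model against.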
20th
3:45PM
Description:
In this module we will explore several strategies to alleviate the computational demands of large language models.
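One such strategy, post-training quantization, can be sketched as a toy per-tensor scheme (illustrative, not a production method): map float weights to 8-bit integers with a single scale factor, trading precision for memory.

```python
def quantize(weights):
    """Symmetric per-tensor int8 quantization: ints in [-127, 127] plus a scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid 0 scale for all-zero input
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [x * scale for x in q]

w = [0.5, -1.27, 0.0, 1.0]
q, s = quantize(w)
print(q)  # -> [50, -127, 0, 100]
```

Storing `q` as int8 uses a quarter of the memory of float32 weights; the dequantized values only approximate the originals, which is the accuracy/efficiency trade-off the module examines.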
21st
10:00AM
Description:
Large language models pose serious practical challenges. In this module we will explore some of these issues, such as hallucinations and explainability.
21st
11:15AM
Description:
Discussion and brainstorming related to ongoing projects.
21st
1:30PM
Description:
Large language models pose serious practical challenges. In this module we will explore these issues.
21st
2:45PM
Description:
In this module we will explore the AlphaCode system by DeepMind, which uses LLMs to generate competitive solutions to coding contest problems.
22nd
11:00AM
Description:
In this module we will explore various techniques for adapting LLMs to specific use cases, such as fine-tuning and prompting.
22nd
1:30PM
Description:
In this module we will learn to implement various prompting techniques with the Falcon-7b model.
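The model-independent half of this exercise, assembling a few-shot prompt, can be sketched as follows (the task and demonstrations are illustrative); the resulting string would then be fed to a causal LM such as Falcon-7b, e.g. via the Hugging Face `transformers` text-generation pipeline:

```python
def few_shot_prompt(instruction, examples, query):
    """Build an instruction + demonstrations + query prompt, ending at 'A:'
    so the model's continuation is the answer."""
    parts = [instruction, ""]
    for q, a in examples:
        parts += [f"Q: {q}", f"A: {a}", ""]
    parts += [f"Q: {query}", "A:"]
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Answer with the sentiment (positive/negative).",  # hypothetical task
    [("I loved this film.", "positive"),
     ("Terrible service.", "negative")],
    "The model worked flawlessly.",
)
print(prompt)
```

Varying the instruction, the number of demonstrations, and their order is the essence of the prompting techniques the module covers.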
22nd
3:45PM
Description:
Question-and-answer session to clarify doubts and discuss other topics.