Diachronic Language Models
Module Description
Course | Module Abbreviation | Credit Points |
---|---|---|
BA-2010 | AS-CL, AS-FL | 8 LP |
BA-2010[100%\|75%] | CS-CL | 6 LP |
BA-2010[50%] | BS-CL | 6 LP |
BA-2010[25%] | BS-AC, BS-FL | 4 LP |
Master | SS-CL-TAC, SS-SC-FAL | 8 LP |
Lecturers | Wei Zhao, Yi Fan |
Module Type | |
Language | English |
First Session | 15.10.2024 |
Time and Place | Tuesday, 15:15-16:45, INF 329 / SR 26 |
Commitment Period | tbd. |
Prerequisites for Participation
- Introduction to Computational Linguistics or similar introductory courses
- Introduction to Neural Networks and Sequence-To-Sequence Learning (or equivalent)
- Completion of Programming I
Assessment
- Active Participation
- Presentation
- Term Paper
Content
Despite huge progress in Large Language Models (LLMs), which have revolutionized the way we consume text-based information through conversational interfaces, their ability to understand knowledge and language at different points in time, from the past to the present, remains unclear. This capability may seem niche at first glance, but it has the potential to affect people's lives profoundly. For instance, approximately 30% of search engine queries worldwide (over 2 billion Google queries every day) are time-sensitive, making it crucial for LLMs to ensure the temporal relevance of their responses. Additionally, this capability is essential for the automatic induction of previously non-existent low-resource and bilingual dictionaries that reflect language change over time; temporally grounded LLMs would provide a means of dictionary induction that supports the EU's multilingualism policy and globalization efforts beyond the EU. In this course, we will explore the following topics:
- Evaluating the temporal grounding of LLMs (see the sketch after this list)
- Demystifying model delusionality and rationality over time
- Manipulating LLMs so that model responses are time-relevant
- Applications of LLMs in automatic dictionary induction
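To make the first topic concrete, here is a minimal sketch (not taken from the course materials) of one way temporal grounding could be probed: ask the same question conditioned on different years and score the answers against year-keyed ground truth. The `query_model` callable and the `GOLD` data are illustrative assumptions, not a prescribed interface or dataset.

```python
from typing import Callable

# Hypothetical year-keyed ground truth: who was UK Prime Minister in a given year.
GOLD = {
    2005: "Tony Blair",
    2010: "David Cameron",
    2019: "Boris Johnson",
}

def temporal_accuracy(query_model: Callable[[str], str]) -> float:
    """Fraction of year-conditioned questions the model answers correctly."""
    correct = 0
    for year, answer in GOLD.items():
        prompt = f"In {year}, who was the Prime Minister of the United Kingdom?"
        response = query_model(prompt)
        # Lenient substring match; a real evaluation would normalize answers.
        if answer.lower() in response.lower():
            correct += 1
    return correct / len(GOLD)

if __name__ == "__main__":
    # Dummy baseline that ignores the year entirely; a temporally grounded
    # model should score higher than this.
    static_model = lambda prompt: "Boris Johnson"
    print(f"Temporal accuracy: {temporal_accuracy(static_model):.2f}")
```

The year-invariant baseline makes the point of the probe visible: a model that always gives the currently correct answer still fails on most time-conditioned queries.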
Course resources are available here.