Project Details
Description
Current AI systems are trained slowly on a mix of everything, which leads to worse performance in languages with less data. This project constructs targeted, human-inspired training interventions to train models faster, and merges models with complementary skills to build models with better reasoning capabilities for smaller languages, while providing causal insights into how these models learn.
Layman's description
Compared to humans, current AI systems learn slowly, requiring thousands of lifetimes' worth of data to train. They also struggle to combine different skills: they perform worse on the same math problem when it is asked in a 'smaller' language. To uncouple what AI models can do from the language they work in, we need a modular approach that learns as fast as humans do.
| Field | Value |
|---|---|
| Acronym | LFSAI |
| Status | Active |
| Effective start/end date | 01/12/2025 → 30/11/2027 |
Collaborative partners
- IT University of Copenhagen (lead)
- The University of Tokyo
Funding
- Carlsberg Foundation: DKK 1,673,658.00
Keywords
- Artificial Intelligence
- Machine Learning
- Natural Language Processing