Cross-task and sequential transfer learning for euphemism detection
Presentation Type
Abstract
Faculty Advisor
Anna Feldman
Access Type
Event
Start Date
25-4-2025 1:30 PM
End Date
25-4-2025 2:29 PM
Description
Euphemism detection is a challenging task for language models due to its subtle, context-dependent, and pragmatically rich nature. We investigate how related classification tasks, such as sentiment, politeness, and sensitivity, interact with euphemism detection through cross-task and sequential fine-tuning. We show that while models fine-tuned on related tasks rarely outperform single-task euphemism baselines, the degree of forgetting or transfer in sequential setups depends on task alignment and label semantics. Training on politeness or sensitivity data before euphemism detection yields more robust performance than the reverse order, suggesting an asymmetry in representational overlap. These findings highlight when and how pragmatic features support effective transfer learning for euphemism detection.
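The sequential setup described above can be made concrete with a short sketch: fine-tune on an auxiliary pragmatic task first, then continue fine-tuning the same weights on euphemism detection. The following is a minimal illustration assuming a Hugging Face Transformers workflow; the roberta-base backbone, toy datasets, and hyperparameters are placeholder assumptions, not the configuration used in this work.

```python
# Hypothetical sketch of sequential fine-tuning: an auxiliary pragmatic
# task (here, politeness) first, then euphemism detection on the same model.
# Model name, toy data, and hyperparameters are illustrative assumptions.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "roberta-base"  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def make_dataset(texts, labels):
    # Tokenize a toy binary-classification corpus into Trainer's format.
    ds = Dataset.from_dict({"text": texts, "label": labels})
    return ds.map(
        lambda b: tokenizer(b["text"], truncation=True,
                            padding="max_length", max_length=64),
        batched=True,
    )

# Toy stand-ins for the real politeness and euphemism corpora.
politeness_data = make_dataset(
    ["Would you kindly step aside?", "Move out of the way."], [1, 0])
euphemism_data = make_dataset(
    ["He passed away last night.", "He died last night."], [1, 0])

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2)

def fine_tune(model, train_dataset, output_dir):
    """One fine-tuning stage; chaining calls realizes the sequential setup."""
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=1,
                             per_device_train_batch_size=2, learning_rate=2e-5)
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()
    return trainer.model

# Stage 1: auxiliary pragmatic task; Stage 2: euphemism detection.
model = fine_tune(model, politeness_data, "out/politeness")
model = fine_tune(model, euphemism_data, "out/euphemism")
```

Swapping the order of the two fine_tune calls gives the reverse ordering whose weaker transfer the abstract reports.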
Comments
Poster presentation at the 2025 Student Research Symposium.