Using Natural Language Processing to Predict Job Satisfaction and Turnover Intention

Presentation Type

Poster

Faculty Advisor

Michael Bixter

Access Type

Event

Start Date

26-4-2024 11:15 AM

End Date

26-4-2024 12:15 PM

Description

This study leverages advanced Natural Language Processing (NLP) techniques, particularly the state-of-the-art BERT large language model (LLM), to predict job satisfaction, a construct traditionally measured with Likert-scale questionnaires or interviews (Jura et al., 2022). Recognizing language as the most common and reliable measure of internal thoughts (Kjell et al., 2023), we use open-ended text responses to predict the corresponding rating-scale scores. Data were collected via MTurk to ensure representation from a diverse working population, with additional validation data collected from SONA samples. The transformer architecture of our chosen LLM, BERT, is well suited to interpreting the contextual meaning of text, which is essential for accurate prediction. Our approach forecasts self-reported job satisfaction to establish convergent validity. We also assess NLP's capacity to predict turnover intentions after controlling for self-reported job satisfaction scores. Analysis of the MTurk data validates the use of open-ended text responses for measuring job satisfaction, as evidenced by a strong positive correlation (r = 0.67). While the text responses did not add incremental validity beyond the job satisfaction rating scale (F1 score comparison: 0.63 vs. 0.60), they did predict turnover intentions on their own with an F1 score of 0.49. Our research confirms the efficacy of NLP methods in organizational research and offers insights into the theoretical understanding of job satisfaction.
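As a rough illustration of the kind of pipeline described above, the Python sketch below scores open-ended responses with a BERT regression head via the Hugging Face transformers library. The checkpoint name (bert-base-uncased), sequence length, and helper function are illustrative assumptions rather than the study's actual implementation, and the regression head would need to be fine-tuned on labeled responses before its outputs are meaningful.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed base checkpoint; the study's exact model and fine-tuning setup are not specified here.
MODEL_NAME = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=1,               # single continuous output (predicted rating-scale score)
    problem_type="regression",  # trains with MSE loss during fine-tuning
)
model.eval()

def predict_satisfaction(texts):
    """Map free-text job-satisfaction responses to predicted rating-scale scores."""
    inputs = tokenizer(texts, padding=True, truncation=True,
                       max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (batch_size, 1)
    return logits.squeeze(-1).tolist()

# Example usage (outputs are only meaningful after fine-tuning on labeled responses):
print(predict_satisfaction(["I enjoy my work, but the hours are long."]))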
