Analyzing the Linguistic Characteristics of Undergraduate Student Writing and AI-Generated Assignments

Presentation Type

Poster

Faculty Advisor

Larissa Goulart da Silva

Access Type

Event

Start Date

26-4-2024 2:15 PM

End Date

26-4-2024 3:15 PM

Description

Since the public release of ChatGPT in November 2022, language teachers and writing instructors have contended with its possible pedagogical applications and the challenges it poses for assessment practices. Casal and Kessler (2023), for example, examine whether reviewers in the field of applied linguistics can distinguish AI-generated from human-written abstracts. The growing literature on AI in applied linguistics has moved in two broad directions: (1) applications of AI to teaching and (2) whether AI can simulate original pieces of writing. In this study, we bring both themes together by exploring the extent to which ChatGPT-generated assignments approximate the linguistic profile of assignments written by undergraduate students in an English as a Foreign Language context. We use two corpora: one of assignments written by linguistics majors and one generated with ChatGPT 3.5, which was given the same prompts as the undergraduate students. We conducted an additive Multidimensional Analysis (MDA) using Biber’s (1988) dimensions, allowing for a broad comparison of the linguistic profiles of the two corpora. On Dimension 1, student writing loads on the positive side (1.34), while ChatGPT assignments load on the negative side (-22.1). This suggests that ChatGPT approximates more formal writing, while student writing is more informal and personal. Similarities between the corpora were also revealed, such as in the use of narrative and persuasive features. These results highlight the extent to which ChatGPT can simulate the linguistic features characteristic of student writing.
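For readers unfamiliar with additive MD analysis, the Python sketch below illustrates the general idea behind a Dimension 1 score in the Biber (1988) tradition: each text's normalized feature frequencies are z-standardized against reference norms, and the z-scores are summed according to whether the feature loads positively or negatively on the dimension. This is not the authors' code; the feature set, norms, and counts are placeholder assumptions for illustration only, not values from this study.

```python
# Minimal sketch of an additive Dimension 1 score (Biber 1988 tradition).
# All reference means/SDs and feature counts below are hypothetical placeholders.

# Hypothetical reference norms: feature -> (mean, sd, loading sign on Dimension 1)
REFERENCE_NORMS = {
    "private_verbs": (18.0, 10.0, +1),
    "first_person":  (27.0, 26.0, +1),
    "contractions":  (13.5, 18.0, +1),
    "nouns":        (180.5, 35.0, -1),
    "prepositions": (110.5, 25.0, -1),
}

def dimension1_score(counts_per_1000_words: dict) -> float:
    """Sum the signed z-scores of a text's normalized feature frequencies."""
    score = 0.0
    for feature, (mean, sd, sign) in REFERENCE_NORMS.items():
        z = (counts_per_1000_words.get(feature, 0.0) - mean) / sd
        score += sign * z
    return score

# Made-up example texts: an "involved" student essay vs. a noun-heavy AI response.
student_text = {"private_verbs": 25, "first_person": 40, "contractions": 20,
                "nouns": 170, "prepositions": 100}
chatgpt_text = {"private_verbs": 5, "first_person": 2, "contractions": 0,
                "nouns": 230, "prepositions": 140}

print(round(dimension1_score(student_text), 2))  # positive -> more involved/personal
print(round(dimension1_score(chatgpt_text), 2))  # negative -> more formal/informational
```

In the study itself, such per-text scores would be averaged across each corpus, which is how corpus-level values like 1.34 for student writing and -22.1 for ChatGPT assignments can be compared on the same dimension.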
