Presenter

Ashley Kulp

Document Type

Poster

Publication Date

2026

Abstract

AI-generated text is becoming increasingly common in educational and research settings, leading to greater reliance on AI detection tools. While prior research has assessed the performance of AI detection tools, few studies have examined how well they identify humanized and/or hybridized text. This study evaluates the accuracy of five publicly accessible AI detectors in distinguishing among human-written, AI-generated, humanized, and hybrid academic text. Discussion sections from 200 Alzheimer's disease research articles were collected, and four versions of each were created: human only, AI only, humanized, and hybrid (30% rewrite), resulting in 800 total samples. The primary analysis presented detectors with all human-only and AI-only texts, while the secondary analysis used a randomized subset of humanized and hybrid texts; all detectors evaluated the same final set of 500 samples. Detector outputs were recorded as numeric scores, and performance was assessed using the area under the receiver operating characteristic curve (AUROC). The findings emphasize the need for further development of AI detection tools to reliably distinguish AI from human writing. Moreover, given the rapid development of LLMs, detectors should be continuously revised to keep pace with more advanced text transformations.
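The AUROC evaluation described above can be sketched as follows. This is a minimal illustration, not the study's actual analysis code: the labels and detector scores below are hypothetical toy values (the real study used 500 samples per detector), and AUROC is computed via its rank-based (Mann-Whitney U) formulation, i.e., the probability that a randomly chosen AI-generated sample receives a higher detector score than a randomly chosen human-written one, with ties counted as one half.

```python
# Minimal sketch of AUROC computation for one detector's scores.
# Labels: 1 = AI-generated, 0 = human-written.
# Scores: the detector's numeric output (higher = "more likely AI").
# All data here are hypothetical illustrations.

def auroc(labels, scores):
    """AUROC as the probability that a random positive sample
    outscores a random negative sample (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: three AI-generated and three human-written samples,
# scored on a 0-100 "likely AI" scale.
labels = [1, 1, 1, 0, 0, 0]
scores = [90, 75, 60, 40, 65, 10]
print(auroc(labels, scores))  # 8 of 9 positive/negative pairs ranked correctly
```

A perfectly discriminating detector would score every AI-generated sample above every human-written one (AUROC = 1.0), while a detector no better than chance yields AUROC ≈ 0.5.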

Faculty Mentor

Samantha Rosenthal, Ph.D., M.P.H.

Academic Discipline

College of Arts & Sciences
