Document Type
Article
Department
Institute for Educational Development, East Africa
Abstract
The use of Generative Artificial Intelligence (AI) tools, such as Gemini and ChatGPT, is on the rise, and these tools have revolutionised content generation in professional and academic domains. However, their increasing sophistication makes it difficult to distinguish AI-generated texts from human-written ones, raising concerns about integrity, especially in academic writing. This study evaluated the effectiveness of three AI text humanising tools, Writesonic, QuillBot Paraphraser, and WriteHuman, in refining texts generated by ChatGPT and Gemini. The study employed a comparative experimental design. Using quantitative analysis, it compared baseline AI detection rates with rates measured after successive humanisation iterations, and also compared performance across the humanising tools. The AI detection tools used in the study were QuillBot, ZeroGPT, and Scribbr. Findings show variations in tool effectiveness. While WriteHuman consistently succeeded in masking the AI origin of texts, with an Average Detection Rate (ADR) of 1.98%, Writesonic and QuillBot performed inconsistently in their humanising processes, with ADRs of 64.39% and 93.56%, respectively. Across all iterations and AI detectors, the detection rates for Writesonic and QuillBot fluctuated significantly, sometimes improving and then regressing unpredictably, indicating that these tools are not reliable. QuillBot performed especially poorly, with 15 of its 18 outputs remaining flagged as AI-generated across all AI text detectors. The study highlights ethical implications, emphasising that users should apply these tools with caution, since their effectiveness is not guaranteed, and that academic institutions should put in place policies governing the use of these tools in academic writing.
This study makes a significant contribution to understanding the interplay between Generative AI tools, AI content detection technologies, and humanising strategies, fostering informed discourse on academic integrity in the era of AI.
AKU Student
no
Publication (Name of Journal)
International Journal of Advanced Research
DOI
https://doi.org/10.37284/ijar.9.1.4683
Recommended Citation
Epaphras, S. N., & Mtenzi, F. (2026). Evaluating the Effectiveness of AI Text Humanising Tools in Reducing AI Detection in AI-Generated Texts by AI Detectors. International Journal of Advanced Research, 9(1), 107-125.
Available at:
https://ecommons.aku.edu/eastafrica_ied/258
Creative Commons License

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.