Abstract
Generative models, especially large language models (LLMs), have made remarkable progress in producing human-like text. However, their output often exhibits patterns that make it easier to detect than text written by humans. In this paper, we investigate how explainable AI (XAI) methods can be used to reduce the detectability of AI-generated text (AIGT), and we introduce a robust ensemble-based detection approach.
We begin by training an ensemble classifier to distinguish AIGT from human-written text, then apply SHAP and LIME to identify tokens that most strongly influence its predictions. We propose four explainability-based token replacement strategies to modify these influential tokens.
Our findings show that these token replacement approaches can significantly diminish a single classifier's ability to detect AIGT, demonstrating that XAI methods can make AIGT harder to detect by targeting its most influential tokens. However, our ensemble classifier maintains strong performance across multiple languages and domains, indicating that a multi-model approach can mitigate the impact of token-level manipulations. Together, these results highlight the need for robust, ensemble-based detection strategies that can adapt to evolving approaches for concealing AIGT.
Key Contributions
XAI-Based Token Identification
Application of SHAP and LIME explainability techniques to identify tokens that most strongly influence AI text detection predictions.
Token Replacement Strategies
Four novel explainability-based token replacement strategies designed to modify influential tokens in AI-generated text.
Ensemble Detection Approach
A robust ensemble-based detection classifier that maintains strong performance across multiple languages and domains.
Multi-Domain Evaluation
Comprehensive evaluation demonstrating that ensemble approaches can mitigate the impact of token-level manipulations.
Methodology
Ensemble Classifier Training
Train an ensemble classifier to distinguish AI-generated text from human-written text across multiple domains and languages.
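As a minimal sketch of this step, the Python below combines TF-IDF features with a soft-voting ensemble in scikit-learn. The base models, features, and toy corpus are illustrative assumptions, not the paper's actual detector architecture.

```python
# Illustrative sketch only: a TF-IDF + soft-voting ensemble stands in for
# the paper's detector, whose actual models and features may differ.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus: label 1 = AI-generated, label 0 = human-written.
texts = [
    "The system delivers an optimal and comprehensive solution.",
    "honestly i just scribbled this down between meetings",
]
labels = [1, 0]

# Soft voting averages the base classifiers' predicted probabilities.
ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100)),
            ("nb", MultinomialNB()),
        ],
        voting="soft",
    ),
)
ensemble.fit(texts, labels)
```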
Explainability Analysis
Apply SHAP and LIME to identify tokens that most strongly influence the classifier's predictions for detecting AI-generated content.
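A hedged sketch of this step with LIME follows (SHAP can be applied analogously via the shap package); it assumes the `ensemble` pipeline sketched above and ranks the tokens whose presence most pushes a text toward the AI class.

```python
# Sketch: rank tokens by their LIME weight toward the "ai" class (label 1).
# Assumes the `ensemble` pipeline from the previous sketch.
from lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer(class_names=["human", "ai"])

def top_ai_tokens(text, predict_proba, k=10):
    """Return up to k (token, weight) pairs that most increase P(ai)."""
    exp = explainer.explain_instance(text, predict_proba, num_features=k)
    # as_list() yields (token, weight); positive weight favors label 1.
    return [(tok, w) for tok, w in exp.as_list() if w > 0]

influential = top_ai_tokens(
    "The system delivers an optimal and comprehensive solution.",
    ensemble.predict_proba,
)
print(influential)
```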
Token Replacement
Implement four explainability-based token replacement strategies to modify the most influential tokens identified in the previous step.
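The four strategies themselves are defined in the paper; as a loose illustration of the mechanics only, the sketch below swaps each influential token for a WordNet synonym, a plausible but hypothetical replacement rule that is not necessarily one of the paper's four.

```python
# Hypothetical replacement rule for illustration: substitute each influential
# token with its first WordNet synonym. Requires: nltk.download("wordnet").
import re
from nltk.corpus import wordnet

def replace_with_synonyms(text, influential_tokens):
    """Replace whole-word occurrences of each token with a synonym, if any."""
    for tok in influential_tokens:
        candidates = [
            lemma.name().replace("_", " ")
            for syn in wordnet.synsets(tok)
            for lemma in syn.lemmas()
            if lemma.name().lower() != tok.lower()
        ]
        if candidates:
            text = re.sub(rf"\b{re.escape(tok)}\b", candidates[0], text,
                          flags=re.IGNORECASE)
    return text

tokens = [tok for tok, _ in influential]
print(replace_with_synonyms(
    "The system delivers an optimal and comprehensive solution.", tokens))
```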
Robustness Evaluation
Evaluate both single classifiers and ensemble approaches against token replacement attacks to assess detection robustness.
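Building on the sketches above, one way to frame this comparison is to score a detector on clean versus token-replaced AI text. The helper below is a hypothetical framing; the paper's actual evaluation spans multiple languages and domains.

```python
# Sketch: compare detection accuracy before and after the replacement attack.
# Builds on `ensemble`, `top_ai_tokens`, and `replace_with_synonyms` above.
from sklearn.metrics import accuracy_score

def evaluate_robustness(detector, texts, labels):
    """Return (clean_accuracy, attacked_accuracy) for a detector."""
    attacked = [
        replace_with_synonyms(
            t, [tok for tok, _ in top_ai_tokens(t, detector.predict_proba)]
        ) if y == 1 else t  # attack only the AI-generated texts
        for t, y in zip(texts, labels)
    ]
    clean_acc = accuracy_score(labels, detector.predict(texts))
    attacked_acc = accuracy_score(labels, detector.predict(attacked))
    return clean_acc, attacked_acc

clean_acc, attacked_acc = evaluate_robustness(ensemble, texts, labels)
print(f"clean: {clean_acc:.2f}  attacked: {attacked_acc:.2f}")
```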
Key Findings
Token replacement approaches can significantly diminish a single classifier's ability to detect AI-generated text, demonstrating that XAI methods can effectively identify and target the most influential tokens.
The ensemble classifier maintains strong performance across multiple languages and domains, showing that a multi-model approach can mitigate the impact of token-level manipulations.
These results highlight the need for robust, ensemble-based detection strategies that can adapt to evolving approaches for hiding AI-generated text, rather than relying on single-model detectors.
Citation
@article{mohammadi2025explainability,
  title={Explainability-Based Token Replacement on LLM-Generated Text},
  author={Mohammadi, Hadi and Giachanou, Anastasia and Oberski, Daniel L. and Bagheri, Ayoub},
  journal={arXiv preprint arXiv:2506.04050},
  year={2025}
}