Text AI detection is such a crapshoot. When you consider what the model is doing, it's just producing plain language. It's not going, "He walked through the rubble [beep], his boots crunching on the remnants of what was [boop]." The closest you get to a hard tell is an unproofread "As an AI language model" slipped into an academic paper, or AI-isms associated with particular models and datasets, like a high tendency toward "shivers down the spine" and the like. But even AI-isms aren't evidence of AI use on their own: they become model tropes precisely because of how common they are in human writing.
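To make that concrete, here's roughly the ceiling for surface-tell detection: a toy trope scanner. The phrase list and function are purely illustrative inventions of mine, and as noted above, hits are weak evidence at best.

```python
import re

# A hypothetical list of "AI-isms": phrases certain models overuse.
# Weak signals at best; they became model tropes precisely because
# they're common in human writing, so a hit proves nothing by itself.
AI_ISMS = [
    r"as an ai language model",
    r"shivers? down (?:his|her|their|my|the) spine",
    r"tapestry of",
]

def count_ai_isms(text: str) -> dict[str, int]:
    """Count occurrences of each trope, case-insensitively."""
    lowered = text.lower()
    return {pattern: len(re.findall(pattern, lowered)) for pattern in AI_ISMS}

if __name__ == "__main__":
    sample = "As an AI language model, I felt a shiver down my spine."
    print(count_ai_isms(sample))
```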
Image AI detection can be more reliable, depending on the tool, because (if I understand right) diffusion models leave "noise" in a very particular way that an automated tool can be trained to detect, and that a human photographer or artist is highly unlikely to produce by accident.
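For what it's worth, here's a minimal sketch of one approach in that spirit: the azimuthally averaged power spectrum of an image, the kind of frequency-domain feature some detection research builds classifiers on. It assumes numpy and a 2D grayscale array; the function name and details are mine, not any particular tool's.

```python
import numpy as np

def radial_power_spectrum(img: np.ndarray) -> np.ndarray:
    """Azimuthally averaged power spectrum of a grayscale image.

    Generative models (documented for GANs, reported for diffusion
    models too) can leave characteristic high-frequency artifacts
    that show up in this kind of spectrum.
    """
    # 2D FFT, shifted so the zero frequency sits at the center
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2

    # Distance of every pixel from the center of the spectrum
    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)

    # Average the power within each integer radius bin
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

# On its own this is just a feature extractor; an actual detector
# would train a classifier on spectra from known real vs. generated
# images and flag the non-human pattern.
```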
But that's the crux of it, as far as I can tell: for AI output to be reliably detectable, it needs distinctly non-human characteristics embedded in it. If it doesn't have them, good luck.