Because if you’ve spent real time crafting something — a proposal, a report, even a thoughtful email — you know exactly how that question lands. Badly. Not because it’s offensive on the surface, but because of what it implies: that if a tool touched your work, your thinking must be diluted. That you didn’t really write it. That the ideas aren’t quite yours. That’s not curiosity; it’s a quiet challenge to authorship.
And it deserves pushback.
The “AI-Sounding” Problem (Yes, It Exists. Sort Of.)
Let’s be honest. AI writing can feel a little… polished. Too polished. Smooth in a way that borders on sterile. Balanced to a fault. Careful where it should be bold. Structured where it should breathe.
But here’s the thing: that’s not a diagnostic. It’s a vibe. Humans write like that too. All the time. Corporate, cautious, slightly lifeless prose didn’t arrive with AI — it’s been haunting inboxes for decades. No one ever asked if Excel wrote those.
Not All AI Use Is the Same
There’s a difference between:
- dumping a prompt and hitting send
- shaping a rough draft into something sharp
- tightening language
- pressure-testing ideas
- or just fixing grammar
We lump all of that into “AI wrote it,” which is about as precise as saying “Word wrote it” because autocorrect kicked in. Tools don’t erase authorship. They expose how you use them.
The Word Processor Argument Still Wins
We don’t question someone’s thinking because they used Excel to model a budget. We don’t question a proposal because Word fixed the commas.
AI is more powerful, sure. It can generate language. But so can a ghostwriter, or a comms consultant, or a good editor who rewrites half your sentences. Authorship was never about typing every word yourself. It’s about owning the thinking — the argument, the decisions, the intent.
That hasn’t changed.
What the Detectors Don’t Tell You
AI detection is unreliable. Full stop. It produces false positives, struggles with hybrid human-AI text, and is biased against non-native English writers. The idea that there's a clean, objective way to tell is comforting, but mostly wrong.
What’s being measured is pattern, not authorship. Surface, not substance. And frankly, nobody funds a proposal because it passed an AI detector.
What Actually Matters
If you’re doing serious work, particularly in mission-driven organizations, the real questions are simpler:
- Is it accurate?
- Is it clear?
- Does it reflect what actually happened?
- Does it sound like you?
Those are answerable.
“Was AI involved?” mostly isn’t.
In the Meantime…
If someone asks whether AI wrote something you created, it’s fair to smile and push back. “Did Excel write your budget?” works nicely. Because the truth is straightforward: The ideas were yours. The judgment was yours. The responsibility is yours. That’s authorship.
The Better Question
Not: “Did AI write this?”
But: “Is it any good?”
Because in the end, nobody cares how the sentences were produced. They care whether the thinking holds up. And that part — still, stubbornly, beautifully — belongs to you.