The AI Exoskeleton:

Writing With a Tool, Not Hiding Behind One

There’s a quiet contract on Substack. When you read an essay, you assume there’s a human on the other end. A real person thinking, struggling, choosing words, and taking responsibility for the ideas on the page. That expectation matters. It’s also why the rise of AI-generated writing has created so much backlash here. Much of that criticism is deserved.

The anger usually starts with theft. Large language models were trained on oceans of human writing, much of it scraped without consent. That’s not a small ethical footnote. It’s foundational. Add to that the flood of low-effort AI content, the endless posts that feel interchangeable, bloodless, and oddly confident about nothing in particular. People call it slop because that’s what it feels like. Volume without thought, output without cost.

There’s also the deeper objection. Writing is thinking. When someone hands the thinking to a machine, something essential is lost. The false starts. The discomfort. The slow wrestling with an idea until it sharpens or collapses. Many readers can sense when that work didn’t happen. The prose may be clean, but it’s hollow. No lived experience. No risk. No soul.

All of that critique lands. I agree with more of it than I disagree with. But I don’t think the story ends there.

There’s a difference between using AI as a shortcut and using it as an exoskeleton, a form of iterative collaboration that amplifies effort rather than replacing it. One replaces thinking. The other pressures it. In my own work, AI isn’t a ghostwriter. It’s a sparring partner. I bring the premise, the structure, the lived experience, and the point of view. Then I push back. Hard. I ask it to challenge everything. I reject clean but shallow phrasing. I force revisions until the voice sounds like me again, not like a median of the internet.

This process is not necessarily faster; it’s often slower. Recursive dialogue exposes gaps in thinking that a solo draft might miss. Tone refinement takes work because the default output is rarely good enough. What AI is actually doing here is cognitive offloading. It handles the mechanical load so more mental energy can go into coherence, nuance, and meaning.

The thinking still belongs to the human. The responsibility does too.

Tools can assist, but accountability cannot be delegated.

That distinction matters because the real value of an essay isn’t who typed the words. It’s who directed the thinking. We don’t judge a photographer by whether they mixed their own chemicals or a musician by whether they built their own instrument. The lead author in a scientific research study often didn’t do much of the actual writing, or any of it. We judge the work. Its clarity. Its honesty. Its insight. Writing shouldn’t be different just because the tool feels unfamiliar.

That said, transparency matters. Trust matters. I believe writers who use AI should say so. Not with shame and not with bravado. Just honesty. A simple statement of use signals respect for the reader and preserves the human-to-human contract that makes this platform work. Hidden automation erodes trust far faster than open collaboration ever could.

The future of writing here doesn’t have to be a choice between purity and slop. There’s a middle ground where rigor matters more than provenance and where effort is visible in the result, even if the process looks different than it used to. AI can cheapen writing. We’ve all seen that. But in disciplined hands, it can also force better thinking, sharper arguments, and fewer lazy conclusions.

The question isn’t whether a machine was involved. The question is whether the writer was.

And that, in the end, is still something a reader can feel.
