
AI Humanizer for Claude, Codex, Hermes Agent, OpenClaw

There is a free, open-source AI humanizer that works with the agent subscription you already pay for. It is called Agentic Humanizer, and it closes the rewrite loop with an on-device AI detector instead of guessing.

If you have been pasting drafts into a $10-30-a-month "AI humanizer" wrapper, you can stop. Agentic Humanizer is a public skill in the SlopOrNot plugin bundle. It runs the rewrite step inside Claude Code, Codex CLI, Hermes Agent, OpenClaw, Cursor, Gemini CLI, or OpenCode. Install once, keep it. The voice-matching path can also steer rewrites toward a writing sample you provide, so the output sounds less like a generic humanizer and more like your normal prose.

What Agentic Humanizer Does

Agentic Humanizer is a humanizer skill that revises AI-generated text the way a careful editor would. It asks four rewrite questions before it touches a single sentence:

  • Dialect
  • Tone
  • Reading level
  • Length

Those answers feed the prompt with the dialect, the tone, the target reading level, and the length, so the rewrite is opinionated instead of generic. A teacher reads differently from a hiring manager, and the skill writes for the reader you name.

When voice matching is available, it can ask a fifth question: whether to mimic a writing sample of yours. That sample is optional. If you say yes, the skill extracts a compact style fingerprint before the rewrite loop begins.

Then it runs the rewrite loop. Each pass reads the draft, identifies what makes it sound generated (repeated sentence shapes, hedged confidence, vague attributions, rule-of-three flourishes, em-dashes used for rhythm), and rewrites with specific targets. After the rewrite, it scores the result against Slop or Not's on-device AI detector and readability analyzer. If the score is still high, it changes tactics on the next pass: longer sentences, fewer transitions, a more specific opening, a different argument shape. Same prompt twice rarely solves the problem; a different strategy might.
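In shell terms, the measured loop looks roughly like this sketch. Here `rewrite_pass` and `ai_score` are placeholders for the agent's revision step and a score parsed from `slop text --json`; the threshold and five-pass cap are illustrative, not the skill's exact settings.

```shell
# Sketch of the measured rewrite loop. rewrite_pass and ai_score are
# placeholders; define them for your own setup before running this.
run_loop() {
  draft="$1"
  pass=1
  while [ "$pass" -le 5 ]; do
    draft="$(rewrite_pass "$draft" "$pass")"  # pass number lets the agent switch tactics
    score="$(ai_score "$draft")"              # e.g. parsed from `slop text --json`
    [ "$score" -lt 30 ] && break              # stop once the score drops (threshold illustrative)
    pass=$((pass + 1))
  done
  printf '%s' "$draft"
}
```

The point of the sketch is the shape: rewrite, measure, and only stop when the measurement says so.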

How Agentic Humanizer Differs From Other AI Humanizer Tools

Most paid humanizers route your draft through a generic LLM, prompt it once with "make this sound human", and return the output. There is no strategy change between passes, no detector loop, and no audience model. The score either flips or it does not, and you pay for the privilege of finding out.

Agentic Humanizer does five things those wrappers do not:

  1. Asks first, rewrites second. The four setup questions turn a generic rewrite into one tuned for a real audience and a target reading level. Most wrappers skip this entirely.
  2. Closes the loop with a real detector. The skill calls Slop or Not's on-device AI detector and readability analyzer between passes. The score is a feedback signal, not a final exam.
  3. Changes tactics when the score does not move. A second pass is not a louder version of the first pass. The skill instructs the agent to try a different sentence rhythm, a different opening, a different argument shape.
  4. Matches your voice when you provide a sample. A cached fingerprint can steer register, contractions, hedges, openings, and paragraph rhythm toward your own writing.
  5. Open-source and free. The whole workflow is on GitHub at numen-tech/slopornot. Fork it, change anything that does not fit, run it on the agent subscription you already have.

It works with Claude Code, Codex CLI, Hermes Agent, OpenClaw, Cursor, Gemini CLI, and OpenCode. Without Slop or Not Pro, the skill can still do a rewrite pass. With Pro active in the Mac app, it runs the measured loop with detector and readability feedback between passes. That detector loop is why the writing it produces is less like output from a one-shot rewrite wrapper: the skill can revise, measure, and change tactics.

How Voice Matching Changes the Loop

Voice matching gives the rewrite a style target before the detector loop starts. Agentic Humanizer can read ~/.agentic-humanizer/voice.txt, use voice=/path/to/file.txt for one run, or ask for a writing sample when no saved sample exists.

The sample policy is deliberately plain:

  • 200 or more words is recommended.
  • 50 words is the hard minimum.
  • The first 3000 words are used for extraction.
  • voice=off or voice-skip disables voice matching for one call.
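That policy is easy to check before you save a sample. A minimal sketch (the skill does its own validation; the thresholds below simply mirror the list above):

```shell
# Classify a writing sample against the 50-word minimum and the
# 200-word recommendation before saving it as a voice sample.
check_voice_sample() {
  words=$(wc -w < "$1")
  if [ "$words" -lt 50 ]; then
    echo "too-short"   # below the hard minimum; the skill will reject it
    return 1
  elif [ "$words" -lt 200 ]; then
    echo "usable"      # accepted, but 200+ words gives a better fingerprint
  else
    echo "recommended"
  fi
}

# Save an acceptable sample where the skill looks for it:
#   check_voice_sample sample.txt && cp sample.txt ~/.agentic-humanizer/voice.txt
```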

On first use, the skill extracts a stylometric fingerprint and asks you to approve it. The fingerprint covers style traits such as register, contractions, hedges, function words, punctuation quirks, signature openings, and paragraph rhythm. It should not copy facts, names, or anecdotes from the sample.

The fingerprint then feeds two parts of the existing five-iteration loop. Iteration 2 uses register, contractions, hedges, and function-word habits during tone alignment. Iteration 5 uses openings, idioms, and paragraph rhythm when adding concrete phrasing. The loop schedule itself does not change.

You can inspect or reset the saved voice state:

/agentic-humanizer show voice
/agentic-humanizer reset voice
/agentic-humanizer set voice=/path/to/file.txt

If your main question is voice rather than detector setup, read the deeper guide: How to Humanize AI Text to Sound Like Your Own Voice.

The Detector Half: Why Slop or Not

The rewrite is the easy half. The detector is the part most workflows get wrong.

Online detectors like GPTZero and Originality.ai run checks on their servers and often meter usage by credits, characters, or word count. That is why long drafts can run into caps. That is why sensitive text leaves your device. That is why a single score from one of them is one signal, not a verdict.

Slop or Not on Mac is built differently. Its text detector is a custom-trained classifier built specifically for AI text, running locally on Apple silicon. Internal tests put text accuracy at 95%. (Accuracy is based on internal tests; results can vary with new AI models and advanced methods designed to trick detectors.) Because there is no per-token bill to a third-party LLM provider behind every check, Slop or Not does not charge per word the way many web tools do. A classroom can check full essays without splitting them into fragments.

Two things follow from running the detector locally.

First, privacy. The draft never leaves your Mac. If the draft is a job application, a legal brief, a clinical note, or a student essay, "we run it through GPTZero" means "we sent it to a third-party server." The Slop or Not detector runs on-device. There are no per-document word caps either: paste the full draft.

Second, the loop. A detector you check by hand once at the end is not a feedback signal. A detector your agent can call between every revision is. That is the difference between a writing pass and a measured rewrite.

Slop or Not Pro ships the slop CLI and slop mcp MCP server inside the Mac app. Both are how Agentic Humanizer reads detector and readability scores between passes.

Set Up Agentic Humanizer With MCP

Two parts. Link the binary first:

mkdir -p ~/.local/bin
ln -sf '/Applications/Slop Or Not - AI Fake Detector.app/Contents/MacOS/slop' ~/.local/bin/slop
slop status

Then register the server with your agent. Claude Code:

claude mcp add --transport stdio --scope user SlopOrNot -- slop mcp

Codex CLI (~/.codex/config.toml):

[mcp_servers.SlopOrNot]
command = "slop"
args = ["mcp"]

Hermes Agent, OpenClaw, Cursor, Gemini CLI, and OpenCode follow similar patterns. Full setup details live in the Slop or Not CLI guide and Slop or Not MCP guide. For a deeper walkthrough of the underlying detector, readability, and cleanup commands, read the Slop or Not CLI and MCP overview.

Install the SlopOrNot plugin bundle from github.com/numen-tech/slopornot, and the agent will call detect_text, analyze_readability, and clean_text itself between passes. Claude Code plugin installs use /slopornot:agentic-humanizer; direct skill installs and most non-plugin clients still use /agentic-humanizer.

Does Agentic Humanizer Work in Hermes Agent and OpenClaw?

Yes, when the client can load the skill or run an equivalent prompt and can call the Slop or Not MCP server. Hermes Agent and OpenClaw use MCP-style local tool configuration, so they can point at the same slop mcp command as Claude, Codex, and Cursor. The runtime name stays agentic-humanizer; the repository and plugin bundle now live at numen-tech/slopornot.

The Manual CLI Version

If you would rather build your own AI humanizer instead of installing the skill, the same loop works as shell commands plus a prompt. No custom code, no extra subscription. The same template works as a ChatGPT humanizer, a Claude humanizer, or a Codex humanizer. Model differences mostly affect tone defaults, not the structure of the loop.

You need:

  • An agent subscription you already have: ChatGPT, Claude, Codex CLI, Hermes Agent, OpenClaw, Cursor, Gemini CLI, or OpenCode.
  • A Mac with Apple silicon and Slop or Not Pro for the local AI detector.

The minimum loop in shell:

pbpaste | slop text --json
pbpaste | slop readability --json
pbpaste | slop cleanup --json
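Those three checks can be bundled into one report to paste back into the agent. A small wrapper, using only the commands shown above (the JSON shape is whatever `slop` emits):

```shell
# Run all three checks on the clipboard and print one combined report.
slop_report() {
  draft="$(pbpaste)"
  for check in text readability cleanup; do
    printf '## slop %s\n' "$check"
    printf '%s' "$draft" | slop "$check" --json
    printf '\n'
  done
}
```

Run `slop_report | pbcopy` to put the whole report back on the clipboard in one step.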

Paste the JSON output back into your agent. Ask for one focused revision per pass. A useful prompt looks like this:

Here is a draft and a Slop detector result.
 
[draft]
 
[detector JSON]
 
Rewrite for a hiring manager at an 8th-grade reading level. Keep every
claim and the structure. Vary the sentence rhythm. Remove repeated
sentence openings. Cut hedged confidence ("might", "could potentially",
"aims to"). Then return the rewrite.

Run the Slop or Not detector again. If the score moved in the right direction, keep iterating. If it did not, change tactics on the next pass: longer sentences, fewer transitions, a more specific opening, a different argument shape.
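The tactic switch can even be scripted as a rotation, so each pass appends a different instruction to the prompt. A sketch; the strategy list is illustrative, not the skill's internal schedule:

```shell
# Return a different rewrite strategy for each pass number.
next_tactic() {
  case "$1" in
    1) echo "vary sentence length; allow longer sentences" ;;
    2) echo "cut transition words and repeated openings" ;;
    3) echo "open with a concrete, specific detail" ;;
    *) echo "restructure the argument shape" ;;
  esac
}
```

For example, `next_tactic 2` gives you the second-pass instruction to add to your prompt.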

Why Most Paid AI Humanizers Are Not Worth the Subscription

Cloud humanizers like the rewrite forms inside Originality.ai and the long tail of "AI-to-human" sites generally do three things:

  1. They send your text to a generic LLM, often the same OpenAI or Anthropic model you already have access to.
  2. They prompt that LLM to "rewrite this to sound human."
  3. They charge $10–30 a month for the convenience.

The rewrite quality reflects the underlying model, not the wrapper. If you were going to ask Claude to "make this less robotic" anyway, you have already paid for the rewrite step. The detector half is messier. Cloud detectors like GPTZero and Originality.ai charge for credits and word counts, upload your draft to a server, and disagree with each other on borderline text.

So you are paying for a rewrite you can already do, plus a score you should not fully trust. Agentic Humanizer plus Slop or Not replaces both halves on tooling you already own, with optional voice matching when your own sample should guide the rewrite.

What About Bypassing GPTZero or Originality.ai?

No tool can promise to bypass AI detectors. Detection is probabilistic. Different detectors run different models. A draft that scores high on Slop or Not might score low on GPTZero, and vice versa. Treating any single score as proof of authorship is wrong in either direction.

Agentic Humanizer also works without Slop or Not. The skill is a humanizer first. It can revise a draft using only the ChatGPT, Claude, Codex, Hermes Agent, OpenClaw, Cursor, Gemini CLI, or OpenCode subscription you already pay for. What Slop or Not adds is the verification loop: every rewrite is scored against an on-device AI detector trained specifically for this job, so the agent can change strategy between passes instead of running the same prompt and hoping the score moves.

The honest claim is narrower than "bypass": revising with intent, for a real audience and a target reading level, produces clearer writing. Clearer writing tends to move detector scores. It also tends to be the kind of writing a human reader actually finishes. That is the real win, and it does not require another wrapper subscription.

Privacy Notes

Slop or Not runs every check on your Mac. Detection, readability, cleanup, and image checks all stay on-device.

The agent you choose for the rewrite step has its own privacy model. ChatGPT, Claude, Codex, Hermes Agent, OpenClaw, Cursor, and Gemini may receive the text you ask them to revise. Voice matching has the same caveat: the sample and cached fingerprint live under ~/.agentic-humanizer/, but fingerprint extraction runs through the host LLM. If the draft or writing sample cannot leave your machine, run the rewrite step against a local model and keep Slop or Not in the same local workflow.

FAQ

Is there a free AI humanizer?

Yes. The Agentic Humanizer skill on GitHub is open-source and free to install. The rewrite step uses the ChatGPT, Claude, Codex, Hermes Agent, OpenClaw, Cursor, Gemini CLI, or OpenCode subscription you already pay for. The local detector loop requires Slop or Not Pro on Mac because the CLI and MCP server are Pro features. There is no separate AI humanizer subscription to buy.

Can Agentic Humanizer sound like my own voice?

Yes, if you provide a writing sample. The optional voice-matching path extracts a style fingerprint from your sample and uses it during the tone and concrete-phrasing passes. It needs at least 50 words, works better with 200 or more, and should never be treated as a perfect clone.

How do I humanize ChatGPT or Claude text?

Paste the draft into ChatGPT, Claude, or Codex with three things attached: the audience, the target reading level, and a focused list of patterns to fix (repeated openings, hedged confidence, em-dash overuse, rule-of-three flourishes). Ask for one revision pass at a time. Score the result with Slop or Not's local AI detector and readability check, then revise again with a different strategy if the score did not move.

Do I need Claude or Codex specifically?

No. Hermes Agent, OpenClaw, Cursor, Gemini CLI, and OpenCode all work. Any MCP client that can call a stdio server can use slop mcp. ChatGPT (the consumer chat app) does not call MCP servers directly, but you can run the manual CLI version with copy and paste.

Can it bypass GPTZero or Originality.ai?

Nothing promises that. Verifying every rewrite against Slop or Not's on-device AI detector, which scored 95% on internal text-detection tests and runs on Apple Neural Engine, gives you measured feedback before you ship instead of a single guess. Detection is probabilistic. A draft that scores low on Slop or Not might still score high on GPTZero, and vice versa. The point of the loop is that you ship with a score from a classifier trained specifically for AI text, not a coin flip. Accuracy is based on internal tests; results vary with new AI models and methods designed to trick detectors.

Does Slop or Not upload my draft?

No. Detection, readability, cleanup, and image checks all run on your Mac. The agent you use for rewriting has its own data policy. If you enable voice matching, the writing sample follows that agent's normal local or cloud request path during fingerprint extraction.

Can I run this on Windows or Linux?

The humanizer part of the skill still works. The rewrite questions and the one-shot rewrite run anywhere your Claude, Codex, Hermes Agent, OpenClaw, Cursor, Gemini CLI, or OpenCode subscription runs, so you use the same skill on Windows or Linux. What you lose is the local detector loop. Slop or Not is Mac and iPhone only, and the slop CLI and MCP server ship inside the Mac app, so between-pass scoring against an on-device AI detector requires a Mac.

Try It

Install Slop or Not for Mac, activate Pro, then set up the Slop or Not CLI or MCP server. Install the Agentic Humanizer skill or write your own prompt, then let Claude, Codex, Hermes Agent, OpenClaw, or Cursor run the AI humanizer loop with a local detector score in it. If you want the rewrite to follow your writing style, add a voice sample and read the own-voice AI humanizer guide.

Get Slop or Not - The AI Content Detector for iPhone and Mac
Free text, image & deepfake detection with industry-leading accuracy and 100% privacy.
