Exploring the em dash, negative parallelism, emojis, tonal balance, and other former signs of life now mistaken for inauthenticity. A look into the hesitation forming around tone, voice, and the risk of sounding too human or too machine.
December 2025

Option + Shift + Minus & Delete
Before the Machines Learned to Write
Emily Dickinson is an obvious touchstone since she wrote with dashes the way other poets wrote with metaphor. They were everywhere in her work. Short ones, long ones, slanted up or slanted down. Scholars have spent more than a century arguing about what they meant and even how to print them. Some believed the dashes indicated pauses for breath when reading aloud. Others argued they marked emotional intensity, the places where language strained against what could be said. Paul Crumbley suggested the dash represented disjunction itself, the fractured quality of selfhood in language. Deirdre Fagan called them markers of “the unutterable,” moments where words failed and silence had to carry meaning.
Consider this fragment from one of her poems: “The Brain—is wider than the Sky—”. The dash following “Brain” produces a momentary suspension that withholds classification, while the dash following “Sky” blocks syntactic closure, indicating that the comparison remains open-ended beyond the limits of the line. They make you stop and reconsider what you just read. They create a visual interruption that mirrors the conceptual shock of the claim itself. Ben Lerner describes Dickinson’s dashes as “markers of the limits of the actual, vectors of implication where no words will do.”
For Dickinson, the dash represented what could not be spoken cleanly. A mind in motion. A thought interrupting itself. An emotion too large for the sentence containing it. The dash was human expressiveness at its most raw. While the symbol is shared, the underlying logic is typically unrelated. Dickinson’s dashes reflect intentional interruption; the dashes in today’s LLM output more likely reflect statistical preference.
The em dash is only one example of punctuation in peril. Many writers have used punctuation as a way to capture the stutter and lurch of thought, or the way feeling appears in spikes rather than in smooth arcs. Think of confessional poets who let line breaks fall in jarring spots. Think of diarists who trail off mid-sentence. Think of experimental novelists who leave whole pages unpunctuated and then suddenly drop a swarm of commas into a single paragraph. Punctuation, specifically the irregularity of it, has always been appreciated in writing. It signals that the writer is wrestling with something alive. It is a record of thinking in progress. We have always used these small anomalies as clues that a real person is behind the page. When a sentence sounds slightly wrong in the right way, we lean in. We assume there is a mind at work, not simply a large language model following its programming.
That assumption is now under strain. The same marks and rhythms that once read as human texture are being re-coded as possible evidence of machine design.
The New(ish) Marks of Suspicion
In 2023, Wikipedia editors launched Project AI Cleanup to address the flood of AI-generated submissions entering the encyclopedia. By late 2024, they had compiled a public field guide titled “Signs of AI Writing,” a detailed catalog of patterns observed across thousands of instances of machine-generated text, documenting what LLM output typically looks like in practice.
The five signals below represent some of the most reliable and consequential markers. Each one was once a legitimate rhetorical choice. Each one is now grounds for suspicion.
↳ The Em Dash
Large language models use the em dash more frequently than nonprofessional human writers do in the same genre, and they deploy it in places where humans would typically choose commas, parentheses, or colons. More importantly, LLMs use em dashes in a formulaic way, often mimicking “punched up” sales language by overemphasizing clauses or creating false parallelisms.
The em dash, once seen as a sign of mature writing in English, became an irritant. Wikipedia states it plainly for us: “While human editors and writers often like em dashes, AI loves them.”
In November 2025, OpenAI CEO Sam Altman publicly acknowledged the issue. He posted on X: “Small but happy win: If you tell ChatGPT not to use em dashes in your custom instructions, it finally does what it’s supposed to do!” Responses described the dash as one of the model’s most recognizable stylistic habits and portrayed its reduction as a kind of quality-of-life improvement for users tired of that particular tic. More significantly, the announcement validated what writers had been experiencing: the em dash had become a public liability. ChatGPT’s official Threads account even posted about it, admitting it had “ruined the em dash.” The phrasing was telling. It can be argued that a punctuation mark does not get ruined by being used. It gets ruined by being turned into evidence.
The consequences are straightforward. Writers who love the em dash for legitimate reasons—for rhythm, for emotional nuance, for the ability to hold two thoughts in tension—now avoid it. The mark that Emily Dickinson used to signal ambiguity and emotional depth is now treated as a bot signature. Expressive freedom contracts.
↳ Negative Parallelism
Humans use negative parallelism all the time, perhaps without ever knowing what it is called. “It’s about x, not y.” The structure reframes an assumption. It tells the reader: you thought it was this; it is actually that. The contrast creates emphasis. The use of this device traces back to elementary education, long before we became obsessed with the concept of juxtaposition after our first lit class.
LLMs reproduce this structure frequently, and they do so in predictable ways. We’ve noticed that AI-generated text in discussions includes phrases like “That’s not…, that’s…” or “This is not…, this is…” over and over, often without the rhetorical sophistication that justifies the structure in human writing. It feels like this is done for the assumed impact and emphasis, not built from thoughtful intent.
Detection culture now reads this structure as suspicious. When a human writer uses negative parallelism to make a sharp rhetorical point, readers conditioned to identify AI signals may classify it as machine-generated. In this environment, intent becomes irrelevant; readers and detectors default to surface-level “aura-farming” rather than meaning or rhetorical purpose.
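To make the point concrete, here is a minimal sketch in Python. The pattern and both sample sentences are invented for illustration, and it reproduces no real detector; it only shows how crude surface matching is. A single regular expression flags the shape of the sentence, and it fires identically on a deliberate human contrast and on a formulaic one.

```python
import re

# Hypothetical pattern, invented for illustration; not any published detector's rule.
# It fires on the surface shape "That's not ..., that's ..." (and the "this is" /
# "it is" variants) with no notion of whether the contrast is doing real work.
NEGATIVE_PARALLELISM = re.compile(
    r"\b(?:that|this|it)(?:[’']s| is) not\b[^.?!]*?,\s*(?:that|this|it)(?:[’']s| is)\b",
    re.IGNORECASE,
)

def flag_negative_parallelism(text: str) -> list[str]:
    """Return every span matching the shape, regardless of who wrote it or why."""
    return [m.group(0) for m in NEGATIVE_PARALLELISM.finditer(text)]

# Invented samples: one deliberate human contrast, one formulaic machine-style line.
human_line = "That's not a bug, that's the entire point of the design."
model_line = "This is not about speed, this is about trust."
print(flag_negative_parallelism(human_line))  # flagged
print(flag_negative_parallelism(model_line))  # flagged just the same
```

The match carries no information about why the structure was used; intent never enters the computation.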
This creates a philosophical bind. Negative parallelism is useful precisely because it mimics how humans think through problems. We correct ourselves. We reframe. We say “wait, actually it is this other thing.” Machines learned this pattern from us. We are now being told we sound like machines when we sound like ourselves.
The rhetorical implications matter. If writers avoid negative parallelism to evade suspicion, they lose a tool for clear argumentation. The structure does conceptual work. Removing it flattens the thinking.
↳ The Rule of Three
LLMs have a technical tendency toward balance. This often happens because of how the models are trained: they predict the next token based on probability distributions, and symmetrical structures score well in those distributions. LLMs love threes. Three adjectives in a row. Three short phrases separated by commas. Three examples to prove a point. The pattern appears everywhere in machine-generated text because it creates a sense of completeness without requiring deep analysis.
This can take different forms, from “adjective, adjective, adjective” to “short phrase, short phrase, and short phrase.” The structure makes superficial analyses appear more comprehensive than they actually are. You see it constantly: “The platform is intuitive, powerful, and scalable.” “The team demonstrated creativity, resilience, and strategic thinking.” “We need to innovate rapidly, communicate clearly, and execute flawlessly.” Three beats. Three items. Always three.
The rule of three has legitimate rhetorical power. “Life, liberty, and the pursuit of happiness” works because the third element surprises and expands. “I came, I saw, I conquered” works because the progression builds momentum. Classical rhetoric used triadic structures to create rhythm and emphasis.
LLMs use the rule of three differently. They often deploy it as filler, as a way to pad sentences and create false substance. The three items rarely build toward anything. They just sit there, equally weighted, claiming completeness through repetition. Writers who actually think in threes, who use triadic structures to build arguments or create rhythm, now face suspicion. The pattern has been so thoroughly associated with LLM writing that using it well requires extra justification. You have to prove you meant it.
Some writers may have started deliberately breaking their threes. They write four examples instead. They use two adjectives or five. They vary their lists to avoid the telltale pattern. The rule of three, one of the oldest devices in rhetoric, becomes something to hide.
↳ The Emoji Formula ✨
In what is perhaps the newest pattern of human writing, emojis are used by LLMs the way we use punctuation. They appear at strategic intervals throughout text, not as spontaneous emotional expression but as programmatic decoration. The sparkle icon has become so synonymous with AI that we see it in Gemini’s logomark and on platforms like Notion to highlight AI features.
An analysis of 330,000 ChatGPT messages revealed consistent emoji patterns: the smiling face appears frequently, rockets (🚀) and stars (⭐️) function as bullet points, and the classic sparkles frame important concepts. The pattern is formulaic. Emojis appear at the beginning and end of posts. They punctuate nearly every sentence. They cluster around calls to action. Research from the University of Maryland shows ChatGPT understands emoji semantics and cultural context: it knows that 🐐 means GOAT and that 🔥 signals approval, yet this understanding produces repetitive deployment.
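For a sense of what that kind of counting involves, here is a minimal sketch, assuming nothing about the cited study's actual methodology; the sample messages and the rough emoji check are invented for illustration. It tallies emoji frequency across a list of messages and counts how many open or close with one.

```python
from collections import Counter
import unicodedata

def looks_like_emoji(char: str) -> bool:
    """Rough heuristic: 'Symbol, other' codepoints from U+2600 upward."""
    return ord(char) >= 0x2600 and unicodedata.category(char) == "So"

def emoji_profile(messages: list[str]) -> tuple[Counter, int]:
    """Tally emoji frequencies and count messages that start or end with one."""
    counts: Counter = Counter()
    framed = 0
    for msg in messages:
        counts.update(c for c in msg if looks_like_emoji(c))
        stripped = msg.strip()
        if stripped and (looks_like_emoji(stripped[0]) or looks_like_emoji(stripped[-1])):
            framed += 1
    return counts, framed

# Invented sample messages, purely for illustration.
sample = ["🚀 Launch day is here 🚀", "Great work, team ✨", "Thanks!"]
counts, framed = emoji_profile(sample)
print(counts.most_common(3))  # [('🚀', 2), ('✨', 1)]
print(framed)                 # 2 of the 3 messages open or close with an emoji
```

Formulaic placement of the kind described above would show up as a handful of characters dominating the counts and a high share of framed messages.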
Human emoji use differs. People deploy them erratically, inconsistently, often incorrectly; we also have personal preferences and overuse favorites. We discover new meanings through context and often spend time trying to find the perfect emoji combination for whatever occasion. LLMs use emojis like they use the rule of three: as a structural device that signals completion without requiring genuine feeling.
Those who naturally use emojis for warmth and personality now second-guess themselves. What was once a mark of human spontaneity in modern digital writing is now being reconsidered, despite having been popularized only in this century.
↳ Emotional Evenness
One of the subtler but more damaging signals is tone. LLM output tends toward a specific emotional register: warm but restrained, helpful but measured, polite without intimacy. The models are fine-tuned to avoid extremes. They stay in a narrow emotional band designed to feel safe and pleasant.
This creates a tonal signature. When writing maintains the same level of emotional temperature throughout, never spiking into frustration, never dropping into vulnerability, it starts to feel synthetic. Readers sense the lack of range even if they do not consciously identify it.
Humans write differently, especially when stakes are high. Emotional range shifts. A paragraph might start calm and analytical, then turn sharp with irritation, then soften into reflection. Tone follows thought. When tone stays perfectly modulated, it signals control that feels inhuman. Writers now flatten their own emotional range under scrutiny. They worry that showing too much feeling will seem artificial, that expressing frustration or joy too openly will read as manufactured warmth. They edit themselves into blandness.
This may be the most significant loss. Emotional range makes writing worth reading. It signals another person behind the words. When fear of suspicion drives writers to suppress their own tonal variation, the result goes beyond duller prose. Voice itself gets erased, seemingly void of ego, persona, identity.
Shame and What It Changes in Writing
AI shame is different from other forms of writing anxiety. Research by Advait Sarkar defines AI shaming as “a social phenomenon in which negative judgements are associated with the use of Artificial Intelligence,” including comparing someone’s work to AI output as disparagement, voicing suspicion to undermine reputation, or blaming poor quality on AI use. Being mistaken for a machine is treated as a comment on your character: your effort, your honesty, your seriousness. You do not want to be seen as lazy, sneaky, or hollow. So you install a detector inside your own head and let it judge in advance.
You type a line that feels a little too smooth and immediately wonder whether it sounds like that tone. You imagine a teacher deciding you cheated. You imagine an editor assuming you pasted from a model. You imagine a friend reading your message and deciding it feels off in a way that suggests you did not really care enough to write it yourself. So you adjust, again and again.
This is rational behavior in an environment where authenticity is constantly questioned. The rationality makes the problem worse. When millions of writers police their own style to avoid shame, the cumulative effect is a fundamental rewriting of what counts as acceptable human expression. Here are some mechanisms by which AI shame is changing how humans write:
↳ Stylistic Risk Aversion
Writers now edit out elements once associated with expressiveness: em dashes, parallel structures, balanced clauses, vivid emotional phrasing, distinctive rhythmic patterns, high polish, even emojis. These tools still work. They are still effective. Writers avoid them because they have been flagged as AI tells.
You trim the parts of your style that now feel risky. You bring your vocabulary down a notch. You avoid parallel phrasing, even when it would land nicely. You remove emojis because you saw a thread that said “only AI still uses those like that.” Human expressiveness collapses into a narrow, risk-averse median style.
↳ Downshifting Intelligence and Vocabulary
Writers self-limit. Softer vocabulary. Simpler clauses. Fewer syntactic turns. Intentionally imperfect phrasing. Deliberate rough edges. The goal is no longer clarity. The goal is plausible imperfection.
This is a new aesthetic: the performance of fallibility. A model of humanity based on strategic underperformance. You flatten emotional intensity because you have started to read strong feeling as corny machine mimicry, even when it comes from you.
↳ Preemptive Conformity to Detection Norms
As detection norms spread through public document lists, public platforms, internet posts, and classroom warnings, people begin applying these norms to themselves. Sarkar identifies this as boundary work: knowledge workers construct boundaries between acceptable and unacceptable practice to acquire intellectual authority and career opportunities while denying those resources to others. The mechanism serves professional self-protection: maintaining status by policing who gets to claim legitimacy.
AI floods the environment. People identify machine features. Lists of features circulate. Humans avoid those features. Human writing converges toward avoidance behavior. You are no longer trying to say what you mean in the way that feels true to you. You are trying to say what you mean in a way that will pass other people’s silent tests. Authenticity becomes defined by what you carefully do not express.
↳ The Collapse of Emotional Register
Strong feeling, once a mark of human intensity, now reads as artificial. Writers dampen affect. Avoid warmth. Avoid enthusiasm. Avoid lyricism. Avoid intensity. Emotion becomes suspicious because models can simulate it convincingly. People learn to mute the very signals that once marked humanity.
The em dash that used to signal breath and emphasis now signals possible inauthenticity. The negative parallelism that used to create rhetorical contrast now signals possible laziness. Emotional warmth now signals possible LLM generation. What remains is prose that sounds like someone trying very hard not to sound like anything in particular.