Olwyn Patterson was scrolling LinkedIn when she spotted the profile of someone she thought would be the perfect person to help promote an upcoming event put on by her company, a platform that connects startups with VCs.
She typed up a quick introduction in her usual efficient staccato.
“I run a biannual demo day that reaches 4,000 startups a year, one of which I noticed is also in your program. We also have a 15k-plus founder, VC, and angel investor newsletter. There seems to be a natural crossover between our communities. It’d be great to share opportunities with each other.”
Moments later, her inbox pinged with a reply: “Very impressive AI-driven outreach.”
Patterson was taken aback. She prided herself on her clear, professional writing, even if it could come off as stilted.
Flummoxed and more than a little offended, she turned to her LinkedIn community of tech-industry peers to make sense of the interaction. “To anyone who thought I was a bot, I (humanly) apologise,” she wrote in a post. “I’m just a confused human trying to write some emails.”
Together, they pondered the question that’s haunting much of the professional community right now: what, exactly, does it mean to write like a human? “I used to suck at grammar and really worked on it as I became a writer,” one of her contacts, a tech founder, wrote. “Now I’m nervous that I come off as a bot.”
ChatGPT writing is flooding LinkedIn. The platform estimates that more than half of its long-form posts are AI-generated.
“It’s like microplastics,” says Annette Vee, an English professor at the University of Pittsburgh who studies the intersection of writing and technology. “Whether you realize it or not, and whether you’re using it or not, it’s already in the bloodstream.”
But with no consensus over how ChatGPT, Claude, and other large language models should be used — and whether their use should be flagged — armies of users have taken on the role of the platform’s self-appointed AI police.
It’s made the basic act of writing incredibly fraught. “AI is now a specter hanging over everything we write,” says Vee.
The fear of being accused of not being the author of our own words — let’s call it imbotster syndrome — is reshaping how people write. And LinkedIn, once a place for hustle brags and TED-talk-flavored self-improvement stories, has become a staging ground for a subtler kind of performance: proving you’re human.
Imbotster syndrome is usually stirred up by matters of style, rather than substance.
Across comment sections on LinkedIn and threads on Reddit, users swap lists of suspect words and patterns, debate punctuation habits, and joke about the em dash as if it were ChatGPT’s unofficial watermark.
Cliché phrases like “in today’s fast-paced world” or the neat cadence of a three-part list are enough to set people off. A line that aims for impact will get flagged as an “AI tell.”
The stress that this has unleashed is something Cheril Clarke, a ghostwriter for finance and healthcare executives, knows well.
Clarke has built a career helping powerful people sound like the best versions of themselves. But in the era of ChatGPT, the job comes with a more daunting requirement: making sure her clients don’t get mistaken for robots.
“There are certain patterns that are completely natural for most of us when we’re talking. And the frequency with which AI uses these is really killing them,” says Clarke. “People who don’t know will just assume, ‘Yeah, this person is using AI.’ Meanwhile, you’ve been writing this way for 25 years.”
Clarke freely admits that ChatGPT is built into her process. She uses it to map out her ideas and generate outlines and rough drafts. She then rewrites the speech, op-ed, or LinkedIn post in her own words and style.
But as people have become attuned to the telltale rhythms of AI-generated text, Clarke has added a final step to her process: stripping out words and rhetorical flourishes that add flair and persuasive force to her writing but are now too closely associated with AI.
This means she pulls apart phrasing that used to flow smoothly and avoids the breathless pacing that makes AI-generated content feel oddly emphatic and overstuffed with pauses.
“AI writes like it’s running a marathon at the same pace the whole time,” she explains. “That’s not how you run. You slow down, speed up, breathe. The machine doesn’t.”
The em dash, which lets a thought pivot without a hard stop; the triplet list cadence, a satisfying three-part rhythm that makes ideas memorable; and the classic “not X but Y” structure, which writers and speakers often deploy to add a note of surprise and contrast, have all been dropped from her repertoire.
The loss of that last one — as in, “This isn’t just about efficiency. It’s about trust” — is especially galling to Clarke. The sharp rhetorical flip builds tension by pointing the reader one way before pivoting to the payoff, the point where you really want to land. But AI has run it into the ground. Once a clever flourish, the move now reads like a template stamped out by a bot.
“I’ve been using these things for 20 years, and they used to be second nature,” Clarke told me. “Now I have to stop and think about it. Of all the things that are going to have to evolve because of AI, that one probably hurts the most. It’s a powerful device, but AI ruined it.”
The belief that a reader can reliably spot AI-generated writing is often wishful thinking.
Vee warns that the earliest and most obvious giveaways — stiff, robotic sentences or bizarre hallucinations — are already fading, and detection technology has not kept up with how quickly language models are learning how to mimic human style.
“There’s a general assumption that you can tell whether something is written by AI. I think that’s not right,” she says.
But the thing about imbotster syndrome is that it triggers second-guessing.
Some professionals say they deliberately degrade their own writing to sound less professional and prove they’re human. They skip commas, lean on casual slang, or even insert mistakes. In this new economy of style, polish has become a liability, and the typo has turned into a kind of authenticity badge.
“I can’t tell you how many social media posts I’ve seen from people who seem to think that because you use formal punctuation or formal language, that means you’re a bot,” says Casey Fiesler, an associate professor of information science at the University of Colorado Boulder. “People start rewriting themselves in this panic, trying to avoid anything that might look ‘too perfect.'”
After her LinkedIn DM was flagged, Patterson’s company experimented with ways of attaching more obvious human fingerprints to its cold outreach.
One colleague, she says, even suggested opening a message with “Hope all good.” But the idea was ultimately nixed.
“I don’t think if I got a message that said ‘Hope all good’ I’d go, yep, definitely human,” Patterson says.
The thing is, there is no getting — or writing — around AI anymore. It’s here, and it’s everywhere. The anxiety over being mistaken for a bot is baked into our writing habits now, whether we use the tools or not.
“You can’t make any writerly decision without taking into account AI at this point,” says Vee. “So, you’re either like, ‘I’m going to lean into it’ or ‘I’m going to avoid it.'”
People are calibrating their style with AI in mind, second-guessing familiar words and punctuation, and even reshaping their reading expectations around the possibility that a piece of text might be synthetic.
AI has blurred the line so thoroughly that any piece of writing is judged in its shadow.
But there is real human DNA in every “AI tell.” A lot of what gets flagged as AI writing is simply the language of corporate America, refined over millions of PowerPoint presentations, press releases, and speeches — and then absorbed and spat back not just by ChatGPT and other AI models, but by all of us real-life communicators.
LinkedIn posts didn’t suddenly start sounding like inspirational keynotes the moment ChatGPT showed up. The language — earnest, self-important, carefully optimized for impact and, yes, spliced with em-dashes — was already there. It had been honed over years of blog posts, marketing copy, company manifestos, and social media updates. If the outputs feel familiar, it’s because they are. The patterns people now flag as synthetic were, until recently, just standard professional voice.
That’s what makes the shift so disorienting. The suspicion crept in slowly, and now it’s everywhere. People aren’t avoiding a specific tone because they’ve decided it’s not working for them anymore. They’re trying to stay ahead of whatever might get flagged next.
“Just the other day I saw someone say in a comment, ‘I can’t believe you used AI to write this,'” says Fiesler. “And I was like, why? Because the language was a little formal? But that was enough for them to assume it came from a bot.
“It’s like the more careful you are, the more suspicious you look,” she says. “And for some reason, everyone’s paying attention and looking for it.”
Or, as ChatGPT suggested I put it:
“The more flawless your style, the more suspicious it looks. And in the end, not clarity but credibility is the ultimate goal.”
Jack Buehrer is a freelance journalist based in Ohio.