How To Detect AI Writing: 10 Useful Tips To Help You Spot AI Text
11 October, 2024
by Diana Kussainova

“Caution: This text was written by AI.” Wouldn’t it be nice to see this disclaimer before articles or social media posts? And how many texts would turn out to be partially or completely generated by artificial intelligence? 

As more and more people get access to Large Language Models (LLMs) like ChatGPT, AI-generated text is spreading everywhere. In some cases, only generated videos and images are labeled, not text. So AI content detection is becoming an essential skill for search optimization, factual accuracy, and the integrity of businesses and academic institutions. In this article, we’ll teach you how to detect AI writing and share 10 signs that a text might not be human-made.

Why should you check the written content for AI involvement?

The content marketing industry has been hugely affected by generative AI. From AI in email marketing to any other channel, artificial intelligence has left its mark. Whether you reject it or embrace it in your industry, it’s useful to understand the origin of a piece of writing you work with and the degree of AI involvement in it.

You may think that AI involvement is only a matter of ethics. However, there are many practical situations where you may need AI checks, and real consequences you may face if you skip them. Here are some reasons why you should at least consider AI content detection:

  • To comply with regulations. Currently, there are no major requirements to label AI-generated texts. But seeing as Meta has introduced AI image, audio, and video labeling on its platforms, texts may be affected in the future, too. Plus, there may be industry- or platform-specific rules you need to follow, so it’s best to know whether the content is human- or AI-generated. As for Google’s perspective, the search engine continuously works to keep spammy, low-quality content out of search results. One of Google’s employees has also stated that AI-generated content is considered spammy. So despite intentionally vague official positions, the search engine may rank your content lower if it is suspected of being AI-generated, or even deindex it altogether.
  • To maintain trust and transparency. This point is more about your organization’s stance on artificial intelligence. If you pride yourself, say, on your writers’ expertise and the texts turn out to be AI-generated, that claim becomes misleading. So you need content detection to make sure what you publish adheres to your standards and delivers on your promises.
  • To combat misinformation and disinformation. Artificial intelligence is known to hallucinate, that is, to make up facts and present them as true. If you assume that an article is entirely human-written, you may end up paying less attention to it when editing or reviewing. But knowing that a piece has traces of AI involvement can make you more vigilant and help you catch false statements.
  • To ensure human oversight. We’d argue every piece of writing should be read and reviewed by another person before publication, but in reality, you may not have enough resources to hire an editor or a proofreader. If you can’t fully analyze everything you publish, an AI detection step will at least show whether you have to read one particular piece more closely or not. Then you can focus your attention on the suspicious content and not everything all at once.
  • To protect intellectual property. This point covers two related issues. First, there are many lawsuits claiming that certain AI models use copyrighted materials. If you publish something written by an AI rather than a human, it can potentially carry traces of those protected works, which in turn opens your business up to legal risk. Second, if your line of work depends on confidentiality, the fact that a text is AI-generated may point to a data breach: it may mean that the text’s author shared your information with an AI tool, which is contrary to information security principles and NDAs.

Last but not least, AI checks and detection are important for upholding academic integrity. They help ensure the originality and credibility of research, reports, and other works.

For many of these reasons, Selzy blog editors check each article for AI content before publication. Over a year after the ChatGPT launch, we developed a set of guidelines to navigate the increasingly murky digital content space. We use a combination of close reading and content detection tools, and many of the recommendations below are based on our experience with AI-manipulated texts.

As for the ethics of the question, our aim is to publish human content for human benefit.

Can AI even be detected?

According to a recent study, only 51.50% of US citizens accurately identified AI-generated texts. Another study on the same subject found that when it came to AI-written texts, people had an accuracy rate of only 24%. However, the participants were probably not professionals in the content marketing or writing fields. Plus, we’d argue that with proper training in AI detection (i.e. reading this article), you can at least identify suspicious content or parts of it.

At this stage of technological development, AI models still have a recognizable style of output. It’s true that some people fine-tune their models or use more nuanced prompts, but many still rely on “raw” Gemini or ChatGPT-generated copy. So it is possible to detect AI-written texts even without dedicated tools, at least in some cases.

10 telltale signs of AI-generated writing

In this article, we will focus on ways to detect AI content “with your bare eyes”, but you should know that the easiest approach is to simply use content detection tools. These are services tailored specifically to those who want to understand the composition of a content piece and whether it was written by a human or not. Want to learn more? Check out our list of the best AI content detectors: we tested 11 copy variations across 7 detectors in a very thorough report!

If content marketing is one of the pillars of your strategy, you should run all content through a detector. But if you only use it occasionally, subscribing to a dedicated service might be overkill. In that case, the best way to save money on content detection is to manually review every article you get and then check the most questionable ones with a tool. This way you may even manage to stay within free plan limits.

With that, let’s see what specific traits may reveal that you are looking at an AI-generated text. For this article, we prompted ChatGPT to generate a text on content marketing for an email marketing blog and used the output to better illustrate each point.

Excessive use of AI-typical words

An example of AI-generated text: “In today’s digital age, content marketing plays a vital role in building relationships with potential customers and enhancing the effectiveness of email marketing. By integrating valuable content into your email campaigns, you can significantly boost engagement, drive conversions, and establish trust with your audience.” The words “vital”, “enhancing”, “valuable”, “boost”, and “drive” are highlighted.
Source: Author’s dialogue with ChatGPT

Many AI-generated texts lean on the same words no matter the context. Of course, some of these are perfectly normal in human-written text, but if they are used a lot (or if all or almost all of them appear within one article, for example), you should be suspicious. Here are some of the ones we consistently find across AI-generated texts:

  • Essential
  • Impressive
  • Robust
  • Valuable
  • Vital
  • Boost
  • Delve
  • Drive
  • Enhance
  • Ensure
  • Implement
  • Leverage
  • Provide
  • Unleash

Overall, AI-written texts usually rely on generic words and phrases and lack the unique vocabulary, interesting analogies, and comparisons that are common in human-written texts. If you review a lot of drafts, even a rough frequency check like the sketch below can help flag overuse of these words.
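If you want to make this check systematic, a few lines of Python can count how densely the words from the list above appear in a draft. This is a rough heuristic sketch, not a detector: the flag_ai_typical_words function, the word set, and the per-1,000-words cutoff are all our own illustrative assumptions, and a high score only means the piece deserves a closer manual read.

```python
import re
from collections import Counter

# Illustrative list based on the words above; extend it with your own findings.
# Note: this matches exact forms only, so "enhancing" or "boosted" would need stemming to be caught.
AI_TYPICAL_WORDS = {
    "essential", "impressive", "robust", "valuable", "vital", "boost",
    "delve", "drive", "enhance", "ensure", "implement", "leverage",
    "provide", "unleash",
}

def flag_ai_typical_words(text, per_1000_cutoff=10.0):
    """Count AI-typical words and report whether their density looks suspicious."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = Counter(w for w in words if w in AI_TYPICAL_WORDS)
    density = sum(hits.values()) / max(len(words), 1) * 1000
    return {
        "total_words": len(words),
        "hits": dict(hits),
        "per_1000_words": round(density, 1),
        "suspicious": density > per_1000_cutoff,  # arbitrary cutoff, tune it for your own content
    }

sample = ("In today's digital age, content marketing plays a vital role. "
          "Valuable content can boost engagement, drive conversions, and enhance trust.")
print(flag_ai_typical_words(sample))
```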

Repetitive phrasing

An example of AI-generated text: “By regularly testing and optimizing their emails based on subscriber behavior, the company continually improves the performance of their content marketing efforts.” The word “by” is highlighted.
Source: Author’s dialogue with ChatGPT
An example of AI-generated text: “Content marketing is a powerful complement to email marketing. By focusing on creating valuable, personalized, and conversion-driven content, you can not only increase engagement but also build trust and drive sales.” Words “by”, “not only”, and “but also” are highlighted.
Source: Author’s dialogue with ChatGPT
An example of AI-generated text: “Integrating content marketing with your email strategy ensures that your campaigns resonate with subscribers, providing them with meaningful experiences that go beyond promotional messages.” Words “integrating” and “providing” are highlighted.
Source: Author’s dialogue with ChatGPT

Another telltale sign of an AI-generated text is repetitive sentence structure. In our experience, this often means sentences beginning with “By doing X” or other -ing constructions, and sentences built around “not only… but also”. AI-generated texts also tend to place adjectives or other qualifiers before almost every noun in a sentence. Human-written texts, by contrast, usually show much more variety in wording and structure.

Beyond these examples, if a text you’re reviewing has several similarly structured sentences or other instances of unjustified repetition, it might be AI-written. A rough pattern check like the sketch below can help surface such repetition.
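As a minimal sketch, assuming plain English prose, here is one way to count the two patterns we keep running into. The flag_repetitive_structures function, its regular expressions, and the cutoff are illustrative assumptions rather than a reliable test; a couple of matches are normal, while a whole article full of them is what should make you look closer.

```python
import re

def flag_repetitive_structures(text):
    """Count sentences opening with "By ...ing" and occurrences of "not only ... but also"."""
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    by_ing = [s for s in sentences if re.match(r"(?i)^by\s+(?:\w+\s+)?\w+ing\b", s)]
    not_only = len(re.findall(r"(?i)\bnot only\b.*?\bbut also\b", text))
    ratio = len(by_ing) / len(sentences) if sentences else 0
    return {
        "sentences": len(sentences),
        "by_ing_openers": len(by_ing),
        "not_only_but_also": not_only,
        "suspicious": ratio > 0.25,  # arbitrary cutoff: over a quarter of sentences open the same way
    }

sample = ("By focusing on valuable content, you can not only increase engagement "
          "but also build trust. By regularly testing and optimizing their emails, "
          "the company continually improves performance.")
print(flag_repetitive_structures(sample))
```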

Lack of coherence

An example of AI-generated text: “Content marketing allows companies to position themselves as thought leaders, offering insightful, educational, or entertaining material. When recipients consistently receive emails filled with informative content that adds value to their lives, they begin to associate your brand with authority and trust.” The phrases “insightful, educational, or entertaining” and “informative content” are highlighted.
Source: Author’s dialogue with ChatGPT
An example of AI-generated text: “According to the Content Marketing Institute, 70% of consumers prefer to learn about products through content rather than traditional advertisements. This shift towards content-centric decision-making highlights the importance of building trust through educational email campaigns.” Words “learn about products” and “educational” are highlighted.
Source: Author’s dialogue with ChatGPT

On the surface, you can understand what the two paragraphs above are trying to explain: valuable content helps to establish brand authority. Try reading them more attentively, though, and you will see inconsistencies. For example, although educational and insightful (what does that even imply here, by the way?) content could be called “informative”, entertaining content cannot. Plus, does entertaining content actually “add value” to one’s life, the kind of value you would link to authority?

It’s the same in the second paragraph: product information isn’t necessarily educational, it’s promotional. So the statistic the artificial intelligence added here doesn’t support the point the text is trying to make.

That said, people are prone to automatically filling in gaps in logic and interpreting a text in the most coherent way possible (a smaller example of this is our ability to read a word even without its vowels). But if you want to detect AI writing, you need to consciously resist this urge. When performing AI content detection, put a rigorous editor hat on and try to really follow the logic behind each paragraph and part of the text. If you see coherence issues like the ones we described above, the text is likely generated.

Broad explanations without details

An example of AI-generated text: “At the heart of any effective email marketing campaign is the ability to engage subscribers. Personalization is a powerful tool in email marketing, and when combined with tailored content, it becomes even more effective. Content that speaks directly to a subscriber’s specific interests, needs, or behaviors increases the likelihood of interaction.”
Source: Author’s dialogue with ChatGPT

AI-written texts often lack details or in-sentence examples. Broad language and generic sentences are hallmarks of generated content. They may also mean that a human author isn’t well informed on a topic, but in that case, you will probably see errors rather than encyclopedia-style definitions without much substance.

Factual errors

An example of AI-generated text: “The content within your emails determines whether your audience will open, click, or ultimately engage with your brand.”
Source: Author’s dialogue with ChatGPT

At the current stage of development, AI content generation tools are likely to make up facts or fail to fully grasp the concepts users ask them about. These errors will, of course, differ from text to text and be industry- and topic-specific. Sometimes, this may even result in completely false information. To illustrate, there is a subtle nuance missed in the sentence we showed above. Did you spot it?

The content inside an email doesn’t help your open rate: your subscribers actually only see a subject line and a preheader when they receive a campaign, not the entire message. Another strange error in this sentence lies in the “ultimately engage” part. Opens and clicks are engagement metrics, but the AI seems to say that there’s something else, the “ultimate” form of email engagement. And we don’t think it means conversion here. Well, ChatGPT, if you ever discover “the ultimate engagement”, do let us and the rest of the email marketing industry know about it!

If you ask AI to write about your company, it can also make sloppy mistakes. For example, ChatGPT explained that Selzy can help manage social media campaigns, which is incorrect:

An example of AI-generated text: “Selzy is a marketing automation platform designed to help businesses manage their email marketing, social media campaigns, and other digital marketing efforts.”
Source: Author’s dialogue with ChatGPT

Lack of personal experience

This one might be obvious, but unless the AI was specifically prompted to include them, a generated text won’t have any examples based on lived experience or personal opinions. This, of course, may differ based on the text’s genre and your organization’s style guidelines.

Personal experience means more than just statements like “I did that and achieved this”. AI text will probably lack sentences and paragraphs that show more than a surface-level understanding of concepts (this is especially true for practice-oriented subjects like HTML development, for example) and demonstrate that the author actually tried the actions described.

Another aspect of this is the lack of emotion. People are sensitive and may use explicitly negative or positive language to describe a subject they have a strong opinion about. If a text on a topic that requires emotional investment reads neutrally or even “robotically”, it may well have been written by an AI tool.

Inconsistent tone and style

You might feel confident in identifying AI writing at this point, but in reality, not all texts are completely generated, which makes detection harder. Some people use AI for certain parts and fill in the rest themselves. We’d argue this is the most common case.

In these situations, it’s useful to look for style and tone discrepancies. If in some parts of a text the sentences are complex and rich in vocabulary, while in others you see word or phrase repetition, the piece may be at least partially AI-written. A simple per-paragraph comparison, like the sketch below, can point you to the parts that read differently from the rest.
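One way to make that comparison concrete is to profile each paragraph by vocabulary richness and average sentence length, then read the outliers by hand. This is an assumption-heavy sketch rather than an established method: the paragraph_style_profile function and both metrics are our own illustrative choices, and the numbers only mean something relative to the article’s own baseline.

```python
import re

def paragraph_style_profile(text):
    """Profile each blank-line-separated paragraph by vocabulary richness and sentence length."""
    profiles = []
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs, start=1):
        words = re.findall(r"[a-z']+", para.lower())
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", para.strip()) if s]
        if not words:
            continue
        profiles.append({
            "paragraph": i,
            "unique_word_ratio": round(len(set(words)) / len(words), 2),  # lower = more repetition
            "avg_sentence_length": round(len(words) / max(len(sentences), 1), 1),
        })
    return profiles

# Hypothetical two-paragraph draft: a lived-experience paragraph followed by a generic one.
sample = (
    "Our welcome series took three painful rewrites before it clicked, "
    "and the open rates still dipped every August.\n\n"
    "By leveraging valuable content, you can boost engagement. "
    "By providing valuable content, you can drive conversions and boost trust."
)
for row in paragraph_style_profile(sample):
    print(row)
```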

Outdated content

An example of AI-generated text: “According to Campaign Monitor, segmented email campaigns experience a 760% increase in revenue.”
Source: Author’s dialogue with ChatGPT

Large Language Models (or LLMs) require training and specifically prepared data sets in order to operate. And both of these take time. That is why AI tools usually don’t have the most recent information. For example, ChatGPT only has data up until September 2021 and is aware of some “updates and trends” up to September 2023. 

A ChatGPT prompt asking how recent the data set the AI tool uses is, and an output explaining that its training data includes information up until September 2021, some events and trends up to September 2023, and no real-time updates beyond that.
Source: Author’s dialogue with ChatGPT

So one of the signs of an AI-generated text is outdated content which is especially apparent when it comes to statistics. If a user didn’t prompt a chatbot to research something on the internet (which many are capable of), chances are, the output will rely on training data that isn’t the most recent. 

In our example, the statistics ChatGPT refers to are from a Campaign Monitor guide. The guide itself doesn’t have a publication date, but one of the other numbers presented there is from 2015, so we can assume (given that this information wasn’t updated) that the guide itself was written around the same time. As a consequence, ChatGPT may have provided us with information that is almost 10 years “fresh”.

Mismatching purpose or search intent

A ChatGPT prompt asking for an outline for an article on how to write effective subject lines, and an output including three parts: introduction, the role of subject lines, and characteristics of effective subject lines.
Source: Author’s dialogue with ChatGPT

If you are reviewing an article and it just doesn’t make sense given the brief or task description, this might be because it was AI-generated. An AI assistant can sometimes “understand” something incorrectly and provide text that doesn’t reflect its purpose or, in the case of SEO-oriented texts, search intent. 

For example, when asked to provide an outline for an article on how to write effective subject lines, ChatGPT didn’t opt for a practical, strategy-oriented structure. Instead, the chatbot created an outline better suited to an article explaining subject lines in general, not a how-to piece.

Unnatural word choices

An example of AI-generated text: “As a result, you’ll see stronger engagement, better customer loyalty, and ultimately, higher ROI. By regularly testing and optimizing their emails based on subscriber behavior, the company continually improves the performance of their content marketing efforts.” The words “stronger engagement” and “improves the performance of efforts” are highlighted.
Source: Author’s dialogue with ChatGPT

This is a subtler and rarer sign of AI-written text. To spot generated content, be on the lookout for word pairings that don’t sound human or simply don’t go well together.

In our example, “stronger” probably isn’t the first word a person would have used to describe better email engagement. And “improving the performance of efforts” doesn’t make a ton of sense either.

Other signs

As you may have noticed in all the AI-text examples in this article, a generated text is often so formal in style and tone that it feels almost academic. Sure, real human writers can produce similar texts, but AI is notorious for it, especially if it wasn’t prompted otherwise.

Another sign of AI-written content is its formatting. If a writer generated something and didn’t edit the text, it can have capitalized headings inside lists or even a font and font size that differ from the rest of the article. In our experience, parts of a text that were likely generated are often pasted in Roboto.

Conclusion

It seems like AI is now capable of almost anything (we even have a quiz based on this assumption!), including writing human-sounding text. However, we can still separate an author’s creation from a chatbot’s output with a degree of accuracy. In fact, a recent test in the Spanish language showed that when it comes to creative writing, LLMs are far behind top human writers.

And though it’s getting more and more difficult to tell AI texts from human-written ones, doing so is essential to maintain the quality of your content and have confidence in it. We recommend using a combination of content detection tools and manual review. Look out for these signs of AI-written content:

  • Excessive use of AI-typical words
  • Repetitive phrasing
  • Lack of coherence
  • Broad explanations without details
  • Factual errors
  • Lack of personal experience
  • Inconsistent tone and style
  • Mismatching purpose or search intent
  • Unnatural word choices
Article by
Diana Kussainova
Writer, editor, and a nomad. Creating structured, approachable texts and helping others make their copies clearer. Learning and growing along the way. Interested in digital communications, UX writing, design. Can be spotted either in a bookshop, a local coffee place, or at Sephora. Otherwise probably traveling. Or moving yet again.