Content Quality: A Very Detailed Analysis

What Does "Quality" Mean in This Context?

When we talk about content quality, it's easy to think of it as a checklist: correct grammar, proper structure, and all the right keywords in place. And yeah, that's part of it.

But content quality is a bit like beauty: it's in the eye of the beholder. For some, it’s about depth and originality; for others, it’s about fast facts and scannability.

Now, here’s where things get tricky: "quality" means something different depending on who (or what) your audience is.

If we’re talking search engine spiders, content quality revolves around meeting technical SEO guidelines: clear structure, meta tags, keyword density, and crawlability. AI-generated content excels here. Search engines love content that checks off every SEO box.

But for real readers — the ones with buying power and decision-making authority — it’s a somewhat different story. Beyond meeting the formal criteria, they want content that speaks to them: insightful, creative, and genuinely valuable.

Here’s the truth: AI-driven content can nail the formalities. It's great for creating SEO-friendly, technically sound content, but human intervention is still essential for delivering the nuance, insights, and creativity that real people crave.

AI alone can only reach the level of quality that the human behind it could achieve on their own. In short, it can’t create something genuinely meaningful without a human touch guiding it.

Breaking Down the Components of Quality

1. More Formal Criteria (Objective, Rule-Based, Measurable)

When it comes to formal criteria, things are a bit more straightforward. Why? Because search engine spiders are limited in their ability to judge content quality the way humans do. They can’t recognize creativity, emotional impact, or nuanced insight, but they can definitely spot structural mistakes or poorly optimized formatting.

And that’s where we can play to our advantage. By adhering to formal, measurable guidelines, we ensure that AI-generated content can check off all the SEO-friendly boxes, giving it a competitive edge in search rankings. Here’s a breakdown of the key formal components to focus on:

Attention to Detail

  • To Meet Criteria: No inaccuracies or inconsistencies in dates, figures, or data points.
  • Why It Matters: Spiders can’t cross-check facts like a human would, but glaring mistakes — like conflicting numbers or obvious errors in data — can hurt credibility, both for search engines and readers. Ensuring precision here boosts the perceived professionalism of the content.

Clarity and Structure

  • To Meet Criteria: Hierarchical structure of headings, logical paragraph division, linear flow, readability score below postgraduate level (as tested by Hemingwayapp.com).
  • Why It Matters: Search engines love content that's easy to crawl and categorize. Clear heading hierarchies (H1, H2, H3) guide both search spiders and human readers through the content. Logical divisions, short paragraphs, and a linear flow create scannable, digestible content that ranks well and keeps readers engaged. A minimal automated check of the heading hierarchy and readability score is sketched below.
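Because these criteria are rule-based, they can be screened programmatically before a human ever reads the draft. Below is a minimal sketch in Python, not Hemingway’s actual algorithm: it approximates the Flesch-Kincaid grade with a crude vowel-group syllable count and flags Markdown headings that skip a level. The draft.md file name and the grade cut-off of roughly 13 are assumptions made purely for illustration.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (good enough for a rough grade).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59

def heading_hierarchy_issues(markdown: str) -> list[str]:
    # Flag headings that skip a level, e.g. an H1 followed directly by an H3.
    issues, prev_level = [], 0
    for line in markdown.splitlines():
        match = re.match(r"^(#{1,6})\s", line)
        if not match:
            continue
        level = len(match.group(1))
        if prev_level and level > prev_level + 1:
            issues.append(f"Heading jumps from H{prev_level} to H{level}: {line.strip()}")
        prev_level = level
    return issues

draft = open("draft.md", encoding="utf-8").read()  # hypothetical draft file
print("Approximate grade level:", round(flesch_kincaid_grade(draft), 1))  # aim below ~13 (assumed cut-off)
for issue in heading_hierarchy_issues(draft):
    print(issue)
```

Run it as a pre-publish gate: if the grade comes out high or a heading level is skipped, fix the draft before it goes live.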

Consistency

  • To Meet Criteria: Consistent use of terminology and formatting, with no significant deviations from the style guidelines for the target audience.
  • Why It Matters: Whether you're targeting tech-savvy CEOs or mid-level managers, consistency is crucial. Search engines rely on uniformity to categorize content accurately, while readers expect a cohesive tone and style. Using the right terminology without deviations helps build trust and maintain a strong brand voice.

Web Writing Best Practices

  • To Meet Criteria: Substantial use of bullet points, no more than 300 words between headers, and concise paragraphs with a single, finished thought.
  • Why It Matters: Web readers and search engines alike love structure. Bullet points break up text, making it easier for readers to skim, while frequent headers improve scannability and allow spiders to better understand the content’s structure. Short paragraphs with clear, singular points prevent readers from feeling overwhelmed, keeping engagement high. A simple word-count check for the 300-word rule is sketched below.
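The 300-word rule is just as easy to verify mechanically. Here is a minimal sketch under the same assumptions (a hypothetical Markdown draft in draft.md):

```python
import re

MAX_WORDS = 300  # the "no more than 300 words between headers" rule from the criteria above

def sections_over_limit(markdown: str, limit: int = MAX_WORDS) -> list[tuple[str, int]]:
    # Count the words under each heading and return the sections that exceed the limit.
    results, heading, words = [], "(intro)", 0
    for line in markdown.splitlines():
        if re.match(r"^#{1,6}\s", line):
            if words > limit:
                results.append((heading, words))
            heading, words = line.lstrip("#").strip(), 0
        else:
            words += len(re.findall(r"\b\w+\b", line))
    if words > limit:
        results.append((heading, words))
    return results

for heading, count in sections_over_limit(open("draft.md", encoding="utf-8").read()):
    print(f"'{heading}' runs {count} words without a subheading; consider splitting it.")
```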

An illustration of a text that adheres to the formal criteria yet isn't good for people: when formal criteria adherence doesn't mean high quality.

Accuracy

  • To Meet Criteria: Avoid frequent factual mistakes — while search engines won’t typically detect small errors, glaring inaccuracies can damage credibility.
  • Why It Matters: Google won’t fact-check your work, but that doesn’t mean accuracy doesn’t count. One or two minor factual mistakes might slip through unnoticed by search engines, but consistent errors will eventually undermine your site’s reputation, affecting rankings in the long run.

Meets Expectations

  • To Meet Criteria: All concepts defined in the title and introduction must be covered in the body of the content.
  • Why It Matters: Spiders analyze your content to see if you’ve delivered on the promise of your title and intro. If the body doesn’t align with the intent laid out at the start, rankings can suffer. More importantly, readers will quickly lose interest if the content fails to deliver what they came for.

Use of Relevant Entities

  • To Meet Criteria: No misuse of industry-specific terms or terms uncommon for the target audience.
  • Why It Matters: Search engines have gotten pretty good at identifying whether your content uses industry terms correctly. Misusing or omitting key entities like product names, relevant companies, or technical terms can lead to a drop in rankings, as the algorithm may not understand the relevance of your content to its target keywords.

An entity is a uniquely identifiable thing or concept that Google recognizes and understands as a distinct object in its Knowledge Graph.

Freshness of Content

  • To Meet Criteria: References should be no older than one year, and any events described should fall within the past year; avoid irrelevant dates.
  • Why It Matters: Google prioritizes fresh content. If your article references outdated statistics or talks about events from more than a year ago, it may be deprioritized in search results. Keeping information up-to-date and relevant boosts both your search visibility and credibility with readers. A quick date check is sketched below.
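Date freshness can also be screened automatically. The sketch below (again assuming a hypothetical draft.md) lists every year mentioned in the text that falls outside the one-year window, so a human can decide whether the reference still belongs:

```python
import re
from datetime import date

def stale_years(text: str, max_age_years: int = 1) -> list[int]:
    # Collect four-digit years mentioned in the text and keep those outside the freshness window.
    current_year = date.today().year
    years = sorted({int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", text)})
    return [y for y in years if y < current_year - max_age_years]

draft = open("draft.md", encoding="utf-8").read()  # hypothetical draft file
for year in stale_years(draft):
    print(f"Reference to {year} may be stale; check whether it still belongs in the piece.")
```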

2. More Real Criteria (Subjective, Requiring Human Judgment)

Now we dive into the tough stuff. These are the subjective elements that AI, even advanced tools like Google’s Gemini, struggles to evaluate accurately. While AI can tick boxes for formal criteria, real content quality — the kind that resonates with human readers — requires far more nuanced judgment. AI can’t fully assess expertise, generate creative insights, or detect original ideas.

Here’s where the human touch makes all the difference, and it’s vital not to make the mistake of focusing solely on the measurable aspects. Let's break down these harder-to-achieve, real criteria.

Expertise

  • Definition: Proven author expertise in the current field and alignment with the general discourse on the topic.
  • Challenge for AI: AI struggles to verify real-world credentials. It might rely on what it can scrape online — like LinkedIn profiles or self-claimed expertise on a website — which can be fabricated or overstated.
  • Human Solution: Leverage the "collective of authors" approach. Create an Our Authors page on your website, featuring experts from relevant fields. This gives your content a cumulative expertise, where you can sign articles under a collective name. Even when AI helps draft content, human SMEs (subject matter experts) are key in ensuring that depth and expertise are reflected.

Insights

  • Definition: Offering unique, actionable insights, especially through novel approaches or by combining existing frameworks with knowledge from different fields.
  • Challenge for AI: AI isn’t great at generating truly innovative insights — it usually repackages existing information. AI struggles to introduce “homebrew” findings or new interpretations based on cross-disciplinary knowledge.
  • Human Solution: To meet the real criteria of insightful content, humans must step in to inject fresh ideas, offer a novel spin on well-known strategies, or combine insights from various fields that AI may not connect.

Sufficient Context

  • Definition: All special terms, industry jargon, or complex concepts used must be explained adequately for the intended audience.
  • Challenge for AI: AI might insert technical terms but fail to provide enough context for readers unfamiliar with those terms. While AI understands broad patterns in language, it can’t gauge whether a specific audience will grasp a term’s full meaning.
  • Human Solution: A human reviewer must ensure that every specialized term is explained. This is crucial for making the content accessible to all readers, not just experts in the field.

Originality

  • Definition: Creating content that isn’t a simple rehash or compilation of previously published articles, but something genuinely fresh and innovative.
  • Challenge for AI: AI tends to compile information from its training data, leading to content that may lack originality. It’s not equipped to produce groundbreaking work or push the envelope in terms of creativity.
  • Human Solution: The human role is essential in injecting originality into the content. This means avoiding the trap of producing content that’s just a lightly reworded version of what’s already out there. Human input ensures creative thinking and adds a unique voice.

Audience and Purpose Alignment

  • Definition: Aligning the content with the search intent and journey of the reader, ensuring it meets both business goals and audience needs.
  • Challenge for AI: AI can recognize search intent based on keywords but struggles to deeply align content with the buyer's journey or the nuances of audience expectations. It might fulfill the technical aspect of search intent without fully engaging the user.
  • Human Solution: Strategic human input is required to make sure the content not only ranks for keywords but also resonates with readers at each stage of their decision-making process. Whether it’s awareness, consideration, or decision-making, humans must tailor the content to the journey.

How Google Evaluates the E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) of Your Content

Google’s Gemini, an advanced AI model used for evaluating content, plays a critical role in assessing E-E-A-T. However, like all AI, it has limitations. It struggles with subjective evaluation — recognizing subtle expertise or assessing the originality and depth of insights.

The April article in The Verge covers the situation in detail, and it is still relevant.

A fragment of the article: good for robots, bad for people.

Gemini’s Limitations:

  • Expertise: It may over-rely on easily accessible credentials, missing out on real professional nuances or evolving industry practices.
  • Authoritativeness: Gemini can flag authority based on frequency of appearance or volume of content, but it can’t distinguish between truly reputable sources and websites with inflated online presences.
  • Trustworthiness: While it can detect transparency signals, it often fails to gauge subtle cues, such as hidden affiliations or conflicts of interest, which humans can detect through deeper investigation.

I conducted an experiment to explore the limits of both my custom GPT model and Google’s Gemini in assessing content quality, particularly focusing on their ability to detect expertise and authenticity. Initially, I fed both models a general article without providing any references or authorship. As expected, the results were poor — neither AI gave the content high marks for quality, credibility, or authority.

Then, I decided to see how easily these systems could be deceived. I fabricated a quote from a nonexistent startup expert and mentor, mentioned this fictional individual again in the body of the article, and even created two fake surveys, citing them as legitimate sources.

Finally, I added a byline attributing the article to a made-up yet “famous” serial entrepreneur and hybrid team-building expert, with a fabricated history of being featured in top-tier publications like The Harvard Report, WIRED, Business Insider, and The Verge.


To my surprise, both AI systems fully accepted these made-up signals. The content was evaluated as credible, authoritative, and worthy of higher ranking. The fake surveys were treated as legitimate data, and the non-existent expert was hailed as a valuable source of insight. The fabricated authorship by an “industry expert” significantly boosted the perceived value of the content in both AI models' assessments.


Going Wild: How Deep Does the Rabbit Hole Go?

I didn't stop there and decided to check just how deep the rabbit hole of Gemini's naivety goes. It is, apparently, bottomless!

Achieving a full 5/5 by faking every piece of evidence

This experiment highlights a significant flaw in current AI-driven content evaluation systems. Despite their advanced algorithms, models like Google’s Gemini and custom GPT tools are still heavily reliant on superficial signals, like the presence of quotes, citations, and recognized publication names, without the ability to cross-check or validate the authenticity of these elements. They reward the appearance of expertise rather than the real thing.

You can access and review the full experiment log.

Putting It Together: Achieving Quality for Both AI and Humans

When it comes to producing high-quality content, it’s not just about ticking boxes. It's about balancing what AI can do best — formal criteria adherence — with the unique touch humans bring — creativity, expertise, and insight. The goal is to achieve the highest possible content quality using both AI and human input, but without overextending your resources. Let's define the approach for making this work in a way that ensures efficiency and sustainability.

The strategy is clear: AI handles the heavy lifting for formal adherence (such as SEO, structure, and clarity) while humans focus on subjective criteria (creativity, originality, and expert insight).

But here’s the catch: formal and subjective criteria sometimes clash. Strictly following formal rules (like Web Writing Best Practices) might make content look technically sound, but it can strip away the narrative and personality, leaving a bland piece that lacks the depth real readers crave.

To solve this, the process involves leveraging Evergreen content — content that remains relevant and continuously updated. First, use AI to generate drafts that fully adhere to formal criteria, getting the content indexed quickly and attracting traffic. Later, human experts can refine the piece by adding subjective quality, such as deeper insights, originality, and more engaging narratives. This strategic delay means you balance both criteria while maximizing resource efficiency.

Phase 1: AI-Driven Content Generation for Formal Criteria Adherence

  • Objective: Produce content that ranks well and meets formal quality standards quickly.
  • AI’s Role: AI ensures adherence to the measurable, technical aspects covered in section 1: clear structure and heading hierarchy, readability, web writing best practices, relevant entities, and freshness.
  • Output: A publishable draft optimized for indexing and initial rankings.

Phase 2: Post-Publishing Subjective Refinement by Humans

  • Objective: Elevate content quality after the AI-generated draft is published and indexed, enhancing subjective elements.
  • Human Expert’s Role: Review and improve the content against the subjective, creative, and expert-level criteria from section 2: expertise, fresh insights, sufficient context, originality, and audience alignment.
  • Outcome: A well-rounded, engaging article that retains its technical quality but offers human creativity and expertise.

Phase 3: Continuous Improvement Based on Performance Data

  • Objective: Use data to inform ongoing content refinements.
  • Data-Driven Adjustments: Monitor performance metrics (such as time on page, bounce rate, and engagement) to evaluate whether the content resonates with the audience. This helps in making informed tweaks that improve subjective quality over time. A rough sketch of how such a review queue could be built is shown after this list.
  • Evergreen Updating: Regularly update the content with new insights, trends, or developments, ensuring it stays relevant and high-performing in the long run.


I hope you find this newsletter helpful. Feel free to ask anything within the topic, or schedule a call with me if you'd like a consultation or want to engage my services.


Comments

Mikita Cherkasau

Helping tech brands pivot their content from quantity to quality — and achieve greater impact with less spend in 2025 | Marketing agency owner | Content strategist | Advocate against mindless use of AI

4 months ago

As always, I like to get a bit philosophical about this. What is good content? Content that hits the KPIs? Content that makes the stakeholder feel good? Or content that readers find useful and engaging? One could argue, all of those things. But in practice, it’s not so straightforward.

Mikita Cherkasau

4 months ago

Vlada Korzun, have you met Egor? You two might want to connect with each other, if you haven't already, based on your interest in AI developments.

Alex Romanenko

Custom Software Development Expert | CTO | Product Manager | Digital Marketing Consultant

4 months ago

Insightful!
