We Stopped Using AI Detection - Built Our Own Trust Score Model

In the past three years, since AI came into the limelight, almost everyone has moved to using AI in one way or another for their content needs.

We did the same.

The only problem was... how could we ensure that AI content ranks? Most fellow SEOs were reporting that AI content was getting de-ranked after some time.

This was true until the start of this year, when Google released spam updates for search. You can read more about this debate here.

Long story short: the updates didn't have much impact on websites that were churning out AI content - as long as it was quality content.

What do we mean by quality?

Search engines have a long-held notion when it comes to writing: content published online should add value. As long as you are providing that value, your website or any other online asset will not be de-ranked.

Everything else depends on backlinks (which the spam updates cover at length).

So when we talk about quality - we look at two things:

Relevance. Value.

What happened with HubSpot?

At the start of this year, right after the update, HubSpot took a big hit and lost almost 40% of its traffic across most of its keywords. I have written about it at length in another article published here.

Coming back to the topic.

So one thing was certain.

AI content was not getting de-ranked without a specific reason, as long as you were following relevance parameters and providing value.

I observed that my teams were writing one content piece in an hour with AI, but then spending two hours or more revising it so that it could pass AI detection tools.

This was overkill.

  • First, even content that we wrote ourselves was being flagged as AI-generated.
  • Second, if we deliberately introduced grammatical errors, the content passed the AI score - absurd, since it reduced readability.

So now we are letting go of AI detection tools

The reason being... they waste more time, and these tools are not offering any quality that our articles did not already have.

In fact, we tested Surfer SEO's AI Humanizer, Undetectable AI, and others. Even if you buy their paid versions and humanize the content, it still cannot pass those same tools.

This begs the question: why use a parameter that is not even an official ranking factor and is not adding any value?

If you look online, top publications like the BBC, Al Jazeera, CNN, and various others are already using AI for content generation. (I know because I have worked with a few.)

So why should you be writing content by hand in 2025? It didn't make sense.

Our new Trust Score Model

We already know what search engines prefer: relevance and value.

And we also know that Google uses two concepts for this purpose: E-E-A-T and YMYL.

And since our SaaS business falls under the first one, we can dissect how the model should work.

Features:

  1. Relevance - which maps roughly to the T (Trustworthiness) in the E-E-A-T model. Obviously, you need to read the piece to judge whether it is relevant.
  2. Value - a new parameter not explicitly covered by current models. It includes sub-parameters such as the depth of the piece.
  3. Expertise - the expertise of the writer. Is this someone you can trust with the content process?
  4. Topical Authority - I have covered this as part of the domain, since there is clear evidence that if a site has authority in a particular domain, its articles rank easily. E.g., HubSpot can easily rank for CRM-related pieces, while it was de-ranked for topics outside its domain.

We have assigned around three to four sub-parameters to each of these features, and we give the most weight to Relevance, in this format:

Trust Score = (Relevance Score × 0.4) + (Expertise Score × 0.35) + (Authority Score × 0.25)

To keep things simple for the next few months, I have merged Relevance and Value into one score. Our SEOs are currently reviewing whether we need to keep them separate.
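For readers who want to experiment, the weighted formula above can be sketched in a few lines of code. This is a minimal illustration, not the author's actual implementation: the function name, the 0-100 scale for sub-scores, and the input validation are my assumptions; only the weights (0.4 / 0.35 / 0.25) come from the formula itself.

```python
# Hypothetical sketch of the Trust Score formula described above.
# The 0-100 sub-score scale is an assumption; the weights come from the article.

def trust_score(relevance: float, expertise: float, authority: float) -> float:
    """Combine sub-scores (0-100) into a single weighted Trust Score."""
    for score in (relevance, expertise, authority):
        if not 0 <= score <= 100:
            raise ValueError("sub-scores must be in the 0-100 range")
    return relevance * 0.40 + expertise * 0.35 + authority * 0.25

# Example: a highly relevant piece by a mid-level writer on a strong domain.
score = trust_score(relevance=90, expertise=70, authority=80)
print(round(score, 1))  # 80.5
```

Because Relevance carries the largest weight, a piece that is off-topic can never score highly, no matter how authoritative the domain is - which matches the priority described above.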

How does this help the SEO community?

I have been in the industry for over 15 years. We have been using tools like SEMRush, Ahrefs, Majestic SEO, Moz, and various others; each has its own metric for website and page authority.

However, the gap is that none of these tools offers a comprehensive check of content quality. Trust Score answers that.

If you create a page, how will you know that it fits search indexing concepts and will actually help with traffic? There are no projection numbers available. You just publish optimized content blindly and wait for results.

Trust Score now ensures that what you have written and published has gone through a solid process - which means there is no room for gaps.

Second Reason

We are no longer tied to an AI detection score. We have introduced a parameter, 'Tone', that sits under Expertise and automatically covers what AI detectors were offering.

If the content passes the Trust Score, it automatically passes our quality checks.

And if you think about it, AI detection tools cannot even differentiate between your content and your AI agent's content, so why keep using this model?

Since AI detection is not going to keep pace with AI models in the coming years, it is better to let go of parameters that merely limit our process efficiency.

Future Improvements & Projections

The Trust Score is currently in testing, and I plan to run it for the next 45 days on our commercial website to see how it performs.

We have already been testing it for the past month and have seen performance improvements.

If you have any questions on this, or need to know more about the process, feel free to comment or message me.

