The problem of non-unique content

We all know that there’s one way we can improve our websites, appear more strongly in the search engine results and attract more visits: generate great content. But what makes great content?

For the reader, it’s usefulness, with a compelling title, good references and a selection of further reading (which could be calls to action). Search engines are learning to identify this programmatically, but they still mainly lean on the old signal of the content having links to it, which they interpret as recommendations by humans.

But the other factor required by search engines is uniqueness, i.e. the words have to be different, and in a different order, from anything already seen on the web. I ran a huge test a few years ago, automatically firing dozens of queries an hour at Google for several months (until I got banned), to determine how much of an article could be non-unique before the search engines ignored it. The answer was a very low percentage indeed. Just lob in a few sentences which already appear on the web, and search engines may simply ignore your article.
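To make the idea of a “percentage of duplicated content” concrete, here is a minimal sketch in Python. It is not the test described above (which queried Google directly), and the function names, the shingle length and the example sentences are all illustrative assumptions; it simply estimates what fraction of a draft’s word sequences already appear in a known piece of existing copy, using overlapping n-word “shingles”.

# A rough illustration only: estimate how much of a draft duplicates
# an existing piece of text, using n-word shingles. Names, shingle
# length and example text are assumptions, not the author's method.

def shingles(text, n=8):
    """Return the set of n-word sequences (shingles) in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def duplicated_fraction(draft, existing, n=8):
    """Fraction of the draft's shingles that also appear in the existing copy."""
    draft_shingles = shingles(draft, n)
    if not draft_shingles:
        return 0.0
    return len(draft_shingles & shingles(existing, n)) / len(draft_shingles)

if __name__ == "__main__":
    manufacturer_copy = "This kettle boils a full litre of water in under ninety seconds."
    my_rewrite = "This kettle boils a full litre of water in under ninety seconds, and looks good too."
    print(f"Duplicated: {duplicated_fraction(my_rewrite, manufacturer_copy, n=5):.0%}")

A lazy rewrite like the one above still shares most of its word sequences with the original, which is exactly the kind of overlap that gets an article quietly ignored.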

In practical terms, if you try to re-word something which has already been written, such as when you’re trying to describe a product and you only have the original manufacturer’s writeup in front of you, there will be problems ahead. It takes tremendous skill and far too much time to deconstruct a piece as comprehensively as the search engines require.

Tomorrow I’ll describe a solution.
