Last week, Matt Cutts, the head of Google’s Webspam team, explained the algorithm changes they made to prevent content scraping.
It was long overdue. Google has been under a lot of pressure to penalize content thieves who benefit from higher search rankings at the expense of content creators.
But there are two things that worry me.
Firstly, exactly how does Google determine the original source of content? Matt Cutts explained on his blog that Google will be:
“evaluating multiple changes that should help drive spam levels even lower, including one change that primarily affects sites that copy others’ content and sites with low levels of original content.”
That’s great! But I want to be certain that the system is foolproof and that it does not inadvertently penalize the wrong website. I would imagine that the originator would need to authenticate their content using some kind of validation mechanism.
I’m also worried about the impact this will have on content curators, permission-based syndication sites, and guest bloggers who republish their own content on third-party sites. Will they be lumped together with scrapers and thieves, or will they be recognized as a different category of content producers?
The immediate result of this change is that a lot of brands will need to re-evaluate their content strategy. I think the message Google is sending is that original content creation is the way to go.
It will be interesting to see how quickly brands adapt to the new ‘gatekeeper’.
Do you have any concerns about Google’s new algorithm?