
Beginner’s Guide to Preventing Blog Content Scraping in WordPress

Are you looking for a way to keep spammers and scammers from stealing your WordPress blog content using content scrapers?

It is very frustrating as a website owner to see that someone is stealing your content without permission, monetizing it, outranking you in Google, and stealing your audience.

In this article, we’ll cover what blog content scraping is, how you can reduce and prevent it, and even how to take advantage of content scraping for your own benefit.


What Is Blog Content Scraping?

Blog content scraping is when content is taken from numerous sources and republished on another site. Usually this is done automatically via your blog’s RSS feed.

Content scraping is so easy now that anyone can start a WordPress site, install a free or commercial theme, and add a few plugins that will automatically scrape content from selected blogs.

Why Are Content Scrapers Stealing my Content?

Some of our users have asked us, why are they stealing my content? The simple answer is because you are AWESOME. The truth is that these content scrapers have ulterior motives. Below are just a few reasons why someone would scrape your content:

  • Affiliate commission – There are some dirty affiliate marketers out there who just want to exploit the system to make a few extra bucks. They will use your content and others’ content to bring traffic to their sites through search engines. These sites are usually targeted towards a specific niche, so they have related products that they are promoting.
  • Lead Generation – Often we see lawyers and realtors doing this. They want to seem like industry leaders in their small communities. They do not have the bandwidth to produce quality content, so they go out and scrape content from other sources. Sometimes, they are not even aware of this because they are paying some scumbag $30/month to add content and help them get better SEO. We have encountered quite a few of these in the past.
  • Advertising Revenue – Some folks just want to create a “hub” of knowledge, a one-stop shop for users in a specific niche. Often we notice that our site content is being scraped. The scraper always replies, “I was doing this for the good of the community.” Except the site is plastered with ads.

These are just a few reasons why someone would steal your content.

How to Catch Content Scrapers?

Catching content scrapers is a tedious task and can take up a lot of time. There are a few ways that you can catch content scrapers.

Search Google with Your Post Titles

Yup, that is as painful as it sounds. This method is probably not worth it, especially if you are writing about a very popular topic.
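If you have more than a handful of posts, you can at least semi-automate the search. Here is a minimal sketch (the titles are placeholders for your own post titles) that builds exact-match Google search URLs you can open one by one:

```python
# Build exact-match Google search URLs from post titles so you can
# check each one for copies by hand. The titles below are placeholders.
from urllib.parse import quote_plus

def search_urls(titles):
    # Quoting the title forces an exact-phrase search.
    return ["https://www.google.com/search?q=" + quote_plus('"%s"' % t)
            for t in titles]

for url in search_urls(["How to Install WordPress", "What Is an RSS Feed"]):
    print(url)
```

You could feed this the titles from your own RSS feed and paste the URLs into your browser in batches.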


Check for Trackbacks

If you add internal links in your posts, you will notice a trackback when a site steals your content. This way, the scraper is pretty much telling you that they are scraping your content.

If you are using Akismet, then a lot of these trackbacks will show up in the spam folder. Again, this will only work if you have internal links in your posts.


Monitor Your Backlinks

If you have access to an SEO tool like Ahrefs, you can monitor your backlinks and keep an eye out for stolen content.

How to Deal with Content Scrapers

There are a few approaches that people take when dealing with content scrapers: the Do Nothing approach, the Take Down approach, and the Take Advantage approach.

Let’s take a look at each one.

The Do Nothing Approach

This is by far the easiest approach you can take. Usually the most popular bloggers recommend this because fighting scrapers takes A LOT of time.

Now obviously if it is a well-known blog like Smashing Magazine, CSS-Tricks, Problogger, or others, then they do not have to worry about it. They are authority sites in Google’s eyes.

However, we know some good sites that have gotten flagged as scrapers because Google thought the scraper’s copy was the original content. So this approach is not always the best, in our opinion.

Take Down Approach

This is the exact opposite of the “Do Nothing Approach”. In this approach, you simply contact the scraper and ask them to take the content down.

If they refuse to do so or simply do not reply to your requests, then you file a DMCA (Digital Millennium Copyright Act) complaint with their host.

In our experience, the majority of scraping websites do not have a contact form available. If they do, then use it. If they do not, then you need to do a Whois lookup.

Whois Lookup

A Whois lookup will show you the info for the domain’s administrative contact. Usually the administrative and technical contacts are the same.

It will also show the domain registrar and nameservers. Most well-known web hosting companies and domain registrars have DMCA forms or emails. For example, if a scraper’s nameservers point to HostGator, then you know who is hosting the site, and HostGator has a form for DMCA complaints.

If the nameservers do not reveal the host, then you have to dig deeper by doing reverse IP lookups and searching for the IPs.

You can also use a third-party service for takedowns.

Jeff Starr, in his article, suggests that you should block the bad guys’ IPs. Check your access logs for their IP address, and then block it with something like this in your root .htaccess file (192.0.2 is a placeholder prefix; use the actual IP from your logs):

Deny from 192.0.2

Note that on Apache 2.4 and later, the equivalent directive is Require not ip inside a RequireAll block.

You can also redirect them to a dummy feed by doing something like this (again, 192.0.2 stands in for the scraper’s IP, and the target URL is a placeholder for wherever you host the decoy feed):

RewriteEngine On
RewriteCond %{REMOTE_ADDR} ^192\.0\.2\.
RewriteRule .* http://example.com/dummy-feed.xml [R,L]

You can get really creative here, as Jeff suggests. Send them to really large text feeds full of Lorem Ipsum, or serve them unpleasant images. You can even redirect them right back to their own server, creating a redirect loop that burdens their own site instead of yours.

The last approach that we take is to take advantage of them.

How to Take Advantage of Content Scrapers

This is our approach to dealing with content scrapers, and it has turned out quite well. It helps our SEO as well as helping us make a few extra bucks.

The majority of scrapers use your RSS feed to steal your content. Here are some of the things that you can do:

  • Internal Linking – You need to interlink your blog posts a lot. When you have internal links in your article, it helps you increase pageviews and reduce the bounce rate on your own site. Secondly, it gets you backlinks from the people who are stealing your content. Lastly, it allows you to steal their audience. If you are a talented blogger, then you understand the art of internal linking: place your links on interesting keywords and make them tempting for the user to click. If you do that, then the scraper’s audience will click on them too. Just like that, you took a visitor from their site and brought them back to where they should have been in the first place.
  • Auto Link Keywords with Affiliate Links – There are a few plugins, like ThirstyAffiliates, that will automatically replace assigned keywords with affiliate links.
  • Get Creative with Your RSS Footer – You can use the All in One SEO plugin to add custom items to your RSS footer. You can add just about anything you want here. We know some people who like to promote their own products to their RSS readers, so they add banners. Guess what: now those banners will appear on these scrapers’ websites as well. In our case, we always add a little disclaimer at the bottom of our posts in our RSS feeds. By doing this, we get a backlink to the original article from the scraper’s site, which lets Google and other search engines know that we are the authority. It also lets their users know that the site is stealing our content.

Check out our guide on how to control your RSS feed footer in WordPress for more tips and ideas.
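To make the RSS-footer idea concrete, here is a standalone sketch (not the plugin’s code; the footer text, blog name, and URLs are placeholders) of appending an attribution line to every item in a feed:

```python
# Sketch: append a source-attribution footer to each item in an RSS feed,
# similar in spirit to what an RSS footer plugin does. The footer wording,
# blog name, and URLs are placeholders.
import xml.etree.ElementTree as ET

FOOTER = '<p>This post first appeared on <a href="{link}">Example Blog</a>.</p>'

def add_feed_footer(feed_xml):
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        desc = item.find("description")
        # Use the item's own permalink so scraped copies link back to you.
        link = item.findtext("link", default="http://example.com/")
        desc.text = (desc.text or "") + FOOTER.format(link=link)
    return ET.tostring(root, encoding="unicode")

sample = ("<rss version='2.0'><channel><item>"
          "<title>Hello</title><link>http://example.com/hello</link>"
          "<description>Post body.</description>"
          "</item></channel></rss>")
print(add_feed_footer(sample))
```

The key design point is that the link is absolute and per-item, so any verbatim copy of the feed carries a backlink to the original article.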

How You Can Reduce and Prevent WordPress Blog Scraping

If you take our approach of heavy internal linking, affiliate links, RSS banners, and such, chances are that you will reduce content scraping by a good measure. If you also take Jeff Starr’s suggestion of redirecting content scrapers, that will stop many of them too. Aside from what we have shared above, there are a few other tricks that you can use.

Full vs. Summary RSS Feed

There has been a long debate in the blogging community about whether to have a full RSS feed or a summary RSS feed. We are not going to go into much detail about that debate; however, one of the PROS of having a summary-only RSS feed is that it helps prevent content scraping.

You can change this setting by going to your WordPress admin panel and navigating to Settings » Reading. Then change the setting “For each article in a feed, show” to “Summary”.
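For reference, the Summary setting ships a trimmed excerpt instead of the full post. Here is a rough, standalone approximation of that trimming (the 55-word default mirrors WordPress’s wp_trim_excerpt(); treat the exact behavior as an assumption):

```python
# Rough approximation of how WordPress builds a feed summary ("excerpt"):
# strip HTML tags, keep the first 55 words, append an ellipsis marker.
# The 55-word default is based on wp_trim_excerpt(); treat it as an assumption.
import re

def trim_excerpt(html, num_words=55):
    text = re.sub(r"<[^>]+>", "", html)   # naive tag stripping for the sketch
    words = text.split()
    if len(words) <= num_words:
        return " ".join(words)
    return " ".join(words[:num_words]) + " [...]"

post = "<p>" + " ".join(f"word{i}" for i in range(80)) + "</p>"
print(trim_excerpt(post))
```

The point is that a scraper pulling your feed only ever gets those first 55 words, not the article.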

Trackback SPAM

Trackbacks and pingbacks definitely had great uses; however, they are now constantly being abused.

Often themes display trackbacks and pingbacks under or among the comments. This gives the spammer an incentive to scrape your site and send trackbacks. If you mistakenly approve one, then they get a backlink and a mention from your site. You can disable trackbacks on all future posts by unchecking “Allow link notifications from other blogs (pingbacks and trackbacks)” under Settings » Discussion.

Here is an article that will show you how to disable trackbacks and pings on existing WordPress posts as well.

Is Content Scraping Ever Good?

It can be. If you are making money from the scraper’s site, or if you are getting a lot of traffic from it, then sure, it can be.

In most cases, however, it is not. You should always try to get your content taken down. But as your blog gets larger, you will realize that it is almost impossible to keep track of all the content scrapers. We still send out DMCA complaints; however, we know that there are tons of other sites stealing our content that we just cannot keep up with.

We hope this article helped you prevent blog content scraping in WordPress. You might also want to see our guide on how to prevent image theft in WordPress.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

Disclosure: Our content is reader-supported. This means if you click on some of our links, then we may earn a commission. See how WPBeginner is funded, why it matters, and how you can support us.


Reader Interactions

84 Comments

  1. Thanks for the post.
    But can I remove or disable the RSS feed entirely, or is there a special benefit to keeping it?
    And if I want to disable the RSS feed entirely, how would I do it?

  2. We hear so much about getting site content by doing content curation. Is content scraping the same as content curation? If not, what’s the difference between the two?

    • Content scraping is taking content from other sites to place on your site without permission; content curation is normally linking to other content within content you have created.


  3. I am facing these issues. I had 20+ of them for one of our brands; then we moved elsewhere, and they are back again.

  4. I found a really bad content scraper of my blog. Not only do they steal my content, they used the same name for their spam blog (only separated with a “-”) and the same description and tags, basically trying to be me. I used links in my RSS feed to my blog, YouTube channel, Facebook, Twitter, Pinterest & Google Plus, which show up on their spam blog. I also found that PNG images show up on their front page but JPEGs do not, but that may just be on Blogger.

  5. I absolutely love the interlinking idea. Will have to look at the RSS suggestion, since I forgot how that works exactly, having focused on writing Kindle e-books for a while (talk about content scraping – zero protection there! Hence my return to website writing), but I feel I have really got a place to start with protecting my content! Thanks!

  6. WOW! So much to take into consideration when starting a blog. My blog is only 2 weeks old. I have used mainly WP Beginner to set up my blog. So much good info set out in a way a newbie can follow.

    I don’t know if this works for content scraping, but I have installed a plugin called Copyright Proof. It disables right click so that people cannot copy and paste your content.

    I decided to use this plugin as it was a recommended plugin for author sites.

  7. Another great article. I work as a freelance journalist, so I sell a lot of articles, and it’s up to the people who buy them to decide on their policies.
    But I also have a couple of blogs and affiliate websites, so I think I might need to take a look at what’s happening with my content.

  8. Does not giving credit where it’s due count as “content scraping”?

    Because Jeff Starr wrote this same post at Perishable Press over 5 years ago:

    Check the structure and terminology of your article and compare it to the original.

    Just sayin.

  9. I have just developed a theme for Blogger, and that theme needs a full feed to work. I worry about content scraping. I think if many scrapers use my content on their Blogger sites, which would have the same content as my site with backlinks pointing to my site, my blog will be spam in Google’s eyes and will be deleted.

  10. Thanks for this amazing article with useful tips! I actually just got a “Thin Content” penalty from Google. I asked an SEO expert for help, and they told me to stop scraping content. They sent me a link to an article I wrote yesterday and thought I had stolen it from another website. The crappy thing is, they were stealing from me – not just that article, but probably a couple thousand articles! They are still in Google search, and I am not. I am the one being penalized! Turns out there are at least three websites scraping my content; I am not even sure what to do.

  11. Awesome article.

    I sort of agree with most of the points you have discussed. Actually, a few of the points are pretty awesome.

    But if your sole business is based on the content of your website, shouldn’t we be more careful about scrapers?

    I don’t think content theft would ever be good to the owner of the content.

    I guess we all should think of opting for preventive measures rather than reactive measures. You can consider using ShieldSquare, a content protection solution, to stop content scraping permanently.

  12. I know this is an old article, but the one source that is NOTORIOUS for allowing content scraping is WordPress with their “Press This” feature. They are basically encouraging this.

  13. I think I may have finally found the answer to my problem. I have been thinking someone has been stealing my stories and making them into “new” stories. I thought either someone is out to get me or I am losing my mind. I was almost losing my mind over thinking like this. Paranoid. Concerned someone was listening to my private phone calls. When really, all the information has come directly from my blog! This article may have saved my life. Literally. I am not even joking because I have been so afraid that I was going crazy and very selectively trying to talk about it with friends, to get feedback or support and being looked at like I am nuts and need to go to the psych ward for a while. This article makes what has been happening to me, make total sense. Thank you! I am so overwhelmed with relief.

  14. Thanks for some tips, but a good chunk of this article is not very helpful. Most scrapers are not blind scrapers; the content is generally sucked in, looked at by a human eye, and then published. Which means that even by taking a minute to look at an article, the spam kid is able to publish hundreds of copied articles a day. The backlinks problem is very easy to circumvent for a content scraper, as the feed importers have pre-process options, and they generally set them to delink the body. Also, I do not see how turning the RSS feed into a summary may help at all; the feed importers only use the RSS to grab the new content link, and from there they follow the skeleton of your HTML – which you have nicely set up with proper image, title, and link tags for the convenience of Google – and very easily extract the content.

    Obviously blocking the IP is a very good solution. DMCAs are generally a waste of time; they take time to formulate, and stupid hosts take time to respond (since spammers choose these hosts specifically because they’re lax on spam-like activity). Of all, Google is the most frustrating; no matter how many reports you file with them, they never take action on any of the stolen content on which they’re showing ads, and they still rank the crap-spam site well in the search results despite it being easy for their systems to detect copies.

    • John, I couldn’t agree with you more. Google got mad at me, claiming that I was the person stealing my own content. This person stole my content and put it on Blogger. The nerve. There needs to be a solution for this. At this point, I just block!

  15. Then perhaps the best way for you is to change the licensing and aggressively send takedown notices to content scrapers. Meanwhile, keep focusing on creating quality content.

  16. Hi there,
    I just stumbled upon your article while looking for answers to some of my concerns.
    I, together with some friends, launched a website about DIY in Italy a few months ago, which is working unexpectedly well – rankings are high, lots of traffic, etc. Still, our PR is 0. Our content has a Creative Commons 4.0 license, because we really believe it’s a good way to share content. HOWEVER:
    Some time ago we noticed a PR4 site with lots of traffic copying our top articles, linking back to our homepage (which is not what you’re supposed to do with a CC license, but it’s still ok). The problems are these:
    1. there are a whole lot of smaller sites scraping their (our) content and linking back to them instead of to our site
    2. the PR4 site and some of the smaller sites somehow rank better than our site
    3. there are strong suggestions that a Google penalty on OUR content has taken place, as it has a lower PR than most of the other pages (which have been online for a long time).

    We’re in contact with the PR4 site, and it’s OK for us if they use our content, as long as they link back to the original article (that’s the whole point of the CC license), BUT we’re trying to find a solution to avoid getting Google penalties: would rel canonical do the job? What is your opinion? Should we change our license and be more aggressive towards content copying?
    Thank you!

    • Philipp, if you have not already done so, then you should create a Webmaster Tools account for your site and submit your sitemap. It helps you figure out if there is a problem with your site and how your site is doing in search, and you can use lots of other tools. It also helps Google better understand where some content first appeared.

      We don’t think changing the license will stop content scrapers from copying your content.


      • Hi! Yes, we set up a Webmaster Tools account, linked the site to our Google+ page, and linked most of the authors to their Google+ profiles using publisher and author tags. Authorship seems to be working fine in search snippets, but so far it doesn’t seem to make much difference in the case of scraped content. Higher-PR pages scraping our content are still on top…

  17. One of the best ways not to be affected by this is to ping effectively. Pinging and manually submitting pages to Google and Bing gets spiders on your site FAST. They index the pages ASAP; then, when they find duplicate content on other sites, they consider you the authority.

    I do, however, have the sneaky suspicion this might have to do with PageRank… But Matt Cutts (webspam team @ Google) has advocated using pingers on this very topic. I’m just not sure how much I can trust what he says though.

    To add more services, go to Settings -> Writing Settings -> Update Services. Open the “Update Services” link in a new tab and copy all the update services. Back in WordPress, paste them into the ping list and click Save.

    Open an account in Bing Webmaster Tools for manual URL submission and fast indexing.

  18. I recently discovered a guy that was taking an RSS feed from my blog – bear in mind that my blog has a summary feed with Yoast’s “This post was found first on” line. I sent the guy a thank-you message, basically telling him that he’s giving me backlinks AND telling Google he’s copying my website (since they can look at the timestamps to see which was published first).

    Checked out 2 days later, and all my stuff was mysteriously gone…

    • You can definitely use that plugin. It blocks right clicks, keyboard shortcuts for copying, IP blacklisting, etc. Those all prevent manual scraping; however, most content scrapers use automatic tools. So none of those would be super helpful.


    • Thanks for your reply – the pro version states it protects you from bot attacks, so I assume that means scraper bots? The price puts me off installing it on all my sites, but I may use it on one just to see how well it works.

  19. This is one of, if not the best, “beginner” article I’ve ever come across on the web.

    After reading it I feel like I just had a meeting with a security consultant.

    I’m applying these techniques right frickin now!

    Thanks. I’m now a follower of this site.

  20. It’s only happened to me a few times. Some blogger from outside the USA has taken my post word-for-word and posted it to their site as if it were their own. Since it was just a single post with my YT video embedded, I didn’t sweat the details too much, since my channel CTR saw a nice spike in visits anyway.

  21. Just want to say thanks, thanks, and thanks!

    I just today discovered your website, only read 3 articles so far (including this one)… but I’m extremely impressed.

    I’ve only been blogging now for 5 weeks, but finding it addictive, especially seeing the growing traffic and user engagement as a result of my efforts. Seeing 100 visitors to my blog site in one day, and being able to see who’s referring them, motivates me to learn all I can to increase the social media marketing and interactions with new visitors.

    Best regards,

  22. I love your Website and was floored to read about content scraping! Is there and way to create a watermark somehow which is not distracting to your readers but to the scraper’s site is dead obvious?

  23. Is it legal to post a complete article from another website while writing the source website’s name at the bottom of the article?

  24. Is there any way / plugin?

    Someone is copying my fashion blog pictures and posting them on their forum.

    But when I click on an image at that forum, it opens in a new window.

    I want a plugin or script so that if they copy my images, then when someone clicks on those images, that person is redirected to my blog post related to those images.

    Any plugin yet that links images with their posts?

      • I’ve done it. Just change this:

        When you upload any picture, on the right side it shows a URL link.

        The default setting is “Media File”;
        you have to change it to “Attachment URL”.

        Then done!

        When someone copies your blog images, that gives a backlink to your posted page.

  25. If someone takes an article written in English and translates it – using their head and not Google Translate – into some other language, let’s say because the majority of the people in the country of that other language don’t understand English, would you point them out as scrapers anyway? Or what is your opinion on that?
    For me personally, I don’t find it extremely problematic; of course, I believe the “author” should link back to the original article while clarifying that his article is translated.

  26. This is a tremendous article. After reading it, I hope you do not see me as a content scraper. I have used excerpts from you (curated), I always have the “Read the Full Article” link with your page link there, and also many of my posts are tweeted and I include your Twitter account in there. If you do not want this, please let me know and I will gladly remove it. I am very appreciative of your work and want to share it with my visitors. It is not intended to steal your visitors but to give good value to mine and send them on to you for more.

    • Greg, as long as you only display an excerpt and send the user over to our site to read the full article, then it is not scraping. As you said, it is curation. Tons of popular sites do that (e.g., Reddit, Digg, etc.).


  27. My site has a lot of original security articles, and a couple have been scraped. The site that scraped me was in Yahoo! News with my article and had people commenting on it. I dealt with the issue by commenting, saying I was the original author, and replying to a few comments. I had internal links; that’s how I found out so quickly. A trick I am going to write about is detecting people who come from a scraper’s site and having a banner or image appear telling them what happened. The never-ending request suggestion sounds illegal under the Computer Fraud and Abuse Act. I am not a lawyer; I just write about security, so I have to know the security laws for computers.

    I do not like it that your form didn’t take my company’s email as a valid email.

  28. Nice and informative writeup. I like your approach of taking advantage of the scrapers. However, blocking an IP may not always work; a serious scraper would often use a list of anonymous or free proxies, in which case blacklisting one IP might not be an effective solution, as the scraper would change it often. One solution is to write a small script that detects any abnormal traffic from a given IP, say more than 20 hits/sec, and challenges it with a captcha; if there is no reply, put the IP in a temp blacklist for about 30 minutes. You can harden it with a JavaScript check that detects mouse, touch, or keyboard movement after a few hits; if no keyboard, mouse, or touch is detected, you can again put the scraper in the temp blacklist. Worked like a charm for us.
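    [Editor’s note: the throttling idea described in the comment above can be sketched as a sliding-window hit counter with a temporary blacklist. The 20 hits/sec threshold and 30-minute ban are the commenter’s numbers; the captcha challenge is omitted, and everything else is illustrative.]

```python
# Sketch of the commenter's idea: count hits per IP in a sliding window;
# if an IP exceeds the threshold, blacklist it temporarily. Thresholds
# follow the comment (20 hits/sec, 30-minute ban); the rest is illustrative.
import time
from collections import defaultdict, deque

WINDOW = 1.0          # seconds per window
MAX_HITS = 20         # hits allowed per window
BAN_SECONDS = 30 * 60 # temporary blacklist duration

hits = defaultdict(deque)
banned_until = {}

def allow(ip, now=None):
    now = time.time() if now is None else now
    if banned_until.get(ip, 0) > now:
        return False                      # still blacklisted
    q = hits[ip]
    q.append(now)
    while q and q[0] <= now - WINDOW:     # drop hits outside the window
        q.popleft()
    if len(q) > MAX_HITS:
        banned_until[ip] = now + BAN_SECONDS
        return False
    return True
```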

  29. Your solutions are good enough for content scrapers.
    But what if people are manually copying and pasting content into their Facebook pages?
    We have implemented Tynt, but they remove the link back to the original article. Any ideas on how to handle this kind of situation?

      • Actually, there’s a plugin created by IMWealth Builders – probably the only one of their plugins I like; the rest are pretty trashy and involve scraping ecommerce sites (CB, Azon, CJ, etc.) for affiliate commissions.

        It’s called “Covert Copy Traffic”. It actually allows you to set any text before or after a set number of copied words. So say I set it to append “This content was taken from” after 18 words. Then anytime someone copied/pasted more than 18 words from the website, it would add that text at the bottom; 17 words or less, and it would do nothing.

        These were just example settings. Pretty useful plugin; works a charm. I’ve tried just about every way I could think of to bypass the text insertion, but it seems to be impossible. Plugin is too stronk.

        • Yeah, that’s right. You can just use that script to say “Content came from” rather than “Read More”.

        • Is it true that their Amazon etc. programs are scrapers? If that is the case, I have made a whopper of a mistake on a purchase from them – luckily, I have not used it yet.

        • Yeah Jennae, it’s legal in the sense that Amazon allows you to copy content from their pages. It helps their sales; affiliates are the reason Amazon is Amazon.

          However Google and other search engines (that matter) just consider it a “thin affiliate site” as in no original content. Therefore they don’t rank unless there’s a certain percentage of original content on the site as well.

          A scraper is nothing more than a spider/crawler; generally it runs in socket mode, though some run in a browser.

          Just because it’s labeled a scraper doesn’t make it bad per se. I use scrapers and spiders regularly to check my site for unnatural links; I check others for competition analysis, keyword research, and a variety of other tasks that do not harm anyone but benefit me.

          However I don’t like or condone anyone scraping for the purpose of copyright infringement. Which is what this discussion is really about.

          Google uses the spider “Googlebot” to index the web, along with hundreds of other search engines; there are hundreds of thousands of spiders crawling the web for a variety of purposes. Google also scrapes websites to “cache” them, as do a lot of important services we need, such as the historical web archives.

  30. I’m about to begin aggressively searching for sites that are copying my content and having the content removed. I know it is impacting how my site ranks, so I have to do something about it. Any idea how much has to be copied before you can deliver DMCA notices? Is a paragraph in an article enough to legally be able to call it plagiarized?

  31. You fail to mention that any self-respecting autoblogger will strip out links and insert their own affiliate links rather than using your content as it comes, so your approach to getting links from them will usually fail.

      • Agreed! There’s a very special “Hot Place” near the center of the Earth for Spammers, Scrapers and Auto-Bloggers…

  32. I think that the best idea is to include affiliate links.
    After the last Penguin update, my website was penalized. I started to analyze it, and I discovered that many other sites had copied my content. I don’t know why, but those websites rank better than me in search engines, using my content.

    • Not just affiliate links. Include as many internal links as you can. Because if those sites are linking back to your other pages, then Google will KNOW that you are the authority site.


      • Hi Team. I really appreciate this article, but have one question in regards to having internal links in your pages/posts.

        I suppose you mean “absolute” links? Otherwise this may not work in your favour once the content has been scraped… Well, so far I have always been going along with relative links, as you do, I suppose. Which is the best method? Cheers!

  33. First of all, your tutorial is just fantastic… hats off! Just one doubt: how do I know if a site is a scraper site? I used your method and found out that Google Webmaster Tools is reporting that there are 262 links to my site, and there are many sites I don’t know of; thus I am in confusion… How do I check whether a site is a scraper site or an authoritative site? Is there a tool available for that? Thanks in advance!

      • Yes, that is true… but what if I don’t want to find my article on those scraping sites? I know my article is there, as it is being reported by GWT, and I just want to block that IP address by inserting those RewriteCond rules in the .htaccess file. I don’t want to waste my time searching those bad sites for my article or requesting that they take down my article.

  34. Thank you for this article – and for your site in general! I like this so much that I had wondered how I would keep track of this resource. And now I see the subscription options below. What a way to get a comment!

  35. Preventing content scraping is almost impossible. I don’t think content scrapers hurt me in any way. They are just confirming that I have some high-quality content. Google is smart enough to detect the original publishers. No one should worry.

  36. Really informative. If you use CloudFlare, there is a new app called ScrapeShield, and you can easily protect and track/monitor your site content for free.

    • Wow, that’s great, man. Do you use CloudFlare? I just wanted your review because I have never used that CDN service. I know it is free and all, but I think my site load time is already great, so I didn’t require it. Now that ScrapeShield is there, I think I will definitely check it out. What other apps do we get if we start using CloudFlare? Thanks!

      • Hello,
        IMO @cloudflare really is awesome. I have two sites on it (both mine and my wife’s blog) and it really is incredibly fast, not to mention all of the security, traffic analysis, and app support (automatic app installs) that they provide.

        I know that all hosting setups are different, but I have both of our sites running on the Media Temple (gs) Grid Service. I can honestly say that our sites run faster now than they did when I was using W3 Total Cache and Amazon S3 as my CDN. Actually, I still use W3TC on my site to minify & cache my content, but I use CloudFlare for CDN, DNS, and security services.

        Highly recommend… Actually, I would really appreciate it if someone at WPBeginner would give us their in-depth, experienced opinion of the CloudFlare services. To me, they have been awesome!

  37. You can also get a plugin, whose name eludes me at this time, that does the Google search for you. It also adds a code into your RSS feed that the app searches for.

  38. Great post. I know there are many autoblogs fetching my content, although after the Penguin update my site is getting 3 times more traffic from Google than before. But after reading about the many disasters for original content creators, I’m worried about future penalties from Google.
    It’s my experience that Google usually respects high-PR sites with good authority backlinks, but my site is just one year old and its PR is less than 5.
    I try to contact scrapers, but most of them don’t have contact forms, so I think I’ll try that htaccess method to block the scrapers’ IP addresses. On the other hand, some of them may use FeedBurner.

    • Personally, I don’t bother with RSS, as most users don’t use it. Instead, supply a newsletter feed. It does the same trick + you get emails to market to (if done correctly). The majority of people are more likely to subscribe to a blog than bookmark an RSS feed, in my experience. So it’s better to turn off RSS. You can do this using WordPress SEO by Yoast and various other plugins.

      Then, if you also implement the above-mentioned strategies, you should be good. Remove all unnecessary headers (RSD, WLW, etc.).

      There will be a couple still able to scrape effectively, but those tricks will diminish a great deal of them.
