
Using a robots.txt file to prevent search engines from spidering dynamic variables and pages

Reference Number: AA-00289 | Views: 3633 | Created: 2011-06-29 14:43 | Last Updated: 2011-06-29 14:50


By default, when a search engine spider visits your web site, it follows all variable links, queries, search boxes, and dynamic pages it finds. This can lead to an endless crawl loop, causing excessive load on your web site and generating high traffic.


The problem can be solved by uploading a robots.txt file to the root of your web site that prevents search engines from crawling any URL containing a query string (a ? character), the common marker of a dynamic page.

Insert this text into your robots.txt file:

User-agent: *
Allow: /*?$
Disallow: /*?
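The Disallow: /*? rule blocks any URL containing a ?, while the more specific Allow: /*?$ rule exempts URLs that end in a bare ? (i.e. with no query parameters after it). Major crawlers such as Googlebot resolve conflicts by letting the longest matching pattern win. As a rough illustration of how these two rules interact, here is a minimal Python sketch of that matching logic (it is not a full robots.txt parser; the rule list and function names are illustrative, not part of any standard library):

```python
import re

def pattern_to_regex(pattern):
    # Translate a robots.txt path pattern to a regex:
    # '*' matches any character sequence, '$' anchors the end of the URL.
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    return re.compile(regex)

def is_allowed(url_path, rules):
    # rules: list of (directive, pattern) pairs.
    # The longest matching pattern wins, mirroring the precedence
    # most major crawlers apply; no match means the URL is allowed.
    best = ("allow", "")
    for directive, pattern in rules:
        if pattern_to_regex(pattern).match(url_path) and len(pattern) > len(best[1]):
            best = (directive, pattern)
    return best[0] == "allow"

rules = [("allow", "/*?$"), ("disallow", "/*?")]
print(is_allowed("/page?", rules))       # bare trailing '?' -> True (Allow wins)
print(is_allowed("/page?id=1", rules))   # dynamic query     -> False (Disallowed)
print(is_allowed("/about.html", rules))  # static page       -> True (no rule matches)
```

With these two rules in place, crawlers that honor wildcard patterns will skip your dynamic, query-string URLs while static pages remain fully crawlable.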
