Web Hosting - robots.txt

By default, the Web Hosting service places a robots.txt file containing a crawl-delay directive in the /httpdocs folder of each web hosting domain.

Example: 
User-agent: *
Crawl-delay: 10

This robots.txt file is in place to help stagger the load that crawlers such as Google place on individual websites. The Crawl-delay directive asks crawlers that honor it to wait roughly 10 seconds between requests, which is especially helpful when multiple bots index a site at the same time and make it less responsive. Because most sites are crawled frequently, staggering this traffic also helps increase the overall capacity of the server(s).

We recommend that you keep this file in place with the crawl-delay directive, but feel free to edit it to suit your needs; an example of a customized file is shown below.
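For example, you might keep the crawl-delay while also blocking crawlers from a private directory and pointing them to a sitemap. The /private/ path and sitemap URL below are placeholders; substitute values appropriate for your site.

User-agent: *
Crawl-delay: 10
Disallow: /private/
Sitemap: https://www.example.com/sitemap.xml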

If you have further questions, email webhosting@doit.wisc.edu.


