
Robots.txt
A file at the root of your website that tells search engine crawlers which pages they can and cannot access.
Why It Matters for Your Business
Robots.txt is a small file with big consequences. It's the first thing search engines check when they visit your site. One wrong line in this file can make your entire website invisible to Google.
For Space Coast businesses, the most common issue is launching a new website with the development robots.txt still in place. A Viera medical practice that goes live with Disallow: / in their robots.txt is telling Google "don't look at anything on this site," and Google will obey.
How It Works
The robots.txt file sits at yourdomain.com/robots.txt and contains simple rules for search engine crawlers:
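A minimal sketch of that syntax (the directive names are standard; the paths shown are illustrative):

```
User-agent: *        # which crawler the rules apply to ("*" means all)
Disallow: /admin/    # block this path and everything under it
Allow: /admin/help/  # re-allow a specific subpath
Sitemap: https://yourdomain.com/sitemap.xml
```

Rules are grouped under a User-agent line, and a crawler follows the most specific group that matches it.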
A Rockledge plumbing company should allow crawlers to access all public service pages and blog content while blocking admin pages, staging environments, and internal search results. The file should also point to their XML sitemap.
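Under those guidelines, the file might look like the sketch below. The paths assume a WordPress-style site and a hypothetical domain; adjust them to the site's actual structure:

```
User-agent: *
Disallow: /wp-admin/     # admin pages
Disallow: /staging/      # staging environment
Disallow: /?s=           # internal search results
Sitemap: https://example-plumber.com/sitemap.xml
```

Public service pages and blog posts are not listed, so crawlers can access them by default.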
After every website launch or redesign, immediately check your robots.txt file at yourdomain.com/robots.txt. If it contains "Disallow: /", you're blocking all search engines from your entire site. This is the single most damaging technical SEO mistake, and the easiest to fix.
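That check can be automated. A minimal sketch using Python's standard-library robots.txt parser (the function name and sample file are illustrative):

```python
from urllib.robotparser import RobotFileParser

def blocks_all_crawlers(robots_txt: str) -> bool:
    """Return True if this robots.txt blocks every crawler from the homepage."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    # If a generic crawler ("*") cannot fetch "/", the whole site is
    # effectively invisible to search engines.
    return not parser.can_fetch("*", "/")

# A leftover development file like this blocks everything:
dev_file = "User-agent: *\nDisallow: /"
print(blocks_all_crawlers(dev_file))  # True
```

In practice you would fetch the live file (for example with `urllib.request`) and run this check as part of a launch checklist.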