1. Don’t cloak to Googlebot
- Use “feature detection” and “progressive enhancement” techniques to make your content available to all users (see the sketch after this list).
- Avoid redirecting to an “unsupported browser” page. Consider using a polyfill or other safe fallback where needed.
- The features Googlebot currently doesn’t support include Service Workers, the Fetch API, Promises, and requestAnimationFrame.
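For example, because Googlebot currently doesn’t support the Fetch API or Promises, content loaded only through fetch() may never reach it. A minimal feature-detection sketch with an XMLHttpRequest fallback (the loadContent name, the /api/articles URL and the "content" element are placeholders, not part of any library):

```typescript
// Feature detection: use fetch() where available, fall back to
// XMLHttpRequest so crawlers and older browsers still receive the content.
function loadContent(url: string, onLoaded: (html: string) => void): void {
  if (typeof window.fetch === "function") {
    // Modern browsers: Fetch API with Promises.
    window.fetch(url)
      .then((response) => response.text())
      .then(onLoaded);
  } else {
    // Fallback: plain XMLHttpRequest, no Promises required.
    const xhr = new XMLHttpRequest();
    xhr.open("GET", url);
    xhr.onload = () => onLoaded(xhr.responseText);
    xhr.send();
  }
}

// Usage: the URL and target element are illustrative.
loadContent("/api/articles", (html) => {
  document.getElementById("content")!.innerHTML = html;
});
```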
2. Use the rel=canonical attribute
Use rel=canonical when you have to serve the same content from multiple URLs, as in the snippet below. Further information about the canonical attribute can be found here.
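For example, if the same article is reachable from several URLs (say, with tracking parameters appended), each variant can point at the preferred version; the example.com URL below is a placeholder:

```html
<!-- In the <head> of every duplicate URL, point at the preferred version -->
<link rel="canonical" href="https://www.example.com/articles/javascript-seo">
```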
3. Avoid the AJAX-Crawling scheme on new sites
Consider migrating old sites that use this scheme soon. When migrating, remember to remove the “meta fragment” tag (shown below), and don’t use one if the “escaped fragment” URL doesn’t serve fully rendered content.
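For reference, the “meta fragment” tag in question is the one that opts a page into the AJAX-crawling scheme; this is the tag to remove when you migrate:

```html
<!-- Remove this tag when migrating away from the AJAX-crawling scheme -->
<meta name="fragment" content="!">
```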
4. Avoid using “#” in URLs (outside of “#!”)
Googlebot rarely indexes URLs that contain “#”. Use “normal” URLs with paths, filenames and query parameters instead, and consider using the History API for navigation (see the sketch below).
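A minimal History API sketch, assuming a client-side setup where something like renderPage() swaps in the content; the /products/shoes path and the helper names are illustrative, not part of any library:

```typescript
// Navigate to a "normal", crawlable URL without "#", using the History API.
function navigateTo(path: string): void {
  history.pushState({ path }, "", path); // update the address bar
  renderPage(path);                      // render the matching content
}

// Handle the browser's back/forward buttons.
window.addEventListener("popstate", (event: PopStateEvent) => {
  const state = event.state as { path?: string } | null;
  renderPage(state?.path ?? location.pathname);
});

function renderPage(path: string): void {
  // Placeholder: swap in the real content for "path" here.
  document.getElementById("content")!.textContent = "Now showing " + path;
}

// Usage: navigateTo("/products/shoes");
```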
5. Check your web pages
Use Search Console’s Fetch and Render tool to test how Googlebot sees your pages. Note that this tool doesn’t support “#!” or “#” URLs. (You can also use the website audit tool in SEOprofiler to check your pages more thoroughly.)
6. Check your robots.txt file
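Googlebot can only render a page correctly if it is also allowed to fetch the JavaScript and CSS files that page uses, so make sure robots.txt doesn’t block those resources. A minimal sketch (the /js/ and /css/ paths are examples, not a recommendation for your site structure):

```
# Don't block the JavaScript and CSS files Googlebot needs to render pages.
User-agent: Googlebot
Allow: /js/
Allow: /css/
```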
7. Do not use too many embedded resources
If rendering a page requires a large number of JavaScript files and server responses, requests can time out and Googlebot may render the page without some of those resources.
The tools in SEOprofiler help you to get high rankings on Google and other search engines. If you haven’t done so yet, try SEOprofiler now.