Technical SEO
Lucas Torres
Marketing Manager @365Scores | Strategic Growth | Data Analysis | Performance | Media Optimization | CRM Strategies
Eight of twelve articles in the CXL Minidegree. Getting technical SEO right can be the difference between ranking on Page 1…or never being indexed. You need to know how to audit your site for technical SEO issues and improvements so you can get more visitors, leads, sales, and signups for your business.
Know what is important in a technical audit and how to start one yourself.
Search engines give preferential treatment in search results to websites that display certain technical characteristics, for example a secure connection, a responsive design, or fast loading times. Technical SEO is the work you need to do to ensure your website displays them.
Our guide outlines the most common technical SEO issues, along with recommended solutions. Below, I cover some of these topics in more detail:
Rankings
URLs are a minor ranking factor that search engines use when determining a particular page or resource's relevance to a search query. While search engines give weight to the authority of the overall domain itself, keyword use in a URL can also act as a ranking factor.
Robots.txt file
You can give robots directions on your site by using the robots.txt file. It’s a powerful tool, which should be handled carefully: a small mistake might prevent robots from crawling (important parts of) your site. Sometimes, people unintentionally block their site’s CSS and JS files in the robots.txt file. These files contain code that tells browsers what your site should look like and how it works. If those files are blocked, search engines can’t find out if your site works properly.
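As a sketch of how these directives behave, here is a minimal, hypothetical robots.txt (the paths and domain are made up for illustration) checked with Python's standard-library robots.txt parser, which applies the same matching rules crawlers do:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block all crawlers from /admin/, allow everything else.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True
```

A `Disallow: /` line here would block the entire site, which is exactly the kind of one-character mistake that can deindex important pages.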
The meta robots tag
If you want search engine robots to crawl a page, but to keep it out of the search results for some reason, you can tell them with the robots meta tag. With the robots meta tag, you can also instruct them to crawl a page, but not to follow the links on the page. With Yoast SEO it’s easy to noindex or nofollow a post or page. Learn for which pages you’d want to do that.
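The two directives described above look like this in a page's `<head>`; the values shown are the standard `noindex`/`nofollow` keywords, not anything site-specific:

```html
<!-- Let robots crawl the page, but keep it out of search results -->
<meta name="robots" content="noindex, follow">

<!-- Let robots index the page, but not follow its links -->
<meta name="robots" content="index, nofollow">
```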
XML sitemap
Simply put, an XML sitemap is a list of all pages of your site. It serves as a roadmap for search engines on your site. With it, you’ll make sure search engines won’t miss any important content on your site. The XML sitemap is often categorized in posts, pages, tags or other custom post types and includes the number of images and the last modified date for every page.
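For reference, a minimal XML sitemap entry follows the sitemaps.org format; the URL and date below are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/technical-seo-audit</loc>
    <lastmod>2023-01-15</lastmod>
  </url>
</urlset>
```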
Bonus: SEO best practices for URLs
Keeping URLs as simple, relevant, compelling, and accurate as possible is key to getting both your users and search engines to understand them (a prerequisite to ranking well). Although URLs can include ID numbers and codes, the best practice is to use words that people can comprehend.
URLs should be definitive but concise. By seeing only the URL, a user (and search engine!) should have a good idea of what to expect on the page.
When necessary for readability, use hyphens to separate words. URLs should not use underscores, spaces, or any other characters to separate words.
Use lowercase letters. In some cases, uppercase letters can cause duplicate-content issues: for example, moz.com/Blog and moz.com/blog might be seen as two distinct URLs.
Avoid the use of URL parameters, if possible, as they can create issues with tracking and duplicate content. If parameters need to be used (e.g., UTM codes), use them sparingly.
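The lowercase and parameter rules above can be sketched as a small normalization step with Python's standard-library URL tools (the `normalize_url` helper and the example URL are mine, for illustration only):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    """Lowercase the host and path, and strip query parameters and fragments."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       parts.path.lower(), "", ""))

print(normalize_url("https://Moz.com/Blog?utm_source=news#top"))
# https://moz.com/blog
```

Serving one canonical, lowercase, parameter-free URL like this is what prevents the moz.com/Blog vs. moz.com/blog duplicate-content problem.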
A URL specifies a web page’s location on the internet, and this web address allows an internet browser to retrieve that page’s content from a server and show it to a user [source].
There are various parts included in the readable text of a URL (Uniform Resource Locator). Each element of the URL structure provides specific information to tell an internet browser where to find the webpage content and how to retrieve that information.
An easy way to think about a URL is to compare it to a street address. Humans use street addresses to find a house in real life, and internet browsers use URLs to see where a website’s file is stored on the internet. A street address’s parts provide detailed information about where to find your destination, including the house number, street, city, zip code, state, and country. Similarly, a webpage has a protocol, subdomain, domain, top-level domain, folder/path, page, and anchor. At the bottom of this article, we’ve included a glossary defining each of these elements of URL structure in more detail.
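To make the anatomy concrete, Python's standard-library URL splitter can pull a URL apart into the pieces just listed (the URL below is hypothetical; `urlsplit` groups subdomain, domain, and top-level domain together as the host):

```python
from urllib.parse import urlsplit

# Hypothetical URL, broken into the parts described above.
parts = urlsplit("https://blog.example.com/guides/seo-basics#urls")

print(parts.scheme)    # protocol: https
print(parts.hostname)  # subdomain + domain + top-level domain: blog.example.com
print(parts.path)      # folder/path + page: /guides/seo-basics
print(parts.fragment)  # anchor: urls
```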
Why are URLs important?
A well-crafted URL provides both humans and search engines an easy-to-understand indication of what the destination page will be about. For example, the DPReview URL below is what we call a "semantically accurate" URL (it accurately describes its destination).
Even if the title tag of this page were hidden, the human-readable, semantically accurate URL would still provide a clear idea of what the destination page is about, and would provide visitors with an improved user experience by making it clear what they'll see if they click the link.