Updated: February 24, 2023.

15 examples of technical SEO issues together with tips on how to fix them and further explanations. 

In my work as a technical SEO consultant and website auditor, I come across a wide variety of technical SEO issues every day. Some of these issues are clearly more common and more serious than others.

Your knowledge of these technical SEO issues will not only make you a better SEO but may be a game-changer for the site that you are auditing. 

How do you check if a site has these common technical SEO issues? 

To be able to detect if these issues are present on a site, you will need a few SEO tools: a website crawler (such as Screaming Frog or Sitebulb), Google Search Console, and Google PageSpeed Insights.

Make sure to check my list of SEO auditing tools to discover even more tools that can help you detect technical SEO issues on sites.

❓Looking to hire someone to audit your website or do SEO? Make sure to check the SEO services I offer including SEO consultations and monthly SEO services.
👉 Contact me for more information, ask any questions, or learn more about why you may want to hire me as your SEO consultant.

15 Technical SEO Issues

These technical SEO issues are not listed in any particular order. Some of them clearly have a higher priority than others, but it’s definitely worth knowing about all of them.

Below each example, you will also learn how to fix these technical SEO issues.

☝️ Make sure to check the list of SEO best practices from Google, my list of SEO tips, and the list of SEO mistakes.

Technical SEO issue #1: Core Web Vitals not passed in the field but passed in the lab

If the site passes Core Web Vitals in the lab (Google Lighthouse) but still has orange or red colors for the field data (Chrome User Experience Report), then the site is considered to be failing the CWV assessment. 

It is not uncommon to see a site that passes Core Web Vitals in the lab but does not do so in the field. Good Google Lighthouse scores can give less experienced SEOs a false impression that the site is doing OK when, in fact, it is not.

Note that it can also be the other way round, i.e. the site has poor lab scores but good scores in the field.

[Image: Field and lab data in Google PageSpeed Insights]
This is an example of a site that has poor lab scores but passes Core Web Vitals in the field.

Here is a quick reminder for you:

  • Core Web Vitals are one of four Google page experience signals (which are ranking factors). The other three are HTTPS, mobile-friendliness, and no intrusive interstitials.
  • Field data and lab data are different. Google – when assessing sites in terms of Core Web Vitals – only takes into account the field data, i.e. the real-world data coming from actual users of the site.
  • That’s why when optimizing for Core Web Vitals, you should focus on field data (the CrUX data) which are available in Google PageSpeed Insights and the Google Search Console Core Web Vitals report (if the site receives a meaningful amount of traffic).
[Image: Core Web Vitals report in Google Search Console]
This is the GSC Core Web Vitals report showing field data.
  • The PageSpeed Insights tool is great for checking how one specific page is doing while the GSC Core Web Vitals report will let you identify groups of pages with similar issues. 
[Image: Field data in Google PageSpeed Insights]
These are the field data scores for the homepage of SEOSLY.

How do you fix this technical SEO issue?

The reasons for the issue may differ a lot, so it’s not possible to provide one simple fix. However, here are a few things you may try: optimize and lazy-load images, reduce render-blocking JavaScript and CSS, improve server response times (for example with caching or a CDN), and set explicit width and height on images and embeds to limit layout shifts.
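If you want to check a page programmatically, here is a minimal Python sketch (the page URL is a placeholder) that pulls both the field and the lab data from the PageSpeed Insights API, so you can see at a glance whether they disagree:

import requests

# Minimal sketch: compare field (CrUX) and lab (Lighthouse) data for one URL
# via the PageSpeed Insights API. The page URL below is a placeholder; an API
# key is optional for occasional use.
API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
page = "https://example.com/"  # hypothetical page

data = requests.get(API, params={"url": page, "strategy": "mobile"}).json()

# Field data - this is what the CWV assessment is based on
field = data.get("loadingExperience", {})
print("Field assessment:", field.get("overall_category", "no CrUX data"))

# Lab data - the Lighthouse performance score (0-1)
lab_score = data["lighthouseResult"]["categories"]["performance"]["score"]
print("Lab performance score:", lab_score)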

Technical SEO issue #2: Robots.txt disallows website resources

If the site’s resources, such as images, JS files, and/or CSS files, are disallowed in robots.txt, then the crawler may not be able to render the page correctly.

Googlebot and other search engine crawlers do not only crawl the pages they visit but also render them to see all of their content, even if the site relies heavily on JavaScript.

Here is an example of such incorrect implementation:

User-agent: *
Disallow: /assets/
Disallow: /images/

By disallowing specific website resources in robots.txt, you make it impossible for bots to fetch these resources and render the page correctly. This can lead to undesired consequences, such as lower rankings or indexing problems.

How do you fix this technical SEO issue?

The solution is quite simple here. All you need to do is remove all the disallow directives that block the crawling of website resources. 
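For example, if nothing else on the site needs to stay blocked, the corrected version of the robots.txt file above could be as simple as this (an empty Disallow rule allows everything):

User-agent: *
Disallow: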

  • Most content management systems allow for modifying robots.txt or setting up the rules of how this file is generated. 
  • You can also modify the robots.txt file by simply connecting to the server via SFTP and uploading the modified file. 
  • If you are using WordPress, you may check my article on how to access and edit robots.txt in WordPress.

Technical SEO issue #3: XML sitemap contains incorrect entries

An XML sitemap should only contain the canonical indexable URLs that you want to be indexed and ranked in Google. Having lots of incorrect URLs in the XML sitemap is simply a waste of the crawl budget. 

Here are examples of incorrect XML sitemap entries:

  • URLs that return error codes like 5XX or 4XX, 
  • URLs with a noindex tag,
  • canonicalized URLs, 
  • redirected URLs,
  • URLs disallowed in robots.txt, 
  • the same URL included multiple times or in multiple XML sitemaps.
[Image: Sitemap analysis in Screaming Frog]
This is the analysis of sitemaps in Screaming Frog.

Having a few incorrect sitemap entries here and there is not a serious issue but if the sitemap has hundreds of thousands of incorrect URLs, then it can have a negative impact on the site’s crawl budget. 

How do you fix this technical SEO issue?

All you need to do is remove the incorrect entries from the sitemap so that it only contains canonical URLs. 

  • In most cases, the sitemap is generated automatically, so you only need to adjust the rules that are used for XML sitemap generation. 
  • In WordPress, it’s very easy to adjust the settings of the XML sitemap with a plugin, such as Rank Math. 
  • Any website crawler will tell you if the XML sitemap contains incorrect URLs. If you don’t know where the sitemap of the site is, check my guide on how to find the sitemap of the site.
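If you prefer a quick scripted check over a full crawl, here is a minimal Python sketch (the sitemap URL is a placeholder, and it assumes a plain sitemap rather than a sitemap index) that flags entries returning non-200 status codes or a noindex HTTP header:

import requests
from xml.etree import ElementTree

# Minimal sketch: flag sitemap entries that are redirected, broken, or
# noindexed via the X-Robots-Tag header. The sitemap URL is a placeholder.
SITEMAP = "https://example.com/sitemap.xml"  # hypothetical
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ElementTree.fromstring(requests.get(SITEMAP).content)
for loc in root.findall(".//sm:loc", NS):
    url = loc.text.strip()
    r = requests.get(url, allow_redirects=False, timeout=10)
    if r.status_code != 200:
        print(url, "-> status", r.status_code)
    elif "noindex" in r.headers.get("X-Robots-Tag", "").lower():
        print(url, "-> noindexed via HTTP header")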

Technical SEO issue #4: Incorrect, malformed, and/or conflicting canonical URLs

Unfortunately, there are many ways to implement canonical URLs incorrectly. The best-case scenario with incorrect canonical URLs is that the search engine will simply ignore them and choose the canonical URL of a given page on its own.

Here are examples of what can go wrong with the implementation of canonical URLs:

  • Canonical link element is specified outside of the head (e.g. in the body section)
  • Canonical link element is empty or invalid
  • Canonical URL points to the HTTP version of the URL
  • Canonical URL points to a URL with a noindex tag
  • Canonical URL points to a URL that returns error code 4XX or 5XX
  • Canonical URL points to a URL that is disallowed in robots.txt
  • Canonical URL is not found in the source code but only in the rendered HTML
  • Canonical link element points to a canonicalized URL
  • Conflicting canonical links in the HTTP header and in the head
  • Canonical tags are not used at all
[Image: Canonicals in Screaming Frog SEO Spider]
This is the overview of canonicals in Screaming Frog.

How do you fix this technical SEO issue?

Fixing this is relatively easy. All you need to do is update the canonical link elements so that they point to actual canonical URLs.

If you crawl the site with Sitebulb or Screaming Frog, you will be able to see exactly what pages need optimization in this regard.
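For a quick one-off check without a crawler, here is a minimal Python sketch (the page URL is a placeholder) that verifies a page’s canonical link element points to a URL that returns 200 and canonicalizes to itself:

import requests
from bs4 import BeautifulSoup

# Minimal sketch: validate one page's canonical link element.
page = "https://example.com/some-page/"  # hypothetical

def get_canonical(url):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    tag = soup.find("link", rel="canonical")
    return tag.get("href") if tag else None

canonical = get_canonical(page)
if not canonical:
    print("No canonical link element found")
else:
    r = requests.get(canonical, allow_redirects=False, timeout=10)
    if r.status_code != 200:
        print("Canonical target returns", r.status_code)
    elif get_canonical(canonical) != canonical:
        print("Canonical points to a canonicalized URL")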

Technical SEO issue #5: Conflicting nofollow and/or noindex directives in HTML and/or the HTTP header

If the site has multiple and conflicting nofollow and/or noindex directives in the HTTP header and/or HTML, then Google will most likely choose the most restrictive directive.

This can be a serious issue if the more restrictive directive, such as “noindex”, has been added accidentally. This applies to multiple nofollow/noindex directives in either the HTML or the HTTP header, or both.

Here is an example of such incorrect implementation:

  • The content of the HTTP header says that the page should be indexed and followed.
HTTP/... 200 OK
...
X-Robots-Tag: index,follow
  • The content of the meta robots tag in the head says the page should not be indexed.
<head>
  <title>SEO</title>
  <meta name="robots" content="noindex,follow">
  ...
</head>

The noindex/nofollow directives should be stated just once in either HTML or the HTTP header.

How do you fix this technical SEO issue?

Fixing this issue is relatively easy. All you need to do is remove the extra directive and leave only the one that you want Google and other search engines to abide by.

  • Just like above, you need to use a crawler to extract these problematic URLs.
  • If the issue relates to a relatively small number of pages, you can update them manually.
  • If, however, it’s about thousands or millions of web pages, then you need to customize the rule that is adding these multiple nofollow and/or noindex tags.
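For a quick spot-check of a single page, here is a minimal Python sketch (the URL is a placeholder) that flags pages defining robots directives in both the HTTP header and the meta robots tag:

import requests
from bs4 import BeautifulSoup

# Minimal sketch: detect robots directives defined in two places at once.
page = "https://example.com/"  # hypothetical

r = requests.get(page, timeout=10)
header_directives = r.headers.get("X-Robots-Tag", "")

meta = BeautifulSoup(r.text, "html.parser").find("meta", attrs={"name": "robots"})
meta_directives = meta.get("content", "") if meta else ""

if header_directives and meta_directives:
    print("Directives in both places:", header_directives, "vs", meta_directives)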

Technical SEO issue #6: Nofollowed and/or disallowed internal links

Nofollowing and/or disallowing an internal URL may prevent it from ranking, because Google will not be able to read the content of the URL (if it is disallowed in robots.txt) or no link equity will be passed to the URL (if it is internally nofollowed).

  • If you don’t want Google to index a specific URL, simply add a noindex tag to it. 
  • Disallowing the URL in robots.txt will not prevent it from being indexed. 
  • Unless you have a very good reason, nofollowing internal links is usually not a good idea in terms of SEO. 
  • SEOs used to nofollow internal links pointing to terms & conditions or privacy policy pages. However, Google confirmed many times that this is not necessary.

How do you fix this technical SEO issue?

All you need to do to fix this issue is remove the disallow rules for those URLs from robots.txt and remove the “nofollow” attribute from internal links.

  • Use a website crawler, such as Sitebulb, to show you all the nofollowed internal URLs and their placement.
  • You can also use the NoFollow Chrome extension that will mark nofollowed links on any page you visit. This is especially useful if you are doing a manual review of the site.
[Image: NoFollow Chrome Extension]
This is how the NoFollow Chrome extension highlights nofollow links on a page.
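If you want to script this check instead, here is a minimal Python sketch (the page URL is a placeholder) that lists the internal links on a page carrying rel="nofollow":

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

# Minimal sketch: list nofollowed internal links on one page.
page = "https://example.com/"  # hypothetical
host = urlparse(page).netloc

soup = BeautifulSoup(requests.get(page).text, "html.parser")
for a in soup.find_all("a", href=True):
    target = urljoin(page, a["href"])
    if urlparse(target).netloc == host and "nofollow" in (a.get("rel") or []):
        print(target, "- nofollowed internal link")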

Technical SEO issue #7: Low-value internal followed links

Low-value internal links carry no SEO information about the URLs to which they point. This is a huge waste of SEO potential.

Internal links are one of the strongest signals providing information about the URLs linked. That’s why SEOs should always strive to make the most of internal linking. 

Here are examples of low-value links:

  • Text links with anchor text, such as “Read more”, “Click here”, “Learn more” and so on.
  • Graphic links with no ALT attribute. The ALT attribute of a graphic link acts as the anchor text of a text link.

While it is a serious issue if all or the majority of your internal links have the “Read more” anchor text, it is definitely less of an issue if there are two internal links pointing to a specific page and one link has relevant anchor text while the other is of the “Learn more” type. 

[Image: Technical SEO issues: low-value links]
Here the issue is not very serious, as there is both a relevant text link and a low-value “Read more” link.

How do you fix this technical SEO issue?

In an ideal SEO world, you want to only have high-value text and graphic internal links. 

  • The easiest way to fix that is to simply remove all the “Read more” links and replace them with text links with relevant anchor texts. 
  • If you cannot remove the “Read more” links, at least make sure to add another high-value text link pointing to the same URL. 
  • For example, in the case of the blog title, you may have two links, one with the “Read more” text and the other with descriptive anchor text like “technical SEO audit guide”.
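To find these links at scale, here is a minimal Python sketch (the page URL and the list of generic phrases are assumptions) that flags links whose anchor text says nothing about the target:

import requests
from bs4 import BeautifulSoup

# Minimal sketch: flag low-value anchor texts on one page.
page = "https://example.com/blog/"  # hypothetical
GENERIC = {"read more", "click here", "learn more", "more", "here"}

soup = BeautifulSoup(requests.get(page).text, "html.parser")
for a in soup.find_all("a", href=True):
    text = a.get_text(strip=True).lower()
    if text in GENERIC:
        print(a["href"], "- low-value anchor text:", repr(text))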

Technical SEO issue #8: No outgoing and/or incoming internal links

❗If a given URL does not have any outgoing and/or incoming internal links, then it is not passing/receiving any link equity to/from other web pages.

If the URL in question does not “aspire” to rank in Google and/or is just a landing page, then it’s not an issue. In that case, it’s usually the best idea to simply add a “noindex” tag to such a URL.

However, if the URL is an important web page that you want to rank highly and bring in organic traffic, then it may have difficulty being indexed and/or ranked in Google.

[Image: Nofollowed internal links]
This is how Sitebulb reports on internal links that do not receive link equity. In this case, it is OK because these URLs are part of the premium section of SEOSLY that is available only after logging in.

How do you fix this technical SEO issue?

In order to fix this issue, you should add text links (with relevant anchor texts) both from and to that URL. 

  • The incoming links should ideally come from other thematically-related pages. 
  • The outgoing links – similarly – should point to other related web pages. 
  • For example, my SEO audit page should link to and be linked from a similar page like my Core Web Vitals audit.
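If you want a rough scripted check, here is a minimal Python sketch (the URL list is a placeholder, e.g. exported from the XML sitemap) that counts how many internal links each known page receives; pages with a count of zero are candidates for new incoming links:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# Minimal sketch: count incoming internal links for a known set of URLs.
urls = [
    "https://example.com/",            # hypothetical
    "https://example.com/seo-audit/",  # hypothetical
]
inlinks = {u: 0 for u in urls}

for page in urls:
    soup = BeautifulSoup(requests.get(page).text, "html.parser")
    for a in soup.find_all("a", href=True):
        target = urljoin(page, a["href"]).split("#")[0]
        if target in inlinks and target != page:
            inlinks[target] += 1

for url, count in inlinks.items():
    if count == 0:
        print(url, "- no incoming internal links")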

Technical SEO issue #9: Internal and/or external redirects with issues

Both internal and external redirects can lead to a bad user experience and can be misleading for search engine robots (especially if these redirects do not work correctly). 

Similar to canonical URLs, there are a lot of things that can go wrong with redirects on the website.

Here are some of the most common issues in this regard:

  • An internal redirect points to a URL that returns an error status code like 4XX or 5XX.
  • An external redirect points to a URL that returns 4XX or 5XX.
  • The URL redirects back to itself (a redirect loop). 

All of the above example issues – if they relate to a huge number of URLs on the site – can have a negative impact on the site’s crawlability and user experience. Both users and search engine robots may abandon the site if they come across a faulty redirect.

How do you fix this technical SEO issue?

Fortunately, any crawler will show you exactly what URLs have this issue. Here is how you fix it: 

  • In the case of internal URLs, you simply need to update the target URLs so that they return status 200 (OK). 
  • For external redirected URLs, you simply have to remove the links pointing to these redirected URLs or replace them with other URLs returning status code 200 (OK).   
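Here is a minimal Python sketch (the starting URL is a placeholder) that follows a redirect chain, prints every hop, and reports chains that end in an error or a loop:

import requests

# Minimal sketch: trace one redirect chain and report how it ends.
start = "https://example.com/old-page/"  # hypothetical

try:
    r = requests.get(start, timeout=10)
    for hop in r.history:
        print(hop.status_code, hop.url)
    print("Final:", r.status_code, r.url)
    if r.status_code >= 400:
        print("Chain ends in an error - fix or remove the link")
except requests.TooManyRedirects:
    print("Redirect loop detected starting at", start)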

Technical SEO issue #10: Internal links to redirected URLs

If the site has URLs that are redirected to other internal URLs, then it should not link to the redirected URLs but to target URLs. 

While you don’t have control over the external URLs that you link to and whether they become redirected at some point, you have full control over your internal URLs. 

That’s why you should make sure that your site does not link to internally redirected URLs. Instead, all internal links should point to the target URLs. 

[Image: Example technical SEO issue]
Here Sitebulb shows the internally redirected URLs and on what pages these URLs are placed.

For example, if A is redirected to B, you should not place internal links to URL A but to URL B instead. This is not a fatal mistake, but linking directly to target URLs is a very good SEO practice that improves the crawlability of the site.

How do you fix this technical SEO issue?

Both fixing and diagnosing this is very easy if you use a crawler like Sitebulb or Screaming Frog. Once the tool shows you the redirected URLs, your task is to:

  • Prepare the list of these redirected URLs together with their target URLs and the URLs on which they are placed.
  • Replace all the redirected URLs with their target URLs.
  • Depending on the scale of this issue on the site, you may either do it manually or automate it in some way.
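For the automated route, here is a minimal Python sketch (the URL list is a placeholder, e.g. exported from a crawler) that builds a replacement map from redirected URLs to their final targets:

import requests

# Minimal sketch: map redirected internal URLs to their final targets so the
# links can be updated in bulk.
redirected = [
    "https://example.com/a/",  # hypothetical
    "https://example.com/b/",  # hypothetical
]

replacements = {}
for url in redirected:
    r = requests.get(url, timeout=10)
    if r.history and r.status_code == 200:
        replacements[url] = r.url

for old, new in replacements.items():
    print("Replace", old, "with", new)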

Technical SEO issue #11: Invalid and/or incorrect hreflang tags 

If an international website has issues with the hreflang implementation, then it may not be able to correctly communicate the target language and region of its URLs to Google and other search engines.  

Hreflang tags are yet another SEO element that is vulnerable to many different issues, the most important of which include:

  • Hreflang annotations are invalid (either the language or region codes are invalid)
  • Hreflang annotations point to noindexed, disallowed, or canonicalized URLs
  • Hreflang tags point to URLs returning error codes like 4XX or 5XX
  • Hreflang tags point to redirected URLs
  • Hreflang tags conflict with each other
  • Hreflang tags are indicated using multiple methods (in the head, in the HTTP header, and/or in the XML sitemap)
  • Hreflang tags are missing in the case of a multilingual website
  • Return tags are missing 
  • The X-default language is not indicated

How do you fix this technical SEO issue?

✅ To fix these issues, you need to edit hreflang annotations so that none of the above issues exists and hreflangs are valid, point to correct canonical URLs that return status 200 (OK), contain return tags, and have the X-default language specified.

  • Depending on the size of the site, it can be done manually or automatically.
  • Fortunately, every website crawler will tell you if there are issues in the hreflang implementations of the site. 
  • In addition, you can use the International Targeting report in Google Search Console to check if hreflang tags work correctly.
[Image: International Targeting report]
This is the GSC International Targeting report showing errors in hreflang implementations.
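As a quick supplement to these tools, here is a minimal Python sketch (the page URL is a placeholder) that reads a page’s hreflang annotations and checks that every alternate URL links back, i.e. that the return tags are in place:

import requests
from bs4 import BeautifulSoup

# Minimal sketch: check hreflang return tags for one page.
page = "https://example.com/en/"  # hypothetical

def hreflangs(url):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    return {
        tag.get("hreflang"): tag.get("href")
        for tag in soup.find_all("link", rel="alternate")
        if tag.get("hreflang") and tag.get("href")
    }

for lang, alt_url in hreflangs(page).items():
    if page not in hreflangs(alt_url).values():
        print(alt_url, "(" + lang + ") is missing a return tag to", page)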

Technical SEO issue #12: The <head> section containing invalid HTML elements

❗Putting invalid HTML elements in the head may break the <head> or close it too early, which may lead to search engine crawlers missing some important head elements, such as meta robots or canonical link elements.

Here is what to watch out for:

  • If the site contains a <noscript> tag in the head, then that tag can only contain elements such as <meta>, <link>, and <style>.
  • Putting other elements like <h1> or <img> into a <noscript> tag that is placed in the <head> is invalid.
  • If the <noscript> tag is placed in the body, then you can put other elements like <img> into it.

How do you fix this technical SEO issue?

You need to modify the <head> section of the site and remove all the invalid elements from it.

Depending on the type of the site and whether it uses a popular CMS like WordPress, editing the <head> may differ.
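As a rough automated hint, here is a minimal Python sketch (the page URL is a placeholder) that parses a page and flags non-metadata elements sitting directly in the head. Since lenient parsers repair some breakage on their own, always confirm the finding in the raw HTML:

import requests
from bs4 import BeautifulSoup

# Minimal sketch: flag elements in <head> that do not belong there.
page = "https://example.com/"  # hypothetical
ALLOWED = {"title", "meta", "link", "style", "script", "noscript", "base", "template"}

soup = BeautifulSoup(requests.get(page).text, "html.parser")
if soup.head:
    for tag in soup.head.find_all(True, recursive=False):
        if tag.name not in ALLOWED:
            print("Invalid element in <head>:", tag.name)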

Technical SEO issue #13: URLs available at both HTTP and HTTPS

❗It is a serious SEO and security issue if the site is available at both HTTP and HTTPS. It can cause both users and search engines to mistrust the site. In addition, browsers will warn users that the HTTP version of the site is not secure.

It’s awesome that a site has an SSL certificate and loads over HTTPS. However, it’s also vitally important to make sure that all of the HTTP URLs are permanently (301) redirected to the HTTPS version.  

How do you fix this technical SEO issue?

✅ If there are URLs available over HTTP, your task is to make sure they are all permanently redirected (301) to the HTTPS version.

  • Any website crawler will show you if there are URLs available at the HTTP version.
  • The best way to implement these redirects on an Apache server is to add them to the .htaccess file, as shown below.
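For example, on an Apache server with mod_rewrite enabled, a standard HTTP-to-HTTPS redirect in .htaccess looks like this:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]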

Technical SEO issue #14: Mixed content and/or internal HTTP links

❗If the site has mixed content and/or internal links to HTTP URLs, then browsers may display red warnings notifying users that the site is not fully secure.

If the site has an SSL certificate, then all of its URLs, resources, and internal links should be HTTPS. If they aren’t, then the site may be distrusted by both users and search engine crawlers. 

[Image]
Here Sitebulb reports that there are no mixed content issues on this website.

How do you fix this technical SEO issue?

Make sure that all of the site’s resources and URLs are 301-redirected to the HTTPS version, and replace any internal HTTP links with their HTTPS versions.
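Here is a minimal Python sketch (the page URL is a placeholder) that scans an HTTPS page for resources still referenced over HTTP:

import requests
from bs4 import BeautifulSoup

# Minimal sketch: find mixed content (HTTP resources on an HTTPS page).
page = "https://example.com/"  # hypothetical

soup = BeautifulSoup(requests.get(page).text, "html.parser")
for name, attr in [("img", "src"), ("script", "src"), ("link", "href"), ("iframe", "src")]:
    for tag in soup.find_all(name):
        value = tag.get(attr, "")
        if value.startswith("http://"):
            print("Mixed content: <" + name + "> loads " + value)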

Technical SEO issue #15: Technical duplication of content

Technical duplication of the content may result in the creation of thousands or even millions of URLs with identical content. This can negatively influence the crawl budget of the site and Google’s ability to efficiently crawl and index the site.

Technical content duplication takes place when there are multiple indexable and canonical URLs with identical content. These are usually URLs that differ only in letter case or that contain parameters which do not influence the content of the page. For example (with hypothetical URLs), example.com/page/, example.com/Page/, and example.com/page/?sessionid=123 may all serve exactly the same content.

Google – most of the time – knows how to deal with the technical duplication of content but it’s still a great practice to make sure there are no technically duplicate URLs.

How do you fix this technical SEO issue?

The fix, in this case, is usually very simple and all you need to do is add the canonical link element pointing to the canonical “main” version of the URL (without any parameters and in lowercase) on all the technically duplicated URLs. 
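For example, if the same content is served at example.com/page/?sessionid=123 and example.com/Page/ (both hypothetical), each variant should carry a canonical link element like this in its head:

<link rel="canonical" href="https://example.com/page/">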

Once it’s done, all the technically duplicate URLs will appear under Excluded in the GSC Coverage report, if they are not already there.

Olga Zarr is an SEO consultant with 10+ years of experience. She has been doing SEO for both the biggest brands in the world and small businesses. She has done 200+ SEO audits so far. Olga has completed SEO courses and degrees at universities such as UC Davis, University of Michigan, and Johns Hopkins University. She also completed Moz Academy! And, of course, has Google certifications. She keeps learning SEO and loves it. Olga is also a Google Product Expert specializing in areas such as Google Search and Google Webmasters.

5 Comments

  1. Mohd Vasim

    Nice information about SEO. thank u sir keep updating your site.

  2. Boldizar

    One of the most detailed articles I’ve read lately. I especially like the solution for duplicate content issue, so simple… Thanks a lot.

  3. Pravin Patel

    Now that’s what we call a good reference to learn something. I have applied these techniques to one of my websites. let’s see how google picks its ups.

    How you find Sitebulb Vs Screaming frog?

  4. Cyrus Mistry

    Great list on technical SEO. Thank you for sharing the same in detail with great examples.
