Rewriting the Beginner’s Guide to SEO, Chapter 7: Measuring, Prioritizing, & Executing SEO

Posted by BritneyMuller

It’s finally here, for your review and feedback: Chapter 7 of
the new Beginner’s Guide to SEO, the last chapter. We cap off the
guide with advice on how to measure, prioritize, and execute on
your SEO. And if you missed them, check out the drafts of our
outline, Chapter One, Chapter Two, Chapter Three, Chapter Four,
Chapter Five, and Chapter Six for your reading pleasure. As always,
let us know what you think of Chapter 7 in the comments!

Set yourself up for success.

They say if you can measure something, you can improve it.

In SEO, it’s no different. Professional SEOs track everything
from rankings and conversions to lost links and more to help prove
the value of SEO. Measuring the impact of your work and ongoing
refinement is critical to your SEO success, client retention, and
perceived value.

It also helps you pivot your priorities when something isn’t
working.

Start with the end in mind

While it’s common to have multiple goals (both macro and
micro), establishing one specific primary end goal is
essential.

The only way to know what a website’s primary end goal should
be is to have a strong understanding of the website’s goals
and/or client needs. Good
client questions
are not only helpful in strategically
directing your efforts, but they also show that you care.

Client question examples:

  1. Can you give us a brief history of your company?
  2. What is the monetary value of a newly qualified lead?
  3. What are your most profitable services/products (in
    order)?

Keep the following tips in mind while establishing a website’s
primary goal, additional goals, and benchmarks:

Goal setting tips

  • Measurable: If you can’t measure it, you can’t improve
    it.
  • Be specific: Don’t let vague industry marketing jargon water
    down your goals.
  • Share your goals:
    Studies have shown
    that writing down and sharing your goals
    with others boosts your chances of achieving them.

Measuring

Now that you’ve set your primary goal, evaluate which
additional metrics could help support your site in reaching its end
goal. Measuring additional (applicable) benchmarks can help you
keep a better pulse on current site health and progress.

Engagement metrics

How are people behaving once they reach your site? That’s the
question that engagement metrics seek to answer. Some of the most
popular metrics for measuring how people engage with your content
include:

Conversion rate – The number of conversions (for a single
desired action/goal) divided by the number of unique visits. A
conversion rate can be applied to anything, from an email signup to
a purchase to account creation. Knowing your conversion rate can
help you gauge the return on investment (ROI) your website traffic
might deliver.
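As a hypothetical worked example: 40 conversions from 2,000 unique
visits is a conversion rate of 40 ÷ 2,000 = 2%.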

In Google Analytics, you can set up
goals
to measure how well your site accomplishes its
objectives. If your objective for a page is a form fill, you can
set that up as a goal. When site visitors accomplish the task,
you’ll be able to see it in your reports.

Time on page – How long did people spend on your page? If you
have a 2,000-word blog post that visitors are only spending an
average of 10 seconds on, the chances are slim that this content is
being consumed (unless they’re a mega-speed reader). However, if
a URL has a low time on page, that’s not necessarily bad either.
Consider the intent of the page. For example, it’s normal for
“Contact Us” pages to have a low average time on page.

Pages per visit – Was the goal of your page to keep readers
engaged and take them to a next step? If so, then pages per visit
can be a valuable engagement metric. If the goal of your page is
independent of other pages on your site (ex: visitor came, got what
they needed, then left), then low pages per visit are okay.

Bounce rate – “Bounced” sessions indicate that a searcher
visited the page and left without browsing your site any further.
Many people try to lower this metric because they believe it’s
tied to website quality, but it actually tells us very little about
a user’s experience. We’ve seen cases of bounce rate spiking
for redesigned restaurant websites that are doing better than ever.
Further investigation discovered that people were simply coming to
find business hours, menus, or an address, then bouncing with the
intention of visiting the restaurant in person. A better metric to
gauge page/site quality is scroll depth.

Scroll depth – This measures how far visitors scroll down
individual webpages. Are visitors reaching your important content?
If not, test different ways of providing the most important content
higher up on your page, such as multimedia, contact forms, and so
on. Also consider the quality of your content. Are you omitting
needless words? Is it enticing for the visitor to continue down the
page? Scroll depth tracking can be set up in your Google
Analytics.
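
One common approach, sketched below with a hypothetical 75%
threshold and assuming the site already loads Google’s gtag.js
snippet, is to fire a one-time custom event once a visitor scrolls
past a certain depth. (Google Tag Manager also offers a built-in
Scroll Depth trigger that accomplishes the same thing without
code.)

// Minimal sketch: report a custom event at 75% scroll depth.
// Assumes gtag.js is installed; the event name is arbitrary.
let scrollEventFired = false;
window.addEventListener('scroll', () => {
  const depth =
    (window.scrollY + window.innerHeight) /
    document.documentElement.scrollHeight;
  if (!scrollEventFired && depth >= 0.75) {
    scrollEventFired = true;
    gtag('event', 'scroll_75'); // shows up as an event in your GA reports
  }
});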

Search traffic

Ranking is a valuable SEO metric, but measuring your site’s
organic performance can’t stop there. The goal of showing up in
search is to be chosen by searchers as the answer to their query.
If you’re ranking but not getting any traffic, you have a
problem.

But how do you even determine how much traffic your site is
getting from search? One of the most precise ways to do this is
with Google Analytics.

Using Google Analytics to uncover traffic insights

Google Analytics (GA) is bursting at the seams with data — so
much so that it can be overwhelming if you don’t know where to
look. This is not an exhaustive list, but rather a general guide to
some of the traffic data you can glean from this free tool.

Isolate organic traffic – GA allows you to view traffic to your
site by channel. This will mitigate any scares caused by changes to
another channel (ex: total traffic dropped because a paid campaign
was halted, but organic traffic remained steady).

Traffic to your site over time – GA allows you to view total
sessions/users/pageviews to your site over a specified date range,
as well as compare two separate ranges.

How many visits a particular page has received – Site Content
reports in GA are great for evaluating the performance of a
particular page — for example, how many unique visitors it
received within a given date range.

Traffic from a specified campaign – You can use UTM (urchin
tracking module) codes for better attribution. Designate
the source, medium, and campaign
, then append the codes to the
end of your URLs. When people start clicking on your UTM-code
links, that data will start to populate in GA’s “campaigns”
report.
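
For example, a link in a (hypothetical) email newsletter promoting
a spring sale might be tagged like this:

https://www.example.com/sale?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale

GA would then attribute visits from that link to the source
“newsletter,” the medium “email,” and the campaign “spring_sale.”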

Click-through rate (CTR) – Your CTR from search results to a
particular page (meaning the percent of people that clicked your
page from search results) can provide insights on how well you’ve
optimized your page title and meta description. You can find this
data in Google Search Console, a free Google tool.

In addition, Google Tag Manager is a free tool that allows you
to manage and deploy tracking pixels to your website without having
to modify the code. This makes it much easier to track specific
triggers or activity on a website.

Additional common SEO metrics

  • Domain Authority & Page Authority (DA/PA) – Moz’s
    proprietary authority metrics provide powerful insights at a glance
    and are best used as benchmarks relative to your competitors’
    Domain
    Authority
    and Page Authority.
  • Keyword rankings – A website’s ranking position for desired
    keywords. This should also include SERP feature data, like featured
    snippets and People Also Ask boxes that you’re ranking for. Try
    to avoid vanity metrics, such as rankings for competitive keywords
    that are desirable but often too vague and don’t convert as well
    as longer-tail keywords.
  • Number of backlinks – Total number of links pointing to your
    website or the number of unique linking root domains (meaning one
    per unique website, as websites often link out to other websites
    multiple times). While these are both common link metrics, we
    encourage you to look more closely at the quality of backlinks and
    linking root domains your site has.

How to track these metrics

There are lots of different tools available for keeping track of
your site’s position in SERPs, site crawl health, SERP features,
and link metrics, such as Moz Pro and STAT.

The Moz and STAT APIs (among other tools) can also be pulled
into Google Sheets or other customizable dashboard platforms for
clients and quick at-a-glance SEO check-ins. This also allows you
to provide more refined views of only the metrics you care
about.

Dashboard tools like Data Studio, Tableau, and PowerBI can also
help to create interactive data visualizations.

Evaluating a site’s health with an SEO website audit

By having an understanding of certain aspects of your website
— its current position in search, how searchers are interacting
with it, how it’s performing, the quality of its content, its
overall structure, and so on — you’ll be able to better uncover
SEO opportunities. Leveraging the search engines’ own tools can
help surface those opportunities, as well as potential issues:

  • Google
    Search Console
    – If you haven’t already, sign up for a
    free Google Search Console (GSC) account and verify your
    website(s). GSC is full of actionable reports you can use to detect
    website errors, opportunities, and user engagement.
  • Bing Webmaster
    Tools
    – Bing Webmaster Tools has similar functionality to GSC.
    Among other things, it shows you how your site is performing in
    Bing and opportunities for improvement.
  • Lighthouse
    Audit
    – Google’s automated tool for measuring a website’s
    performance, accessibility, progressive web apps, and more. This
    data improves your understanding of how a website is performing.
    Gain specific speed and accessibility insights for a website
    here.
  • PageSpeed
    Insights
    – Provides website performance insights using
    Lighthouse and Chrome User Experience Report data from real user
    measurement (RUM) when available.
  • Structured Data Testing Tool – Validates that a website is
    using schema markup (structured data) properly; a sample of such
    markup appears just after this list.
  • Mobile-Friendly
    Test
    – Evaluates how easily a user can navigate your website on
    a mobile device.
  • Web.dev – Surfaces website
    improvement insights using Lighthouse and provides the ability to
    track progress over time.
  • Tools for
    web devs and SEOs
    – Google often provides new tools for web
    developers and SEOs alike, so keep an eye on any new releases
    here.
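
As a point of reference, here is a minimal (hypothetical) piece of
JSON-LD schema markup describing an organization, the kind of
structured data these tools validate:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co.",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png"
}
</script>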

While we don’t have room to cover every SEO audit check you
should perform in this guide, we do offer an in-depth Technical SEO
Site Audit course
for more info. When auditing your site, keep
the following in mind:

Crawlability: Are your primary web pages crawlable by search
engines, or are you accidentally blocking Googlebot or Bingbot via
your robots.txt file? Does the website have an accurate sitemap.xml
file in place to help direct crawlers to your primary pages?
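
As a quick illustration, a minimal robots.txt (the paths below are
hypothetical) that allows crawling, blocks a private directory, and
points crawlers to the sitemap might look like this:

User-agent: *
Disallow: /admin/
Sitemap: https://www.example.com/sitemap.xml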

Indexed pages: Can your primary pages be found using Google?
Doing a site:yoursite.com OR site:yoursite.com/specific-page check
in Google can help answer this question. If you notice some are
missing, check to make sure a meta robots=noindex tag isn’t
excluding pages that should be indexed and found in search
results.
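
The tag in question lives in a page’s <head> and looks like this:

<meta name="robots" content="noindex">

Any page carrying it will be dropped from (or never added to) the
index, so make sure it only appears on pages you truly want
excluded.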

Check page titles & meta descriptions: Do your titles and
meta descriptions do a good job of summarizing the content of each
page? How are their CTRs in search results, according to Google
Search Console? Are they written in a way that entices searchers to
click your result over the other ranking URLs? Which pages could be
improved? Site-wide crawls are
essential for discovering on-page and technical SEO
opportunities.

Page speed: How does your website perform on mobile devices and
in Lighthouse? Which images could be compressed to improve load
time?

Content quality: How well does the current content of the
website meet the target market’s needs? Is the content 10X better
than other ranking websites’ content? If not, what could you do
better? Think about things like richer content, multimedia, PDFs,
guides, audio content, and more.

Pro tip: Website pruning!

Removing thin, old, low-quality, or rarely visited pages from
your site can help improve your website’s perceived quality.
Performing a content
audit
will help you discover these pruning opportunities. Three
primary ways to prune pages include:

  1. Delete the page (4XX): Use when a page adds no value (ex:
    traffic, links) and/or is outdated.
  2. Redirect (3XX): Redirect the URLs of pages you’re pruning
    when you want to preserve the value they add to your site, such as
    inbound links to that old URL (see the example redirect after this
    list).
  3. NoIndex: Use this when you want the page to remain on your site
    but be removed from the index.
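
As an example of the redirect option, on an Apache server (assuming
mod_alias is enabled; the URLs are hypothetical) a permanent
redirect can be a single line in the site’s .htaccess file:

Redirect 301 /old-blog-post/ https://www.example.com/new-blog-post/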

Keyword research and competitive website analysis (performing
audits on your competitors’ websites) can also provide rich
insights on opportunities for your own website.

For example:

  • Which keywords are competitors ranking on page 1 for, but your
    website isn’t?
  • Which keywords is your website ranking on page 1 for that also
    have a featured snippet? You might be able to provide better
    content and take over that snippet.
  • Which websites link to more than one of your competitors, but
    not to your website?

Discovering website content and performance opportunities will help
you devise a more data-driven SEO plan of attack! Keep an ongoing
list so you can prioritize your tasks effectively.

Prioritizing your SEO fixes

In order to prioritize SEO fixes effectively, it’s essential
to first have specific, agreed-upon goals established between you
and your client.

While there are a million different ways you could prioritize your
SEO work, we suggest ranking fixes in terms of importance and
urgency. Which fixes could provide the most ROI for a website and
help support your agreed-upon goals?

Stephen Covey, author of The 7 Habits of Highly Effective
People, developed a handy time management grid that can ease the
burden of prioritization:

Source: Stephen Covey, The 7 Habits of Highly Effective People
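
The grid crosses urgency with importance. Since the original is an
image, here is a plain-text sketch of its four quadrants:

  • Quadrant 1: Urgent & important
  • Quadrant 2: Not urgent & important
  • Quadrant 3: Urgent & not important
  • Quadrant 4: Not urgent & not important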

Putting out small, urgent SEO fires might feel most effective in
the short term, but this often leads to neglecting non-urgent
important fixes. The not urgent & important items are
ultimately what often move the needle for a website’s SEO.
Don’t put these off.

SEO planning & execution

“Without strategy, execution is aimless. Without
execution, strategy is useless.”
– Morris Chang

Much of your success depends on effectively mapping out and
scheduling your SEO tasks. You can use free tools like Google
Sheets to plan out your SEO execution (we have a
free template here
), but you can use whatever method works best
for you. Some people prefer to schedule out their SEO tasks in
their Google Calendar, in a kanban or scrum board, or in a daily
planner.

Use what works for you and stick to it.

Measuring your progress along the way via the metrics mentioned
above will help you monitor your effectiveness and allow you to
pivot your SEO efforts when something isn’t working. Say, for
example, you changed a primary page’s title and meta description,
only to notice that the CTR for that page decreased. Perhaps you
changed it to something too vague or strayed too far from the
on-page topic — it might be good to try a different approach.
Keeping an eye on drops in rankings, CTRs, organic traffic, and
conversions can help you manage hiccups like this early, before
they become a bigger problem.

Communication is essential for SEO client longevity

Many SEO fixes are implemented without being noticeable to a
client (or user). This is why it’s essential to employ good
communication skills around your SEO plan, the time frame in which
you’re working, and your benchmark metrics, as well as frequent
check-ins and reports.

Sign up for The Moz Top
10
, a semimonthly mailer updating you on the top ten hottest
pieces of SEO news, tips, and rad links uncovered by the Moz team.
Think of it as your exclusive digest of stuff you don’t have time
to hunt down but want to read!

https://ift.tt/2UeXmrh

Rewriting the Beginner’s Guide to SEO, Chapter 6: Link Building & Establishing Authority

Posted by BritneyMuller

In Chapter 6 of the new Beginner’s Guide to SEO, we’ll be
covering the dos and don’ts of link building and ways your site can
build its authority. If you missed them, we’ve got the drafts of
our outline, Chapter One, Chapter Two, Chapter Three, Chapter Four,
and Chapter Five for your reading pleasure. Be sure to let us know
what you think of Chapter 6 in the comments!

Chapter 6: Link Building & Establishing Authority

Turn up the volume.

You’ve created content that people are searching for, that
answers their questions, and that search engines can understand,
but those qualities alone don’t mean it’ll rank. To outrank the
rest of the sites with those qualities, you have to establish
authority. That can be accomplished by earning links from
authoritative websites, building your brand, and nurturing an
audience who will help amplify your content.

Google has
confirmed
that links and quality content (which we covered back
in Chapter 4) are two of the three most important ranking factors
for SEO. Trustworthy sites tend to link to other trustworthy sites,
and spammy sites tend to link to other spammy sites. But what is a
link, exactly? How do you go about earning them from other
websites? Let’s start with the basics.

What are links?

Inbound links, also known as backlinks or external links, are
HTML hyperlinks that point from one website to another. They’re the
currency of the Internet, as they act a lot like real-life
reputation. If you went on vacation and asked three people (all
completely unrelated to one another) what the best coffee shop in
town was, and they all said, “Cuppa Joe on Main Street,” you would
feel confident that Cuppa Joe is indeed the best coffee place in
town. Links do that for search engines.

Since the late 1990s, search engines have treated links as votes
for popularity and importance on the web.

Internal
links
, or links that connect internal pages of the same domain,
work very similarly for your website. A large number of internal
links pointing to a particular page on your site will provide a
signal to Google that the page is important, so long as the linking
is done naturally and not in a spammy way.

The engines themselves have refined the way they view links, now
using algorithms to evaluate sites and pages based on the links
they find. But what’s in those algorithms? How do the engines
evaluate all those links? It all starts with the concept of
E-A-T.

You are what you E-A-T

Google’s
Search Quality Rater Guidelines
put a great deal of importance
on the concept of E-A-T — an acronym for expert, authoritative,
and trustworthy. Sites that don’t display these characteristics
tend to be seen as lower-quality in the eyes of the engines, while
those that do are subsequently rewarded. E-A-T is becoming more and
more important as search evolves and increases the importance of
solving for user intent.

Creating a site that’s considered expert, authoritative, and
trustworthy should be your guiding light as you practice SEO. Not
only will it simply result in a better site, but it’s future-proof.
After all, providing great value to searchers is what Google itself
is trying to do.

E-A-T and links to your site

The more popular and important a site is, the more weight the
links from that site carry. A site like Wikipedia, for example, has
thousands of diverse sites linking to it. This indicates it
provides lots of expertise, has cultivated authority, and is
trusted among those other sites.

To earn trust and authority with search engines, you’ll need
links from websites that display the qualities of E-A-T. These
don’t have to be Wikipedia-level sites, but they should provide
searchers with credible, trustworthy content.

  • Tip: Moz has proprietary metrics to help you determine how
    authoritative a site is: Domain Authority,
    Page
    Authority
    , and Spam
    Score
    . In general, you’ll want links from sites with a higher
    Domain Authority than your site.

Followed vs. nofollowed links

Remember how links act as votes? The rel=nofollow attribute
(pronounced as two words, “no follow”) allows you to link to a
resource while removing your “vote” for search engine purposes.

Just like it sounds, “nofollow” tells search engines not to
follow the link. Some engines still follow them simply to discover
new pages, but these links don’t pass link equity (the “votes of
popularity” we talked about above), so they can be useful in
situations where a page is either linking to an untrustworthy
source or was paid for or created by the owner of the destination
page (making it an unnatural link).

Say, for example, you write a post about link building
practices, and want to call out an example of poor, spammy link
building. You could link to the offending site without signaling to
Google that you trust it.

Standard links (ones that haven’t had nofollow added) look like
this:

<a href="https://moz.com">I love Moz</a>

Nofollow link markup looks like this:

<a href="https://moz.com" rel="nofollow">I love Moz</a>

If follow links pass all the link equity, shouldn’t that
mean you want only follow links?

Not necessarily. Think about all the legitimate places you can
create links to your own website: a Facebook profile, a Yelp page,
a Twitter account, etc. These are all natural places to add links
to your website, but they shouldn’t count as votes for your
website. (Setting up a Twitter profile with a link to your site
isn’t a vote from Twitter that they like your site.)

It’s natural for your site to have a balance between nofollowed
and followed backlinks in its link profile (more on link profiles
below). A nofollow link might not pass authority, but it could send
valuable traffic to your site and even lead to future followed
links.

  • Tip: Use the MozBar extension for
    Google Chrome to highlight links on any page to find out whether
    they’re nofollow or follow without ever having to view the source
    code!

Your link profile

Your link profile is an overall assessment of all the inbound
links your site has earned: the total number of links, their
quality (or spamminess), their diversity (is one site linking to
you hundreds of times, or are hundreds of sites linking to you
once?), and more. The state of your link profile helps search
engines understand how your site relates to other sites on the
Internet. There are various SEO tools that allow you to analyze
your link profile and begin to understand its overall makeup.

How can I see which inbound links point to my
website?

Visit Moz Link
Explorer
and type in your site’s URL. You’ll be able to see how
many and which websites are linking back to you.

What are the qualities of a healthy link profile?

When people began to learn about the power of links, they began
manipulating them for their benefit. They’d find ways to gain
artificial links just to increase their search engine rankings.
While these dangerous tactics can sometimes work, they are against
Google’s terms of service and can get a website deindexed (removal
of web pages or entire domains from search results). You should
always try to maintain a healthy link profile.

A healthy link profile is one that indicates to search engines
that you’re earning your links and authority fairly. Just like you
shouldn’t lie, cheat, or steal, you should strive to ensure your
link profile is honest and earned via your hard work.

Links are earned or editorially placed

Editorial links are links added naturally by sites and pages
that want to link to your website.

The foundation of acquiring earned links is almost always
through creating high-quality content that people genuinely wish to
reference. This is where creating
10X content
(a way of describing extremely high-quality
content) is essential! If you can provide the best and most
interesting resource on the web, people will naturally link to
it.

Naturally earned links require no specific action from you,
other than the creation of worthy content and the ability to create
awareness about it.

  • Tip: Earned mentions are often unlinked! When websites are
    referring to your brand or a specific piece of content you’ve
    published, they will often mention it without linking to it. To
    find these earned mentions, use Moz’s Fresh Web Explorer.
    You can then reach out to those publishers to see if they’ll update
    those mentions with links.

Links are relevant and from topically similar websites

Links from websites within a topic-specific community are
generally better than links from websites that aren’t relevant to
your site. If your website sells dog houses, a link from the
Society of Dog Breeders matters much more than one from the Roller
Skating Association. Additionally, links from topically irrelevant
sources can send confusing signals to search engines regarding what
your page is about.

  • Tip: Linking domains don’t have to match the topic of your page
    exactly, but they should be related. Avoid pursuing backlinks from
    sources that are completely off-topic; there are far better uses of
    your time.

Anchor text is descriptive and relevant, without being spammy

Anchor text
helps tell Google what the topic of your page is about. If dozens
of links point to a page with a variation of a word or phrase, the
page has a higher likelihood of ranking well for those types of
phrases. However, proceed with caution! Too many backlinks with the
same anchor text could indicate to the search engines that you’re
trying to manipulate your site’s ranking in search results.

Consider this: you ask ten separate friends, at separate times, how
their day is going, and they each respond with the same phrase:

“Great! I started my day by walking my dog, Peanut, and then had
a picante beef Top Ramen for lunch.”

That’s strange, and you’d be quite suspicious of your friends.
The same goes for Google. Describing the content of the target page
with the anchor text helps them understand what the page is about,
but the same description over and over from multiple sources starts
to look suspicious. Aim for relevance; avoid spam.

  • Tip: Use the “Anchor Text” report in Moz’s Link Explorer to see what
    anchor text other websites are using to link to your content.

Links send qualified traffic to your site

Link building should never be solely about search engine
rankings. Esteemed SEO and link building thought leader Eric Ward used to say that you
should build your links as though Google might disappear tomorrow.
In essence, you should focus on acquiring links that will bring
qualified traffic to your website — another reason why it’s
important to acquire links from relevant websites whose audience
would find value in your site, as well.

  • Tip: Use the “Referral Traffic” report in Google Analytics to
    evaluate websites that are currently sending you traffic. How can
    you continue to build relationships with similar types of
    websites?

Link building don’ts & things to avoid

Spammy link profiles are just that: full of links built in
unnatural, sneaky, or otherwise low-quality ways. Practices like
buying links or engaging in a link exchange might seem like the
easy way out, but doing so is dangerous and could put all of your
hard work at risk. Google penalizes
sites with spammy link profiles
, so don’t give in to
temptation.

A guiding principle for your link building efforts is to never
try to manipulate a site’s ranking in search results. But isn’t
that the entire goal of SEO? To increase a site’s ranking in search
results? And herein lies the confusion. Google wants you to earn
links, not build them, but the line between the two is often
blurry. To avoid penalties for unnatural links (known as “link
spam”), Google has made clear what should be avoided.

Purchased links

Google and Bing both seek to discount the influence of paid
links in their organic search results. While a search engine can’t
know which links were earned vs. paid for from viewing the link
itself, there are clues it uses to detect patterns that indicate
foul play. Websites caught buying or selling followed links risk
severe penalties that will severely drop their rankings. (By the
way, exchanging goods or services for a link is also a form of
payment and qualifies as buying links.)

Link exchanges / reciprocal linking

If you’ve ever received a “you link to me and I’ll link to you”
email from someone you have no affiliation with, you’ve been
targeted for a link exchange. Google’s quality guidelines caution
against “excessive” link exchange and similar partner programs
conducted exclusively for the sake of cross-linking, so there is
some indication that this type of exchange on a smaller scale might
not trigger any link spam alarms.

It is acceptable, and even valuable, to link to people you work
with, partner with, or have some other affiliation with and have
them link back to you.

It’s the exchange of links at mass scale with unaffiliated sites
that can warrant penalties.

Low-quality directory links

These used to be a popular source of manipulation. A large
number of pay-for-placement web directories exist to serve this
market and pass themselves off as legitimate, with varying degrees
of success. These types of sites tend to look very similar, with
large lists of websites and their descriptions (typically, the
site’s critical keyword is used as the anchor text to link back to
the submitter’s site).

There are many
more manipulative link building tactics
that search engines
have identified. In most cases, they have found algorithmic methods
for reducing their impact. As new spam systems emerge, engineers
will continue to fight them with targeted algorithms, human
reviews, and the collection of spam reports from webmasters and
SEOs. By and large, it isn’t worth finding ways around them.

If your site does get a manual penalty, there are steps
you can take to get it lifted
.

How to build high-quality backlinks

Link building comes in many shapes and sizes, but one thing is
always true: link campaigns should always match your unique goals.
With that said, there are some popular methods that tend to work
well for most campaigns. This is not an exhaustive list, so visit
Moz’s blog
posts on link building
for more detail on this topic.

Find customer and partner links

If you have partners you work with regularly, or loyal customers
that love your brand, there are ways to earn links from them with
relative ease. You might send out partnership badges (graphic icons
that signify mutual respect), or offer to write up testimonials of
their products. Both of those offer things they can display on
their website along with links back to you.

Publish a blog

This content and link building strategy is so popular and
valuable that it’s one of the few recommended personally by the
engineers at Google. Blogs have the unique ability to contribute
fresh material on a consistent basis, generate conversations across
the web, and earn listings and links from other blogs.

Careful, though — you should avoid low-quality guest posting
just for the sake of link building. Google has advised against this
and your energy is better spent elsewhere.

Create unique resources

Creating unique, high quality resources is no easy task, but
it’s well worth the effort. High quality content that is promoted
in the right ways can be widely shared. It can help to create
pieces that have the following traits:

Creating a resource like this is a great way to attract a lot of
links with one page. You could also create a highly-specific
resource — without as broad of an appeal — that targeted a
handful of websites. You might see a higher rate of success, but
that approach isn’t as scalable.

Users who see this kind of unique content often want to share it
with friends, and bloggers/tech-savvy webmasters who see it will
often do so through links. These high quality, editorially earned
votes are invaluable to building trust, authority, and rankings
potential.

Build resource pages

Resource pages are a great way to build links. However, to find
them you’ll want to know some Advanced Google
operators
to make discovering them a bit easier.

For example, if you were doing link building for a company that
made pots and pans, you could search for: cooking
intitle:”resources” and see which pages might be good link
targets.

This can also give you great ideas for content creation — just
think about which types of resources you could create that these
pages would all like to reference/link to.

Get involved in your local community

For a local business (one that meets its customers in person),
community outreach can result in some of the most valuable and
influential links.

  • Engage in sponsorships and scholarships.
  • Host or participate in community events, seminars, workshops,
    and organizations.
  • Donate to worthy local causes and join local business
    associations.
  • Post jobs and offer internships.
  • Promote loyalty programs.
  • Run a local competition.
  • Develop real-world relationships with related local businesses
    to discover how you can team up to improve the health of your local
    economy.

All of these smart and authentic strategies provide good local
link opportunities.

Refurbish top content

You likely already know which of your site’s content earns the
most traffic, converts the most customers, or retains visitors for
the longest amount of time.

Take that content and refurbish
it for other platforms
(Slideshare, YouTube, Instagram, Quora,
etc.) to expand your acquisition funnel beyond Google.

You can also dust off, update, and simply republish older
content on the same platform. If you discover that a few trusted
industry websites all linked to a popular resource that’s gone
stale, update it and let those industry websites know — you may
just earn a good link.

You can also do this with images.
Reach out to websites that are using your images and not
citing/linking back to you and ask if they’d mind including a
link.

Be newsworthy

Earning the attention of the press, bloggers, and news media is
an effective, time-honored way to earn links. Sometimes this is as
simple as giving something away for free, releasing a great new
product, or stating something controversial. Since so much of SEO
is about creating a digital representation of your brand in the
real world, to succeed in SEO, you have to be a great brand.

Be personal and genuine

The most common mistake new SEOs make when trying to build links
is not taking the time to craft a custom, personal, and valuable
initial outreach email. You know as well as anyone how annoying
spammy emails can be, so make sure yours doesn’t make people roll
their eyes.

Your goal for an initial outreach email is simply to get a
response. These tips can help:

  • Make it personal by mentioning something the person is working
    on, where they went to school, their dog, etc.
  • Provide value. Let them know about a broken link on their
    website or a page that isn’t working on mobile.
  • Keep it short.
  • Ask one simple question (typically not for a link; you’ll
    likely want to build a rapport first).

Pro Tip:

Earning links can be very resource-intensive, so you’ll likely
want to measure your success to prove the value of those
efforts.

Metrics for link building should match up with the site’s
overall KPIs. These might be sales, email subscriptions, page
views, etc. You should also evaluate Domain and/or Page Authority
scores, the ranking of desired keywords, and the amount of traffic
to your content — but we’ll talk more about measuring the success
of your SEO campaigns in Chapter 7.

Beyond links: How awareness, amplification, and sentiment impact
authority

A lot of the methods you’d use to build links will also
indirectly build your brand. In fact, you can view link building as
a great way to increase awareness of your brand, the topics on
which you’re an authority, and the products or services you
offer.

Once your target audience knows about you and you have valuable
content to share, let your audience know about it! Sharing your
content on social platforms will not only make your audience aware
of your content, but it can also encourage them to amplify that
awareness to their own networks, thereby extending your own
reach.

Are social shares the same as links? No. But shares to the right
people can result in links. Social shares can also promote an
increase in traffic and new visitors to your website, which can
grow brand awareness, and with a growth in brand awareness can come
a growth in trust and links. The connection between social signals
and rankings seems indirect, but even indirect correlations can be
helpful for informing strategy.

Trustworthiness goes a long way

For search engines, trust is largely determined by the quality
and quantity of the links your domain has earned, but that’s not to
say that there aren’t other factors at play that can influence your
site’s authority. Think about all the different ways you come to
trust a brand:

  • Awareness (you know they exist)
  • Helpfulness (they provide answers to your questions)
  • Integrity (they do what they say they will)
  • Quality (their product or service provides value; possibly more
    than others you’ve tried)
  • Continued value (they continue to provide value even after
    you’ve gotten what you needed)
  • Voice (they communicate in unique, memorable ways)
  • Sentiment (others have good things to say about their
    experience with the brand)

That last point is what we’re going to focus on here. Reviews of
your brand, its products, or its..

http://bit.ly/2B6AT8g

Rewriting the Beginner’s Guide to SEO, Chapter 5: Technical Optimization

Posted by BritneyMuller

After a short break, we’re back to share our working draft of Chapter 5 of the Beginner’s Guide to SEO with you! This one was a whopper, and we’re really looking forward to your input. Giving beginner SEOs a solid grasp of just what technical optimization for SEO is and why it matters — without overwhelming them or scaring them off the subject — is a tall order indeed. We’d love to hear what you think: did we miss anything you think is important for beginners to know? Leave us your feedback in the comments!

And in case you’re curious, check back on our outline, Chapter One, Chapter Two, Chapter Three, and Chapter Four to see what we’ve covered so far.

Chapter 5: Technical Optimization

Basic technical knowledge will help you optimize your site for search engines and establish credibility with developers.

Now that you’ve crafted valuable content on the foundation of solid keyword research, it’s important to make sure it’s not only readable by humans, but by search engines too!

You don’t need to have a deep technical understanding of these concepts, but it is important to grasp what these technical assets do so that you can speak intelligently about them with developers. Speaking your developers’ language is important because you will likely need them to carry out some of your optimizations. They’re unlikely to prioritize your asks if they can’t understand your request or see its importance. When you establish credibility and trust with your devs, you can begin to tear away the red tape that often blocks crucial work from getting done.

Pro tip: SEOs need cross-team support to be effective

It’s vital to have a healthy relationship with your developers so that you can successfully tackle SEO challenges from both sides. Don’t wait until a technical issue causes negative SEO ramifications to involve a developer. Instead, join forces for the planning stage with the goal of avoiding the issues altogether. If you don’t, it can cost you in time and money later.

Beyond cross-team support, understanding technical optimization for SEO is essential if you want to ensure that your web pages are structured for both humans and crawlers. To that end, we’ve divided this chapter into three sections:

  1. How websites work
  2. How search engines understand websites
  3. How users interact with websites

Since the technical structure of a site can have a massive impact on its performance, it’s crucial for everyone to understand these principles. It might also be a good idea to share this part of the guide with your programmers, content writers, and designers so that all parties involved in a site’s construction are on the same page.

1. How websites work

If search engine optimization is the process of optimizing a website for search, SEOs need at least a basic understanding of the thing they’re optimizing!

Below, we outline the website’s journey from domain name purchase all the way to its fully rendered state in a browser. An important component of the website’s journey is the critical rendering path, which is the process of a browser turning a website’s code into a viewable page.

Understanding how websites work is important for SEOs for a few reasons:

  • The steps in this webpage assembly process can affect page load times, and speed is not only important for keeping users on your site, but it’s also one of Google’s ranking factors.
  • Google renders certain resources, like JavaScript, on a “second pass.” Google will look at the page without JavaScript first, then a few days to a few weeks later, it will render JavaScript, meaning SEO-critical elements that are added to the page using JavaScript might not get indexed.

Imagine that the website loading process is your commute to work. You get ready at home, gather your things to bring to the office, and then take the fastest route from your home to your work. It would be silly to put on just one of your shoes, take a longer route to work, drop your things off at the office, then immediately return home to get your other shoe, right? That’s sort of what inefficient websites do. This chapter will teach you how to diagnose where your website might be inefficient, what you can do to streamline, and the positive ramifications on your rankings and user experience that can result from that streamlining.

Before a website can be accessed, it needs to be set up!

  1. Domain name is purchased. Domain names like moz.com are purchased from a domain name registrar such as GoDaddy or HostGator. These registrars are just organizations that manage the reservations of domain names.
  2. Domain name is linked to IP address. The Internet doesn’t understand names like “moz.com” as website addresses without the help of domain name servers (DNS). The Internet uses a series of numbers called an Internet protocol (IP) address (ex: 127.0.0.1), but we want to use names like moz.com because they’re easier for humans to remember. We need to use a DNS to link those human-readable names with machine-readable numbers.
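
To make step 2 a little more concrete, the link between a name and an IP address is stored in a DNS record. A simplified sketch of an “A” record might look like this (the domain and IP below are placeholders; the IP comes from a range reserved for documentation):

  example.com.    3600    IN    A    203.0.113.10

The 3600 is the record’s time-to-live in seconds (how long the answer can be cached), and 203.0.113.10 is the address a browser would actually connect to.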

How a website gets from server to browser

  1. User requests domain. Now that the name is linked to an IP address via DNS, people can request a website by typing the domain name directly into their browser or by clicking on a link to the website.
  2. Browser makes requests. That request for a web page prompts the browser to make a DNS lookup request to convert the domain name to its IP address. The browser then makes a request to the server for the code your web page is constructed with, such as HTML, CSS, and JavaScript.
  3. Server sends resources. Once the server receives the request for the website, it sends the website files to be assembled in the searcher’s browser.
  4. Browser assembles the web page. The browser has now received the resources from the server, but it still needs to put it all together and render the web page so that the user can see it in their browser. As the browser parses and organizes all the web page’s resources, it’s creating a Document Object Model (DOM). The DOM is what you can see when you right click + “inspect element” on a web page in your Chrome browser (learn how to inspect elements in other browsers).
  5. Browser makes final requests. The browser will only show a web page after all the page’s necessary code is downloaded, parsed, and executed, so at this point, if the browser needs any additional code in order to show your website, it will make an additional request from your server.
  6. Website appears in browser. Whew! After all that, your website has now been transformed (rendered) from code to what you see in your browser.

Pro tip: Talk to your developers about async!

Something you can bring up with your developers is shortening the critical rendering path by setting scripts to “async” when they’re not needed to render content above the fold, which can make your web pages load faster. Async tells the browser that it can keep assembling the DOM while it fetches the script in the background. If DOM assembly has to pause every time the browser fetches and executes a script (these are called “render-blocking scripts”), it can substantially slow down your page load.

It would be like going out to eat with your friends and having to pause the conversation every time one of you went up to the counter to order, only resuming once they got back. With async, you and your friends can continue to chat even when one of you is ordering. You might also want to bring up other optimizations that devs can implement to shorten the critical rendering path, such as removing unnecessary scripts entirely, like old tracking scripts.
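
For a rough sketch of what this looks like in the HTML, async is just an attribute on the script tag (the file names below are hypothetical):

  <!-- Render-blocking: the browser pauses building the page to fetch and run this -->
  <script src="/js/old-tracking.js"></script>

  <!-- Async: the browser keeps assembling the DOM while this downloads -->
  <script src="/js/analytics.js" async></script>

Scripts that do need to execute in order, but not before the page renders, can often use the defer attribute instead.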

Now that you know how a website appears in a browser, we’re going to focus on what a website is made of — in other words, the code (programming languages) used to construct those web pages.

The three most common are:

  • HTML – What a website says (titles, body content, etc.)
  • CSS – How a website looks (color, fonts, etc.)
  • JavaScript – How it behaves (interactive, dynamic, etc.)

HTML: What a website says

HTML stands for hypertext markup language, and it serves as the backbone of a website. Elements like headings, paragraphs, lists, and content are all defined in the HTML.

Here’s an example of a webpage, and what its corresponding HTML looks like:
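
As a stand-in illustration (the page content below is just a hypothetical placeholder), a very simple page might be marked up like this:

  <!DOCTYPE html>
  <html>
    <head>
      <title>How to Bake a Chocolate Cake</title>
    </head>
    <body>
      <h1>How to Bake a Chocolate Cake</h1>
      <p>This simple recipe takes about an hour from start to finish.</p>
      <a href="/ingredients">See the full ingredient list</a>
    </body>
  </html>

Everything a search engine reads on this page, from the title tag to the link, lives in those HTML elements.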

HTML is important for SEOs to know because it’s what lives “under the hood” of any page they create or work on. While your CMS likely doesn’t require you to write your pages in HTML (ex: selecting “hyperlink” will allow you to create a link without you having to type in “a href=”), it is what you’re modifying every time you do something to a web page such as adding content, changing the anchor text of internal links, and so on. Google crawls these HTML elements to determine how relevant your document is to a particular query. In other words, what’s in your HTML plays a huge role in how your web page ranks in Google organic search!

CSS: How a website looks

CSS stands for cascading style sheets, and this is what causes your web pages to take on certain fonts, colors, and layouts. HTML was created to describe content, rather than to style it, so when CSS entered the scene, it was a game-changer. With CSS, web pages could be “beautified” without requiring manual coding of styles into the HTML of every page — a cumbersome process, especially for large sites.
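
As a quick sketch, every page can pull in one shared stylesheet (the file name is hypothetical), and a single rule in that file styles the matching elements across the whole site:

  <!-- In the <head> of each page -->
  <link rel="stylesheet" href="/css/styles.css">

  /* Inside styles.css */
  h1 {
    font-family: Georgia, serif;
    color: #222222;
  }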

It wasn’t until 2014 that Google’s indexing system began to render web pages more like an actual browser, as opposed to a text-only browser. A black-hat SEO practice that tried to capitalize on Google’s older indexing system was hiding text and links via CSS for the purpose of manipulating search engine rankings. This “hidden text and links” practice is a violation of Google’s quality guidelines.

Components of CSS that SEOs, in particular, should care about:

  • Since style directives can live in external stylesheet files (CSS files) instead of your page’s HTML, your pages become less code-heavy, reducing file transfer size and making load times faster.
  • Browsers still have to download resources like your CSS file, so compressing them can make your web pages load faster, and page speed is a ranking factor.
  • Having your pages be more content-heavy than code-heavy can lead to better indexing of your site’s content.
  • Using CSS to hide links and content can get your website manually penalized and removed from Google’s index.

JavaScript: How a website behaves

In the earlier days of the Internet, web pages were built with HTML. When CSS came along, webpage content had the ability to take on some style. When the programming language JavaScript entered the scene, websites could now not only have structure and style, but they could be dynamic.

JavaScript has opened up a lot of opportunities for non-static web page creation. When someone attempts to access a page that is enhanced with this programming language, that user’s browser will execute the JavaScript against the static HTML that the server returned, resulting in a web page that comes to life with some sort of interactivity.

You’ve definitely seen JavaScript in action — you just may not have known it! That’s because JavaScript can do almost anything to a page. It could create a pop up, for example, or it could request third-party resources like ads to display on your page.

JavaScript can pose some problems for SEO, though, since search engines don’t view JavaScript the same way human visitors do. That’s because of client-side versus server-side rendering. Most JavaScript is executed in a client’s browser. With server-side rendering, on the other hand, the files are executed at the server and the server sends them to the browser in their fully rendered state.

SEO-critical page elements such as text, links, and tags that are loaded on the client’s side with JavaScript, rather than represented in your HTML, are invisible from your page’s code until they are rendered. This means that search engine crawlers won’t see what’s in your JavaScript — at least not initially.
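
Here’s a minimal, hypothetical sketch of that situation: the HTML the server sends contains only an empty container, and the text exists only after the script runs in the browser:

  <div id="reviews"></div>
  <script>
    // This paragraph is not in the server's HTML; it appears only once
    // the browser (or a rendering crawler) executes this JavaScript.
    document.getElementById('reviews').innerHTML =
      '<p>Rated 4.8 out of 5 by 231 customers.</p>';
  </script>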

Google says that, as long as you’re not blocking Googlebot from crawling your JavaScript files, they’re generally able to render and understand your web pages just like a browser can, which means that Googlebot should see the same things as a user viewing a site in their browser. However, due to this “second wave of indexing” for client-side JavaScript, Google can miss certain elements that are only available once JavaScript is executed.

There are also some other things that could go wrong during Googlebot’s process of rendering your web pages, which can prevent Google from understanding what’s contained in your JavaScript:

  • You’ve blocked Googlebot from JavaScript resources (ex: with robots.txt, like we learned about in Chapter 2)
  • Your server can’t handle all the requests to crawl your content
  • The JavaScript is too complex or outdated for Googlebot to understand
  • The JavaScript lazy-loads content into the page only after the crawler has finished with the page and moved on

Needless to say, while JavaScript does open a lot of possibilities for web page creation, it can also have some serious ramifications for your SEO if you’re not careful. Thankfully, there is a way to check whether Google sees the same thing as your visitors. To see your page the way Googlebot views it, use Google Search Console’s “Fetch and Render” tool. From your site’s Google Search Console dashboard, select “Crawl” from the left navigation, then “Fetch as Google.”

From this page, enter the URL you want to check (or leave blank if you want to check your homepage) and click the “Fetch and Render” button. You also have the option to test either the desktop or mobile version.

In return, you’ll get a side-by-side view of how Googlebot saw your page versus how a visitor to your website would have seen the page. Below, Google will also show you a list of any resources they may not have been able to get for the URL you entered.

Understanding the way websites work lays a great foundation for what we’ll talk about next, which is technical optimizations to help Google understand the pages on your website better.

2. How search engines understand websites

Search engines have gotten incredibly sophisticated, but they can’t (yet) find and interpret web pages quite like a human can. The following sections outline ways you can better deliver content to search engines.

Help search engines understand your content by structuring it with Schema

Imagine being a search engine crawler scanning down a 10,000-word article about how to bake a cake. How do you identify the author, recipe, ingredients, or steps required to bake a cake? This is where schema (Schema.org) markup comes in. It allows you to spoon-feed search engines more specific classifications for what type of information is on your page.

Schema is a way to label or organize your content so that search engines have a better understanding of what certain elements on your web pages are. This code provides structure to your data, which is why schema is often referred to as “structured data.” The process of structuring your data is often referred to as “markup” because you are marking up your content with organizational code.

JSON-LD is Google’s preferred schema markup (announced in May ‘16), which Bing also supports. To view a full list of the thousands of available schema markups, visit Schema.org or view the Google Developers Introduction to Structured Data for additional information on how to implement structured data. After you implement the structured data that best suits your web pages, you can test your markup with Google’s Structured Data Testing Tool.
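
To make that concrete, here’s a minimal, hypothetical JSON-LD snippet for a recipe page (the names and values are placeholders); it sits inside a script tag in your page’s HTML:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple Chocolate Cake",
    "author": {
      "@type": "Person",
      "name": "Jane Doe"
    },
    "recipeIngredient": ["2 cups flour", "1 cup sugar", "1/2 cup cocoa powder"]
  }
  </script>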

In addition to helping bots like Google understand what a particular piece of content is about, schema markup can also enable special features to accompany your pages in the SERPs. These special features are referred to as “rich snippets,” and you’ve probably seen them in action. They’re things like:

  • Top Stories carousel
  • Review stars
  • Sitelinks search boxes
  • Recipes

Remember, using structured data can help enable a rich snippet to be present, but does not guarantee it. Other types of rich snippets will likely be added in the future as the use of schema markup increases.

Some last words of advice for schema success:

  • You can use multiple types of schema markup on a page. However, if you mark up one element, like a product for example, and there are other products listed on the page, you must also mark up those products.
  • Don’t mark up content that is not visible to visitors, and do follow Google’s Quality Guidelines. For example, if you add review structured markup to a page, make sure those reviews are actually visible on that page.
  • If you have duplicate pages, Google asks that you mark up each duplicate page with your structured markup, not just the canonical version.
  • Provide original and updated (if applicable) content on your structured data pages.
  • Structured markup should be an accurate reflection of your page.
  • Try to use the most specific type of schema markup for your content.
  • Marked-up reviews should not be written by the business. They should be genuine unpaid business reviews from actual customers.

Tell search engines about your preferred pages with canonicalization

When Google crawls the same content on different web pages, it sometimes doesn’t know which page to index in search results. This is why the canonical tag (rel="canonical") was invented: to help search engines better index the preferred version of content and not all its duplicates.

The canonical tag allows you to tell search engines where the original, master version of a piece of content is located. You’re essentially saying, “Hey search engine! Don’t index this; index this source page instead.” So, if you want to republish a piece of content, whether exactly or slightly modified, but don’t want to risk creating duplicate content, the canonical tag is here to save the day.

Proper canonicalization ensures that every unique piece of content on your website has only one URL. To prevent search engines from indexing multiple versions of a single page, Google recommends having a self-referencing canonical tag on every page on your site. Without a canonical tag telling Google which version of your web page is the preferred one, http://www.example.com could get indexed separately from http://example.com, creating duplicates.
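
A self-referencing canonical tag is a single line in the page’s <head> (the domain is hypothetical):

  <link rel="canonical" href="https://www.example.com/" />

That one line tells search engines which of the otherwise-duplicate versions (www vs. non-www, http vs. https) you consider the preferred one.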

“Avoid duplicate content” is an Internet truism, and for good reason! Google wants to reward sites with unique, valuable content — not content that’s taken from other sources and repeated across multiple pages. Because engines want to provide the best searcher experience, they will rarely show multiple versions of the same content, opting instead to show only the canonicalized version, or if a canonical tag does not exist, whichever version they deem most likely to be the original.

Pro tip: Distinguishing between content filtering & content penalties
There is no such thing as a duplicate content penalty. However, you should try to keep duplicate content from causing indexing issues by using the canonical tag when possible. When duplicates of a page exist, Google will choose a canonical and filter the others out of search results. That doesn’t mean you’ve been penalized. It just means that Google only wants to show one version of your content.

It’s also very common for websites to have multiple duplicate pages due to sort and filter options. For example, on an e-commerce site, you might have what’s called a faceted navigation that allows visitors to narrow down products to find exactly what they’re looking for, such as a “sort by” feature that reorders results on the product category page from lowest to highest price. This could create a URL that looks something like this: example.com/mens-shirts?sort=price_ascending. Add in more sort/filter options like color, size, material, brand, etc. and just think about all the variations of your main product category page this would create!
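
One common way to keep those variations from competing with the main page is to have each parameterized URL point back to it with a canonical tag, along these lines (URLs hypothetical):

  <!-- On example.com/mens-shirts?sort=price_ascending -->
  <link rel="canonical" href="https://example.com/mens-shirts" />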

To learn more about different types of duplicate content, this post by Dr. Pete helps distill the different nuances.

3. How users interact with websites

In Chapter 1, we said that despite SEO standing for search engine optimization, SEO is as much about people as it is about search engines themselves. That’s because search engines exist to serve searchers. This goal helps explain why Google’s algorithm rewards websites that provide the best possible experiences for searchers, and why some websites, despite having qualities like robust backlink profiles, might not perform well in search.

When we understand what makes searchers’ web browsing experience optimal, we can create those experiences for maximum search performance.

Ensuring a positive experience for your mobile visitors

Given that well over half of all web traffic today comes from mobile, it’s safe to say that your website should be accessible and easy to navigate for mobile visitors. In April 2015, Google rolled out an update to its algorithm that would promote mobile-friendly pages over non-mobile-friendly pages. So how can you ensure that your..


Rewriting the Beginner’s Guide to SEO, Chapter 4: On-Page Optimization

Posted by BritneyMuller

Chapter Four of the Beginner’s Guide to SEO rewrite is chock full of on-page SEO learnings. After all the great feedback you’ve provided thus far on our outline, Chapter One, Chapter Two, and Chapter Three, we’re eager to hear how you feel about Chapter Four. What really works for you? What do you think is missing? Read on, and let us know your thoughts in the comments!

Chapter 4: On-Page Optimization
Use your research to craft your message.

Now that you know how your target market is searching, it’s time to dive into on-page optimization, the practice of crafting web pages that answer searchers’ questions. On-page SEO is multifaceted, and extends beyond content into other things like schema and meta tags, which we’ll discuss more at length in the next chapter on technical optimization. For now, put on your wordsmithing hats — it’s time to create your content!

Creating your content
Applying your keyword research

In the last chapter, we learned methods for discovering how your target audience is searching for your content. Now, it’s time to put that research into practice. Here is a simple outline to follow for applying your keyword research:

  1. Survey your keywords and group those with similar topics and intent. Those groups will become your pages: rather than creating an individual page for every keyword variation, you’ll create one page for each topic group.
  2. If you haven’t done so already, evaluate the SERP for each keyword or group of keywords to determine what type and format your content should be. Some characteristics of ranking pages to take note of:
    1. Are they image or video heavy?
    2. Is the content long-form or short and concise?
    3. Is the content formatted in lists, bullets, or paragraphs?
  3. Ask yourself, “What unique value could I offer to make my page better than the pages that are currently ranking for my keyword?”

On-page optimization allows you to turn your research into content your audience will love. Just make sure to avoid falling into the trap of low-value tactics that could hurt more than help!

Low-value tactics to avoid

Your web content should exist to answer searchers’ questions, to guide them through your site, and to help them understand your site’s purpose. Content should not be created for the purpose of ranking highly in search alone. Ranking is a means to an end, the end being to help searchers. If we put the cart before the horse, we risk falling into the trap of low-value content tactics.

Some of these tactics were introduced in Chapter 2, but by way of review, let’s take a deeper dive into some low-value tactics you should avoid when crafting search engine optimized content.

Thin content

While it’s common for a website to have unique pages on different topics, an older content strategy was to create a page for every single iteration of your keywords in order to rank on page 1 for those highly specific queries.

For example, if you were selling bridal dresses, you might have created individual pages for bridal gowns, bridal dresses, wedding gowns, and wedding dresses, even if each page was essentially saying the same thing. A similar tactic for local businesses was to create multiple pages of content for each city or region from which they wanted clients. These “geo pages” often had the same or very similar content, with the location name being the only unique factor.

Tactics like these clearly weren’t helpful for users, so why did publishers do it? Google wasn’t always as good as it is today at understanding the relationships between words and phrases (or semantics). So, if you wanted to rank on page 1 for “bridal gowns” but you only had a page on “wedding dresses,” that may not have cut it.

This practice created tons of thin, low-quality content across the web, which Google addressed specifically with its 2011 update known as Panda. This algorithm update penalized low-quality pages, which resulted in more quality pages taking the top spots of the SERPs. Google continues to iterate on this process of demoting low-quality content and promoting high-quality content today.

Google is clear that you should have a comprehensive page on a topic instead of multiple, weaker pages for each variation of a keyword.

Depiction of distinct pages for each keyword variation versus one page covering multiple variations

Cloaking

A basic tenet of search engine guidelines is to show the same content to the engine’s crawlers that you’d show to a human visitor. This means that you should never hide text in the HTML code of your website that a normal visitor can’t see.

When this guideline is broken, search engines call it “cloaking” and take action to prevent these pages from ranking in search results. Cloaking can be accomplished in any number of ways and for a variety of reasons, both positive and negative. Below is an example of an instance where Spotify showed different content to users than to Google.

Spotify shows a login page to Google.

In some cases, Google may let practices that are technically cloaking pass because they contribute to a positive user experience. For more on the subject of cloaking and the levels of risk associated with various tactics, see our article on White Hat Cloaking.

Keyword stuffing

If you’ve ever been told, “You need to include {critical keyword} on this page X times,” you’ve seen the confusion over keyword usage in action. Many people mistakenly think that if you just include a keyword within your page’s content X times, you will automatically rank for it. The truth is, although Google looks for mentions of keywords and related concepts on your site’s pages, the page itself has to add value outside of pure keyword usage. If a page is going to be valuable to users, it won’t sound like it was written by a robot, so incorporate your keywords and phrases naturally in a way that is understandable to your readers.

Below is an example of a keyword-stuffed page of content that also uses another old method: bolding all your targeted keywords. Oy.

Screenshot of a site that bolds keywords in a paragraph.

It is worth noting that advancements in machine learning have contributed to more sophisticated auto-generated content that will only get better over time. This is likely why in Google’s quality guidelines on automatically generated content, Google specifically calls out the brand of auto-generated content that attempts to manipulate search rankings, rather than any-and-all auto-generated content.

What to do instead: 10x it!

There is no “secret sauce” to ranking in search results. Google ranks pages highly because it has determined they are the best answers to the searcher’s questions. In today’s search engine, it’s not enough that your page isn’t duplicated, spammy, or broken. Your page has to provide value to searchers and be better than any other page Google is currently serving as the answer to a particular query. Here’s a simple formula for content creation:

  • Search the keyword(s) you want your page to rank for
  • Identify which pages are ranking highly for those keywords
  • Determine what qualities those pages possess
  • Create content that’s better than that

We like to call this 10x content. If you create a page on a keyword that is 10x better than the pages being shown in search results (for that keyword), Google will reward you for it, and better yet, you’ll naturally get people linking to it! Creating 10x content is hard work, but will pay dividends in organic traffic.

Just remember, there’s no magic number when it comes to words on a page. What we should be aiming for is whatever sufficiently satisfies user intent. Some queries can be answered thoroughly and accurately in 300 words while others might require 1,000 words!

Pro tip: Don’t reinvent the wheel!
If you already have content on your website, save yourself time by evaluating which of those pages are already bringing in good amounts of organic traffic and converting well. Refurbish that content on different platforms to help get more visibility to your site. On the other side of the coin, evaluate what existing content isn’t performing as well and adjust it, rather than starting from square one with all new content.

NAP: A note for local businesses

If you’re a business that makes in-person contact with your customers, be sure to include your business name, address, and phone number (NAP) prominently, accurately, and consistently throughout your site’s content. This information is often displayed in the footer or header of a local business website, as well as on any “contact us” pages. You’ll also want to mark up this information using local business schema. Schema and structured data are discussed more at length in the “Code” section of this chapter.
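
As a rough sketch, marking up NAP with local business schema might look like this (all of the business details below are made up):

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Flower Shop",
    "telephone": "+1-206-555-0100",
    "address": {
      "@type": "PostalAddress",
      "streetAddress": "123 Main St",
      "addressLocality": "Seattle",
      "addressRegion": "WA",
      "postalCode": "98101"
    }
  }
  </script>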

If you are a multi-location business, it’s best to build unique, optimized pages for each location. For example, a business that has locations in Seattle, Tacoma, and Bellevue should consider having a page for each:

example.com/seattle
example.com/tacoma
example.com/bellevue

Each page should be uniquely optimized for that location, so the Seattle page would have unique content discussing the Seattle location, list the Seattle NAP, and even testimonials specifically from Seattle customers. If there are dozens, hundreds, or even thousands of locations, a store locator widget could be employed to help you scale.

Hope you still have some energy left after handling the difficult-yet-rewarding task of putting together a page that is 10x better than your competitors’ pages, because there are just a few more things needed before your page is complete! In the next sections, we’ll talk about the other on-page optimizations your pages need, as well as naming and organizing your content.

Beyond content: Other optimizations your pages need

Can I just bump up the font size to create paragraph headings?

How can I control what title and description show up for my page in search results?

After reading this section, you’ll understand other important on-page elements that help search engines understand the 10x content you just created, so let’s dive in!

Header tags

Header tags are an HTML element used to designate headings on your page. The main header tag, called an H1, is typically reserved for the title of the page. It looks like this:

 <h1>Page Title</h1>

There are also sub-headings that go from H2 (<h2>) to H6 (<h6>) tags, although using all of these on a page is not required. The hierarchy of header tags goes from H1 to H6 in descending order of importance.

Each page should have a unique H1 that describes the main topic of the page; this is often automatically created from the title of the page. As the main descriptive title of the page, the H1 should contain that page’s primary keyword or phrase. You should avoid using header tags to mark up non-heading elements, such as navigational buttons and phone numbers. Use header tags to introduce what the following content will discuss.

Take this page about touring Copenhagen, for example:

<h1>Copenhagen Travel Guide</h1>
<h2>Copenhagen by the Seasons</h2>
<h3>Visiting in Winter</h3>
<h3>Visiting in Spring</h3>

The main topic of the page is introduced in the main <h1> heading, and each additional heading is used to introduce a new sub-topic. In this example, the <h2> is more specific than the <h1>, and the <h3> tags are more specific than the <h2>. This is just an example of a structure you could use.

Although what you choose to put in your header tags can be used by search engines to evaluate and rank your page, it’s important to avoid inflating their importance. Header tags are one among many on-page SEO factors, and typically would not move the needle like quality backlinks and content would, so focus on your site visitors when crafting your headings.

Internal links

In Chapter 2, we discussed the importance of having a crawlable website. Part of a website’s crawlability lies in its internal linking structure. When you link to other pages on your website, you ensure that search engine crawlers can find all your site’s pages, you pass link equity (ranking power) to other pages on your site, and you help visitors navigate your site.

The importance of internal linking is well established, but there can be confusion over how this looks in practice.

Link accessibility

Links that require a click to view (like those inside a navigation drop-down menu) are often hidden from search engine crawlers, so if the only links to internal pages on your website are through these types of links, you may have trouble getting those pages indexed. Opt instead for links that are directly accessible on the page.

Anchor text

Anchor text is the text with which you link to pages. Below, you can see an example of what a hyperlink without anchor text and a hyperlink with anchor text would look like in the HTML.
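
For illustration, both of the links below point to the same page; only the second gives readers (and search engines) descriptive anchor text (the URL is hypothetical):

  <!-- A hyperlink with no anchor text -->
  <a href="https://example.com/copenhagen-travel-guide"></a>

  <!-- The same hyperlink with descriptive anchor text -->
  <a href="https://example.com/copenhagen-travel-guide">Copenhagen travel guide</a>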


Rewriting the Beginner’s Guide to SEO, Chapter 3: Keyword Research

Posted by BritneyMuller

Welcome to the draft of Chapter Three of the new and improved Beginner’s Guide to SEO! So far you’ve been generous and energizing with your feedback for our outline, Chapter One, and Chapter Two. We’re asking for a little more of your time as we debut our third chapter on keyword research. Please let us know what you think in the comments!

Chapter 3: Keyword Research
Understand what your audience wants to find.

Now that you’ve learned how to show up in search results, let’s determine which strategic keywords to target in your website’s content, and how to craft that content to satisfy both users and search engines.

The power of keyword research lies in better understanding your target market and how they are searching for your content, services, or products.

Keyword research provides you with specific search data that can help you answer questions like:

  • What are people searching for?
  • How many people are searching for it?
  • In what format do they want that information?

In this chapter, you’ll get tools and strategies for uncovering that information, as well as learn tactics that’ll help you avoid keyword research foibles and build strong content. Once you uncover how your target audience is searching for your content, you begin to uncover a whole new world of strategic SEO!

What terms are people searching for?

You may know what you do, but how do people search for the product, service, or information you provide? Answering this question is a crucial first step in the keyword research process.

Discovering keywords

You likely have a few keywords in mind that you would like to rank for. These will be things like your products, services, or other topics your website addresses, and they are great seed keywords for your research, so start there! You can enter those keywords into a keyword research tool to discover average monthly search volume and similar keywords. We’ll get into search volume in greater depth in the next section, but during the discovery phase, it can help you determine which variations of your keywords are most popular amongst searchers.

Once you enter your seed keywords into a keyword research tool, you will begin to discover other keywords, common questions, and topics for your content that you might have otherwise missed.

Let’s use the example of a florist that specializes in weddings.

Typing “wedding” and “florist” into a keyword research tool, you may discover highly relevant, highly searched for related terms such as:

  • Wedding bouquets
  • Bridal flowers
  • Wedding flower shop

In the process of discovering relevant keywords for your content, you will likely notice that the search volume of those keywords varies greatly. While you definitely want to target terms that your audience is searching for, in some cases, it may be more advantageous to target terms with lower search volume because they’re far less competitive.

Since both high- and low-competition keywords can be advantageous for your website, learning more about search volume can help you prioritize keywords and pick the ones that will give your website the biggest strategic advantage.

Pro tip: Diversify!

It’s important to note that entire websites don’t rank for keywords, pages do. With big brands, we often see the homepage ranking for many keywords, but for most websites, this isn’t usually the case. Many websites receive more organic traffic to pages other than the homepage, which is why it’s so important to diversify your website’s pages by optimizing each for uniquely valuable keywords.

How often are those terms searched?
Uncovering search volume

The higher the search volume for a given keyword or keyword phrase, the more work is typically required to achieve higher rankings. This is often referred to as keyword difficulty and occasionally incorporates SERP features; for example, if many SERP features (like featured snippets, knowledge graph, carousels, etc) are clogging up a keyword’s result page, difficulty will increase. Big brands often take up the top 10 results for high-volume keywords, so if you’re just starting out on the web and going after the same keywords, the uphill battle for ranking can take years of effort.

Typically, the higher the search volume, the greater the competition and effort required to achieve organic ranking success. Go too low, though, and you risk not drawing any searchers to your site. In many cases, it may be most advantageous to target highly specific, lower competition search terms. In SEO, we call those long-tail keywords.

Understanding the long tail

It would be great to rank #1 for the keyword “shoes”… or would it?

It’s wonderful to deal with keywords that have 50,000 searches a month, or even 5,000 searches a month, but in reality, these popular search terms only make up a fraction of all searches performed on the web. In fact, keywords with very high search volumes may even indicate ambiguous intent, which, if you target these terms, could put you at risk of drawing visitors to your site whose goals don’t match the content your page provides.

Take a broad term like “pizza”: does the searcher want to know the nutritional value of pizza? Order a pizza? Find a restaurant to take their family to? Google doesn’t know, so it offers SERP features to help searchers refine what they’re looking for. Targeting “pizza” means that you’re likely casting too wide a net.

Popular head terms account for only about a quarter of searches; the remaining 75% lie in the “chunky middle” and “long tail” of search.

Don’t underestimate these less popular keywords. Long tail keywords with lower search volume often convert better, because searchers are more specific and intentional in their searches. For example, a person searching for “shoes” is probably just browsing, whereas someone searching for “best price red womens size 7 running shoe” practically has their wallet out!

Pro tip: Questions are SEO gold!

Discovering what questions people are asking in your space, and adding those questions and their answers to an FAQ page, can yield incredible organic traffic for your website.

Getting strategic with search volume

Now that you’ve discovered relevant search terms for your site and their corresponding search volumes, you can get even more strategic by looking at your competitors and figuring out how searches might differ by season or location.

Keywords by competitor

You’ll likely compile a lot of keywords. How do you know which to tackle first? It could be a good idea to prioritize high-volume keywords that your competitors are not currently ranking for. On the flip side, you could also see which keywords from your list your competitors are already ranking for and prioritize those. The former is great when you want to take advantage of your competitors’ missed opportunities, while the latter is an aggressive strategy that sets you up to compete for keywords your competitors are already performing well for.

Keywords by season

Knowing about seasonal trends can be advantageous in setting a content strategy. For example, if you know that “christmas box” starts to spike in October through December in the United Kingdom, you can prepare content months in advance and give it a big push around those months.

Keywords by region

You can more strategically target a specific location by narrowing down your keyword research to specific towns, counties, or states in the Google Keyword Planner, or evaluate “interest by subregion” in Google Trends. Geo-specific research can help make your content more relevant to your target audience. For example, you might find out that in Texas, the preferred term for a large truck is “big rig,” while in New York, “tractor trailer” is the preferred terminology.

Which format best suits the searcher’s intent?

In Chapter 2, we learned about SERP features. That background is going to help us understand how searchers want to consume information for a particular keyword. The format in which Google chooses to display search results depends on intent, and every query has a unique one. While there are thousands of possible search types, there are five major categories to be aware of:

1. Informational queries: The searcher needs information, such as the name of a band or the height of the Empire State Building.

2. Navigational queries: The searcher wants to go to a particular place on the Internet, such as Facebook or the homepage of the NFL.

3. Transactional queries: The searcher wants to do something, such as buy a plane ticket or listen to a song.

4. Commercial investigation: The searcher wants to compare products and find the best one for their specific needs.

5. Local queries: The searcher wants to find something locally, such as a nearby coffee shop, doctor, or music venue.

An important step in the keyword research process is surveying the SERP landscape for the keyword you want to target in order to get a better gauge of searcher intent. If you want to know what type of content your target audience wants, look to the SERPs!

Google has closely evaluated the behavior of trillions of searches in an attempt to provide the most desired content for each specific keyword search.

Take the search “dresses,” for example:

From the shopping carousel, you can infer that Google has determined many people who search for “dresses” want to shop for dresses online.

There is also a Local Pack feature for this keyword, indicating Google’s desire to help searchers who may be looking for local dress retailers.

If the query is ambiguous, Google will also sometimes include a “refine by” feature to help searchers further specify what they’re looking for. By doing so, the search engine can provide results that better help the searcher accomplish their task.

Google has a wide array of result types it can serve up depending on the query, so if you’re going to target a keyword, look to the SERP to understand what type of content you need to create.

Tools for determining the value of a keyword

How much value would a keyword add to your website? These tools can help you answer that question, so they’d make great additions to your keyword research arsenal:

  • Moz Keyword Explorer – Our own Moz Keyword Explorer tool extracts accurate search volume data, keyword difficulty, and keyword opportunity metrics by using live clickstream data. To learn more about how we’re producing our keyword data, check out Announcing Keyword Explorer.
  • Google Keyword Planner – Google’s AdWords Keyword Planner has historically been the most common starting point for SEO keyword research. However, Keyword Planner does restrict search volume data by lumping keywords together into large search volume range buckets. To learn more, check out Google Keyword Planner’s Dirty Secrets.
  • Google Trends – Google’s keyword trend tool is great for finding seasonal keyword fluctuations. For example, “funny halloween costume ideas” will peak in the weeks before Halloween.
  • AnswerThePublic – This free tool populates commonly searched for questions around a specific keyword. Bonus! You can use this tool in tandem with another free tool, Keywords Everywhere, to prioritize ATP’s suggestions by search volume.
  • SpyFu Keyword Research Tool – Provides some really neat competitive keyword data.

Download our free keyword research template!

Keyword research can yield a ton of data. Stay organized by downloading our free keyword research template. You can customize the template to fit your unique needs (ex: remove the “Seasonal Trends” column), sort keywords by volume, and categorize by Priority Score. Happy keyword researching!

Now that you know how to uncover what your target audience is searching for and how often, it’s time to move onto the next step: crafting pages in a way that users will love and search engines can understand.



Rewriting the Beginner’s Guide to SEO, Chapter 2: Crawling, Indexing, and Ranking

Posted by BritneyMuller

It’s been a few months since our last share of our work-in-progress rewrite of the Beginner’s Guide to SEO, but after a brief hiatus, we’re back to share our draft of Chapter Two with you! This wouldn’t have been possible without the help of Kameron Jenkins, who has thoughtfully contributed her great talent for wordsmithing throughout this piece.

This is your resource, the guide that likely kicked off your interest in and knowledge of SEO, and we want to do right by you. You left amazingly helpful commentary on our outline and draft of Chapter One, and we’d be honored if you would take the time to let us know what you think of Chapter Two in the comments below.

Chapter 2: How Search Engines Work – Crawling, Indexing, and Ranking
First, show up.

As we mentioned in Chapter 1, search engines are answer machines. They exist to discover, understand, and organize the internet’s content in order to offer the most relevant results to the questions searchers are asking.

In order to show up in search results, your content needs to first be visible to search engines. It’s arguably the most important piece of the SEO puzzle: If your site can’t be found, there’s no way you’ll ever show up in the SERPs (Search Engine Results Page).

How do search engines work?

Search engines have three primary functions:

  1. Crawl: Scour the Internet for content, looking over the code/content for each URL they find.
  2. Index: Store and organize the content found during the crawling process. Once a page is in the index, it’s in the running to be displayed as a result to relevant queries.
  3. Rank: Provide the pieces of content that will best answer a searcher’s query. Order the search results by the most helpful to a particular query.

What is search engine crawling?

Crawling is the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content. Content can vary — it could be a webpage, an image, a video, a PDF, etc. — but regardless of the format, content is discovered by links.

The bot starts out by fetching a few web pages, and then follows the links on those webpages to find new URLs. By hopping along this path of links, crawlers are able to find new content and add it to their index — a massive database of discovered URLs — to later be retrieved when a searcher is seeking information that the content on that URL is a good match for.

What is a search engine index?

Search engines process and store information they find in an index, a huge database of all the content they’ve discovered and deem good enough to serve up to searchers.

Search engine ranking

When someone performs a search, search engines scour their index for highly relevant content and then order that content in the hopes of solving the searcher’s query. This ordering of search results by relevance is known as ranking. In general, you can assume that the higher a website is ranked, the more relevant the search engine believes that site is to the query.

It’s possible to block search engine crawlers from part or all of your site, or instruct search engines to avoid storing certain pages in their index. While there can be reasons for doing this, if you want your content found by searchers, you have to first make sure it’s accessible to crawlers and is indexable. Otherwise, it’s as good as invisible.

By the end of this chapter, you’ll have the context you need to work with the search engine, rather than against it!

Note: In SEO, not all search engines are equal

Many beginners wonder about the relative importance of particular search engines. Most people know that Google has the largest market share, but how important is it to optimize for Bing, Yahoo, and others? The truth is that despite the existence of more than 30 major web search engines, the SEO community really only pays attention to Google. Why? The short answer is that Google is where the vast majority of people search the web. If we include Google Images, Google Maps, and YouTube (a Google property), more than 90% of web searches happen on Google — that’s nearly 20 times Bing and Yahoo combined.

Crawling: Can search engines find your site?

As you’ve just learned, making sure your site gets crawled and indexed is a prerequisite for showing up in the SERPs. First things first: You can check to see how many and which pages of your website have been indexed by Google using “site:yourdomain.com”, an advanced search operator.

Head to Google and type “site:yourdomain.com” into the search bar. This will return results Google has in its index for the site specified:


The number of results Google displays (see “About __ results” above) isn’t exact, but it does give you a solid idea of which pages are indexed on your site and how they are currently showing up in search results.

For more accurate results, monitor and use the Index Coverage report in Google Search Console. You can sign up for a free Google Search Console account if you don’t currently have one. With this tool, you can submit sitemaps for your site and monitor how many submitted pages have actually been added to Google’s index, among other things.

If you’re not showing up anywhere in the search results, there are a few possible reasons why:

  • Your site is brand new and hasn’t been crawled yet.
  • Your site isn’t linked to from any external websites.
  • Your site’s navigation makes it hard for a robot to crawl it effectively.
  • Your site contains some basic code called crawler directives that is blocking search engines.
  • Your site has been penalized by Google for spammy tactics.

If your site doesn’t have any other sites linking to it, you still might be able to get it indexed by submitting your XML sitemap in Google Search Console or manually submitting individual URLs to Google. There’s no guarantee they’ll include a submitted URL in their index, but it’s worth a try!

Can search engines see your whole site?

Sometimes a search engine will be able to find parts of your site by crawling, but other pages or sections might be obscured for one reason or another. It’s important to make sure that search engines are able to discover all the content you want indexed, and not just your homepage.

Ask yourself this: Can the bot crawl through your website, and not just to it?

Is your content hidden behind login forms?

If you require users to log in, fill out forms, or answer surveys before accessing certain content, search engines won’t see those protected pages. A crawler is definitely not going to log in.

Are you relying on search forms?

Robots cannot use search forms. Some individuals believe that if they place a search box on their site, search engines will be able to find everything that their visitors search for.

Is text hidden within non-text content?

Non-text media forms (images, video, GIFs, etc.) should not be used to display text that you wish to be indexed. While search engines are getting better at recognizing images, there’s no guarantee they will be able to read and understand them just yet. It’s always best to add text within the HTML markup of your webpage.

Can search engines follow your site navigation?

Just as a crawler needs to discover your site via links from other sites, it needs a path of links on your own site to guide it from page to page. If you’ve got a page you want search engines to find but it isn’t linked to from any other pages, it’s as good as invisible. Many sites make the critical mistake of structuring their navigation in ways that are inaccessible to search engines, hindering their ability to get listed in search results.

Common navigation mistakes that can keep crawlers from seeing all of your site:

  • Having a mobile navigation that shows different results than your desktop navigation
  • Any type of navigation where the menu items are not in the HTML, such as JavaScript-enabled navigations. Google has gotten much better at crawling and understanding JavaScript, but it’s still not a perfect process. The more surefire way to ensure something gets found, understood, and indexed by Google is by putting it in the HTML.
  • Personalization, or showing unique navigation to a specific type of visitor versus others, could appear to be cloaking to a search engine crawler
  • Forgetting to link to a primary page on your website through your navigation — remember, links are the paths crawlers follow to new pages!

This is why it’s essential that your website has a clear navigation and helpful URL folder structures.

Information architecture

Information architecture is the practice of organizing and labeling content on a website to improve efficiency and findability for users. The best information architecture is intuitive, meaning that users shouldn’t have to think very hard to flow through your website or to find something.

Your site should also have a useful 404 (page not found) page for when a visitor clicks on a dead link or mistypes a URL. The best 404 pages allow users to click back into your site so they don’t bounce off just because they tried to access a nonexistent link.

Tell search engines how to crawl your site

In addition to making sure crawlers can reach your most important pages, it’s also pertinent to note that you’ll have pages on your site you don’t want them to find. These might include things like old URLs that have thin content, duplicate URLs (such as sort-and-filter parameters for e-commerce), special promo code pages, staging or test pages, and so on.

Blocking pages from search engines can also help crawlers prioritize your most important pages and maximize your crawl budget (the average number of pages a search engine bot will crawl on your site).

Crawler directives allow you to control what you want Googlebot to crawl and index using a robots.txt file, meta tag, sitemap.xml file, or Google Search Console.

Robots.txt

Robots.txt files are located in the root directory of websites (ex. yourdomain.com/robots.txt) and suggest which parts of your site search engines should and shouldn’t crawl via specific robots.txt directives. This is a great solution when trying to block search engines from non-private pages on your site.

You wouldn’t want to block private/sensitive pages from being crawled here because the file is easily accessible by users and bots.
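
For a sense of what these directives look like, a small, hypothetical robots.txt might contain (the directory names are placeholders):

  User-agent: *
  Disallow: /staging/
  Disallow: /promo-codes/

  Sitemap: https://www.example.com/sitemap.xml

The first line applies the rules to all crawlers, each Disallow line suggests a section that shouldn’t be crawled, and the optional Sitemap line points crawlers to your sitemap file.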

Pro tip:

  • If Googlebot can’t find a robots.txt file for a site (40X HTTP status code), it proceeds to crawl the site.
  • If Googlebot finds a robots.txt file for a site (20X HTTP status code), it will usually abide by the suggestions and proceed to crawl the site.
  • If Googlebot finds neither a 20X nor a 40X HTTP status code (ex. a 501 server error), it can’t determine whether you have a robots.txt file and won’t crawl your site.

Meta directives

The two types of meta directives are the meta robots tag (more commonly used) and the x-robots-tag. Each provides crawlers with stronger instructions on how to crawl and index a URL’s content.

The x-robots-tag provides more flexibility and functionality if you want to block search engines at scale because you can use regular expressions, block non-HTML files, and apply sitewide noindex tags.
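
For reference, a meta robots tag lives in a page’s <head>, while the x-robots-tag is sent as an HTTP response header (this sketch assumes your server is configured to add the header):

  <!-- Meta robots tag in the page's HTML -->
  <meta name="robots" content="noindex, nofollow">

  <!-- Equivalent x-robots-tag sent as an HTTP response header -->
  X-Robots-Tag: noindex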

These are the best options for blocking more sensitive*/private URLs from search engines.

*For very sensitive URLs, it is best practice to remove them from or require a secure login to view the pages.

WordPress Tip: In Dashboard > Settings > Reading, make sure the “Search Engine Visibility” box is not checked. This blocks search engines from coming to your site via your robots.txt file!

Avoid these common pitfalls, and you’ll have clean, crawlable content that will allow bots easy access to your pages.


Sitemaps

A sitemap is just what it sounds like: a list of URLs on your site that crawlers can use to discover and index your content. One of the easiest ways to ensure Google is finding your highest priority pages is to create a file that meets Google’s standards and submit it through Google Search Console. While submitting a sitemap doesn’t replace the need for good site navigation, it can certainly help crawlers follow a path to all of your important pages.
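
A bare-bones XML sitemap, for illustration (the URLs and date are placeholders), looks something like this:

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://www.example.com/</loc>
      <lastmod>2018-10-01</lastmod>
    </url>
    <url>
      <loc>https://www.example.com/blog/</loc>
    </url>
  </urlset>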

Google Search Console

Some sites (most common with e-commerce) make the same content available on multiple different URLs by appending certain parameters to URLs. If you’ve ever shopped online, you’ve likely narrowed down your search via filters. For example, you may search for “shoes” on Amazon, and then refine your search by size, color, and style. Each time you refine, the URL changes slightly. How does Google know which version of the URL to serve to searchers? Google does a pretty good job at figuring out the representative URL on its own, but you can use the URL Parameters feature in Google Search Console to tell Google exactly how you want them to treat your pages.

Indexing: How do search engines understand and remember your site?

Once you’ve ensured your site has been crawled, the next order of business is to make sure it can be indexed. That’s right — just because your site can be discovered and crawled by a search engine doesn’t necessarily mean that it will be stored in their index. In the previous section on crawling, we discussed how search engines discover your web pages. The index is where your discovered pages are stored. After a crawler finds a page, the search engine renders it just like a browser would. In the process of doing so, the search engine analyzes that page’s contents. All of that information is stored in its index.

Read on to learn about how indexing works and how you can make sure your site makes it into this all-important database.

Can I see how a Googlebot crawler sees my pages?

Yes, the cached version of your page will reflect a snapshot of the last time googlebot crawled it.

Google crawls and caches web pages at different frequencies. More established, well-known sites that post frequently like https://www.nytimes.com will be crawled more frequently than the much-less-famous website for Roger the Mozbot’s side hustle, http://www.rogerlovescupcakes.com (if only it were real…)

You can view what your cached version of a page looks like by clicking the drop-down arrow next to the URL in the SERP and choosing “Cached”:

You can also view the text-only version of your site to determine if your important content is being crawled and cached effectively.

Are pages ever removed from the index?

Yes, pages can be removed from the index! Some of the main reasons why a URL might be removed include:

  • The URL is returning a “not found” error (4XX) or server error (5XX) – This could be accidental (the page was moved and a 301 redirect was not set up) or intentional (the page was deleted and 404ed in order to get it removed from the index)
  • The URL had a noindex meta tag added – This tag can be added by site owners to instruct the search engine to omit the page from its index.
  • The URL has been manually penalized for violating the search engine’s Webmaster Guidelines and, as a result, was removed from the index.
  • The URL has been blocked from crawling with the addition of a password required before visitors can access the page.

If you believe that a page on your website that was previously in Google’s index is no longer showing up, you can manually submit the URL to Google by navigating to the “Submit URL” tool in Search Console.

Ranking: How do search engines rank URLs?

How do search engines ensure that when someone types a query into the search bar, they get relevant results in return? That process is known as ranking, or the ordering of search results by most relevant to least relevant to a particular query.

To determine relevance, search engines use algorithms, a process or formula by which stored information is retrieved and ordered in meaningful ways. These algorithms have gone through many changes over the years in order to improve the quality of search results. Google, for example, makes algorithm adjustments every day — some of these updates are minor quality tweaks, whereas others are core/broad algorithm updates deployed to tackle a specific issue, like Penguin to tackle link spam. Check out our Google Algorithm Change History for a list of both confirmed and unconfirmed Google updates going back to the year 2000.

Why does the algorithm change so often? Is Google just trying to keep us on our toes? While Google doesn’t always reveal specifics as to why they do what they do, we do know that Google’s aim when making algorithm adjustments is to improve overall search quality. That’s why, in response to algorithm update questions, Google will answer with something along the lines of: “We’re making quality updates all the time.” This indicates that, if your site suffered after an algorithm adjustment, compare it against Google’s Quality Guidelines or Search Quality Rater Guidelines, both are very telling in terms of what search engines want.

What do search engines want?

Search engines have always wanted the same thing: to provide useful answers to searcher’s questions in the most helpful formats. If that’s true, then why does it appear that SEO is different now than in years past?

Think about it in terms of someone learning a new language.

At first, their understanding of the language is very rudimentary — “See Spot Run.” Over time, their understanding starts to deepen, and they learn semantics—- the meaning behind language and the relationship between words and phrases. Eventually, with enough practice, the student knows the language well enough to even understand nuance, and is able to provide answers to even vague or incomplete questions.

When search engines were just beginning to learn our language, it was much easier to game the system by using tricks and tactics that actually go against quality guidelines. Take keyword stuffing, for example. If you wanted to rank for a particular keyword like “funny jokes,” you might add the words “funny jokes” a bunch of times onto your page, and make it bold, in hopes of boosting your ranking for that term:

Welcome to funny jokes! We tell the funniest jokes in the world. Funny jokes are fun and crazy. Your funny joke awaits. Sit back and read funny jokes because funny jokes can make you happy and funnier. Some funny favorite funny jokes.

This tactic made for terrible user experiences, and instead of laughing at funny jokes, people were bombarded by annoying, hard-to-read text. It may have worked in the past, but this is never what search engines wanted.

The role links play in SEO

When we talk about links, we could mean two things. Backlinks or “inbound links” are links from other websites that point to your website, while internal links are links on your own site that point to your other pages (on the same site).

Links have historically played a big role in SEO. Very early on, search engines needed help figuring out which URLs were more trustworthy than others to help them determine how to rank search results. Calculating the number of links pointing to any given site helped them do this.

Backlinks work very similarly to real life WOM (Word-Of-Mouth) referrals. Let’s take a hypothetical coffee shop, Jenny’s Coffee, as an example:

  • Referrals from others = good sign of authority
    Example: Many different people have all told you that Jenny’s Coffee is the best in town
  • Referrals from yourself = biased, so not a good sign of authority
    Example: Jenny claims that Jenny’s Coffee is the best in town
  • Referrals from irrelevant or low-quality sources = not a good sign of authority and could even get you flagged for spam
    Example: Jenny paid to have people who have never visited her coffee shop tell others how good it is.
  • No referrals = unclear authority
    Example: Jenny’s Coffee might be good, but you’ve been unable to find anyone who has an opinion so you can’t be sure.

This is why PageRank was created. PageRank (part of Google’s core algorithm) is a link analysis algorithm named after one of Google’s founders, Larry Page. PageRank estimates the importance of a web page by measuring the quality and quantity of links pointing to it. The assumption is that the more relevant, important, and trustworthy a web page is, the more links it will have earned.

The more natural backlinks you have from high-authority (trusted) websites, the better your odds are to rank higher within search results.

The role content plays in SEO

There would be no point to links if they didn’t direct searchers to something. That something is content! Content is more than just words; it’s anything meant to be consumed by searchers — there’s video content, image content, and of course, text. If search engines are answer machines, content is the means by which the engines deliver those answers.

Any time someone performs a search, there are thousands of possible results, so how do search engines decide which pages the searcher is going to find valuable? A big part of determining where your page will rank for a given query is how well the content on your page matches the query’s intent. In other words, does this page match the words that were searched and help fulfill the task the searcher was trying to accomplish?

Because of this focus on user satisfaction and task accomplishment, there’s no strict benchmarks on how long your content should be, how many times it should contain a keyword, or what you put in your header tags. All those can play a role in how well a page performs in search, but the..

https://ift.tt/2uN3trW

Rewriting the Beginner’s Guide to SEO, Chapter 2: Crawling, Indexing, and Ranking

Posted by BritneyMuller

It’s been a few months since our last share of our work-in-progress rewrite of the Beginner’s Guide to SEO, but after a brief hiatus, we’re back to share our draft of Chapter Two with you! This wouldn’t have been possible without the help of Kameron Jenkins, who has thoughtfully contributed her great talent for wordsmithing throughout this piece.

This is your resource, the guide that likely kicked off your interest in and knowledge of SEO, and we want to do right by you. You left amazingly helpful commentary on our outline and draft of Chapter One, and we’d be honored if you would take the time to let us know what you think of Chapter Two in the comments below.

Chapter 2: How Search Engines Work – Crawling, Indexing, and Ranking

First, show up.

As we mentioned in Chapter 1, search engines are answer machines. They exist to discover, understand, and organize the internet’s content in order to offer the most relevant results to the questions searchers are asking.

In order to show up in search results, your content needs to first be visible to search engines. It’s arguably the most important piece of the SEO puzzle: If your site can’t be found, there’s no way you’ll ever show up in the SERPs (Search Engine Results Pages).

How do search engines work?

Search engines have three primary functions:

  1. Crawl: Scour the Internet for content, looking over the code/content for each URL they find.
  2. Index: Store and organize the content found during the crawling process. Once a page is in the index, it’s in the running to be displayed as a result to relevant queries.
  3. Rank: Provide the pieces of content that will best answer a searcher’s query. Order the search results by the most helpful to a particular query.

What is search engine crawling?

Crawling is the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content. Content can vary — it could be a webpage, an image, a video, a PDF, etc. — but regardless of the format, content is discovered by links.

The bot starts out by fetching a few web pages, and then follows the links on those webpages to find new URLs. By hopping along this path of links, crawlers are able to find new content and add it to their index — a massive database of discovered URLs — to later be retrieved when a searcher is seeking information that the content on that URL is a good match for.

What is a search engine index?

Search engines process and store information they find in an index, a huge database of all the content they’ve discovered and deem good enough to serve up to searchers.

Search engine ranking

When someone performs a search, search engines scour their index for highly relevant content and then order that content in the hopes of solving the searcher’s query. This ordering of search results by relevance is known as ranking. In general, you can assume that the higher a website is ranked, the more relevant the search engine believes that site is to the query.

It’s possible to block search engine crawlers from part or all of your site, or instruct search engines to avoid storing certain pages in their index. While there can be reasons for doing this, if you want your content found by searchers, you have to first make sure it’s accessible to crawlers and is indexable. Otherwise, it’s as good as invisible.

By the end of this chapter, you’ll have the context you need to work with the search engine, rather than against it!

Note: In SEO, not all search engines are equal

Many beginners wonder about the relative importance of particular search engines. Most people know that Google has the largest market share, but how important is it to optimize for Bing, Yahoo, and others? The truth is that despite the existence of more than 30 major web search engines, the SEO community really only pays attention to Google. Why? The short answer is that Google is where the vast majority of people search the web. If we include Google Images, Google Maps, and YouTube (a Google property), more than 90% of web searches happen on Google — that’s nearly 20 times Bing and Yahoo combined.

Crawling: Can search engines find your site?

As you’ve just learned, making sure your site gets crawled and indexed is a prerequisite for showing up in the SERPs. First things first: You can check to see how many and which pages of your website have been indexed by Google using “site:yourdomain.com”, an advanced search operator.

Head to Google and type “site:yourdomain.com” into the search bar. This will return results Google has in its index for the site specified:

[Screenshot: a “site:” search on Google, showing the “About __ results” count for the indexed pages]
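
For a purely hypothetical example, either of these queries works in Google’s search bar:

  site:moz.com
  site:moz.com/blog

The first returns indexed pages from the whole domain; the second narrows the check to a single section of the site.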

The number of results Google displays (see “About __ results” above) isn’t exact, but it does give you a solid idea of which pages are indexed on your site and how they are currently showing up in search results.

For more accurate results, monitor and use the Index Coverage report in Google Search Console. You can sign up for a free Google Search Console account if you don’t currently have one. With this tool, you can submit sitemaps for your site and monitor how many submitted pages have actually been added to Google’s index, among other things.

If you’re not showing up anywhere in the search results, there are a few possible reasons why:

  • Your site is brand new and hasn’t been crawled yet.
  • Your site isn’t linked to from any external websites.
  • Your site’s navigation makes it hard for a robot to crawl it effectively.
  • Your site contains some basic code called crawler directives that is blocking search engines.
  • Your site has been penalized by Google for spammy tactics.

If your site doesn’t have any other sites linking to it, you still might be able to get it indexed by submitting your XML sitemap in Google Search Console or manually submitting individual URLs to Google. There’s no guarantee they’ll include a submitted URL in their index, but it’s worth a try!

Can search engines see your whole site?

Sometimes a search engine will be able to find parts of your site by crawling, but other pages or sections might be obscured for one reason or another. It’s important to make sure that search engines are able to discover all the content you want indexed, and not just your homepage.

Ask yourself this: Can the bot crawl through your website, and not just to it?

Is your content hidden behind login forms?

If you require users to log in, fill out forms, or answer surveys before accessing certain content, search engines won’t see those protected pages. A crawler is definitely not going to log in.

Are you relying on search forms?

Robots cannot use search forms. Some individuals believe that if they place a search box on their site, search engines will be able to find everything that their visitors search for.

Is text hidden within non-text content?

Non-text media forms (images, video, GIFs, etc.) should not be used to display text that you wish to be indexed. While search engines are getting better at recognizing images, there’s no guarantee they will be able to read and understand the text within them just yet. It’s always best to add text within the HTML markup of your webpage.
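
For example, rather than baking a promotional message into a banner image, keep the message in the markup and describe the image with an alt attribute (the filename and copy below are made up for illustration):

  <img src="/images/summer-sale-banner.jpg" alt="Summer sale banner">
  <p>Summer sale: 20% off all running shoes through August 31.</p>

Crawlers can read both the alt text and the paragraph, whereas text drawn inside the image itself may never be understood.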

Can search engines follow your site navigation?

Just as a crawler needs to discover your site via links from other sites, it needs a path of links on your own site to guide it from page to page. If you’ve got a page you want search engines to find but it isn’t linked to from any other pages, it’s as good as invisible. Many sites make the critical mistake of structuring their navigation in ways that are inaccessible to search engines, hindering their ability to get listed in search results.

Common navigation mistakes that can keep crawlers from seeing all of your site:

  • Having a mobile navigation that shows different results than your desktop navigation
  • Any type of navigation where the menu items are not in the HTML, such as JavaScript-enabled navigations. Google has gotten much better at crawling and understanding JavaScript, but it’s still not a perfect process. The more surefire way to ensure something gets found, understood, and indexed by Google is by putting it in the HTML (see the sketch after this list).
  • Personalization, or showing unique navigation to a specific type of visitor versus others, could appear to be cloaking to a search engine crawler
  • Forgetting to link to a primary page on your website through your navigation — remember, links are the paths crawlers follow to new pages!

This is why it’s essential that your website has a clear navigation and helpful URL folder structures.
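
As a rough sketch (the page names are hypothetical), a crawler-friendly navigation is simply a set of plain HTML links:

  <nav>
    <a href="/products/">Products</a>
    <a href="/pricing/">Pricing</a>
    <a href="/blog/">Blog</a>
    <a href="/contact/">Contact</a>
  </nav>

However the menu is styled or animated on top of this, the links themselves live in the HTML, so crawlers can follow them.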

Information architecture

Information architecture is the practice of organizing and labeling content on a website to improve efficiency and findability for users. The best information architecture is intuitive, meaning that users shouldn’t have to think very hard to flow through your website or to find something.

Your site should also have a useful 404 (page not found) page for when a visitor clicks on a dead link or mistypes a URL. The best 404 pages allow users to click back into your site so they don’t bounce off just because they tried to access a nonexistent link.
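
How you serve a custom 404 page depends on your platform; on an Apache server, for instance, a single line in your .htaccess file can route missing URLs to a friendly template (the path below is hypothetical):

  ErrorDocument 404 /not-found.html

Most CMSs and hosting providers offer an equivalent setting.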

Tell search engines how to crawl your site

In addition to making sure crawlers can reach your most important pages, it’s also pertinent to note that you’ll have pages on your site you don’t want them to find. These might include things like old URLs that have thin content, duplicate URLs (such as sort-and-filter parameters for e-commerce), special promo code pages, staging or test pages, and so on.

Blocking pages from search engines can also help crawlers prioritize your most important pages and maximize your crawl budget (the average number of pages a search engine bot will crawl on your site).

Crawler directives allow you to control what you want Googlebot to crawl and index using a robots.txt file, meta tag, sitemap.xml file, or Google Search Console.

Robots.txt

Robots.txt files are located in the root directory of websites (ex. yourdomain.com/robots.txt) and suggest which parts of your site search engines should and shouldn’t crawl via specific robots.txt directives. This is a great solution when trying to block search engines from non-private pages on your site.

You wouldn’t want to block private/sensitive pages from being crawled here because the file is easily accessible by users and bots.
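
As an illustration only (the disallowed paths below are hypothetical), a simple robots.txt might look like this:

  User-agent: *
  Disallow: /staging/
  Disallow: /promo-codes/

  Sitemap: https://yourdomain.com/sitemap.xml

The User-agent line names which crawler the rules apply to (* means all of them), each Disallow line suggests a path crawlers should skip, and the optional Sitemap line points crawlers to your sitemap file.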

Pro tip:

  • If Googlebot can’t find a robots.txt file for a site (40X HTTP status code), it proceeds to crawl the site.
  • If Googlebot finds a robots.txt file for a site (20X HTTP status code), it will usually abide by the suggestions and proceed to crawl the site.
  • If Googlebot finds neither a 20X nor a 40X HTTP status code (ex. a 501 server error), it can’t determine whether you have a robots.txt file and won’t crawl your site.

Meta directives

The two types of meta directives are the meta robots tag (more commonly used) and the x-robots-tag. Each provides crawlers with stronger instructions on how to crawl and index a URL’s content.

The x-robots tag provides more flexibility and functionality if you want to block search engines at scale because you can use regular expressions, block non-HTML files, and apply sitewide noindex tags.
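
To make that concrete, a meta robots tag sits in a page’s <head>, while the x-robots-tag is sent as an HTTP response header (useful for non-HTML files like PDFs); both snippets below are illustrative:

  <meta name="robots" content="noindex, nofollow">

  X-Robots-Tag: noindex

The header version is typically added through your server or CMS configuration rather than in the page itself.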

These are the best options for blocking more sensitive*/private URLs from search engines.

*For very sensitive URLs, it is best practice to remove them from the site entirely or to require a secure login to view the pages.

WordPress Tip: In Dashboard > Settings > Reading, make sure the “Search Engine Visibility” box is not checked. When checked, it tells search engines to stay away from your entire site via a blanket robots directive!

Avoid these common pitfalls, and you’ll have clean, crawlable content that will allow bots easy access to your pages.

Sitemaps

A sitemap is just what it sounds like: a list of URLs on your site that crawlers can use to discover and index your content. One of the easiest ways to ensure Google is finding your highest priority pages is to create a file that meets Google’s standards and submit it through Google Search Console. While submitting a sitemap doesn’t replace the need for good site navigation, it can certainly help crawlers follow a path to all of your important pages.
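
If you’re curious what the file itself looks like, here’s a minimal XML sitemap with a single made-up URL:

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://yourdomain.com/important-page/</loc>
      <lastmod>2018-06-01</lastmod>
    </url>
  </urlset>

In practice, most CMS platforms and SEO plugins will generate and update this file for you automatically.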

Google Search Console

Some sites (most common with e-commerce) make the same content available on multiple different URLs by appending certain parameters to URLs. If you’ve ever shopped online, you’ve likely narrowed down your search via filters. For example, you may search for “shoes” on Amazon, and then refine your search by size, color, and style. Each time you refine, the URL changes slightly. How does Google know which version of the URL to serve to searchers? Google does a pretty good job at figuring out the representative URL on its own, but you can use the URL Parameters feature in Google Search Console to tell Google exactly how you want them to treat your pages.
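
For example, a single product listing might be reachable at several parameterized addresses like these (hypothetical URLs):

  https://www.example.com/shoes
  https://www.example.com/shoes?color=blue&size=9
  https://www.example.com/shoes?color=blue&size=9&sort=price-asc

All three serve essentially the same content, which is exactly the situation the URL Parameters feature helps Google sort out.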

Indexing: How do search engines understand and remember your site?

Once you’ve ensured your site has been crawled, the next order of business is to make sure it can be indexed. That’s right — just because your site can be discovered and crawled by a search engine doesn’t necessarily mean that it will be stored in their index. In the previous section on crawling, we discussed how search engines discover your web pages. The index is where your discovered pages are stored. After a crawler finds a page, the search engine renders it just like a browser would. In the process of doing so, the search engine analyzes that page’s contents. All of that information is stored in its index.

Read on to learn about how indexing works and how you can make sure your site makes it into this all-important database.

Can I see how a Googlebot crawler sees my pages?

Yes, the cached version of your page will reflect a snapshot of the last time Googlebot crawled it.

Google crawls and caches web pages at different frequencies. More established, well-known sites that post frequently like https://www.nytimes.com will be crawled more frequently than the much-less-famous website for Roger the Mozbot’s side hustle, http://www.rogerlovescupcakes.com (if only it were real…)

You can view what your cached version of a page looks like by clicking the drop-down arrow next to the URL in the SERP and choosing “Cached”:

You can also view the text-only version of your site to determine if your important content is being crawled and cached effectively.

Are pages ever removed from the index?

Yes, pages can be removed from the index! Some of the main reasons why a URL might be removed include:

  • The URL is returning a “not found” error (4XX) or server error (5XX) – This could be accidental (the page was moved and a 301 redirect was not set up) or intentional (the page was deleted and 404ed in order to get it removed from the index)
  • The URL had a noindex meta tag added – This tag can be added by site owners to instruct the search engine to omit the page from its index.
  • The URL has been manually penalized for violating the search engine’s Webmaster Guidelines and, as a result, was removed from the index.
  • The URL has been blocked from crawling with the addition of a password required before visitors can access the page.

If you believe that a page on your website that was previously in Google’s index is no longer showing up, you can manually submit the URL to Google by navigating to the “Submit URL” tool in Search Console.

Ranking: How do search engines rank URLs?

How do search engines ensure that when someone types a query into the search bar, they get relevant results in return? That process is known as ranking, or the ordering of search results by most relevant to least relevant to a particular query.

To determine relevance, search engines use algorithms, a process or formula by which stored information is retrieved and ordered in meaningful ways. These algorithms have gone through many changes over the years in order to improve the quality of search results. Google, for example, makes algorithm adjustments every day — some of these updates are minor quality tweaks, whereas others are core/broad algorithm updates deployed to tackle a specific issue, like Penguin and its focus on link spam. Check out our Google Algorithm Change History for a list of both confirmed and unconfirmed Google updates going back to the year 2000.

Why does the algorithm change so often? Is Google just trying to keep us on our toes? While Google doesn’t always reveal specifics as to why they do what they do, we do know that Google’s aim when making algorithm adjustments is to improve overall search quality. That’s why, in response to algorithm update questions, Google will answer with something along the lines of: “We’re making quality updates all the time.” So if your site suffered after an algorithm adjustment, compare it against Google’s Quality Guidelines or Search Quality Rater Guidelines; both are very telling in terms of what search engines want.

What do search engines want?

Search engines have always wanted the same thing: to provide useful answers to searchers’ questions in the most helpful formats. If that’s true, then why does it appear that SEO is different now than in years past?

Think about it in terms of someone learning a new language.

At first, their understanding of the language is very rudimentary — “See Spot Run.” Over time, their understanding starts to deepen, and they learn semantics — the meaning behind language and the relationship between words and phrases. Eventually, with enough practice, the student knows the language well enough to even understand nuance, and is able to provide answers to even vague or incomplete questions.

When search engines were just beginning to learn our language, it was much easier to game the system by using tricks and tactics that actually go against quality guidelines. Take keyword stuffing, for example. If you wanted to rank for a particular keyword like “funny jokes,” you might add the words “funny jokes” a bunch of times onto your page, and make it bold, in hopes of boosting your ranking for that term:

Welcome to funny jokes! We tell the funniest jokes in the world. Funny jokes are fun and crazy. Your funny joke awaits. Sit back and read funny jokes because funny jokes can make you happy and funnier. Some funny favorite funny jokes.

This tactic made for terrible user experiences, and instead of laughing at funny jokes, people were bombarded by annoying, hard-to-read text. It may have worked in the past, but this is never what search engines wanted.

The role links play in SEO

When we talk about links, we could mean two things. Backlinks or “inbound links” are links from other websites that point to your website, while internal links are links on your own site that point to your other pages (on the same site).

Links have historically played a big role in SEO. Very early on, search engines needed help figuring out which URLs were more trustworthy than others to help them determine how to rank search results. Calculating the number of links pointing to any given site helped them do this.

Backlinks work very similarly to real life WOM (Word-Of-Mouth) referrals. Let’s take a hypothetical coffee shop, Jenny’s Coffee, as an example:

  • Referrals from others = good sign of authority
    Example: Many different people have all told you that Jenny’s Coffee is the best in town
  • Referrals from yourself = biased, so not a good sign of authority
    Example: Jenny claims that Jenny’s Coffee is the best in town
  • Referrals from irrelevant or low-quality sources = not a good sign of authority and could even get you flagged for spam
    Example: Jenny paid to have people who have never visited her coffee shop tell others how good it is.
  • No referrals = unclear authority
    Example: Jenny’s Coffee might be good, but you’ve been unable to find anyone who has an opinion so you can’t be sure.

This is why PageRank was created. PageRank (part of Google’s core algorithm) is a link analysis algorithm named after one of Google’s founders, Larry Page. PageRank estimates the importance of a web page by measuring the quality and quantity of links pointing to it. The assumption is that the more relevant, important, and trustworthy a web page is, the more links it will have earned.
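
For the mathematically curious, the original PageRank paper (Brin & Page, 1998) describes a page A’s score roughly as:

  PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + ... + PR(Tn)/C(Tn) )

where T1 through Tn are the pages linking to A, C(T) is the number of outbound links on page T, and d is a damping factor (commonly set around 0.85). You don’t need the formula to do SEO, but it shows why both the quantity and quality of linking pages matter: a link from a page that is itself well linked-to, and that links out sparingly, passes along more weight.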

The more natural backlinks you have from high-authority (trusted) websites, the better your odds are to rank higher within search results.

The role content plays in SEO

There would be no point to links if they didn’t direct searchers to something. That something is content! Content is more than just words; it’s anything meant to be consumed by searchers — there’s video content, image content, and of course, text. If search engines are answer machines, content is the means by which the engines deliver those answers.

Any time someone performs a search, there are thousands of possible results, so how do search engines decide which pages the searcher is going to find valuable? A big part of determining where your page will rank for a given query is how well the content on your page matches the query’s intent. In other words, does this page match the words that were searched and help fulfill the task the searcher was trying to accomplish?

Because of this focus on user satisfaction and task accomplishment, there are no strict benchmarks for how long your content should be, how many times it should contain a keyword, or what you put in your header tags. All of those can play a role in how well a page performs in search, but the..

https://ift.tt/2uN3trW

Rewriting the Beginner’s Guide to SEO, Chapter 1: SEO 101

Posted by BritneyMuller

Back in mid-November, we kicked off a campaign to rewrite our biggest piece of content: the Beginner’s Guide to SEO. You offered up a huge amount of helpful advice and insight with regards to our outline, and today we’re here to share our draft of the first chapter.

In many ways, the Beginner’s Guide to SEO belongs to each and every member of our community; it’s important that we get this right, for your sake. So without further ado, here’s the first chapter — let’s dive in!

Chapter 1: SEO 101

What is it, and why is it important?

Welcome! We’re excited that you’re here!

If you already have a solid understanding of SEO and why it’s important, you can skip to Chapter 2 (though we’d still recommend skimming the best practices from Google and Bing at the end of this chapter; they’re useful refreshers).

For everyone else, this chapter will help build your foundational SEO knowledge and confidence as you move forward.

What is SEO?

SEO stands for “search engine optimization.” It’s the practice of increasing both the quality and quantity of website traffic, as well as exposure to your brand, through non-paid (also known as “organic”) search engine results.

Despite the acronym, SEO is as much about people as it is about search engines themselves. It’s about understanding what people are searching for online, the answers they are seeking, the words they’re using, and the type of content they wish to consume. Leveraging this data will allow you to provide high-quality content that your visitors will truly value.

Here’s an example. Frankie & Jo’s (a Seattle-based vegan, gluten-free ice cream shop) has heard about SEO and wants help improving how and how often they show up in organic search results. In order to help them, you need to first understand their potential customers:

  • What types of ice cream, desserts, snacks, etc. are people searching for?
  • Who is searching for these terms?
  • When are people searching for ice cream, snacks, desserts, etc.?
    • Are there seasonality trends throughout the year?
  • How are people searching for ice cream?
    • What words do they use?
    • What questions do they ask?
    • Are more searches performed on mobile devices?
  • Why are people seeking ice cream?
    • Are individuals looking for health conscious ice cream specifically or just looking to satisfy a sweet tooth?
  • Where are potential customers located — locally, nationally, or internationally?

And finally — here’s the kicker — how can you help provide the best content about ice cream to cultivate a community and fulfill what all those people are searching for?

Search engine basics

Search engines are answer machines. They scour billions of pieces of content and evaluate thousands of factors to determine which content is most likely to answer your query.

Search engines do all of this by discovering and cataloguing all available content on the Internet (web pages, PDFs, images, videos, etc.) via a process known as “crawling and indexing.”

What are “organic” search engine results?

Organic search results are search results that aren’t paid for (i.e. not advertising). These are the results that you can influence through effective SEO. Traditionally, these were the familiar “10 blue links.”

Today, search engine results pages — often referred to as “SERPs” — are filled with both more advertising and more dynamic organic results formats (called “SERP features”) than we’ve ever seen before. Some examples of SERP features are featured snippets (or answer boxes), People Also Ask boxes, image carousels, etc. New SERP features continue to emerge, driven largely by what people are seeking.

For example, if you search for “Denver weather,” you’ll see a weather forecast for the city of Denver directly in the SERP instead of a link to a site that might have that forecast. And, if you search for “pizza Denver,” you’ll see a “local pack” result made up of Denver pizza places. Convenient, right?

It’s important to remember that search engines make money from advertising. Their goal is to better solve searchers’ queries (within SERPs), to keep searchers coming back, and to keep them on the SERPs longer.

Some SERP features on Google are organic and can be influenced by SEO. These include featured snippets (a promoted organic result that displays an answer inside a box) and related questions (a.k.a. “People Also Ask” boxes).

It’s worth noting that there are many other search features that, even though they aren’t paid advertising, can’t typically be influenced by SEO. These features often have data acquired from proprietary data sources, such as Wikipedia, WebMD, and IMDb.

Why SEO is important

While paid advertising, social media, and other online platforms can generate traffic to websites, the majority of online traffic is driven by search engines.

Organic search results cover more digital real estate, appear more credible to savvy searchers, and receive way more clicks than paid advertisements. For example, of all US searches, only ~2.8% of people click on paid advertisements.

In a nutshell: SEO has ~20X more traffic opportunity than PPC on both mobile and desktop.

SEO is also one of the only online marketing channels that, when set up correctly, can continue to pay dividends over time. If you provide a solid piece of content that deserves to rank for the right keywords, your traffic can snowball over time, whereas advertising needs continuous funding to send traffic to your site.

Search engines are getting smarter, but they still need our help.

Optimizing your site will help deliver better information to search engines so that your content can be properly indexed and displayed within search results.

Should I hire an SEO professional, consultant, or agency?

Depending on your bandwidth, willingness to learn, and the complexity of your website(s), you could perform some basic SEO yourself. Or, you might discover that you would prefer the help of an expert. Either way is okay!

If you end up looking for expert help, it’s important to know that many agencies and consultants “provide SEO services,” but can vary widely in quality. Knowing how to choose a good SEO company can save you a lot of time and money, as the wrong SEO techniques can actually harm your site more than they will help.

White hat vs black hat SEO

“White hat SEO” refers to SEO techniques, best practices, and strategies that abide by search engine rules; their primary focus is to provide more value to people.

“Black hat SEO” refers to techniques and strategies that attempt to spam/fool search engines. While black hat SEO can work, it puts websites at tremendous risk of being penalized and/or de-indexed (removed from search results) and has ethical implications.

Penalized websites have bankrupted businesses. It’s just another reason to be very careful when choosing an SEO expert or agency.

Search engines share similar goals with the SEO industry

Search engines want to help you succeed. They’re actually quite supportive of efforts by the SEO community. Digital marketing conferences, such as Unbounce, MNsearch, SearchLove, and Moz’s own MozCon, regularly attract engineers and representatives from major search engines.

Google assists webmasters and SEOs through their Webmaster Central Help Forum and by hosting live office hour hangouts. (Bing, unfortunately, shut down their Webmaster Forums in 2014.)

While webmaster guidelines vary from search engine to search engine, the underlying principles stay the same: Don’t try to trick search engines. Instead, provide your visitors with a great online experience.

Google webmaster guidelines

Basic principles:

  • Make pages primarily for users, not search engines.
  • Don’t deceive your users.
  • Avoid tricks intended to improve search engine rankings. A good rule of thumb is whether you’d feel comfortable explaining what you’ve done to a website to a Google employee. Another useful test is to ask, “Does this help my users? Would I do this if search engines didn’t exist?”
  • Think about what makes your website unique, valuable, or engaging.

Things to avoid:

  • Automatically generated content
  • Participating in link schemes
  • Creating pages with little or no original content (i.e. copied from somewhere else)
  • Cloaking — the practice of showing search engine crawlers different content than visitors.
  • Hidden text and links
  • Doorway pages — pages created to rank well for specific searches to funnel traffic to your website.

Full Google Webmaster Guidelines version here.

Bing webmaster guidelines

Basic principles:

  • Provide clear, deep, engaging, and easy-to-find content on your site.
  • Keep page titles clear and relevant.
  • Links are regarded as a signal of popularity and Bing rewards links that have grown organically.
  • Social influence and social shares are positive signals and can have an impact on how you rank organically in the long run.
  • Page speed is important, along with a positive, useful user experience.
  • Use alt attributes to describe images, so that Bing can better understand the content.

Things to avoid:

  • Thin content, pages showing mostly ads or affiliate links, or that otherwise redirect visitors away to other sites will not rank well.
  • Abusive link tactics that aim to inflate the number and nature of inbound links, such as buying links or participating in link schemes, can lead to de-indexing.
  • Dirty, parameter-laden URL structures — keep URLs clean, concise, and keyword-inclusive; dynamic parameters can dirty up your URLs and cause duplicate content issues.
  • Non-descriptive URLs — make your URLs descriptive, short, and keyword rich when possible, and avoid non-letter characters.
  • Burying links in JavaScript/Flash/Silverlight; keep content out of these as well.
  • Duplicate content
  • Keyword stuffing
  • Cloaking — the practice of showing search engine crawlers different content than visitors.

Guidelines for representing your local business on Google

These guidelines govern what you should and shouldn’t do in creating and managing your Google My Business listing(s).

Basic principles:

  • Be sure you’re eligible for inclusion in the Google My Business index; you must have a physical address, even if it’s your home address, and you must serve customers face-to-face, either at your location (like a retail store) or at theirs (like a plumber)
  • Honestly and accurately represent all aspects of your local business data, including its name, address, phone number, website address, business categories, hours of operation, and other features.

Things to avoid

  • Creation of Google My Business listings for entities that aren’t eligible
  • Misrepresentation of any of your core business information, including “stuffing” your business name with geographic or service keywords, or creating listings for fake addresses
  • Use of PO boxes or virtual offices instead of authentic street addresses
  • Abuse of the review portion of the Google My Business listing, via fake positive reviews of your business or fake negative ones of your competitors
  • Costly, novice mistakes stemming from failure to read the fine details of Google’s guidelines

Fulfilling user intent

Understanding and fulfilling user intent is critical. When a person searches for something, they have a desired outcome. Whether it’s an answer, concert tickets, or a cat photo, that desired content is their “user intent.”

If a person performs a search for “bands,” is their intent to find musical bands, wedding bands, band saws, or something else?

Your job as an SEO is to quickly provide users with the content they desire in the format in which they desire it.

Common user intent types:

Informational: Searching for information. Example: “How old is Issa Rae?”

Navigational: Searching for a specific website. Example: “HBOGO Insecure”

Transactional: Searching to buy something. Example: “where to buy ‘We got y’all’ Insecure t-shirt”

You can get a glimpse of user intent by Googling your desired keyword(s) and evaluating the current SERP. For example, if there’s a photo carousel, it’s very likely that people searching for that keyword search for photos.

Also evaluate what content your top-ranking competitors are providing that you currently aren’t. How can you provide 10X the value on your website?

Providing relevant, high-quality content on your website will help you rank higher in search results, and more importantly, it will establish credibility and trust with your online audience.

Before you do any of that, you have to first understand your website’s goals to execute a strategic SEO plan.

Know your website/client’s goals

Every website is different, so take the time to really understand a specific site’s business goals. This will not only help you determine which areas of SEO you should focus on, where to track conversions, and how to set benchmarks, but it will also help you create talking points for negotiating SEO projects with clients, bosses, etc.

What will your KPIs (Key Performance Indicators) be to measure the return on SEO investment? More simply, what is your barometer to measure the success of your organic search efforts? You’ll want to have it documented, even if it’s this simple:

For the website ________________________, my primary SEO KPI is _______________.

Here are a few common KPIs to get you started:

  • Sales
  • Downloads
  • Email signups
  • Contact form submissions
  • Phone calls

And if your business has a local component, you’ll want to define KPIs for your Google My Business listings, as well. These might include:

  • Clicks-to-call
  • Clicks-to-website
  • Clicks-for-driving-directions

Notice how “Traffic” and “Ranking” are not on the above lists? This is because, for most websites, ranking well for keywords and increasing traffic won’t matter if the new traffic doesn’t convert (to help you reach the site’s KPI goals).

You don’t want to send 1,000 people to your website a month and have only 3 people convert (to customers) — that’s a 0.3% conversion rate. You’d rather send 300 people to your site a month and have 40 people convert — roughly 13%.

This guide will help you become more data-driven in your SEO efforts. Rather than haphazardly throwing arrows all over the place (and getting lucky every once in a while), you’ll put more wood behind fewer arrows.

Grab a bow (and some coffee); let’s dive into Chapter 2 (Crawlers & Indexation).

We’re looking forward to hearing your thoughts on this draft of Chapter 1. What works? Anything you feel could be added or explained differently? Let us know your suggestions, questions, and thoughts in the comments.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

http://ift.tt/2nKms2W

Rewriting the Beginner’s Guide to SEO

Posted by BritneyMuller

Many of you reading likely cut your teeth on Moz’s Beginner’s Guide to SEO. Since it was launched, it’s easily been our top-performing piece of content:

Most months see 100k+ views (the reverse plateau in 2013 is when we changed domains).

While Moz’s Beginner’s Guide to SEO still gets well over 100k views a month, the current guide itself is fairly outdated. This big update has been on my personal to-do list since I started at Moz, and we need to get it right because — let’s get real — you all deserve a bad-ass SEO 101 resource!

However, updating the guide is no easy feat. Thankfully, I have the help of my fellow Mozzers. Our content team has been a collective voice of reason, wisdom, and organization throughout this process and has kept this train on its tracks.

Despite the effort we’ve put into this already, it felt like something was missing: your input! We’re writing this guide to be a go-to resource for all of you (and everyone who follows in your footsteps), and want to make sure that we’re including everything that today’s SEOs need to know. You all have a better sense of that than anyone else.

So, in order to deliver the best possible update, I’m seeking your help.

This is similar to the way Rand did it back in 2007. And upon re-reading your many “more examples” requests, we’ve continued to integrate more examples throughout.

The plan:

  • Over the next 6–8 weeks, I’ll be updating sections of the Beginner’s Guide and posting them, one by one, on the blog.
  • I’ll solicit feedback from you incredible people and implement top suggestions.
  • The guide will be reformatted/redesigned, and I’ll 301 all of the blog entries that will be created over the next few weeks to the final version.
  • It’s going to remain 100% free to everyone — no registration required, no premium membership necessary.

To kick things off, here’s the revised outline for the Beginner’s Guide to SEO:

Chapter 1: SEO 101
What is it, and why is it important?

  • What is SEO?
  • Why invest in SEO?
  • Do I really need SEO?
  • Should I hire an SEO professional, consultant, or agency?

Search engine basics:

  • Google Webmaster Guidelines basic principles
  • Bing Webmaster Guidelines basic principles
  • Guidelines for representing your business on Google

Fulfilling user intent
Know your SEO goals

Chapter 2: Crawlers & Indexing

First, you need to show up.


How do search engines work?

  • Crawling & indexing
  • Determining relevance
  • Links
  • Personalization

How search engines make an index

  • Googlebot
  • Indexable content
  • Crawlable link structure
  • Links
  • Alt text
  • Types of media that Google crawls
  • Local business listings

Common crawling and indexing problems

  • Online forms
  • Blocking crawlers
  • Search forms
  • Duplicate content
  • Non-text content

Tools to ensure proper crawl & indexing

  • Google Search Console
  • Moz Pro Site Crawl
  • Screaming Frog
  • Deep Crawl

How search engines order results

  • 200+ ranking factors
  • RankBrain
  • Inbound links
  • On-page content: Fulfilling a searcher’s query
  • PageRank
  • Domain Authority
  • Structured markup: Schema
  • Engagement
  • Domain, subdomain, & page-level signals
  • Content relevance
  • Searcher proximity
  • Reviews
  • Business citation spread and consistency

SERP features

  • Rich snippets
  • Paid results
  • Universal results
    • Featured snippets
    • People Also Ask boxes
  • Knowledge Graph
  • Local Pack
  • Carousels

Chapter 3: Keyword Research

Next, know what to say and how to say it.

How to judge the value of a keyword
The search demand curve

  • Fat head
  • Chunky middle
  • Long tail

Four types of searches:

  • Transactional queries
  • Informational queries
  • Navigational queries
  • Commercial investigation

Fulfilling user intent
Keyword research tools:

  • Google Keyword Planner
  • Moz Keyword Explorer
  • Google Trends
  • AnswerThePublic
  • SpyFu
  • SEMRush

Keyword difficulty
Keyword abuse
Content strategy {link to the Beginner’s Guide to Content Marketing}

Chapter 4: On-Page SEO

Next, structure your message to resonate and get it published.

Keyword usage and targeting
Keyword stuffing
Page titles:

  • Unique to each page
  • Accurate
  • Be mindful of length
  • Naturally include keywords
  • Include branding

Meta data/Head section:

  • Meta title
  • Meta description
  • Meta keywords tag
    • No longer a ranking signal
  • Meta robots

Meta descriptions:

  • Unique to each page
  • Accurate
  • Compelling
  • Naturally include keywords

Heading tags:

  • Subtitles
  • Summary
  • Accurate
  • Use in order

Call-to-action (CTA)

  • Clear CTAs on all primary pages
  • Help guide visitors through your conversion funnels

Image optimization

  • Compress file size
  • File names
  • Alt attribute
  • Image titles
  • Captioning
  • Avoid text in an image

Video optimization

  • Transcription
  • Thumbnail
  • Length
  • “~3mo to YouTube” method

Anchor text

  • Descriptive
  • Succinct
  • Helps readers

URL best practices

  • Shorter is better
  • Unique and accurate
  • Naturally include keywords
  • Go static
  • Use hyphens
  • Avoid unsafe characters

Structured data

  • Microdata
  • RDFa
  • JSON-LD
  • Schema
  • Social markup
    • Twitter Cards markup
    • Facebook Open Graph tags
    • Pinterest Rich Pins

Structured data types

  • Breadcrumbs
  • Reviews
  • Events
  • Business information
  • People
  • Mobile apps
  • Recipes
  • Media content
  • Contact data
  • Email markup

Mobile usability

  • Beyond responsive design
  • Accelerated Mobile Pages (AMP)
  • Progressive Web Apps (PWAs)
  • Google mobile-friendly test
  • Bing mobile-friendly test

Local SEO

  • Business citations
  • Entity authority
  • Local relevance

Complete NAP on primary pages
Low-value pages

Chapter 5: Technical SEO

Next, translate your site into Google’s language.

Internal linking

  • Link positioning
  • Anchor links

Common search engine protocols

  • Sitemaps
    • Mobile
    • News
    • Image
    • Video
  • XML
  • RSS
  • TXT

Robots

  • Robots.txt
    • Disallow
    • Sitemap
    • Crawl Delay
  • X-robots
  • Meta robots
    • Index/noindex
    • Follow/nofollow
  • Noimageindex
  • None
  • Noarchive
  • Nocache
  • No snippet
  • Noodp/noydir
  • Log file analysis
  • Site speed
  • HTTP/2
  • Crawl errors

Duplicate content

  • Canonicalization
  • Pagination

What is the DOM?

  • Critical rendering path
  • Help robots find the most important code first

Hreflang/Targeting multiple languages
Chrome DevTools
Technical site audit checklist

Chapter 6: Establishing Authority

Finally, turn up the volume.

Link signals

  • Global popularity
  • Local/topic-specific popularity
  • Freshness
  • Social sharing
  • Anchor text
  • Trustworthiness
    • Trust Rank
  • Number of links on a page
  • Domain Authority
  • Page Authority
  • MozRank

Competitive backlinks

  • Backlink analysis

The power of social sharing

  • Tapping into influencers
  • Expanding your reach

Types of link building

  • Natural link building
  • Manual link building
  • Self-created

Six popular link building strategies

  1. Create content that inspires sharing and natural links
  2. Ego-bait influencers
  3. Broken link building
  4. Refurbish valuable content on external platforms
  5. Get your customers/partners to link to you
  6. Local community involvement

Manipulative link building

  • Reciprocal link exchanges
  • Link schemes
  • Paid links
  • Low-quality directory links
  • Tiered link building
  • Negative SEO
  • Disavow

Reviews

  • Establishing trust
  • Asking for reviews
  • Managing reviews
  • Avoiding spam practices

Chapter 7: Measuring and Tracking SEO


Pivot based on what’s working.

KPIs

  • Conversions
  • Event goals
  • Signups
  • Engagement
  • GMB Insights:
    • Click-to-call
    • Click-for-directions
  • Beacons

Which pages have the highest exit percentage? Why?
Which referrals are sending you the most qualified traffic?
Pivot!
Search engine tools:

  • Google Search Console
  • Bing Webmaster Tools
  • GMB Insights

Appendix A: Glossary of Terms
Appendix B: List of Additional Resources
Appendix C: Contributors & Credits

What did you struggle with most when you were first learning about SEO? What would you have benefited from understanding from the get-go?

Are we missing anything? Any section you wish wouldn’t be included in the updated Beginner’s Guide? Leave your suggestions in the comments!

Thanks in advance for contributing.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

http://ift.tt/2yAr3rE