Before reading this article, if you have not already, I recommend starting with Part 1 of this series: ‘The Basics of SEO’. It’s designed to provide an overview of the fundamentals of SEO to those who are brand new to it. These blogs start from the very basics of SEO and cover important SEO concepts in digestible chunks, so that you can really process and understand the ins and outs of SEO. Today, let’s break down search engines. Understanding how search engines work can help us optimise our websites for them.
What Is a Search Engine?
Search engines are online tools designed to search websites across the internet based on input from a user – a search query. The list of results returned for a search is constructed by unique algorithms and is known as a ‘search engine results page’ (SERP).
You have probably heard of the Google, Bing, and Yahoo search engines before, but did you know that there are more than 30 alternatives to Google? It’s true! However, Google holds such a large market share (over 88%) that it is generally only worthwhile focusing your optimisation efforts on Google. There may be exceptions to this if more of your target audience uses an alternative search engine, but search engines generally work in the same fundamental ways.
How Do Search Engines Work?
Let’s look at how search engines interact and understand your website – and every other website!
Crawling is the process through which search engines discover sites. They send out robots to websites to retrieve data. The robots sent out by Google, Bing, Yahoo, and others go by different names, including ‘web crawler’, ‘spider’, and ‘spiderbot’, often shortened to simply ‘crawler’. These robots attempt to access websites via URLs. Crawling is the first step in allowing search engines to understand and index your site, so that judgements can be made and each webpage can be ranked effectively.
During a crawl, bots will follow the links on your pages, which allows them to navigate through your site internally and takes them to any linked external pages, expanding the ever-growing network of connected pages and links. Following links is how crawlers find new URLs to index and rank.
Once a crawler has visited your website and gathered data, that information is stored in an enormous catalogue known as the index. This allows the search engine to quickly retrieve relevant stored data to present to users when they enter a search query.
The index stores and organises:
- Detailed data about the relevance and importance of each web page and its content
- A map/web of every URL each page links to
- Information about links including anchor text, whether they’re ads, and where they are on the page.
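To make the link data above concrete, here is a hypothetical snippet of page markup – the URL and anchor text are invented for illustration, but the `rel` attribute is a real HTML mechanism sites use to flag paid links to search engines:

```html
<!-- Hypothetical example: what a crawler records when it encounters a link -->
<!-- URL: where the link points (a new URL the crawler can follow) -->
<!-- Anchor text: "Buy blue widgets" (the visible, clickable text) -->
<!-- rel="sponsored": signals that the link is an ad or paid placement -->
<a href="https://example.com/blue-widgets" rel="sponsored">Buy blue widgets</a>
```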
The data indexed by robots includes text that users see on the page, as well as metadata. We’ll go into this in more detail in future articles as it’s an important element of SEO, but for now, you can understand metadata as the titles and descriptions of websites that appear on SERPs.
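For a rough idea of what that metadata looks like in a page’s HTML (the title and description below are invented for illustration), a sketch might be:

```html
<head>
  <!-- The title tag commonly appears as the clickable headline on a SERP -->
  <title>How Search Engines Work | Example Site</title>
  <!-- The meta description is often shown as the snippet beneath that headline -->
  <meta name="description" content="Learn how search engines crawl, index, and rank webpages.">
</head>
```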
Google says it’s aware of over 130 trillion webpages. Many more pages than this exist, but they are blocked from being crawled, indexed, or ranked for various reasons. Sometimes it can be beneficial to block specific webpages from being indexed by search engines, such as duplicate pages, admin pages, and low-value pages. There are different ways of doing this, such as using ‘noindex’ meta tags or a robots.txt file. Generally, however, you want your pages to be indexable and found by crawlers. Otherwise, your webpage won’t appear on SERPs, and users won’t be able to reach that page via organic search.
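To sketch the two blocking methods just mentioned (the paths shown are placeholders): a robots.txt file sits at the root of your domain and asks crawlers not to crawl certain paths, while a ‘noindex’ meta tag tells search engines not to include a specific page in their index.

```text
# robots.txt — served at domain.com/robots.txt
User-agent: *        # this rule applies to all crawlers
Disallow: /admin/    # ask crawlers not to crawl admin pages
```

```html
<!-- In the <head> of a page you don't want indexed, e.g. a duplicate page -->
<meta name="robots" content="noindex">
```

Note the difference: robots.txt controls crawling, not indexing – a page blocked in robots.txt can still end up indexed if other sites link to it – so the ‘noindex’ tag is the more reliable way to keep a page out of SERPs (and it only works if crawlers are allowed to fetch the page and see the tag).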
You can see your indexed pages using Google Search Console or by searching with the ‘site:’ operator: “site:domain.com”.
Remember, crawling and indexing are a continuous process, which allows the search engine’s database to remain fresh and up to date with any changes that might impact the relevance and importance of a site.
The aim of search engines is to provide the most relevant results to user queries. That’s why if you Google ‘Weather in Hobart’, you will (hopefully!) get a page of results showing the weather in Hobart, and not for the weather in Queensland.
Search engines have made our lives more convenient and given us quite significant time savings from searching around the web for relevant content. It’s easy to take that for granted and not think about what’s happening behind the scenes. There are complex algorithms and processes in place to rank web pages in a specific order, from most to least relevant/important. The higher a website is ranking for a search, the more relevant and/or important the search engine believes the content on that page is to the user’s query.
It’s the job of search engines to interpret the user’s intent, access their database of indexed URLs to retrieve webpages relevant to the query, and rank them appropriately.
Ranking is really where SEO becomes important. Individuals, businesses, and organisations want their websites to rank highly on queries that match their target audience, with different goals in mind – whether that is to sell a product, get RSVPs to an event, create awareness, or provide an online service. In the modern digital era, ranking well on SERPs is important to meeting many business goals.
How Do Search Engines Decide Which Content is Relevant and Important?
As I have mentioned, ranking is the process of ordering web pages, from first to last, based on their relevance and importance.
Relevance means how much the content on a given webpage aligns with the user’s query and their search intent.
Importance, on the other hand, relates to domain authority. When other pages link to a webpage, this is essentially a vote of confidence for that page, and search engines will generally think of the site as more important, and a good site to consider showing to users.
There are various ranking signals that can influence how search engines perceive a page’s relevance and importance, which SEO aims to understand and influence. Relevant content, fast speed, mobile-friendliness, backlinks, time spent on page, bounce rate, and more can impact how well a site ranks on SERPs.
Now you should have a good understanding of the key components of how search engines function: crawling, indexing, and ranking. Each part of the process is important in SEO, and there are various things that can be done to make it easier for Google to visit, understand, index, and favourably rank your website.
Richard is the owner and head of digital at Tailored SEO. He has worked in the digital marketing space for 10 years and has worked with a wide range of clients, including B2B and B2C businesses.