How to Become an SEO Expert by Thinking Like a Search Engine

How well do you know SEO?

You know that it stands for “Search Engine Optimization”.

And you know that SEO is one of the most important areas of digital marketing.

The ability to optimize a website for search engines and rank high for target keywords is regarded as one of the top skills a digital marketer can have.

SEO experts are among the highest-paid consultants and employees in the world.

You want to become an SEO expert?

How are you going to become an SEO expert?

You cannot just start reading about the latest tactics that promise to help you rank better in the search engines.

In this post, I want to give you a completely different approach to learning SEO.

A thought experiment.

Because…

Everyone who learns about SEO does it the wrong way.

And that’s one of the reasons why there are so few SEO experts in the country.

Most people who learn SEO learn about the tricks and tactics.

They learn that one needs to build backlinks, one needs to do on-page SEO and so on.

They don’t think deeply about why they are doing that.

Doing all the tactics would give some results in the short term.

Sometimes it would work and sometimes it wouldn’t.

The true SEO experts in the world are the people who think like search engines.

If you want to become good at SEO, then you need to think like a search engine.

What would you do if you were designing the algorithm for a search engine?

How would you rank the websites?

If you start thinking like that, you will be on your way to becoming an SEO expert.

That’s what we are going to do today.

We will learn how to think like a search engine.

We will build our own search engine in our minds.

That’s why it is called a thought experiment 🙂

So…

Let’s say you want to create a directory of all the websites in your city and you want to rank them for different keywords.

And let’s assume that there are 100 websites in your city.

How would you start?

First, your job will be to list all the websites in your city that are publicly accessible (the surface web).

Sometimes, there will be websites you will not know about.

And you cannot search for them anywhere, right?

(Since you are building your own search engine as a thought experiment, let’s assume there are no search engines in the world while you are doing this work.)

How would you discover new websites to add them to your list?

You would look for links from one site to another. And you would follow those links.

And that way, you will land on new websites.

And you will DISCOVER them.

This will give you the list of websites.

But this list of website URLs alone is not enough.

If someone is searching for websites in your search engine, they would want to know more than the website URL.

So…

You can write down the titles and descriptions of the websites, along with your own understanding of what each website is about.

Let’s say there are 10 websites about restaurants.

10 websites about educational institutions.

10 government websites and so on…

Now if someone searches for “restaurants in [your city]”, you can list all the restaurant websites.

A long time ago, most search engines did only this. They simply listed what they had discovered.

Now, if you want to build a search engine for the whole world, you cannot visit all the websites manually.

There are billions of websites on the web.

The founders of Google knew that it was impossible to visit all these websites manually.

So they built the crawler.

The crawler is a robot that visits all the websites in the world.

It discovers new websites through links.

As one website links to another, the crawler will follow and see how deep the rabbit hole goes.

If I search for “cameras”, Google will show me websites that are about cameras.

So this is website DISCOVERY on a large scale.
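If you like to think in code, here is a minimal sketch of that discovery process. Everything in it is made up for illustration: a real crawler fetches pages over HTTP and extracts the links from the HTML, but the core idea is just following links until nothing new turns up.

```python
from collections import deque

# A made-up link graph for our imaginary city: each site links to a few others.
# A real crawler would fetch each page and extract its links instead.
links = {
    "citynews.example":    ["bestpizza.example", "cityschool.example"],
    "bestpizza.example":   ["citynews.example"],
    "cityschool.example":  ["citycouncil.example"],
    "citycouncil.example": [],
}

def discover(start_url):
    """Breadth-first discovery: follow links until there is nothing new."""
    seen = {start_url}
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        for linked_url in links.get(url, []):
            if linked_url not in seen:  # a brand new website, DISCOVERED
                seen.add(linked_url)
                queue.append(linked_url)
    return seen

print(sorted(discover("citynews.example")))
# ['bestpizza.example', 'citycouncil.example', 'citynews.example', 'cityschool.example']
```

Starting from one known site, the crawl reaches all four websites, including citycouncil.example, which nobody told us about. We found it only because another site linked to it.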

But is DISCOVERY enough?

No.

Let’s get back to building our search engine…

When you are building your search engine, the first problem you are solving is DISCOVERY.

You discover websites from link to link.

You have to keep discovering websites until there is nothing left to find, and list all the URLs.

So here you have solved the problem of DISCOVERY.

The next problem to solve is RELEVANCY.

Because you are not a website directory listing all the websites on your own site.

You are a search engine.

You have to give RELEVANT information based on the search queries of your users.

That’s why you need to collect more information than just the website URL during the discovery phase.

So you collect the title, description and keywords.

If you have learned SEO before, you would have heard of the terms meta title, meta description and meta keywords.

What is meta anyway?

According to Wikipedia, metadata is “a set of data that describes and gives information about other data.”

So you are collecting metadata as well… additional data about the primary data (the website URLs).
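In code, “collecting metadata” can be as simple as pulling the title and meta description out of a page’s HTML. Here is a rough sketch using Python’s standard library; the sample page is invented:

```python
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    """Collects the <title> and <meta name="description"> from a page."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# A made-up restaurant homepage.
page = """
<html><head>
  <title>Mario's Pizzeria - Wood-Fired Pizza</title>
  <meta name="description" content="Family-run pizzeria serving wood-fired pizza since 1998.">
</head><body>...</body></html>
"""

collector = MetaCollector()
collector.feed(page)
print(collector.title)        # Mario's Pizzeria - Wood-Fired Pizza
print(collector.description)  # Family-run pizzeria serving wood-fired pizza since 1998.
```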

When someone searches for “restaurants in [your city]”, you can display all the restaurant websites with their titles and descriptions.

When the data matches the search keywords, you have solved the problem of RELEVANCY.

As a search engine, you have to show relevant results to your users based on what they are searching for.

If someone is searching for “restaurants in [your city]”, you cannot show “schools in [your city]”.

You are solving that problem with the metadata you have collected.

RELEVANCY is usually determined by on-page information.

A restaurant website will have keywords related to food.

School websites will have keywords related to education.

Based on this keyword matching, you can show relevant results to your search users.
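Here is roughly what that keyword matching could look like in code. Real search engines do far more sophisticated text analysis, but a toy version is just counting how many of the query’s words appear in each website’s metadata (all the data below is invented):

```python
# Made-up index: URL -> metadata collected during the discovery phase.
index = {
    "bestpizza.example":  "Mario's Pizzeria - wood-fired pizza, pasta, Italian food",
    "cityschool.example": "City High School - education, admissions, courses",
    "citynews.example":   "City News - local news, events, weather",
}

def relevance(query, metadata):
    """Count how many of the query's words appear in the metadata."""
    query_words = set(query.lower().split())
    page_words = set(metadata.lower().replace(",", " ").split())
    return len(query_words & page_words)

def search(query):
    """Return only the websites that match the query at all (RELEVANCY)."""
    return [url for url, meta in index.items() if relevance(query, meta) > 0]

print(search("pizza restaurants"))  # ['bestpizza.example']
```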

If you become the number one search engine in your city, you can also issue guidelines to everyone who owns a website or is creating a new one.

You would tell them to use the correct keywords so that you can solve the problem of RELEVANCY for your users and send them the right customers.

A search engine like Google issues guidelines to webmasters and lets them submit their content through Search Console (formerly Webmaster Tools).

RELEVANCY is usually solved by on-page information: using the right content on the website with the right keywords.

Optimizing this on-page data and information is what we call on-page optimization.

But is this enough?

What about the QUALITY of the results?

The next big problem that you will face as a search engine owner is the problem of QUALITY.

Let’s say someone searches for “restaurants in [your city]”.

You list 10 restaurants.

What if you list the worst restaurants at the top and the best restaurants at the bottom?

People will stop using your search engine if your recommendations are not of good QUALITY.

So it is not enough for the search results to have good RELEVANCY.

You also have to take care of QUALITY – and here that means sorting the results from best to worst.

To solve the problem of QUALITY, you cannot collect data from the owners of these restaurant websites…

Because every restaurant would say that they are the best.

Every restaurant’s title would be “The best restaurant in this city!”.

We know how the world works: everyone calls themselves No. 1.

So to determine QUALITY, data has to be collected from the people who visit these restaurants.

Let’s say that apart from the 100 websites, there are a lot of bloggers and social media users in your city.

You can try to guess which is the best restaurant based on user data.

You can find out how many times a restaurant is mentioned on social media.

How many times bloggers link to the restaurant websites.

If there is a restaurant that has no links or no mentions, you can assume that this restaurant is not popular. You can rank it last.

If there is a restaurant which people keep mentioning on social media, and if bloggers are linking to it, then you can rank this restaurant on the top.

When you give high rankings to the restaurant websites that are TRULY and honestly popular among people, your search engine users will love you.

…because you are making the right recommendations.

That’s how you solve the problem of QUALITY.

And this is what we call Off-page SEO.
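In code, a toy version of that QUALITY signal could be as simple as adding up mentions and backlinks per website and sorting by the total. The numbers below are invented:

```python
# Made-up off-page signals gathered from blogs and social media in the city.
backlinks = {"marios.example": 42, "corner-diner.example": 3, "new-cafe.example": 0}
mentions  = {"marios.example": 120, "corner-diner.example": 8, "new-cafe.example": 1}

def quality(url):
    """A crude popularity score: backlinks plus social media mentions."""
    return backlinks.get(url, 0) + mentions.get(url, 0)

# Sort the relevant restaurant websites from best to worst.
restaurants = ["new-cafe.example", "marios.example", "corner-diner.example"]
ranked = sorted(restaurants, key=quality, reverse=True)
print(ranked)  # ['marios.example', 'corner-diner.example', 'new-cafe.example']
```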

A big search engine like Google has to make the right recommendations.

They do it by collecting user data.

Like mentions, backlinks, brand searches, reviews, time spent on site… and 100s of other factors.

As a general rule, off-page SEO is supposed to be done by users, not webmasters.

That’s why Google is not happy if someone goes out and builds backlinks to their own website.

Because you cannot vote for yourself.

Webmasters are only supposed to solve the problem of on-page SEO by giving correct information to the search engines and using the right keywords on the websites.

The users of a website will do the off-page SEO.

Users’ behaviour gives the search engines the insights to figure out which websites are better, and they will rank those websites higher.

That’s why I have never directly done any off-page SEO. And I still get 1000s of visitors from the search engines.

I make good quality content, distribute it using social media, and increase my brand awareness using ads.

I don’t build backlinks.

I earn backlinks.

I build brand-awareness and content.

Coming back to our thought experiment…

If your search results are being tampered with because a few restaurant website owners are faking their quality through fake social media mentions and fake blog links… how would that make you feel?

That’s why Google keeps getting smarter every day and makes sure that people do not fake their quality.

Faking the quality signals of your website is what is called black-hat SEO.

That doesn’t work in the long term.

The only SEO that works is to make good quality content and distribute it.

Make good quality products and services and make customers happy.

It is the responsibility of the search engine to make sure that they discover that you are the best.

And in my experience, they always do that job well.

Google is getting smarter every week, so you can try to game the search engines all you want; in the next update, all your rankings will be gone.

That’s why I made a decision to build a long term brand.

Create good content.

Create good products.

Earn good feedback.

I get 1000+ visitors from the search engines every day to my blog for free.

And I’ve earned it over the past 5 years by consistently publishing good content.

So to recap:

Search engine crawlers discover websites – make sure your site is up and linked to from other sites, so that it can be discovered.

Search engine crawlers read your metadata and scan your content for keywords – make sure that your site is about one specific topic, has the right titles and descriptions, and has the right content for your users. This helps the search engines rank RELEVANT websites in their results.

Search engines look at social media mentions, backlinks, user behaviour such as time spent, bounce rate, pages/session and 100s of other factors to determine the quality of your website. This helps them sort the websites in their results according to QUALITY.

On-page SEO is for RELEVANCY.

Off-page SEO is for QUALITY.

Webmasters do on-page SEO.

Users do off-page SEO.
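Putting the whole thought experiment together: a toy search engine is just these two steps chained: filter by RELEVANCY, then sort by QUALITY. A minimal sketch with made-up data:

```python
# Made-up index: URL -> (on-page metadata, off-page quality score).
index = {
    "marios.example":      ("Mario's Pizzeria - pizza pasta Italian food", 162),
    "cheap-pizza.example": ("Cheap Pizza Co - pizza deals coupons", 11),
    "school.example":      ("City High School - education admissions", 57),
}

def search(query):
    query_words = set(query.lower().split())
    matches = []
    for url, (metadata, quality_score) in index.items():
        # Step 1 - RELEVANCY (on-page): keep pages whose metadata matches.
        if query_words & set(metadata.lower().split()):
            matches.append((quality_score, url))
    # Step 2 - QUALITY (off-page): best first.
    return [url for _, url in sorted(matches, reverse=True)]

print(search("pizza"))  # ['marios.example', 'cheap-pizza.example']
```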

And this understanding is important for you to become an SEO expert.

If you fail to understand this fundamental concept… you cannot become an SEO expert, no matter how many tools, tricks and tactics you learn.

All the best!

Your SEO Trainer,
Deepak Kanakaraju