Keyword optimization is the best means by which to bring structure to the unstructured.

Occasionally, optimizing the articles we write for relevant keywords feels un-literary, and I worry that when I explain the practice to colleagues and clients, it comes across as deceptive. However, given the nature of the web as a decentralized system with no official index, keyword optimization is the online equivalent of indexing the contents of a book — an effective means of ensuring specific content is discoverable.

The Backstory of Indexing

Indexing has been a fundamental way of organizing information since the 16th century, both on the scale of individual works (such as in a book) and of collections of thousands or millions of individual works. 

This is scale-dependent, however: centuries ago, most people may have owned only one book (likely a work of scripture) or none at all. My late father grew up in the mid-20th century in a family that owned just one book, an encyclopedia. Indexing is relevant within the encyclopedia, but small personal libraries rarely require indexing beyond this.

At scale, given large personal libraries and colossal municipal and national libraries, searching without metadata risks absorbing lifetimes. As such, libraries index by author, topic, and genre, and authors themselves can use formal citations to refer to specific pieces of information.

Indexes like these remain possible given the scale, limitations, and formality of academic, journalistic, and conventional publishing; the same is not possible online.

Understanding Search Engine Indexing

It is impossible to index the web like we might a library or a book, for two primary reasons. 

Firstly, online media is simply not oriented around top-down structures like indexing. An article or web page is not submitted to a web authority, along with recommended metadata, for inclusion in a grand index that users can query. 

Secondly, the scale of online publishing would make a formal endeavor of this type impossible. People publish more than 70 million posts every day through WordPress, just one of many online publishing platforms — not to mention the breadth of media found online, including podcasts, video, images, and more. An attempt to index this content formally would require the bureaucracy of a nation and would, perhaps, be rendered undesirable for that very reason.

For these reasons, teams build search engines that index the web automatically and according to the nature of the content itself. Google and other search engines use software often referred to as “web crawlers” that navigate the internet, following links and cataloguing what they find. 
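The crawl-and-catalogue loop can be sketched in miniature. In the following toy Python example, the page graph, the starting URL, and the inverted-index structure are all illustrative assumptions, far simpler than any real search engine's design; it simply shows a crawler following links breadth-first and recording which words appear on which pages:

```python
from collections import deque

# A toy "web": each page maps to its text and its outgoing links.
# This stands in for pages a real crawler would fetch over HTTP.
WEB = {
    "/home":  ("welcome to our marketing agency", ["/blog", "/about"]),
    "/blog":  ("how does google index the web", ["/home"]),
    "/about": ("a brand publishing agency in nyc", ["/home", "/blog"]),
}

def crawl(start):
    """Follow links breadth-first, building an inverted index:
    word -> set of pages containing that word."""
    index, seen, queue = {}, {start}, deque([start])
    while queue:
        page = queue.popleft()
        text, links = WEB[page]
        for word in text.split():
            index.setdefault(word, set()).add(page)
        for link in links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

index = crawl("/home")
print(sorted(index["agency"]))  # pages that mention "agency"
```

The key point of the sketch is that no publisher submitted anything: the structure is discovered by the software, link by link.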

This method introduces a few key differences between an online search index and a conventional index:

  • Old-school materials are structured at birth, whereas most online material is inherently unstructured until search engines impose structure upon it.
  • Publishers must petition libraries to catalogue their books, whereas Google, for example, indexes web pages automatically.
  • Old-school indexes are strict, comprehensive, and rigid, whereas Google’s index is fast, loose, and incomplete.

How Does Google Index?

The technicalities of how Google indexes are trade secrets, but Google and independent experts publish guidelines, and we, as online publishers, can make educated guesses by observing how Google behaves.

Most importantly, we know that Google responds to a user’s query by surfacing content that contains keywords that match, partly match, or are semantically related to the words used in the query. This is analogous to looking in a library index for books on, say, the Library of Alexandria. We also know that Google weighs the relevance and authority of a page by the number of links pointing to it, rather like academic citations.
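That two-part model, keyword matching to find candidates and inbound links as a vote of authority, can be caricatured in a few lines of Python. Everything below (the pages, the link counts, the scoring rule) is an illustrative assumption, and the sketch ignores the partial and semantic matching Google also performs:

```python
# Toy pages: text plus a count of inbound links (the "citations").
PAGES = {
    "/library-of-alexandria": ("the library of alexandria burned", 12),
    "/modern-libraries":      ("an index of modern libraries", 3),
    "/unrelated":             ("a post about something else", 40),
}

def search(query):
    """Return pages matching at least one query word,
    ranked by words matched, with inbound links as tiebreaker."""
    words = set(query.lower().split())
    results = []
    for page, (text, inbound) in PAGES.items():
        matched = words & set(text.split())
        if matched:
            results.append((len(matched), inbound, page))
    # More matched words wins; more inbound links breaks ties.
    results.sort(reverse=True)
    return [page for _, _, page in results]

print(search("library of alexandria"))
# → ['/library-of-alexandria', '/modern-libraries']
```

Note that the heavily linked but irrelevant page never appears: without a keyword match, authority alone earns nothing, which is precisely why the words on the page matter so much.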

Remember: Google is built by people, but those people rarely intervene manually to correct specific results; Google does not really make decisions so much as run a program, and it cannot (yet) notice and correct its own mistakes. Looking at the keyword rankings for this site, for example, we rank for many relevant queries, such as marketing agency nyc (what we are) and long form content (one of the things we create). We also rank for totally irrelevant terms, like pickett industries; our CEO’s family name is Pickett, but the query has nothing to do with us.

Fundamentally, Google can’t help itself; we have to help it.

What Does This Mean?

In effect, an online article is its own index card for Google. As such, if we want our ideas to be found by the users seeking them, we should include relevant keywords in the article. Much of the time, this happens without trying: in writing this post, I’ve used relevant terms like google, index, search, keyword rankings, and how does google index.

Beyond this, it pays to speak to your audience in its own language: use tools like SEMRush to understand the kinds of queries real users type into search engines when they’re seeking the products or services you provide. For instance, we refer to the form of regular, strategic publishing that we sell as brand publishing, but we know this term is less widely used than content marketing, which we consider a less sophisticated framing of the same idea. We still use the latter term when writing because it is how most people think about the concept, and it is therefore more likely to be searched than brand publishing.

It’s critical to avoid deliberately misleading users by adding irrelevant keywords. Brands lose credibility when they act deceitfully, and weaving in irrelevant keywords does nothing to add value for the reader. Furthermore, keyword stuffing, the practice of loading content with as many instances and variations of a target keyword as possible, has not worked since Google’s Panda update in 2011; it is likely to harm your standing with Google and will certainly impede the reading experience of your audience.

Fundamentally, diligent keyword optimization is vital as a means of maintaining the discoverability of information online. Nobody is responsible for making sure that users can locate the information they need. Google’s algorithms are a smart but imperfect workaround, and, as publishers, it’s our job to think of the user in optimizing our part of a decentralized, organically structured system.

Author: Oliver Cox

Having originally joined the company as a writer in 2013, Oliver currently works as a full-time member of L&T's sales team to prospect, nurture and help close sales leads in the US and UK markets. Oliver is a graduate of the University of Liverpool and is a prolific musician and author.