Avoiding SEO issues in JavaScript

In many organizations, site design falls entirely to developers — that’s, frankly, what a lot of us marketers hope for. Leaving back-end tech to the dev department allows you more time to focus on the concept, language, and execution of your campaigns. But an amazing user experience does not a findable website make. At Found Conference, Danielle Rohe, Senior Marketing Strategist at UpBuild, discussed SEO issues with JavaScript — and how you can work with your development team to keep your website findable.

What is JavaScript again?

JavaScript is the language in which most interactive websites are built. Moz explains it this way: “every time a web page does more than just sit there and display static information for you to look at — displaying timely content updates, interactive maps, animated 2D/3D graphics, scrolling video jukeboxes, etc. — you can bet that JavaScript is probably involved.” This language is an integral part of most websites. 

JavaScript allows you to create animated content (like image carousels), content that updates dynamically (like a re-populating timeline), and all kinds of cool interactive stuff. JavaScript usually layers over HTML (which forms the basic structure of your page) and CSS (which applies appearance customization) as the layer that allows websites to “do something.”
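As a quick, hypothetical sketch of that layering, here’s the JavaScript side of a bare-bones image carousel (the element IDs and image paths are made up): HTML supplies the structure, CSS the appearance, and this script the behavior.

```javascript
// Hypothetical sketch of the JS "behavior" layer for an image carousel.
// Assumes the HTML contains <img id="carousel"> and <button id="next-slide">,
// and CSS handles how they look.
const slides = ['/img/one.jpg', '/img/two.jpg', '/img/three.jpg'];
let current = 0;

document.querySelector('#next-slide').addEventListener('click', () => {
  current = (current + 1) % slides.length;  // advance, wrapping at the end
  document.querySelector('#carousel').src = slides[current];
});
```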

How JavaScript can mess with SEO

Not all search engine crawlers are able to process JavaScript documents successfully or immediately. To understand how JavaScript affects SEO, you need to think about the client-side / server-side dynamic.

  • “Server-side” refers to the systems that run on web servers — the stuff that users never see. Server-side code is run before content is downloaded and displayed on the user’s web browser. 
  • “Client-side” refers to the software that runs on a user’s web browser. The page’s client-side code is downloaded, then run and displayed by the browser. Typically, JavaScript is applied to the client-side render of documents. Here, the document is shown through an interface called the document object model (DOM). The DOM presents the document as nodes and objects, which JavaScript can manipulate (see the sketch after this list).
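Here’s a small, hypothetical sketch of that client-side manipulation (the element ID and text are made up): the browser builds the DOM from the server’s HTML, then JavaScript creates and attaches a node that was never part of the server’s response.

```javascript
// Hypothetical sketch: client-side JavaScript manipulating the DOM.
// Assumes the server's HTML included an empty <div id="timeline"></div>.
const item = document.createElement('p');          // create a new node
item.textContent = 'New arrivals just dropped!';   // fill it with content
document.querySelector('#timeline').append(item);  // attach it to the DOM tree

// That text never appeared in the server's response; it exists only in the
// client-side render, after the browser runs this script.
```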

Historically, crawlers have looked at the server-side render of pages: if your site used JavaScript, bots simply wouldn’t see it. As client-side rendering and DOM manipulation became more common, Google in particular began reading JavaScript as part of its crawls.

However, the system isn’t perfect, and may not read HTML content nested inside of certain JavaScript tags. You need to make sure that you’re not hiding important search terms inside JavaScript containers.
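One quick way to see the gap for yourself is to compare the server’s raw HTML with the rendered DOM. This hypothetical snippet (run it in your browser’s developer console on a page you care about, with your own search term swapped in) checks whether a keyword exists in each:

```javascript
// Hypothetical console check: is this keyword in the raw HTML (what every
// crawler sees) or only in the rendered DOM (what JavaScript-capable
// crawlers see)? Swap in a search term that matters to your page.
const keyword = 'handmade leather boots';

const rawHtml = await (await fetch(location.href)).text();
console.log('In raw server HTML:', rawHtml.includes(keyword));
console.log('In rendered DOM:', document.body.innerHTML.includes(keyword));
```

If the keyword shows up only in the rendered DOM, a non-rendering crawler will never see it.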

[Figure: client side vs. server side. The server side of the document is shown to the browser; it goes through a framework before becoming viewable by the user.]

In non-technical terms, you may be placing information in JavaScript that isn’t read on the server side — and therefore may not be read by web crawlers.

Googlebot does crawl JavaScript, but it may not render it correctly. And other crawlers, including Facebook’s, do not render JavaScript at all. So, how do we bridge the gap for crawlers that may not be rendering JavaScript properly, or at all?

Use dynamic rendering. 

Google recommends dynamic rendering, which is basically a means of sending different versions of your code to crawlers and users. The site shown to Googlebot will have gone through a pre-render, which transforms your content into static HTML that’s easier for crawlers to consume and faster to load onto pages.

To implement dynamic rendering, Google recommends sending its bots through pre-render software. A pre-renderer is “middleware” that sits between the server and the client. When the end client sends a request to the server to fetch a page, the middleware renders the JavaScript and sends the rendered HTML to the client. However, you don’t want your users to see only content that goes through a pre-render. This process serves static HTML directly to the browser — so your viewer wouldn’t be getting an interactive JavaScript experience.

This is where dynamic rendering can help. Dynamic rendering uses typical server-to-browser rendering for human visitors, but sends a pre-rendered version of the document (static HTML) to crawlers. While this may sound like cloaking (showing different content to bots than what is shown to users), Google has approved this method. It says:

“As long as your dynamic rendering produces similar content, Googlebot won’t view dynamic rendering as cloaking…Using dynamic rendering to serve completely different content to users and crawlers can be considered cloaking. For example, a website that serves a page about cats to users and a page about dogs to crawlers can be considered cloaking.”
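To make that concrete, here’s a minimal, hypothetical sketch of the routing logic using Express. The bot list is abbreviated, and getPrerenderedHtml() is a stand-in for whatever fetches HTML from your pre-render service, not a real, ready-made API.

```javascript
// Hypothetical dynamic-rendering sketch in Express: crawlers get static,
// pre-rendered HTML; human visitors get the normal JavaScript app.
const express = require('express');
const app = express();

const BOT_PATTERN = /googlebot|bingbot|facebookexternalhit|twitterbot/i;

// Stand-in for fetching HTML from a pre-render service; a real
// implementation would proxy the request to a running pre-renderer.
function getPrerenderedHtml(url) {
  return `<html><body><h1>Pre-rendered version of ${url}</h1></body></html>`;
}

app.use((req, res, next) => {
  if (BOT_PATTERN.test(req.get('user-agent') || '')) {
    res.send(getPrerenderedHtml(req.originalUrl)); // crawler path: static HTML
  } else {
    next(); // user path: fall through to the normal client-side app
  }
});

app.get('*', (req, res) => {
  // Normal path: serve the JavaScript-driven page to human visitors.
  res.send('<div id="app"></div><script src="/app.js"></script>');
});

app.listen(3000);
```

The point to preserve, per Google’s guidance above, is that both paths carry equivalent content; only where the rendering happens differs.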

Google does say that it hopes to one day resolve the crawl issues associated with JavaScript. In the meantime, dynamic rendering is the best practice here. Danielle recommends prerender.io, an open-source pre-rendering tool that is compatible with Angular, React, jQuery, and other JavaScript frameworks.
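If you want to experiment with the open-source Prerender server itself, standing it up locally is a few lines of Node. This mirrors the package’s basic documented usage; check the current README before relying on it.

```javascript
// Minimal sketch: running the open-source Prerender server locally.
// Based on the prerender npm package's documented usage; by default it
// typically listens on port 3000.
const prerender = require('prerender');

const server = prerender();
server.start();

// Your dynamic-rendering middleware can then request rendered HTML for a
// given URL (the exact request format varies by version; see the README).
```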

[Figure: how dynamic rendering works. Dynamic rendering creates two paths for rendering: one for users, one for crawlers. Images via developers.google.com.]

Make sure lazy loading isn’t hiding important content

Developers can use JavaScript to “lazy load” images and content. Lazy loading saves data and loading time by only loading off-screen content once the user scrolls to it. It improves site performance and user experience — however, you want to make sure that your implementation of lazy loading doesn’t inadvertently hide content from Google.

Make sure that your lazy loading implementation lets crawlers see all of the text and image tags, including those that aren’t in view when a user first lands on the page. And make sure that all of the content actually loads once the user scrolls it into the viewport.
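One common pattern, sketched hypothetically below, keeps every image tag (and its alt text) in the initial HTML so crawlers can discover it, and uses IntersectionObserver to swap in the real image source as it nears the viewport. The class names and attributes here are made up.

```javascript
// Hypothetical lazy-loading sketch with IntersectionObserver. The markup
// keeps each image tag in the initial HTML so crawlers can find it, e.g.:
//   <img class="lazy" data-src="/img/boots.jpg" alt="Handmade leather boots">
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src; // load the real image as it nears the viewport
    obs.unobserve(img);        // each image only needs to load once
  });
});

document.querySelectorAll('img.lazy').forEach((img) => observer.observe(img));
```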

If you’re implementing infinite scroll, use paginated loading. This method provides the user with a unique link to each section — it loads easily and is easily shareable. Use the History API to update the URL as the content populates dynamically, as in the sketch below.
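Sketched hypothetically (the endpoint, result container, and sentinel element are all made up), that might look like:

```javascript
// Hypothetical infinite scroll with paginated loading: each loaded chunk
// gets its own shareable URL via the History API. Assumes the page has a
// <div id="results"> list and a <div id="sentinel"> at the bottom.
let page = 1;

new IntersectionObserver(async (entries) => {
  if (!entries[0].isIntersecting) return;
  page += 1;
  const res = await fetch(`/api/items?page=${page}`); // hypothetical endpoint
  document.querySelector('#results')
    .insertAdjacentHTML('beforeend', await res.text());
  // Give this section a unique, linkable URL without a full page load.
  history.pushState({ page }, '', `?page=${page}`);
}).observe(document.querySelector('#sentinel'));
```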

Final words: get testing, and get cozy with your development team

Do you need JavaScript? Probably. Your site simply needs to avoid potential pitfalls with hidden content. As with so many common SEO errors, you can catch JavaScript issues early by using the URL Inspection Tool in Google Search Console. You can check for errors, view the HTML rendered for Googlebot, and make sure that your pages aren’t missing important tags and headers.

And if you, like many marketers, aren’t directly in charge of coding your website, you need to get to know your development team. If they have tech standups, attend and ask questions! You don’t need to become a JavaScript wizard to make sure that SEO and development play well together.