SEO & Progressive Web Apps: Looking to the Future


1. Summary: PWAs, SPAs, and service workers

Progressive web apps are, in essence, websites that offer a user experience similar to that of a native application. Features like push notifications make it easy to re-engage users, and users can add their favorite sites to their home screen without the hassle of app stores. PWAs can continue to work offline or on low-quality networks, and they enable a premium, full-screen experience on mobile devices that is much closer to the experience offered by native iOS and Android apps.

Best of all, PWAs can do this while retaining, and even enhancing, the fundamentally open and accessible nature of the web. As the name suggests, they are progressive and responsive, designed to work for every user regardless of device or browser. They are also kept up-to-date automatically and, as we’ll see, are discoverable and linkable in the same way as traditional websites. Finally, it’s not all-or-nothing: existing websites can adopt a limited subset of these technologies (starting with a simple service worker) and begin reaping the benefits immediately.

The specification is still young, and there are naturally areas that need improvement, but that doesn’t stop PWAs from being one of the most significant advances in the capabilities of the web platform in a decade. Adoption is growing rapidly, and organizations are waking up to the many real-world business goals PWAs can impact.

You can learn more about the capabilities and requirements of PWAs on Google Developers, but two of the key technologies that make PWAs possible are:

  • App Shell Architecture: Commonly achieved with a JavaScript framework like React or Angular, this is a way of building single-page applications (SPAs) that separates logic from content. Think of the app shell as the minimal HTML, CSS, and JS your application needs in order to function; a skeleton of your UI that can be cached.
  • Service Workers: A special script that your browser runs in the background, separate from your page. It essentially acts as a proxy, intercepting and handling network requests programmatically.

It’s important to note that these technologies are not mutually exclusive. The single-page application model (brought to maturity by AngularJS in 2010) certainly predates service workers and PWAs by some time, and as we’ll see, it’s also possible to build a PWA that isn’t a single-page app. For the purposes of this article, though, we’ll focus on the “typical” approach to building modern PWAs, exploring the SEO implications and opportunities for teams that choose to join the growing number of organizations making use of the two technologies mentioned above.

We’ll start with the app shell and the rendering implications of the single-page app model.

2. The App Shell Architecture


At its simplest, the app shell architecture involves caching static assets (the bare minimum of UI and functionality) and loading the actual content dynamically, using JavaScript. Most modern JavaScript SPA frameworks encourage something along these lines, and this separation of logic from content benefits both speed and usability. Interactions feel instantaneous, much like those in a native app, and data usage can be highly economical.

As I mentioned in the introduction, a heavy reliance on client-side JavaScript is a problem for SEO. Historically, many of these issues centered on the fact that while crawlers require unique URLs to discover and index content, single-page applications don’t need to change the URL to reflect the state of the site or application (hence the term “single-page”). The reliance on fragment identifiers (which aren’t sent as part of an HTTP request) to change content dynamically without reloading the page was a major headache for SEO. Legacy workarounds involved replacing the hash with a so-called hashbang (#!) and the _escaped_fragment_ parameter, a technique that has long since been deprecated and which we won’t be exploring today.

Thanks to the HTML5 history API and the pushState method, we now have a better option. The browser’s URL bar can be changed with JavaScript without reloading the page, keeping it in sync with the current state of your site or application and allowing the user to make use of the browser’s “back” button. While this method isn’t a magic bullet (your server must be configured to respond to requests for these deep URLs by loading the app in its correct initial state), it does give us the tools to solve the problem of URLs in SPAs.
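As a minimal sketch of the idea (the route helper and render callback here are illustrative, not from any particular framework):

```javascript
// Build a path for a given application route (illustrative helper).
function routeToUrl(route) {
  return '/' + String(route).replace(/^\/+/, '');
}

// Navigate within a single-page app: update the address bar via pushState
// (no page reload), then render the new state client-side.
function navigate(route, render) {
  // history is a browser-only API; guarded so the sketch stays portable.
  if (typeof history !== 'undefined' && history.pushState) {
    history.pushState({ route }, '', routeToUrl(route));
  }
  render(route);
}

// Support the browser's "back" button by re-rendering the stored state.
if (typeof window !== 'undefined') {
  window.addEventListener('popstate', (event) => {
    if (event.state) {
      // render(event.state.route);
    }
  });
}
```

Note that your server still needs to respond to a direct request for a deep URL with the app in the correct state; pushState only keeps the address bar honest.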

Rendering content

Note that when I refer to rendering here, I mean the process of constructing the HTML. We’re focusing on how the content gets to the browser, not on how pixels get drawn to the screen.

In the early days of the web, things were simpler on this front: the server would typically return all of the HTML necessary to render a page. Nowadays, however, sites built on a single-page app framework deliver only minimal HTML from the server and delegate the bulk of the work to the client (be that a human or a bot). Given the scale of the web, this demands a great deal of time and computational resource, and as Google made clear at its I/O conference in 2018, it poses a major problem for search engines:

“The rendering of JavaScript-powered websites in Google Search is deferred until Googlebot has resources available to process that content.”

On big sites, this second wave of indexing can be delayed by several days. On top of that, you’re likely to encounter a range of problems with critical information such as metadata and canonical tags simply not being indexed. I’d highly recommend watching the video of Google’s excellent talk on the subject for an overview of the challenges faced by modern search crawlers.

Google is one of the very few search engines that renders JavaScript at all. It does so using a web rendering service that, until recently, was based on Chrome 41 (released in 2015). This obviously has implications beyond single-page apps, and the wider topic of JavaScript SEO is a fascinating area right now. Rachel Costello’s recent white paper on JavaScript SEO is the most comprehensive resource I’ve found on the subject, and it includes contributions from other experts such as Bartosz Goralewicz, Alexis Sanders, Addy Osmani, and many others.

For the purposes of this post, the key takeaway is that in 2019 you cannot count on search engines to accurately crawl or render your JavaScript-dependent web application. If your site’s content is rendered on the client, it will be resource-intensive for Google to crawl, and your site is likely to underperform in search results. Whatever you’ve heard to the contrary, if organic search is a valuable channel for your site, you need to make provisions for server-side rendering.

But server-side rendering is a concept that is frequently misunderstood…

“Implement server-side rendering”

This is a common recommendation in SEO audits, and one I often hear delivered as though it were a simple, self-contained solution that can easily be implemented. At best it’s an oversimplification of an enormous technical undertaking, and at worst it’s a misunderstanding of what’s possible, necessary, or beneficial for the website in question. Server-side rendering is an outcome of many possible setups and can be achieved in many different ways; ultimately, what we care about is getting our server to return static HTML.

So what are our options? Let’s break down the concept of server-rendered content a little and explore the possibilities. These are the top-level approaches Google outlined at the aforementioned I/O conference:

  • Dynamic rendering: Here, normal browsers get the “standard” web app that requires client-side rendering, while bots (such as Googlebot and social media services) are served static snapshots. This involves adding an additional step to your server infrastructure, namely a service that fetches your web app, renders the content, and returns that static HTML to bots based on their user agent (i.e. UA sniffing). Historically this was done with software like PhantomJS (now deprecated and no longer in development); today, Puppeteer (headless Chrome) can perform a similar function. The main advantage of this approach is that it can often be bolted onto existing infrastructure.
  • Hybrid rendering: This is Google’s long-term recommendation, and the approach to take for any newer site builds. In short, everyone, humans and bots alike, is served the initial view as fully-rendered static HTML. Crawlers can continue to request URLs in this way and receive static content each time, while on normal browsers, JavaScript takes over after the initial page load. This is a great solution in theory, and it comes with several other advantages in terms of speed and usability too; more on that soon.
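To make the dynamic rendering branch in the first bullet concrete, here’s a hedged sketch of the server-side decision involved. The bot list and page sources are placeholders; a real implementation would typically sit in front of a prerendering service like Puppeteer:

```javascript
// Illustrative (and deliberately incomplete) list of crawler user agents.
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit/i;

function isBot(userAgent) {
  return BOT_PATTERN.test(userAgent || '');
}

// Decide what HTML to serve: bots get a prerendered static snapshot,
// everyone else gets the client-rendered app shell.
function handleRequest(req, res, { snapshotFor, appShell }) {
  const html = isBot(req.headers['user-agent'])
    ? snapshotFor(req.url) // e.g. static HTML produced ahead of time by Puppeteer
    : appShell;            // minimal HTML; the SPA renders the rest client-side
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(html);
}

// Wiring it up would look something like:
// require('http').createServer((req, res) =>
//   handleRequest(req, res, { snapshotFor, appShell })).listen(8080);
```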

The latter is the cleaner approach, since it doesn’t require UA sniffing, and it’s Google’s long-term recommendation. It’s also worth clarifying that hybrid rendering isn’t a single, one-size-fits-all solution; it’s an outcome of many possible approaches to making static prerendered content available server-side. Let’s look at a few of the ways such an outcome can be achieved.

Universal/isomorphic apps

This is one way of achieving a hybrid rendering setup. Isomorphic applications use JavaScript that runs on both the server and the client. This is made possible by Node.js, which, among many other things, allows developers to write code that runs on the backend as well as in the browser.

Typically, you’ll configure your framework (React, Angular Universal, whatever) to run on a Node server, prerendering some or all of the HTML before it’s sent to the client. Your server therefore needs to respond to deep URLs by rendering HTML for the appropriate page. In normal browsers, this is the point at which the client-side application seamlessly takes over: the server-rendered static HTML for the initial view is “rehydrated” (brilliant word) by the browser, turning it back into a single-page app and executing subsequent navigation events with JavaScript.

Done well, this setup can be fantastic. It offers the usability benefits of client-side rendering, the SEO advantages of server-side rendering, and a rapid first paint (even if Time to Interactive often suffers as the rehydration process kicks in). For fear of oversimplifying the task, I won’t go into much more detail here, but the key point is this: while isomorphic JavaScript and true server-side rendering can be a powerful solution, the setup is often enormously complex.
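Stripped of any real framework, the core idea looks something like this. The component and document functions are invented for illustration only (a real React setup would use ReactDOMServer.renderToString and a proper rehydration step):

```javascript
// A "component" as a plain function: the same code can run on a Node server
// and in the browser. Real frameworks offer vastly more, of course.
function productPage(props) {
  return '<h1>' + props.title + '</h1><p>' + props.description + '</p>';
}

// Server side: respond to a deep URL with fully-rendered HTML, so that
// crawlers (and first-time visitors) receive real content immediately.
function renderDocument(component, props) {
  return '<!doctype html><html><body>' +
    '<div id="app">' + component(props) + '</div>' +
    '<script src="/bundle.js"></script>' + // the client bundle rehydrates this
    '</body></html>';
}
```

In the browser, the bundle re-runs the same component against the server-generated markup, attaching event listeners instead of rebuilding the DOM: that’s the rehydration step.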

So what other options are there? If you can’t justify the time or expense of a full isomorphic setup, or if it’s simply overkill for what you’re trying to achieve, are there other ways to reap the benefits of the single-page app model (and a hybrid rendering setup) without sabotaging your SEO?


Making rendered content available server-side doesn’t mean the rendering itself has to happen on the server. All we need is for rendered HTML to be available and ready to serve to the client; the rendering process itself can happen anywhere you like. With a JAMstack approach, the rendering of your content into HTML happens as part of your build process.

I’ve written about the JAMstack approach before. By way of a quick primer: the term stands for JavaScript, APIs, and Markup, and it describes a way of building complex websites without server-side software. The process of assembling a site from front-end component parts (a task a traditional site might achieve with WordPress and PHP) is executed as part of the build process, while interactivity is handled client-side using JavaScript and APIs.

Think of it this way: everything lives in your Git repository. Your content is stored as plain-text Markdown files (editable via a headless CMS or another API-based solution), and your page templates and assembly logic are written in Go, JavaScript, Ruby, or whichever language your preferred site generator happens to use. Your site can be built into static HTML on any computer with the right set of command-line tools before it’s uploaded to any web host. The resulting set of easily-cached static files can often be securely served from a CDN for next to nothing.
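As a toy sketch of that build step (the converter below handles only headings and paragraphs; a real generator such as Jekyll, Hugo, or Gatsby does vastly more):

```javascript
// Convert a tiny subset of Markdown to HTML (illustrative only).
function markdownToHtml(markdown) {
  return markdown
    .trim()
    .split(/\n\n+/)
    .map((block) =>
      block.startsWith('# ')
        ? '<h1>' + block.slice(2) + '</h1>'
        : '<p>' + block + '</p>'
    )
    .join('\n');
}

// The "build" step: pour rendered content into a page template.
// At deploy time you'd write the result out as a static .html file.
function buildPage(template, markdown) {
  return template.replace('{{ content }}', markdownToHtml(markdown));
}
```

The crucial point is that all of this runs before the site is ever hosted; the server only ever hands out finished HTML.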

I honestly believe that static site generators, or rather the principles and technologies that underpin them, are the future. There’s every chance I’m wrong about this, but the power and flexibility of the approach should be clear to anyone who has used modern npm-based automation tools like Gulp or Webpack to author their CSS and JavaScript. I’d challenge anyone to test the deep Git integration offered by webhosting specialist Netlify in a real-world project and still believe the JAMstack approach is a passing fad.

The relevance of a JAMstack setup to our discussion of single-page apps and prerendering should be fairly obvious. If our static site generator can assemble HTML from templates written in Liquid or Handlebars, why can’t it do the same with JavaScript?

There’s a new breed of static site generator that does exactly that. Frequently powered by React or Vue.js, these programs allow developers to build websites using modern JavaScript frameworks, and they can be configured to output SEO-friendly static HTML for every page (or “route”). Each HTML document is fully rendered, ready for consumption by bots and humans alike, and serves as an entry point into a complete client-side application (i.e. a single-page app). This is a perfect execution of what Google called “hybrid rendering”, although the precise nature of the pre-rendering process sets it apart from an isomorphic setup.

A great example is GatsbyJS, which is built in React and GraphQL. I won’t go into too much detail, but I’d encourage everyone who has read this far to check out their homepage and excellent documentation. It’s a well-supported tool with a reasonable learning curve, a vibrant community (a feature-packed v2.0 launched in September), an extensible plugin-based architecture, and rich integrations with many CMSs, and it allows developers to use modern frameworks like React without harming their SEO. There’s also Gridsome, which is based on Vue.js, and React Static, which, as you might have guessed, uses React.

Enterprise adoption of these platforms looks set to grow: GatsbyJS was used by Nike for their “Just Do It” campaign and by Airbnb for their engineering website, and Braun have even used it to power a major online store. Lastly, our friends at SEOmonitor used GatsbyJS to power their new website.

That’s enough about single-page apps and JavaScript rendering for now. It’s time we explored the second of the two fundamental technologies underpinning PWAs. Promise you’ll stay with me to the end (haha, nerdy joke), because it’s time to explore Service Workers.

3. Service Workers

First, let’s make one thing clear: the two technologies we’re looking at, service workers and SPAs, are not mutually exclusive. Together they underpin what’s commonly referred to as a Progressive Web App, but it’s possible to have a PWA that isn’t an SPA. You can also integrate a service worker into a totally static website (i.e. one without any client-side rendered content), which is something I think we’ll see happening a lot more in the coming years. Service workers also operate in tandem with other technologies, such as the Web App Manifest, which my colleague Maria recently explored in more depth in her comprehensive article on PWAs and SEO.

Ultimately, though, it’s service workers that provide the most exciting capabilities of PWAs. They’re one of the most significant changes to the web platform in its history, and everyone whose job involves building, maintaining, or auditing a website needs to be aware of this powerful new set of technologies. If, like me, you’ve been glued to Jake Archibald’s Is Service Worker Ready? page for the last couple of years and watched as browser vendor adoption has grown, you’ll know that the time to start building with service workers is now.

We’ll look at what they are, what they can do, how to implement them, and what the implications are for SEO.

What can service workers do?

A service worker is a special kind of JavaScript file that runs outside of the main browser thread. It sits between the browser and the network, and its powers include:

  • Intercepting network requests and deciding what to do with them programmatically. The worker might go to the network as normal, or it might rely solely on the cache. It could even fabricate an entirely new response from a variety of sources. That includes constructing HTML.
  • Preloading files during service worker installation. For SPAs this commonly includes the “app shell” we discussed earlier, while simple static sites might opt to preload all of their HTML, CSS, and JavaScript, ensuring basic functionality is maintained while offline.
  • Handling push notifications, similar to a native app. Websites can ask users for permission to send notifications, then rely on the service worker to receive those messages and act on them even when the browser is closed.
  • Performing background sync, deferring network operations until connectivity has been restored. This could be an “outbox” for a webmail service or a photo upload facility. No more “request failed, please try again later”; the service worker will handle it for you at an appropriate time.
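For a flavor of the first two capabilities, here’s a hedged sketch of a worker’s install step precaching an app shell. The cache name and file list are placeholders, and the code only runs in a real service worker context:

```javascript
// Runs inside a service worker; guarded so the sketch stays portable.
const CACHE_NAME = 'app-shell-v1'; // placeholder; bump it to invalidate old caches
const PRECACHE_URLS = [
  '/',             // the app shell itself
  '/styles.css',
  '/app.js',
  '/offline.html'  // a fallback page for failed navigations
];

if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    // waitUntil keeps the worker in the "installing" state until precaching finishes.
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
    );
  });
}
```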

The benefits of features like these go beyond the obvious usability perks. As well as driving the adoption of HTTPS across the web (all the major browsers will only register service workers on a secure origin), service workers are transformative from a performance perspective. They underpin new approaches and ideas like Google’s PRPL Pattern, since we can maximize caching efficiency and minimize reliance on the network. In this way, service workers will play a key role in making the web fast and accessible for the next billion internet users.

In short, they’re an absolute powerhouse.

Implementing a service worker

Rather than doing a poor job of writing a basic tutorial here, I’ll instead link to some key resources. After all, you’re in the best position to judge how deep your understanding of service workers needs to be.

The MDN Docs are a great resource for learning more about service workers and their capabilities. If you’re comfortable with the fundamentals of web development and enjoy a learn-by-doing approach, I’d highly recommend completing Google’s PWA training course. It includes an entire practical exercise on service workers, which is a fantastic way to get familiar with the basics. If ES6 and promises aren’t yet part of your JavaScript repertoire, prepare for a baptism of fire.

The key thing to understand, and something you’ll discover quickly once you start experimenting, is that service workers hand over an incredible amount of control to developers. Unlike earlier attempts to solve the connectivity problem (such as the ill-fated AppCache), service workers don’t impose any particular patterns on how you work; they’re a set of tools with which to write your own solutions to the problems you’re facing.

One consequence of this is that they can be very complex. Registering and installing a service worker is not a simple exercise, and any attempt to cobble one together by copy-pasting from StackExchange is doomed to failure (seriously, don’t do this). There’s no such thing as a ready-made service worker for your website; to author a suitable worker, you need to understand the infrastructure, architecture, and usage patterns of your site. Uncle Ben, ever the web development guru, said it best: with great power comes great responsibility.
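The page-side registration step, at least, is short; the real work lives in the worker script itself. A sketch (the /sw.js path is illustrative, and the helper takes navigator as a parameter purely so it degrades cleanly):

```javascript
// Register a service worker if the browser supports it; resolve to null otherwise.
function registerServiceWorker(nav, scriptUrl) {
  if (nav && 'serviceWorker' in nav) {
    return nav.serviceWorker.register(scriptUrl); // resolves to a ServiceWorkerRegistration
  }
  return Promise.resolve(null); // unsupported: the site still works, just without SW features
}

// In a real page:
// registerServiceWorker(navigator, '/sw.js')
//   .then((reg) => reg && console.log('Registered with scope', reg.scope));
```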

One last thing: you’ll probably be surprised by how many sites you visit are already using a service worker. Head to chrome://serviceworker-internals/ in Chrome or about:debugging#workers in Firefox to see a list.

Service workers and SEO

In terms of SEO implications, the most important thing about service workers is probably their ability to hijack requests and modify or fabricate responses using the Fetch API. What you see in “View Source”, or even in the Network tab, is not necessarily a representation of what was returned from the server. It might be a cached response, or something constructed by the service worker from a variety of different sources.

No content, right? Just some inline scripts and styles and empty HTML elements: a classic client-side JavaScript app built with React. Even if you open the Network tab and refresh the page, the Preview and Response tabs will tell the same story. The actual content only appears in the Elements inspector, because the DOM is being assembled with JavaScript.

This is because the site uses hybrid rendering, and a service worker installed in your browser is handling subsequent navigation events. There’s no need for it to fetch the raw HTML for this Docs site from the web server, because the client-side application is already up and running. View Source, then, shows you what the service worker returned to the application, not what the network returned. Plus, these pages can be reloaded while you’re offline thanks to the service worker’s effective use of the cache.

You can quickly identify which responses were served by the service worker in the Network tab; note the “from ServiceWorker” line below.

On the Application tab, you can see the service worker running on the current page, along with the various caches it has created. You can disable or bypass the worker and test any of the more advanced functionality it might be using. Learning your way around these tools is a valuable exercise; I won’t go into the details here, but I’d recommend reading through Google’s Web Fundamentals tutorial on debugging service workers.

I’ve made a deliberate effort to keep code samples to a minimum in this article, but grant me this one. I’ve put together an example that illustrates how a simple service worker might use the Fetch API to handle requests, and the degree of control we’re afforded:
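Here’s a minimal sketch of such a worker (the cache and file names are illustrative, and the decision logic is factored into a plain function):

```javascript
// Try the cache first; fall back to the network; if both fail, serve a
// custom offline page.
function cacheFirst(matchCache, fetchNetwork, offlinePage) {
  return matchCache()
    .then((cached) => cached || fetchNetwork()) // 1. cache hit? 2. otherwise network
    .catch(() => offlinePage());                // 3. network down: offline page
}

if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('fetch', (event) => {
    event.respondWith(
      cacheFirst(
        () => caches.match(event.request),
        () => fetch(event.request),
        () => caches.match('/offline.html') // precached during install
      )
    );
  });
}
```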

I hope the above (hugely simplified and non-production-ready) example illustrates a key point, namely that we have extremely granular control over how resource requests are handled. In the example we’ve opted for a simple try-cache-first, fall-back-to-network, fall-back-to-custom-page pattern, but the possibilities are endless. Developers are free to dictate how requests are handled based on hostnames, directories, file types, request methods, cache freshness, and lots more. Responses, including entire pages, can be fabricated by the service worker. Jake Archibald explores some common methods and approaches in his Offline Cookbook.

Now is the time to learn about the capabilities of service workers. The skillset required of the modern technical SEO has considerable overlap with that of a web developer, and today, a deep understanding of the dev tools in all major browsers (including service worker debugging) should be considered a prerequisite.

4. Wrapping Up

SEOs need to adapt

Until recently, it’s been all too easy to get away with not understanding the consequences and opportunities posed by PWAs and service workers.

These were cutting-edge features that sat on the periphery of what was relevant to SEO, and the aforementioned wariness of many SEOs towards JavaScript did nothing to encourage experimentation. But PWAs are rapidly on their way to becoming the norm, and it will soon be impossible to do the job effectively without understanding the mechanics of how they work. To stay relevant as a technical SEO (or SEO Engineer, to borrow another term from Mike King), you should put yourself at the forefront of these kinds of paradigm-shifting developments. The technical SEO who is illiterate in web development is already an anachronism, and I believe that further divergence between the technical and content-driven aspects of search marketing is no bad thing. Specialize!

Upon learning that a development team is adopting a new JavaScript framework for a new site build, it’s not uncommon for SEOs to react with a degree of cynicism. I’m certainly guilty of joking about developers being attracted to the latest shiny technology or framework, and at how rapidly the world of JavaScript development seems to evolve, layer upon layer of abstraction and automation being added to what, from afar, can appear to be a leaning tower of a technology stack. But it’s worth taking the time to understand why frameworks are chosen, when technologies are likely to start being used in production, and how these decisions will impact SEO.

Rather than criticizing the 404 handling or internal linking of a single-page app framework, for example, it would be far better to offer meaningful recommendations that are rooted in an understanding of how these frameworks actually work. As Jono Alderson observed in his talk on the democratization of SEO, contributions to open source projects are more valuable in raising awareness and understanding of SEO than solving the same problems over and over again.

Beyond SEO

One last point I’d like to make: PWAs are such a transformative set of technologies that they’re bound to have implications reaching far beyond just SEO. Other areas of digital marketing are directly impacted too, and from my standpoint, one of the most interesting is analytics.

If your website is partially or fully functional while offline, have you adapted your analytics setup to account for this? If push notification subscriptions are a KPI for your website, are you tracking them as a goal? Remember that service workers do not have access to the Window object, so tracking these events is not possible with a standard tracking snippet. Instead, you need to configure your service worker to build hits using the Measurement Protocol, queue them if necessary, and send them directly to the Google Analytics servers.
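A sketch of what the hit-building step might look like. The tracking ID, client ID, and event fields below are placeholders; the /collect endpoint and the v, tid, cid, and t parameters are part of the Universal Analytics Measurement Protocol:

```javascript
// Build a Measurement Protocol hit URL from a parameter map.
function buildHitUrl(params) {
  const query = Object.keys(params)
    .map((key) => encodeURIComponent(key) + '=' + encodeURIComponent(params[key]))
    .join('&');
  return 'https://www.google-analytics.com/collect?' + query;
}

const hit = buildHitUrl({
  v: 1,                       // protocol version
  tid: 'UA-XXXXXXX-1',        // placeholder tracking ID
  cid: 'anonymous-client-id', // placeholder client ID
  t: 'event',                 // hit type
  ec: 'push-notification',    // event category (illustrative)
  ea: 'click'                 // event action (illustrative)
});

// Inside the service worker you'd then send it, queueing on failure:
// fetch(hit, { method: 'POST' }).catch(() => queueForLater(hit));
```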
