Optimizing Page Speed Performance
Tips and Best Practices for Frontend Engineers
Introduction
Page speed performance has become increasingly important in the world of website development and digital marketing. With the growth of the internet and the rise of mobile devices, users have come to expect fast-loading websites that provide a seamless browsing experience. Slow-loading pages can lead to frustration, high bounce rates, and a poor user experience. As a result, optimizing page speed performance has become a top priority for businesses and website owners.
The importance of page speed can be traced back to the early days of the internet. In the 1990s, when the World Wide Web was first introduced, websites were primarily text-based and loaded quickly even on slow dial-up connections. However, as the internet evolved, websites became more complex, featuring images, videos, and interactive content. This led to longer load times and a decrease in overall page speed performance.
Today, the average web page still takes several seconds to load fully, yet studies have shown that users expect pages to load in two seconds or less. In fact, Google's research found that 53% of mobile visits are abandoned if a page takes longer than three seconds to load. This highlights the importance of page speed performance and the impact it can have on user experience, website traffic, and business success.
Why Page Speed Performance Matters
User Experience: Fast-loading pages provide a better user experience. Visitors are more likely to stay on a website that loads quickly and is easy to navigate.
Search Engine Rankings: Page speed is a ranking factor in search engines like Google. Websites that load faster have a higher chance of ranking higher in search results.
Conversion Rates: Slow-loading pages can negatively impact conversion rates. Visitors are more likely to abandon a website and move on to a competitor if a page takes too long to load.
Mobile Users: Mobile users expect fast-loading pages. With more than half of internet traffic coming from mobile devices, it's essential to optimize your website's page speed for mobile users.
Page Speed Metrics
Core web vitals
Core Web Vitals are a set of metrics introduced by Google in May 2020 to measure and improve the user experience of a website. They are based on three main factors that Google considers important for providing a good user experience: loading, interactivity, and visual stability. The three metrics that make up the Core Web Vitals are Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).
Largest Contentful Paint (LCP)
Largest Contentful Paint (LCP) is a Core Web Vital metric used to measure the loading performance of a website. Specifically, LCP measures the time it takes for the largest piece of content on a web page to load and become visible to the user.
The largest piece of content could be an image, video, or block of text, and LCP is important because it gives us a clear idea of how long it takes for the most critical content on the page to load. If LCP is slow, it can lead to a poor user experience, high bounce rates, and ultimately lower search engine rankings.
Google recommends that LCP should occur within the first 2.5 seconds of a page starting to load, and anything longer than that is considered poor performance. To optimize LCP, we can take a number of steps, such as reducing the file size of images and videos, optimizing server response times, and leveraging browser caching.
Optimizing for LCP is important for improving the user experience of a website and ultimately driving more traffic and business success. By providing faster loading times, we can reduce bounce rates and improve search engine rankings.
Several factors can lead to a poor Largest Contentful Paint (LCP) score:
Large file sizes: Large images, videos, and other media files can significantly slow down a website's load time, which can negatively impact LCP. It's important to optimize images and other media files for the web by compressing them and reducing their file size.
Slow server response times: If a website's server is slow to respond to requests, it can cause delays in loading the largest piece of content on a page. To improve server response times, we can consider upgrading to a faster hosting provider or optimizing the website's code.
Render-blocking resources: JavaScript and CSS files that block the rendering of a web page can also slow down LCP. We can improve LCP by minimizing the use of render-blocking resources and deferring the loading of non-critical scripts and stylesheets.
Poor caching: Caching allows frequently accessed content to be stored in a user's browser, which can speed up page load times. However, if caching is not configured correctly, it can cause delays in loading the largest piece of content on a page.
Network and device performance: A user's network speed and device performance can also impact LCP. We should ensure that the website is optimized for a range of devices and network conditions, and consider implementing a content delivery network (CDN) to improve performance.
First Input Delay (FID)
First Input Delay (FID) is a Core Web Vital metric used to measure the interactivity of a website. Specifically, FID measures the time between a user's first interaction with a website (such as clicking on a button or link) and the website's response to that interaction.
FID is important because it measures the responsiveness of a website, which can greatly impact the user experience. If a website is slow to respond to user interactions, it can lead to frustration and a poor user experience, which can result in high bounce rates and lower search engine rankings.
Google recommends that FID should be 100 milliseconds or less for a good user experience. To optimize FID, we can take several steps, such as reducing the size and complexity of JavaScript code, optimizing server response times, and minimizing the use of third-party scripts.
Optimizing for FID is important for improving the user experience of a website and ultimately driving more traffic and business success. By providing a faster and more responsive website, we can reduce bounce rates and improve search engine rankings.
Several factors can lead to a poor First Input Delay (FID) score:
Large JavaScript files: JavaScript is often used to add interactivity to a website, but large and complex JavaScript files can significantly slow down a website's responsiveness. To improve FID, we can consider minimizing the size and complexity of the JavaScript code we ship.
Slow server response times: If a website's server is slow to respond to requests, it can cause delays in processing user interactions. To improve FID, we can consider upgrading to a faster hosting provider or optimizing the website's code.
Render-blocking resources: JavaScript and CSS files that block the rendering of a web page can also slow down FID. We can improve FID by minimizing the use of render-blocking resources and deferring the loading of non-critical scripts and stylesheets.
Third-party scripts: Third-party scripts, such as those used for advertising or tracking, can also impact FID. We should minimize the use of third-party scripts and only use those that are essential for the website's functionality.
Network and device performance: A user's network speed and device performance can also impact FID. We should ensure that the website is optimized for a range of devices and network conditions, and consider implementing a content delivery network (CDN) to improve performance.
Cumulative Layout Shift (CLS)
Cumulative Layout Shift (CLS) is a Core Web Vital metric used to measure the visual stability of a website. Specifically, CLS measures the amount of unexpected layout shifts that occur during the loading of a web page.
Layout shifts occur when elements on a web page move unexpectedly while the page is loading, which can cause frustration and confusion for users. CLS measures the total amount of layout shift that occurs during the loading of a web page, taking into account both the size and distance of the shifted elements.
Google recommends a CLS score of 0.1 or less for a good user experience. To optimize for CLS, we can take several steps, such as setting size attributes for images and videos, avoiding dynamically injected content above existing content, and reserving space for ads and other third-party content.
Optimizing for CLS is important for improving the user experience of a website and ultimately driving more traffic and business success. By providing a visually stable website, we can reduce bounce rates and improve search engine rankings.
Several factors can lead to a poor Cumulative Layout Shift (CLS) score:
Images and videos without dimensions: If images and videos on a web page do not have specified dimensions, it can cause unexpected shifts in the layout as the content loads. To improve CLS, we can specify dimensions for images and videos in the HTML code (see the example after this list).
Dynamically injected content: Content that is dynamically added to a web page after it has started to load can cause unexpected shifts in the layout. To improve CLS, we can avoid adding new content above existing content or reserve space for the new content to avoid layout shifts.
Ads and other third-party content: Third-party content, such as ads or social media widgets, can cause unexpected layout shifts if the space for the content is not reserved in advance. To improve CLS, we can reserve space for third-party content in the layout of the web page.
Custom fonts: Custom fonts can cause layout shifts if they are not loaded correctly or if the fallback font is significantly different in size or style. To improve CLS, we can ensure that custom fonts are loaded properly and that the fallback font is similar in size and style to the custom font.
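As a concrete example of the first point above, here is a minimal sketch (the component and file names are hypothetical) of giving an image explicit dimensions so the browser can reserve its space before the file arrives:
function ProductCard() {
  return (
    <div>
      {/* Explicit width and height let the browser reserve space, preventing a layout shift when the image loads */}
      <img
        src="/product-photo.jpg"
        alt="Product photo"
        width="400"
        height="300"
      />
    </div>
  );
}

export default ProductCard;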
Google has confirmed that Core Web Vitals are used as a ranking signal, as part of the page experience update that began rolling out in mid-2021. This means that websites that perform well on these metrics will be rewarded with higher search engine rankings, while websites that perform poorly may see a negative impact on their rankings.
Optimizing for Core Web Vitals is important for improving the user experience of a website and driving more traffic and business success. By improving loading times, interactivity, and visual stability, we can provide a better experience for users and ultimately improve search engine rankings.
Other page performance metrics
Time to first byte (TTFB): This is the time it takes for a server to respond to a request. It includes the time it takes for the server to receive, process, and send back the first byte of data.
Speed index: This is a score that measures how quickly a web page loads visually. It calculates the time it takes for the above-the-fold content to become visible to the user.
Time to interactive (TTI): This measures the time it takes for a web page to become fully interactive, meaning users can click on links and buttons and fill out forms.
How to improve page speed performance metrics
Frontend developers have many methods at their disposal for improving site speed, such as optimizing images, minimizing render-blocking resources, reducing server response time, using code splitting, and leveraging performance optimization tools.
Optimizing Images
Use the correct image format
Using the correct image format can also impact page load times. JPEG is best for photographs and images with many colors, while PNG is better for images with transparency and simple graphics. WebP, a newer image format developed by Google specifically for the web, offers better compression and smaller file sizes than both JPEG and PNG, which can significantly improve page speed performance.
WebP achieves better compression by using advanced compression techniques such as predictive coding, spatial and temporal redundancy elimination, and alpha plane compression. These techniques allow WebP to deliver high-quality images with smaller file sizes than JPEG and PNG formats.
In addition to smaller file sizes, WebP also offers other benefits such as lossless and lossy compression, transparency support, and animated image support. Because the files are smaller, images typically finish downloading and rendering sooner, so users see the final image faster.
Overall, WebP is an excellent option for optimizing images for the web and improving page speed performance. However, it's important to note that not all browsers support WebP, so it's important to provide fallback options for users whose browsers don't support it. Additionally, converting existing images to WebP format may require some additional effort, especially for websites with a large number of images.
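As a sketch of what a fallback might look like (assuming a .webp version of the image has already been generated; the file names are placeholders), the picture element lets browsers that support WebP use it while everyone else falls back to the JPEG:
function HeroImage() {
  return (
    <picture>
      {/* Browsers that support WebP pick this source */}
      <source srcSet="/hero.webp" type="image/webp" />
      {/* All other browsers fall back to the JPEG */}
      <img src="/hero.jpg" alt="Hero banner" width="1200" height="600" />
    </picture>
  );
}

export default HeroImage;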
Resize images
Resizing images to the correct dimensions can also help to reduce their file size. For example, if an image is only going to be displayed at 500 pixels wide, there's no need to upload an image that's 2000 pixels wide.
ImageKit is a cloud-based image optimization and transformation service that can be used to improve page speed performance for websites. ImageKit offers a range of features and tools for optimizing and transforming images in real-time, which can significantly reduce the size and load time of images.
One of the key features of ImageKit is its ability to transform images on-the-fly. This means that when a user requests an image from a website, ImageKit can automatically resize, crop, and compress the image to reduce its size and improve its load time. This is particularly useful for websites that have a large number of images, as it can save a significant amount of time and effort compared to manually optimizing each image.
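As a rough illustration of an on-the-fly transformation, the snippet below builds an ImageKit-style URL that requests a resized variant. The endpoint and the tr=w-600 parameter follow ImageKit's URL-based transformation syntax as I understand it; treat them as assumptions and verify against the ImageKit documentation for your account:
// Hypothetical ImageKit endpoint and image path, for illustration only
const baseUrl = "https://ik.imagekit.io/your_imagekit_id/hero.jpg";

// Ask the service for a 600px-wide version via a URL transformation parameter
const resizedUrl = `${baseUrl}?tr=w-600`;

console.log(resizedUrl); // https://ik.imagekit.io/your_imagekit_id/hero.jpg?tr=w-600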
Regardless of the method you choose, there are some best practices to keep in mind when resizing images for improved site performance (a short code sketch follows the list):
Choose the right dimensions: Before resizing an image, consider the maximum size it will be displayed on your website. This will help you choose the appropriate dimensions for the resized image.
Maintain aspect ratio: When resizing an image, be sure to maintain the original aspect ratio to avoid stretching or distorting the image.
Choose the right file format: Different file formats have different compression capabilities and can affect image quality differently. Choose the right file format for the type of image you are using, and consider using modern formats such as WebP for improved performance.
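One way to apply these practices without shipping a single oversized file is to let the browser pick from several pre-resized variants using srcset and sizes. A minimal sketch, assuming the variants have already been generated (for example by ImageKit or a build step); the file names and breakpoints are placeholders:
function ResponsivePhoto() {
  return (
    <img
      src="/photo-800.jpg"
      srcSet="/photo-400.jpg 400w, /photo-800.jpg 800w, /photo-1600.jpg 1600w"
      sizes="(max-width: 600px) 100vw, 800px"
      alt="Photo"
      width="800"
      height="600"
    />
  );
}

export default ResponsivePhoto;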
Serving images through CDNs
Using a Content Delivery Network (CDN) to serve images can significantly improve page speed performance for websites. A CDN is a network of servers that are strategically located around the world and are designed to deliver content, including images, to users more quickly and efficiently.
When a user requests an image from a website, the request is routed to the nearest server in the CDN, which then delivers the image to the user. This reduces the distance that the image needs to travel and can significantly reduce the time it takes to load the image.
CDNs also offer other benefits for serving images, such as caching and compression. When an image is requested from a CDN, the server will cache the image and store it on multiple servers around the world. This means that subsequent requests for the same image can be served from the cache, reducing the amount of time it takes to load the image.
CDNs can also compress images on-the-fly, which can further reduce the file size of the image and improve page speed performance. This is particularly useful for large images that would otherwise take a long time to load.
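In a Next.js project, one common way to wire this up is to point the build output and image optimizer at a CDN host in next.config.js. A minimal sketch (the domains are placeholders; newer Next.js versions express the images allow-list with remotePatterns instead of domains):
// next.config.js
module.exports = {
  // Serve the compiled static assets (JS, CSS, files under /_next/static) from a CDN
  assetPrefix: "https://cdn.example.com",

  images: {
    // Allow next/image to optimize and serve images hosted on this CDN domain
    domains: ["images.example-cdn.com"],
  },
};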
Lazy loading images
Lazy loading is a technique used to improve page speed performance by delaying the loading of images until they are needed. Instead of loading all the images on a page when it loads, lazy loading loads only the images that are visible to the user or that will be visible soon as the user scrolls down the page.
Lazy loading can significantly reduce the initial load time of a web page, particularly for pages with many large images. By reducing the amount of content that needs to be loaded upfront, the page can load faster and improve the user experience.
To implement lazy loading of images in Next.js, you can use the next/image component, which is a wrapper around the HTML img tag. The next/image component makes it easy to optimize images for performance, and it supports lazy loading out of the box.
Images rendered with next/image are lazy loaded by default, but you can also set the component's loading attribute to "lazy" explicitly. This tells the browser to load the image only when it is about to scroll into view.
import Image from "next/image";
function MyComponent() {
return (
<div>
<Image
src="/my-image.jpg"
alt="My Image"
width={500}
height={500}
loading="lazy"
placeholder="blur"
/>
</div>
);
}
In addition to lazy loading images, you can set priority to true for images that are above the fold when the page first renders in Next.js. This is especially useful when the image is the largest element on the page, and it can help improve the LCP (Largest Contentful Paint) metric.
To do this, use the priority attribute of the next/image component. When priority is set to true, Next.js preloads the image so it starts loading as soon as possible.
import Image from "next/image";
function MyComponent() {
return (
<div>
<Image
src="/my-image.jpg"
alt="My Image"
width={500}
height={500}
priority={true}
/>
</div>
);
}
Minimize render-blocking resources
Minimizing render-blocking resources is an essential technique for improving website performance and optimizing Core Web Vitals, which are metrics used by Google to measure user experience on the web. Render-blocking resources, such as large CSS and JavaScript files, can delay the rendering of a webpage, leading to a poor user experience, particularly on slower devices and connections. By minimizing these resources, you can reduce the time it takes for a page to load and improve its overall performance.
Lazy loading components
To minimize render-blocking resources in a React application, you can use the React.lazy() function to asynchronously load components and their dependencies only when they are needed. This reduces the initial load time of a page by loading only the required resources and postponing the loading of non-critical resources until after the page has loaded.
import React, { lazy, Suspense } from "react";

const MyComponent = lazy(() => import("./MyComponent"));

function App() {
  return (
    <div>
      <h1>My App</h1>
      <Suspense fallback={<div>Loading...</div>}>
        <MyComponent />
      </Suspense>
    </div>
  );
}

export default App;
In this example, the MyComponent component is loaded asynchronously using the React.lazy() function. This defers the loading of the component and its dependencies until they are actually needed. The Suspense component handles the loading state, displaying a fallback UI (in this case, the text "Loading...") until the component is loaded and rendered.
Using React.lazy() can significantly improve the performance of a React application by reducing the initial load time and deferring the loading of non-critical resources until after the page has loaded. This, in turn, can help optimize Core Web Vitals, particularly the Largest Contentful Paint (LCP) metric, which measures the time it takes for the largest element on the page to render. By minimizing render-blocking resources and optimizing LCP, you can improve the user experience and search engine ranking of your website.
Deferring non-essential scripts
Deferring non-essential scripts to the end of the render cycle is a useful technique for improving the Largest Contentful Paint (LCP) metric. LCP measures the time it takes for the largest element on a page to render, and by deferring non-essential scripts, you can reduce the load time of the page and improve its overall performance.
To defer non-essential scripts in a web page, you can use the defer attribute on the script element. This attribute tells the browser to download the script in parallel while parsing continues, and to execute it only after the document has been parsed. This way, the browser can focus on loading and rendering the content that is visible to the user first, which helps improve the LCP metric.
<!DOCTYPE html>
<html>
  <head>
    <title>My Page</title>
  </head>
  <body>
    <h1>Hello, world!</h1>
    <p>This is my page.</p>
    <script src="non-essential.js" defer></script>
  </body>
</html>
In a Next.js application, you can defer non-essential scripts using the next/script component. This component lets you control when a script loads and defer its execution until after the page has finished rendering.
import Script from 'next/script';

function MyComponent() {
  return (
    <div>
      <h1>Hello, world!</h1>
      <p>This is my component.</p>
      {/* lazyOnload defers fetching and executing the script until the browser is idle after the page has loaded */}
      <Script src="non-essential.js" strategy="lazyOnload" />
    </div>
  );
}

export default MyComponent;
Reduce server response time
Cache pages on the web server (NGINX, Varnish)
Varnish Cache is a popular open-source web application accelerator that can be used to improve the performance of web applications by caching web pages on the server. By caching web pages, Varnish Cache can serve web pages to users faster, since it doesn't need to generate the pages from scratch each time a user requests them.
To use Varnish Cache, you install it on your web server and configure it to cache the web pages that you want to accelerate. When a user requests a web page, Varnish Cache checks to see if it has a cached copy of the page. If it does, it serves the cached copy to the user. If it doesn't have a cached copy, it forwards the request to your web application, which generates the page and sends it back to Varnish Cache, which then caches the page and serves it to the user.
Using Varnish Cache can significantly improve the performance of web applications by reducing the time it takes to generate and serve web pages. However, it's important to configure Varnish Cache properly to ensure that it doesn't cache dynamic content that shouldn't be cached, such as user-specific content or pages that include sensitive information.
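If your pages are served by Next.js, a complementary technique is to send caching headers from the server so that a shared cache such as Varnish (or a CDN) is allowed to reuse the response. A minimal sketch using getServerSideProps; the API URL and the s-maxage / stale-while-revalidate values are just examples:
export async function getServerSideProps({ res }) {
  // Let shared caches (Varnish, CDNs) reuse this response for 60 seconds,
  // and serve a stale copy for up to 5 minutes while it revalidates in the background
  res.setHeader(
    "Cache-Control",
    "public, s-maxage=60, stale-while-revalidate=300"
  );

  const data = await fetch("https://api.example.com/data").then((r) => r.json());

  return { props: { data } };
}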
Statically render pages if possible
Static rendering is a technique where web pages are generated and served as static HTML files, without the need for server-side processing at the time of each request. This approach is particularly beneficial for websites with static content, such as blogs, as it eliminates the need for repeated server-side rendering of the same content.
Static rendering in Next.js can significantly improve the Core Web Vitals, particularly the Largest Contentful Paint (LCP) metric. When a page is statically rendered, the HTML content is pre-generated during the build process and served as a static file. This means that when a user requests the page, the server can quickly deliver the pre-generated HTML file, resulting in faster load times.
Since the LCP measures the time it takes for the largest content element on the page to load, statically rendering a page can help to ensure that the largest content element is immediately available, resulting in a faster LCP. This is particularly true for pages that contain dynamic content that can slow down the LCP, such as images and videos.
Static rendering in Next.js can also improve the First Contentful Paint (FCP) metric, which measures the time it takes for the first piece of content to appear on the screen. With static rendering, the first piece of content is already pre-generated, meaning it can be served to the user more quickly.
However, if you want to update the content of a statically generated page when the underlying data changes, you can use the revalidate option in getStaticProps (Next.js calls this Incremental Static Regeneration). The revalidate option specifies the number of seconds after which a page re-generation can occur; once that time has elapsed, the page is regenerated with the latest data on the next request.
export async function getStaticProps() {
  const res = await fetch('https://api.example.com/data')
  const data = await res.json()

  return {
    props: {
      data,
    },
    revalidate: 60, // re-generate page every 60 seconds
  }
}
Remove unused CSS and JS
Reducing unused CSS and JS is a powerful technique for improving Core Web Vitals, particularly the Largest Contentful Paint (LCP) and First Input Delay (FID) metrics. CSS and JS files can often be large and include many rules and scripts that aren't used on a given page. This means that the browser has to download and parse unnecessary code, which slows down page loading and rendering.
To reduce unused CSS and JS, it's important to analyze your site's code and identify which rules and scripts aren't used on specific pages. Several tools can help with this, such as Google's PageSpeed Insights and Lighthouse, as well as third-party tools like PurifyCSS and UnCSS.
Once you have identified the unused code, you can remove it from your site's codebase. This can be done manually, by going through your CSS and JS files and deleting unnecessary rules and scripts, or with tooling: PurgeCSS can strip unused CSS rules automatically, while tools such as webpack-bundle-analyzer help you spot large or unneeded JavaScript modules to remove.
The unused-webpack-plugin is a helpful tool for keeping JavaScript bundles small: it scans your source directories and reports modules and files that are never imported, so they can be removed. Here is an example webpack configuration:
const path = require('path');
const UnusedWebpackPlugin = require('unused-webpack-plugin');

module.exports = {
  // ...the rest of your webpack configuration
  plugins: [
    // ...any other plugins you already use go here
    new UnusedWebpackPlugin({
      // Source directories to scan for unused files
      directories: [path.join(__dirname, 'src')],
      // Exclude patterns
      exclude: ['*.test.js'],
      // Root directory (optional)
      root: __dirname,
    }),
  ],
};
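On the CSS side, PurgeCSS can be wired into PostCSS so that unused rules are stripped at build time. A minimal sketch, assuming the @fullhuman/postcss-purgecss package is installed; adjust the content globs to match your project:
// postcss.config.js
const purgecss = require("@fullhuman/postcss-purgecss");

module.exports = {
  plugins: [
    // Scan these files for selectors that are actually used and drop every other CSS rule
    purgecss({
      content: ["./pages/**/*.jsx", "./components/**/*.jsx"],
    }),
  ],
};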
It's important to note that reducing unused code also shrinks the total page weight, which benefits every page speed metric by reducing the amount of code the browser has to download and parse.
Tools for measuring page speed
There are several tools available to measure page speed performance. Some of the most popular ones include:
Google PageSpeed Insights: This tool analyzes the content of a web page and generates suggestions to make that page faster. It also provides a score for both mobile and desktop versions of the website.
GTmetrix: This tool provides a detailed analysis of a website's performance, including its speed, performance scores, and recommendations for improvement.
Lighthouse: This tool is built into Google Chrome and can be used to analyze the performance of a web page. It provides a report on performance, accessibility, best practices, and SEO.
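Beyond these lab tools, you can also collect Core Web Vitals from real users with Google's web-vitals npm package. A minimal sketch, assuming web-vitals v3 is installed (older versions name the functions getCLS, getFID, and getLCP); here the metrics are simply logged, but in practice you would send them to an analytics endpoint:
import { onCLS, onFID, onLCP } from "web-vitals";

// Each callback fires once the metric is finalized for the current page load
onCLS((metric) => console.log("CLS:", metric.value));
onFID((metric) => console.log("FID:", metric.value));
onLCP((metric) => console.log("LCP:", metric.value));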
Conclusion
In conclusion, improving page speed performance is critical for website success in today's digital world. Slow-loading websites not only affect user experience but also negatively impact search engine ranking and conversions. By optimizing images, minimizing render-blocking resources, lazy-loading images, deferring non-essential scripts, caching web pages, and reducing unused CSS and JS, frontend software engineers can significantly improve the performance of their React and Next.js websites.
Furthermore, with the introduction of Core Web Vitals, it has become more critical than ever to optimize page speed performance. The three metrics of Core Web Vitals, namely LCP, FID, and CLS, provide a clear and straightforward way of assessing website speed and usability. Web developers and designers can use these metrics as guidelines to improve the page speed performance of their websites, ultimately leading to improved user experience and higher search engine ranking.
In summary, optimizing page speed performance is crucial for website success. As frontend software engineers, we need to prioritize website speed and usability by implementing the techniques and strategies discussed above. With the ever-evolving digital landscape, we must continuously monitor and improve website performance to remain competitive and provide the best user experience for our website visitors.