Trying Lighthouse

Installing and using Lighthouse to measure your application's performance.

This blog post shows how to measure the performance of your web application using the Lighthouse tool, and how to run the same measurement on each pull request to make sure the performance does not drop.

Example application

I have prepared a small web page that loads and shows "Loaded" after two seconds. You can find the source code in the repo bahmutov/web-performance-example.

public/index.html
<html>
  <head>
    <title>Web performance</title>
    <meta name="viewport" content="width=device-width" />
    <link rel="stylesheet" href="./reset.css" />
    <link rel="stylesheet" href="./style.css" />
  </head>
  <body>
    <main></main>
    <script>
      setTimeout(() => {
        document.querySelector('main').innerHTML = `
          <h1>Loaded</h1>
        `
      }, 2000)
    </script>
  </body>
</html>

Loaded web site at localhost:3000

The server delays serving style.css and index.html on purpose

server.js
// the index page is delayed by one second
fastify.get('/', (request, reply) => {
  setTimeout(() => {
    reply.type('text/html').send(indexDoc)
  }, 1000)
})

// the stylesheet is delayed by half a second
fastify.get('/style.css', (request, reply) => {
  setTimeout(() => {
    reply.type('text/css').send(styles)
  }, 500)
})

// the remaining static resources are served without any delay
fastify.register(require('@fastify/static'), {
  root: publicFolder,
})

The slowdown is clearly visible in the Network panel waterfall of resources.

The network panel

The loaded resources and the load DOM event

How fast does this page load? Can we confirm it automatically?

Lighthouse in Chrome browser

The simplest way to measure how fast the site loads is by using the built-in Lighthouse panel in the Chrome browser's DevTools.

The Lighthouse panel in Chrome DevTools

Once you click the "Analyze page load" button, Lighthouse cranks for a few seconds, reloading the page while measuring everything, and then shows its results and recommendations.

The Lighthouse page score

The Lighthouse recommendations

Notice how Lighthouse correctly found the bottlenecks in our page: the slow server taking too long to return the page and the render-blocking CSS.

Reports

If you click the "See calculator" link, you will see how the score of 84 was calculated. Different Web Vitals are weighted to produce the final page performance score.

Find the explanation for the score under the calculator link

The page score is a weighted sum of the individual performance measurements
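To get a feel for the weighting, here is a rough sketch of the idea. The metric scores below are made up, and the weights are the ones Lighthouse 10 lists at the time of writing; other versions weigh the metrics differently.

// a sketch, not the exact Lighthouse algorithm: the weights are the documented
// Lighthouse 10 weights, and the individual metric scores (0..1) are made up
const weights = {
  'first-contentful-paint': 0.1,
  'speed-index': 0.1,
  'largest-contentful-paint': 0.25,
  'total-blocking-time': 0.3,
  'cumulative-layout-shift': 0.25,
}
const metricScores = {
  'first-contentful-paint': 0.7,
  'speed-index': 0.9,
  'largest-contentful-paint': 0.65,
  'total-blocking-time': 1,
  'cumulative-layout-shift': 1,
}
const total = Object.keys(weights).reduce(
  (sum, metric) => sum + weights[metric] * metricScores[metric],
  0,
)
console.log(Math.round(total * 100)) // 87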

You can generate a PDF or HTML of this report using the menu in the top right corner.

Export Lighthouse results in different formats

Lighthouse CLI

Let's generate the same Lighthouse report from the command line. I will install both the Lighthouse NPM module and my utility start-server-and-test so that starting the app and measuring its performance becomes a single command.

$ npm i -D lighthouse start-server-and-test
+ [email protected]
+ [email protected]

I will add a few NPM script aliases to start the application and run Lighthouse without any prompts

package.json
{
  "scripts": {
    "perf": "lighthouse http://localhost:3003 --preset=desktop --only-categories=performance --no-enable-error-reporting",
    "measure": "start-test 3003 perf",
    "start": "node ./server"
  }
}

Let's run the measure script.

$ npm run measure

> [email protected] measure
> start-test 3003 perf

1: starting server using command "npm run start"
and when url "[ 'http://127.0.0.1:3003' ]" is responding with HTTP status code 200
running tests using command "npm run perf"


> [email protected] start
> node ./server

...
> [email protected] perf
> lighthouse http://localhost:3003 --preset=desktop --only-categories=performance --no-enable-error-reporting

LH:ChromeLauncher Waiting for browser. +0ms
LH:ChromeLauncher Waiting for browser... +1ms
LH:ChromeLauncher Waiting for browser..... +505ms
LH:ChromeLauncher Waiting for browser.....✓ +3ms
LH:status Connecting to browser +924ms
LH:status Navigating to about:blank +14ms
LH:status Benchmarking machine +14ms
LH:status Preparing target for navigation mode +1s
LH:status Navigating to about:blank +14ms
LH:status Preparing target for navigation +8ms
LH:status Cleaning origin data +84ms
LH:status Cleaning browser cache +10ms
LH:status Preparing network conditions +56ms
LH:status Navigating to http://localhost:3003/ +93ms
...
LH:status Generating results... +0ms
LH:Printer html output written to /web-performance-example/localhost_2023-06-14_21-22-50.report.html +33ms
LH:CLI Protip: Run lighthouse with `--view` to immediately open the HTML report in your browser +0ms
LH:ChromeLauncher Killing Chrome instance 76349 +0ms
closing the server with the signal SIGINT

Let's open the generated static HTML file localhost_2023-06-14_21-22-50.report.html

$ open localhost_2023-06-14_21-22-50.report.html

The report shows a slightly different score from the one produced by the Lighthouse DevTools panel:

The Lighthouse CLI scores our local site higher

The difference seems to be in the initial loading of the page for some reason: the CLI reports 600ms vs 1000ms for the index page.

Differences in blocking resources timings

I am still learning why such differences exist.

Tip: here are some of the Lighthouse CLI flags I use right now:

lighthouse http://localhost:3003    # URL to audit
--preset=desktop                    # desktop settings mode
--only-categories=performance       # only interested in performance
--max-wait-for-load=6000            # let the slow page load everything
--skip-audits                       # do not run any other audits
--no-enable-error-reporting         # do not ask if the user wants to send diagnostics

You can also generate reports in several formats: HTML, CSV, JSON. For example, to generate both HTML and CSV reports and overwrite the existing files, use --output html,csv --output-path=./lighthouse-results.html which creates lighthouse-results.html and lighthouse-results.csv files.
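For example, combining the flags above into a single command (adjust the URL and file name to your setup):

$ lighthouse http://localhost:3003 --preset=desktop --only-categories=performance \
    --output html,csv --output-path=./lighthouse-results.html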

Set performance budget

Let's say we are happy with the performance score of 92 and want to keep the site this fast. We can set a performance budget so that Lighthouse reports whenever the page metrics exceed it.

budget.json
[
  {
    "path": "/",
    "timings": [
      {
        "metric": "first-contentful-paint",
        "budget": 100
      },
      {
        "metric": "speed-index",
        "budget": 200
      }
    ]
  }
]

We set very low metric limits on purpose. The analysis should run with the --budget-path CLI flag:

$ lighthouse http://localhost:3003 --budget-path=budget.json ...

The process exits with code 0, but the generated report shows the metrics that are over the budget

The Lighthouse reports timings that were over the set budgets

Lighthouse on CI

Using the Lighthouse CLI alone, we cannot stop the build when our site is over the performance budget. Instead we can use Lighthouse CI, which collects the measurements and lets us fail the build if there is a slowdown.

Let's create the initial Lighthouse CI configuration file. It uses JavaScript, and at first I will just put our starting command and URL there

lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3003/'],
      startServerCommand: 'npm start',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
}

The GitHub Actions workflow installs dependencies and runs the Lighthouse CI (LHCI) using the above configuration

Tip: you can read my blog post Trying GitHub Actions to get familiar with GHA.

.github/workflows/performance.yml
name: performance
on: [push]
jobs:
  lhci:
    name: Lighthouse
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 18.x
      - name: Install 📦
        run: npm install
      - name: run Lighthouse CI
        run: |
          npm install -g @lhci/[email protected]
          lhci autorun

By default LHCI runs Lighthouse 3 times and measures everything using mobile device emulation.
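If you want a different number of runs, the collect section accepts a numberOfRuns option. Here is a minimal sketch based on our config above:

lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3003/'],
      startServerCommand: 'npm start',
      // run Lighthouse once instead of the default 3 runs
      numberOfRuns: 1,
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
}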

The performance workflow output

Because we configured LHCI to upload the report to public storage, we can open the displayed URL to see the HTML report

The performance HTML report

Again, keep in mind that this is a performance report for mobile emulation, as shown at the bottom of the report.

The performance measurements were taken using a mobile device emulation

Let's run LHCI with the same settings as the Lighthouse CLI. We can put all the CLI arguments under the settings object in the config file

lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3003/'],
      startServerCommand: 'npm start',
      settings: {
        budgetPath: 'budget.json',
        preset: 'desktop',
        onlyCategories: 'performance',
        maxWaitForLoad: 6000,
      },
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
}

LHCI runs on GitHub Actions and generates a report that closely matches what I see locally in the Lighthouse panel of Chrome DevTools.

Performance reports local vs LHCI

Fail on low performance

Now let's add assertions to our LHCI configuration to fail this step if the performance is too low. To show the failure I will set the minimum performance score to 0.9 instead of 0.8.

lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3003/'],
      startServerCommand: 'npm start',
      settings: {
        budgetPath: 'budget.json',
        preset: 'desktop',
        onlyCategories: 'performance',
        maxWaitForLoad: 6000,
      },
    },
    upload: {
      target: 'temporary-public-storage',
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
      },
    },
  },
}

Let's push the code to GitHub and watch it fail

LHCI breaks the build because of the low performance score

Ok, now we know the CI will catch a performance regression, so I will set the performance minScore to 0.83.
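The assert section of the config then becomes:

// lighthouserc.js (only the assert section shown)
assert: {
  assertions: {
    'categories:performance': ['error', { minScore: 0.83 }],
  },
},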

Tip: you can override the URL to test from the command line: lhci autorun --url <url to test>

Lighthouse GitHub status checks

You could add commit status checks using either your personal GitHub token or by installing the Lighthouse GitHub App.

Lighthouse GitHub App

I will install the app and give it access to the example repo so it can post status checks.

Give Lighthouse app access to the specific repos

The installed application will show the LH token - save it and keep it secret.

Lighthouse app creates a secret token for you

You should save this token as a GitHub Actions secret, then pass it to the "Lighthouse CI" step as an environment variable.

Set the Lighthouse token as Actions secret
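If you prefer the terminal, the GitHub CLI can save the secret for you (assuming gh is installed and authenticated):

# paste the token value when prompted
$ gh secret set LHCI_GITHUB_APP_TOKEN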

- name: run Lighthouse CI
  run: |
    npm install -g @lhci/[email protected]
    lhci autorun
  env:
    LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}

The workflow runs, finds the token, and posts the commit status check.

LHCI finds the token and posts the GH commit status

Here is how the commit status looks

LHCI commit status

The "details" link goes directly to the public report URL.

Performance checks for pull requests

I will adjust the performance workflow to execute on pull requests to the main branch and on any commit pushed to the main branch

.github/workflows/performance.yml
name: performance
# run performance testing on main branch
# and on any pull request to the main branch
on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

Let's open a pull request that delays serving index.html by an extra second. This should decrease the performance. We see the Lighthouse step failing, but there is no LHCI status check on the pull request.

Missing the LHCI commit status

When running on pull_request, GitHub Actions sets GITHUB_SHA to the merge commit SHA, while the status checks are attached to the pull request's head commit SHA. Luckily, LHCI allows you to override the SHA value to set the status on any commit.

- name: run Lighthouse CI
  run: |
    npm install -g @lhci/[email protected]
    lhci autorun
  env:
    LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
    LHCI_BUILD_CONTEXT__CURRENT_HASH: ${{ github.event.pull_request.head.sha }}

The status checks are set correctly

LHCI commit status set for pull request commits

See also

🎁 You can find the full source code for this blog post in the repo bahmutov/web-performance-example.

Bonus 1: Run Chrome in Docker

If you need to run Lighthouse inside a Docker container, then the container needs Chrome installed. You can pick a Chrome image from the Cypress cypress-docker-images project:

# we need Chrome browser to run Lighthouse
container: cypress/browsers:node18.12.0-chrome107
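Here is a sketch of how the container line fits into the performance job from earlier:

# .github/workflows/performance.yml (sketch)
jobs:
  lhci:
    name: Lighthouse
    runs-on: ubuntu-latest
    # we need Chrome browser to run Lighthouse
    container: cypress/browsers:node18.12.0-chrome107
    steps:
      - uses: actions/checkout@v3
      # ... install dependencies and run LHCI as before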

When running Chrome inside a Docker container, it usually needs a few extra flags:

// lighthouserc.js
collect: {
  url: ['...'],
  settings: {
    chromeFlags: '--disable-gpu --no-sandbox --disable-dev-shm-usage',
    preset: 'desktop',
    onlyCategories: 'performance',
    maxWaitForLoad: 10_000,
  },
  ...
}

Bonus 2: Basic authentication

If your page is protected by basic authentication, you need to encode the username and password and send them with your Lighthouse requests

lighthouserc.js
// encode the username and password like the browser does
const basicAuth = btoa(
  encodeURIComponent(process.env.username) +
    ':' +
    encodeURIComponent(process.env.password),
)

module.exports = {
  ci: {
    collect: {
      url: ['https://web.mercari.guru'],
      settings: {
        // provide basic auth via browser header
        extraHeaders: `{"Authorization": "Basic ${basicAuth}"}`,
        chromeFlags: '--disable-gpu --no-sandbox --disable-dev-shm-usage',
        preset: 'desktop',
        onlyCategories: 'performance',
        maxWaitForLoad: 10_000,
      },
    },
    ...
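The config above reads the credentials from process.env, so the CI step needs to provide them; here is a sketch with hypothetical secret names:

# hypothetical secret names - define whatever your repo uses
- name: run Lighthouse CI
  run: lhci autorun
  env:
    username: ${{ secrets.SITE_USERNAME }}
    password: ${{ secrets.SITE_PASSWORD }}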

Bonus 3: Local Lighthouse report

If we do not want to upload the HTML report to the temporary public storage, we can store the report in local files instead.

lighthouserc.js
upload: {
  target: 'filesystem',
}

Since LHCI runs 3 performance tests in a row by default, there will be three JSON and three HTML files; the filenames include timestamps

LHCI saves local reports

We can keep just the last report in both JSON and HTML formats by specifying the output report filename

lighthouserc.js
upload: {
  // target: 'temporary-public-storage',
  target: 'filesystem',
  reportFilenamePattern: 'lighthouse-results.%%EXTENSION%%',
}

LHCI saves the last report under a consistent name

Let's save the report files as a build artifact.

- name: run Lighthouse CI
  run: |
    npm install -g @lhci/[email protected]
    lhci autorun
  env:
    LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
    LHCI_BUILD_CONTEXT__CURRENT_HASH: ${{ github.event.pull_request.head.sha }}
- name: List files 💾
  run: ls -la
- name: Save LH reports 💾
  uses: actions/upload-artifact@v3
  with:
    name: lighthouse
    path: lighthouse-results.*

The report zip with two files appears as a job artifact.

LHCI reports saved as a GHA artifact

Tip: you might want to always save the performance test result artifacts, even if the LHCI reports an audit failure.

# after LHCI step
- name: Save LH reports 💾
  uses: actions/upload-artifact@v3
  if: always()
  with:
    name: lighthouse
    path: lighthouse-results.*

This saves the test report even if the LHCI audit does not pass.

Bonus 4: Post GitHub summary

The LHCI generates JSON and HTML reports. Here is a typical metric in the JSON file:

lighthouse-results.json
{
  "audits": {
    "largest-contentful-paint": {
      "id": "largest-contentful-paint",
      "title": "Largest Contentful Paint",
      "description": "Largest Contentful Paint marks ...",
      "score": 0.23,
      "scoreDisplayMode": "numeric",
      "numericValue": 3543.9742082824705,
      "numericUnit": "millisecond",
      "displayValue": "3.5 s"
    }
  }
}

You can read the JSON file and write the main performance metrics to the terminal and to the GitHub job summary. Here is a typical Node.js script:

post-summary.js
const ghCore = require('@actions/core')
const results = require('./lighthouse-results.json')
const metrics = [
  'first-contentful-paint',
  'interactive',
  'speed-index',
  'total-blocking-time',
  'largest-contentful-paint',
  'cumulative-layout-shift',
]

/**
 * Returns a symbol for the score.
 * @param {number} score from 0 to 1
 */
function evalEmoji(score) {
  if (score >= 0.9) {
    return '🟢'
  }
  if (score >= 0.5) {
    return '🟧'
  }
  return '🔺'
}

const rows = []

metrics.forEach((key) => {
  const audit = results.audits[key]
  // be safe and always push strings
  rows.push([audit.title, String(audit.displayValue), evalEmoji(audit.score)])
})

console.table(rows)

ghCore.summary
  .addHeading('Lighthouse Performance')
  .addTable([
    [
      { data: 'Metric', header: true },
      { data: 'Time', header: true },
      { data: 'Eval', header: true },
    ],
    ...rows,
  ])
  .addLink(
    'Trying Lighthouse',
    'https://glebbahmutov.com/blog/trying-lighthouse/',
  )
  .write()

We run this script after LHCI

- name: run Lighthouse CI
  run: |
    npm install -g @lhci/[email protected]
    lhci autorun
  env:
    LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
    LHCI_BUILD_CONTEXT__CURRENT_HASH: ${{ github.event.pull_request.head.sha }}
- name: List files 💾
  run: ls -la
  if: always()
- name: Post job summary 📊
  run: node ./post-summary
  if: always()
- name: Save LH reports 💾
  uses: actions/upload-artifact@v3
  if: always()
  with:
    name: lighthouse
    path: lighthouse-results.*

Here is the terminal output

Main performance metrics printed to the terminal

Here is the job summary from a typical run:

Main performance metrics in GHA summary

Bonus 5: Split autorun command

The lhci autorun command executes the collect, upload, and assert commands internally. You might want, for example, to collect and upload the reports before running any assertions, so it makes sense to split the steps:

# https://github.com/GoogleChrome/lighthouse-ci
- name: run Lighthouse CI
  run: |
    npm install -g @lhci/[email protected]
    lhci collect
- name: Save reports 💾
  run: lhci upload
  env:
    LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
    LHCI_BUILD_CONTEXT__CURRENT_HASH: ${{ github.event.pull_request.head.sha }}
- name: Upload reports
  uses: actions/upload-artifact@v3
  with:
    name: lighthouse
    path: lighthouse-results.*

- name: Post performance summary 📊
  run: node ./post-summary

- name: Lighthouse assertions
  run: lhci assert

Bonus 6: Reusable GitHub Actions module

To simplify posting the performance job summary and commit status, I created an NPM package lhci-gha hosted at https://github.com/bahmutov/lhci-gha. You can install this module as a dev dependency

$ npm i -D lhci-gha
+ [email protected]

The updated workflow uses npx to run the two scripts provided by the lhci-gha module

# https://github.com/GoogleChrome/lighthouse-ci
- name: run Lighthouse CI
  run: |
    npm install -g @lhci/[email protected]
    lhci collect
# post performance summary and set the commit status
# https://github.com/bahmutov/lhci-gha
- name: Post performance summary 📊
  run: npx post-summary --report-filename lighthouse-results.json

- name: Post performance commit status
  run: |
    npx post-status \
      --report-filename lighthouse-results.json \
      --owner bahmutov --repo web-performance-example \
      --commit ${{ github.event.pull_request.head.sha || github.sha }}
  env:
    PERSONAL_GH_TOKEN: ${{ secrets.PERSONAL_GH_TOKEN }}

The post-status script does not require installing the Lighthouse GitHub App, since it uses my personal GitHub token. Here is how the commit status looks.

Lighthouse performance using custom status

You can set the minimum performance score. Let's make our project fail by requiring at least 90.

- name: Post performance commit status
  run: |
    npx post-status --min 90 \
      --report-filename lighthouse-results.json \
      --owner bahmutov --repo web-performance-example \
      --commit ${{ github.event.pull_request.head.sha || github.sha }}
  env:
    PERSONAL_GH_TOKEN: ${{ secrets.PERSONAL_GH_TOKEN }}

The status check shows that the current performance score of 82 is below the required 90.

Failing performance status check