paulirish
Repositories: 327 · Followers: 30770 · Following: 280

  • Automated auditing, performance metrics, and best practices for the web. (25949 stars, 8699 forks)
  • A faster youtube embed. (4517 stars, 199 forks)
  • Type `git open` to open the GitHub page or website for a repository in your browser. (3045 stars, 216 forks)
  • See your latest local git branches, formatted real fancy. (908 stars, 32 forks)
  • paul's fish, bash, git, etc config files. good stuff. (4039 stars, 1149 forks)

Events

https://chromiumdash.appspot.com/releases

Originally posted by @BreakerZoeBurma in https://github.com/GoogleChrome/lighthouse/issues/14728#issuecomment-1407169404

Created at 2 days ago
issue comment
[Feedback] Enhanced Traces experiment

@robpaveza on the linked crbug... I'll get back to you; the issue creator is OOO and I don't want to flip it to public without his consent. :) But the gist is that we're going to add basic metadata to traces saved from the Perf panel, and thus will almost certainly use the JSON format described above.

Aye, it's your call on the naming. I am just attracted to the idea that tools like traceviewer and perfetto could view these enhanced traces as well. I'm less familiar with the 3 memory profiles, but after looking at them, I don't see an obvious solution that'd net us some 'free' compatibility anywhere.

Created at 3 days ago

core(trace-processor): refactor processEvents and frameEvents (#14287)

Created at 4 days ago
delete branch
paulirish delete branch processEventsfix
Created at 4 days ago
pull request closed
core(trace-processor): refactor processEvents and frameEvents

As discovered in #14264, we were excluding far too many events from our processEvents array, and likewise from frameEvents, frameTreeEvents, and mainThreadEvents.

The primary changes here are in trace-processor, and this PR contains mostly bug fixes. I envision a second PR which refactors the trace-processor functions and flow a bit (and wouldn't adjust test results). I can do that here, too, but I'll leave that to the reviewer to opt in to.

Observations & changes:

  • about:blank => website can undergo a process switch. The twitter.com repro makes this super obvious.
  • findMainFrameIds() returns the "starting" pid, but that won't necessarily be the pid of the primary navigation; that needs to be found afterwards.
  • frameIds are consistent across cross-process navigations. (Always have been, AFAIK.)
  • The FrameCommittedInBrowser event is great and spells out the potentially-new processId of the renderer in the new navigation.
  • I tried to support the use case of 'multiple navigations in one trace' (e.g. in timespans). I'm not convinced we support that well right now.
  • Trace events explicitly associated with a frame are generally tagged with either args.data.frame or args.frame, with no consistency. Where I saw our code depending on one location, I expanded it to handle both (see the sketch after this list).
  • Removed trace minification.
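
A minimal sketch of the frame/pid lookups described above (field names follow the Chrome trace-event format as I understand it; this is not the code in this PR):

// Trace events tag their frame as either args.data.frame or args.frame.
function frameIdOf(event) {
  return event.args?.data?.frame ?? event.args?.frame;
}

// FrameCommittedInBrowser carries the (possibly new) renderer pid for a navigation.
function committedPidOf(event) {
  return event.name === 'FrameCommittedInBrowser' ? event.args?.data?.processId : undefined;
}

// Example: collect all events belonging to a frame, across process switches.
const eventsForFrame = (events, frameId) => events.filter(e => frameIdOf(e) === frameId);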

Followup work:

edit: filed trace processor - cleanup and fixes ☂️ · #14708

Created at 4 days ago

core: delete util.cjs (#14709)

deps: upgrade axe-core to 4.6.3 (#14690)

deps(lighthouse-stack-packs): upgrade to 1.9.0 (#14713)

lint

Merge remote-tracking branch 'origin/main' into processEventsfix

Created at 4 days ago
Created at 4 days ago
Created at 4 days ago

deps: upgrade puppeteer to 19.6.0 (#14706)

core: remove util.cjs (#14703)

Merge remote-tracking branch 'origin/main' into processEventsfix

Created at 5 days ago
issue comment
core(trace-processor): fix subsetting of processEvents and frameEvents

Still outstanding review comments:

  • first item moved to #14708
  • second item:
    • Would there be an easy way to make sure we don't fall back to this in new traces? If somehow the thread_name event changed format it would be good if the change wasn't masked by the fallback still working.

    • Just sorted this with 6cbf6cc78: I restored the key trace event to our fixture traces that had stripped it out. There's no more fallback, but that's fine as it wasn't necessary. (The sketch at the end of this comment shows the thread_name event the check relies on.)
  • third item (frameEvents is wrong) moved to #14708

That resolves the only remaining open threads on this PR.
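
For reference, the check in question keys off the standard thread_name metadata event; a rough sketch of what that lookup amounts to (not the actual trace-processor code):

// A thread_name metadata event looks roughly like:
//   {cat: '__metadata', name: 'thread_name', ph: 'M', pid: 1234, tid: 5678,
//    args: {name: 'CrRendererMain'}}
function findMainThreadTid(traceEvents, pid) {
  const event = traceEvents.find(e =>
    e.name === 'thread_name' &&
    e.pid === pid &&
    e.args?.name === 'CrRendererMain');
  return event?.tid;
}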

Created at 5 days ago

core(fr): preserve scroll position in gatherers (#14660)

core(full-page-screenshot): remove audit, move to top-level (#14657)

core(user-flow): passively collect full-page screenshot (#14656)

core(legacy): convert some base artifacts to regular gatherers (#14680)

misc: exclude core/util.cjs from code coverage (#14688)

core(scoring): rebalance perf metric weightings for v10 (#14667)

core(scoring): update expected perf score for flow fixtures (#14692)

core(processed-navigation): computed directly from trace (#14693)

core: restructure types for direct import and publishing (#14441)

core: use performance.now in isolation (#14685)

report(thumbnails): increase res and display, reduce number (#14679)

core(viewport): support interactive-widget (#14664)

report: fix compat for older lighthouse reports (#14617)

core(bf-cache): link to chrome developer docs (#14699)

misc(assets): update logo (#13919)

report: rename i18n to i18n-formatter, move strings to Util (#13933)

core(trace-processing): add backport for pubads (#14700)

report: specify in tooltip that cpu/memory power is unthrottled (#14704)

core: add entity classification of origins to the LHR (#14622)

Merge remote-tracking branch 'origin/main' into processEventsfix

Created at 5 days ago
trace processor - cleanup and fixes ☂️

Forking off from #14697 …

  • Verify we handle metric calculation of multiple navigations correctly. (For timespan mode.)
  • Nearly certain that this fcpAllFrames calculation doesn't return the right value.
  • Fix the LCP-Allframes calculation.
  • While trace-processor could organize all processes found in the trace, I think it's better to immediately whittle the events down to the "inspected" process tree and frame tree. Drop everything else, so no metric-calculation code needs to do its own filtering.
  • Clarify that frameEvents/frameTreeEvents are a subset of all events from that frame.
  • Validate all uses of startingPid are using it correctly. (They're probably not.)
  • Review all uses of trace.traceEvents, as there's potentially a mistake handling pids/frames.
  • Refactor the trace-processor flow. In short: determine the "inspected" pids/frames in one pass before doing all the subsetting.
  • Adopt isOutermostMainFrame?
  • Audit all uses of .args[.data].frame to see if there's a better way to ensure the data is reliably there.
  • Handle the pid-reuse case (however unlikely that is while tracing). Having just a map of pid->tid says nothing about the timing, so it seems like we might need a temporal aspect to the tracking as well. Or just step through the trace, subsetting in chunks between any FrameCommittedInBrowser events, roughly as sketched below.
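
A rough sketch of the chunking idea from the last bullet (not trace-processor code, just the shape of the approach):

// Start a new chunk at every FrameCommittedInBrowser so pid/tid lookups are only
// interpreted against the navigation they belong to (which also covers pid reuse).
function chunkByCommittedFrames(traceEvents) {
  const sorted = [...traceEvents].sort((a, b) => a.ts - b.ts);
  const chunks = [[]];
  for (const event of sorted) {
    if (event.name === 'FrameCommittedInBrowser') chunks.push([]);
    chunks[chunks.length - 1].push(event);
  }
  return chunks;
}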
Created at 5 days ago
issue comment
[Feedback] Enhanced Traces experiment

This feature is a great idea and I really like the experience. 👍

WDYT about naming the property holding the trace events `traceEvents` rather than `payload`?

That'd be consistent with the typical trace JSON format, which is ~canonically `{traceEvents: Array<TraceEvent>}`. about:tracing and Performance Insights both use `{traceEvents: [], metadata: {}}`, whereas the Chromium Perf panel currently saves bare `Array<TraceEvent>` traces, but we're soon going to make it match.

An example use case: currently https://trace.cafe doesn't work on enhanced traces, but with the above change it'd work (minus the enhancements ;)
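
For what it's worth, a tool wanting to accept any of these shapes only needs a tiny normalizer; a sketch (only the traceEvents/metadata property names above are assumed):

// Accept either a bare Array<TraceEvent> (Perf panel today) or the object form
// ({traceEvents, metadata}) used by about:tracing and Performance Insights.
function readTraceEvents(parsedJson) {
  if (Array.isArray(parsedJson)) return parsedJson;
  if (Array.isArray(parsedJson.traceEvents)) return parsedJson.traceEvents;
  throw new Error('Unrecognized trace format');
}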

I'm not sure exactly what the impact is for the memory enhanced traces... but a property that's specific to them would help tools ingesting this data. (I can see the .meta.type prop differentiates them, but a distinctive object shape would also assist.)

Created at 5 days ago
issue comment
"Image elements have explicit `width` and `height`" audit ok even with percentage values

depend on the native impl that supports document-policy: unsized-media=?0

Seems compelling. But we quickly looked into the impl and aren't sure it has a more robust check than ours:

https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/renderer/core/html/media/media_element_parser_helpers.cc;l=20-21;drc=fef0781f2b4b0694ba56a0d688802c3c47f12dc9
https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/renderer/platform/geometry/length.h;l=246-249;drc=fef0781f2b4b0694ba56a0d688802c3c47f12dc9

Also, this header will affect the size of unsized images, and AFAIK you can't engage it in report-only mode; AFAICT that was never implemented. So... never mind on this thread. :/

Created at 5 days ago
issue comment
"Image elements have explicit `width` and `height`" audit ok even with percentage values

MDN states % should not be there. Is it allowed by the HTML spec?

@brendankenny touched base with Domenic to get to the bottom of the spec issue. Domenic says it's allowed in the user-agent requirements (but not really on the authoring side). https://github.com/whatwg/html/issues/8589

Created at 5 days ago
issue comment
"Image elements have explicit `width` and `height`" audit ok even with percentage values

(Apparently three of us wrote a comment at the same time for this. Here's my overlapping take...)

First, we could drop our LH unsized-images impl and depend on the native impl that supports document-policy: unsized-media=?0. See https://github.com/WICG/document-policy/blob/main/document-policy-explainer.md and https://wicg.github.io/document-policy/#reporting. We know that the Reporting API has CDP support, so it's possible to set a document-policy-report-only response header and slurp up the findings via the CDP events.
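
To make that concrete, here's a rough sketch of what the plumbing could look like with Puppeteer and the experimental CDP Reporting API domain. It assumes the server actually sends the report-only header and that the report-only variant is honored (which, per another comment on this issue, may never have been implemented), so treat it as a sketch of the idea rather than a working recipe:

import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();
const session = await page.target().createCDPSession();

// Surface Reporting API reports (document-policy violations included) over CDP.
await session.send('Network.enable');
await session.send('Network.enableReportingApi', {enable: true});
session.on('Network.reportingApiReportAdded', ({report}) => {
  console.log('report:', report.type, JSON.stringify(report.body));
});

// The document response itself would need to carry something like:
//   Document-Policy-Report-Only: unsized-media=?0
// (sent by the origin; it can't be injected from the client in this sketch).
await page.goto('https://example.com/');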

Now, a few cases we discussed:

  1. Percentage width + aspect ratio is fine. (And LH passes here.) 👍
  2. Percentage height (with or without aspect ratio) is not fine. (But LH doesn't flag this.) 👎
  3. There are other 'edge' cases, like:
    • width: min-content is not good, and LH fails it. 👍
    • max-width + aspect-ratio is fine (I think, assuming no width value), but LH flags it. 👎
    • I'm sure there are other situations considering the various valid values for width, min-width, max-width and their combinations...

We also discussed the influence of the surrounding elements. A) A fixed-height parent has significance for a percent-height child. B) If there's no content in the flow following an image, there's no Layout Shift. We (without Adam...) decided that LH would not check either of these cases, as it's just a lot of complication. :)

Created at 5 days ago
issue comment
Surface removed LCP element in report

LargestImagePaint::Candidate is available and gives us the image URL and coordinates, but it doesn't help on the GC front.

The new CDP events in the above crbug give us the nodes at runtime. To ensure they're not GC'd before the TraceElements gatherer runs, we'd have to collect their nodeDetails right when those events come in. But... the overhead of doing that is unknown and it may be unwise.
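
A rough sketch of that idea over a CDP session (the event name below is a placeholder for whatever the crbug's new events end up being; DOM.describeNode is the existing CDP call):

import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();
const session = await page.target().createCDPSession();
await session.send('DOM.enable');

const collectedNodeDetails = [];
// 'PerformanceTimeline.lcpCandidateAdded' is hypothetical, standing in for the new events.
session.on('PerformanceTimeline.lcpCandidateAdded', async ({backendNodeId}) => {
  // Resolve node details immediately so the node can't be GC'd before the
  // TraceElements gatherer runs later.
  const {node} = await session.send('DOM.describeNode', {backendNodeId});
  collectedNodeDetails.push(node);
});

await page.goto('https://example.com/');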

Something to look into, especially if we see this problem happening often.

Created at 5 days ago
issue comment
Lantern EIL+TBT should use internally consistent FCP/TTI estimates

Related to the FCP-can-be-faster-than-TTFB issue.

Also related: should TBT not be dependent on TTI.

Created at 5 days ago
Audit: Report the SI-optimal order of resource loading and the corresponding optimal SI value

Feature request summary

Using information gathered during a test page load, such as the network transmission time and CPU time of each resource, Lighthouse could calculate the order of resource loading and task execution that achieves a (nearly) optimal Speed Index (SI) value for the web page. I am willing to develop this feature myself.

What is the motivation or use case for changing this?

The page-load performance metric Speed Index reflects how quickly page content is painted onto the screen over the entire page load, which makes it more meaningful than simpler metrics based on a single point in time.
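
For context, Speed Index is (roughly) the area above the visual-completeness curve VC(t), where VC(t) is 0 at navigation start and 1 once the viewport stops changing:

  SI = ∫₀^T (1 − VC(t)) dt

So the "SI-optimal" loading order described here is the schedule that minimizes that area.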

Given a web page, if we know its SI-optimal loading order as well as the corresponding SI lower bound, we can decide whether we should change the priorities of resources or reduce the page complexity. In this way, we can provide users with a better visual experience during page loading.

How is this beneficial to Lighthouse?

Although Lighthouse reports the SI of a page load, currently it seems to provide limited suggestions on how to optimize the page for a better SI. It would be helpful to provide more informative content about SI in the report.

Created at 5 days ago
Get `yarn test-devtools` to work on linux

This never worked because there was a crash, and it was decided to just focus on Mac. However, @adamraine needs to run this on GCP to collect data at scale, so let's revisit.

Currently `yarn test-devtools` errors loudly, complaining that "crashpad_handler does not exist". This is easy to silence by passing --disable-breakpad to the webserver python program; however, the crash is still there, this just removes the complaining. (Note: unnecessary, but if we wanted to we could download the crashpad handler from chrome-PLATFORM.zip and place it next to the content shell.)

So with that said, here's the true error. Still investigating.

[501216:501216:1124/122624.392648:WARNING:vaapi_wrapper.cc(534)] VAAPI video acceleration not available for disabled
[501132:501132:1124/122624.384668:FATAL:platform_font_skia.cc(97)] Check failed: InitDefaultFont(). Could not find the default font
#0 0x5633328ce399 base::debug::CollectStackTrace()
#1 0x56333284af13 base::debug::StackTrace::StackTrace()
#2 0x56333285a3f0 logging::LogMessage::~LogMessage()
#3 0x56333285afbe logging::LogMessage::~LogMessage()
#4 0x56333345a3af gfx::PlatformFontSkia::PlatformFontSkia()
#5 0x56333345be4b gfx::PlatformFont::CreateDefault()
#6 0x56333344a3ce gfx::Font::Font()
#7 0x56333237931a content::RenderViewHostImpl::GetPlatformSpecificPrefs()
#8 0x5633324acf96 content::WebContentsImpl::SyncRendererPrefs()
#9 0x5633328390a3 content::Shell::CreateShell()
#10 0x563332839727 content::Shell::CreateNewWindow()
#11 0x56333617a66e content::WebTestControlHost::PrepareForWebTest()
#12 0x56333614b383 content::WebTestBrowserMainRunner::RunBrowserMain()
#13 0x563332835e50 content::ShellMainDelegate::RunProcess()
#14 0x563331f1090c content::ContentMainRunnerImpl::RunServiceManager()
#15 0x563331f10533 content::ContentMainRunnerImpl::Run()
#16 0x563331275d74 content::RunContentProcess()
#17 0x56333127675c content::ContentMain()
#18 0x5633305d44cc main
#19 0x7f2c44825cca __libc_start_main
#20 0x5633305d436a _start

Received signal 6
#0 0x5633328ce399 base::debug::CollectStackTrace()
#1 0x56333284af13 base::debug::StackTrace::StackTrace()
#2 0x5633328cdf3b base::debug::(anonymous namespace)::StackDumpSignalHandler()
#3 0x7f2c45ee1140 (/lib/x86_64-linux-gnu/libpthread-2.31.so+0x1413f)
#4 0x7f2c4483adb1 gsignal
#5 0x7f2c44824537 abort
#6 0x5633328ccec5 base::debug::BreakDebugger()
#7 0x56333285a862 logging::LogMessage::~LogMessage()
#8 0x56333285afbe logging::LogMessage::~LogMessage()
#9 0x56333345a3af gfx::PlatformFontSkia::PlatformFontSkia()
#10 0x56333345be4b gfx::PlatformFont::CreateDefault()
#11 0x56333344a3ce gfx::Font::Font()
#12 0x56333237931a content::RenderViewHostImpl::GetPlatformSpecificPrefs()
#13 0x5633324acf96 content::WebContentsImpl::SyncRendererPrefs()
#14 0x5633328390a3 content::Shell::CreateShell()
#15 0x563332839727 content::Shell::CreateNewWindow()
#16 0x56333617a66e content::WebTestControlHost::PrepareForWebTest()
#17 0x56333614b383 content::WebTestBrowserMainRunner::RunBrowserMain()
#18 0x563332835e50 content::ShellMainDelegate::RunProcess()
#19 0x563331f1090c content::ContentMainRunnerImpl::RunServiceManager()
#20 0x563331f10533 content::ContentMainRunnerImpl::Run()
#21 0x563331275d74 content::RunContentProcess()
#22 0x56333127675c content::ContentMain()
#23 0x5633305d44cc main
#24 0x7f2c44825cca __libc_start_main
#25 0x5633305d436a _start
  r8: 0000000000000000  r9: 00007fffb5beaf20 r10: 0000000000000008 r11: 0000000000000246
 r12: 00003249fade1000 r13: aaaaaaaaaaaaaaaa r14: 00003249fade1010 r15: 00007fffb5beb9c0
  di: 0000000000000002  si: 00007fffb5beaf20  bp: 00007fffb5beb170  bx: 00007f2c43d25b80
  dx: 0000000000000000  ax: 0000000000000000  cx: 00007f2c4483adb1  sp: 00007fffb5beaf20
  ip: 00007f2c4483adb1 efl: 0000000000000246 cgf: 002b000000000033 erf: 0000000000000000
 trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
[end of stack trace]
Calling _exit(1). Core file will not be generated.
Created at 5 days ago
issue comment
Get `yarn test-devtools` to work on linux

It runs on linux now \o/

https://github.com/GoogleChrome/lighthouse/actions/runs/3992078236/workflow#L81-L118

Created at 5 days ago
Disable WebP warning when Cloudflare Polish says WebP is bigger

Feature request summary: Cloudflare Polish converts images to WebP at its CDN. However, if Cloudflare finds that converting to WebP results in a larger file, it delivers the original image instead. Lighthouse still warns to serve these images as WebP (next-gen format).

Cloudflare provides a response header cf-polished: origSize=70188, status=webp_bigger.

What is the motivation or use case for changing this? When analyzing with Lighthouse, many customers believe either that Polish is not working or that converting to WebP would increase the site's speed even more.

How is this beneficial to Lighthouse? Not every image, when converted to WebP, results in a smaller file; sometimes it is larger. It would be great if Lighthouse could check this header and drop the warning when the status is webp_bigger.
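
A minimal sketch of the kind of check being requested (the helper name and header handling here are illustrative, not actual Lighthouse audit code):

// Returns true when Cloudflare Polish reports the WebP version would have been larger,
// e.g. `cf-polished: origSize=70188, status=webp_bigger`.
function polishSaysWebpBigger(responseHeaders) {
  const value = responseHeaders['cf-polished'] || '';
  return value.split(',').map(part => part.trim()).includes('status=webp_bigger');
}

// An audit could then skip the "serve images in next-gen formats" suggestion for
// any image whose response carries that status.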

Created at 5 days ago
Proposal: Lighthouse accessibility report should check for inaccessible scrolling pages and regions

Provide a basic description of the audit

This audit would inspect a page for any scrollable elements where browser/system-provided scrolling mechanisms have been disabled, replaced or restyled in a way that hinders the accessibility and ease-of-use for a site.

How would the audit appear in the report?

I am proposing three related audits:

Two "Best practices" audits that inspect the page's CSS styling and rendered element content boxes:

  1. The first BP audit is rather strict, and would recommend against making any changes to browser-provided scrolling mechanisms.

    • This BP audit is raised (fails?) whenever it detects any element el where el.scrollHeight > el.clientHeight AND where getComputedStyle(el).overflow === 'hidden', or if any CSS rule applied to any element targets any of the ::-webkit-scrollbar pseudo-elements. (A rough sketch of this detection appears after the description of the third audit below.)
  2. The second BP audit would permit the restyling of scrollbars provided their new styles meet necessary ease-of-use criteria:

    • Fitts's Law: The scrollbar track and thumb must be large enough to easily target with a mouse cursor or touch input device (ideally based on a CSS media query).

    • If the scrollbar thumb is variable-height then the minimum size of the thumb must still be large enough to comfortably target.

      • Google's Angular documentation site is guilty of this; it's very difficult to click and drag the narrow scrollbar with a mouse.
    • Ideally the scrollbar track and thumb must have sufficient visual contrast (incl. contrast for colour-blind users) from each other, in addition to having sufficient contrast from the document body.

      • This could be difficult to automate: for example, there's an unfortunate bug in macOS Safari's scrollbars where, if a page has a dark background in places (but is still overall light), Safari will render a dark scrollbar thumb, which becomes invisible or very difficult to see when the thumb overlaps the dark portion of the page. This problem exists because Safari no longer renders scrollbar tracks, only thumbs. If Safari rendered a contrasting track then the thumb would always be visible.

A third audit would be a simple "Additional item to manually check" notice to remind users to ensure their website is accessible to users without scroll input devices (such as touchpads, mouse scroll wheels, or touchscreens) or users who otherwise can only use on-screen scrollbars to scroll content.
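
A rough in-page sketch of the detection described in the first audit above (the property checks are illustrative and incomplete; detecting ::-webkit-scrollbar rules would additionally require walking the CSSOM):

// Find scrollable elements whose scrollbars appear to be suppressed.
function findSuppressedScrollers(root = document) {
  const offenders = [];
  for (const el of root.querySelectorAll('*')) {
    const scrollable = el.scrollHeight > el.clientHeight;
    const style = getComputedStyle(el);
    const suppressed =
      style.overflowY === 'hidden' ||   // content clipped, no scrollbar shown
      style.scrollbarWidth === 'none';  // scrollbar explicitly hidden (where supported)
    if (scrollable && suppressed) offenders.push(el);
  }
  return offenders;
}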

How would the test look when passing?

Green?

Would there be additional details available?

If the audit fails due to a style rule hiding scrolling UI or altering ::-webkit-scrollbar, then a jump-to-source link would be useful, as well as a list of elements that failed the audit.

How would the test look when failing?

Red? Orange?

What additional details are available?

None

If the details are tabular, what are the columns?

N/A

If not obvious, how would passing/failing be defined?

I briefly defined a test criteria above.

How is this audit different from existing ones?

I checked the audit-list and searched the GitHub Issues page and couldn't find any current audits that cover the accessibility of scrolling.

What % of developers/pages will this impact?

This audit will affect 100% of pages that think they're cool by hiding the ugly Windows scrollbar because their webpages were designed by cool kids running macOS who think mouse-draggable scrollbars are for losers.

...while my above remark is firmly facetious, I note that because macOS remains the platform of choice for web-designers it is common for those web-designers to want to export macOS's interaction concepts to the web (because they're genuinely cool, or novel, or work well... on Apple hardware), but they often are inaccessible or otherwise much more difficult to use without the requisite hardware, and this extends far beyond scrollbars.

Ultimately the root cause is web designers not testing their designs fully on other platforms: testing in Chrome and Safari on macOS might seem like good coverage for differences in browsers' default styles, or via the responsive-mode or mobile-device emulation modes, but unfortunately Chrome/Chromium doesn't offer any kind of desktop emulation mode (e.g. using Windows' default font sizes/metrics, always-visible wide scrollbars, etc.) for users on other platforms.

How is the new audit making a better web for end users?

It would mean more web-pages will be accessible to users who cannot use - or do not possess - scrollwheels or touchpads.

What is the resourcing situation?

I need to earn some karma; if it isn't too much work (say, more than 20 hours?), I'm happy to implement this audit myself.

Any other links or documentation that we should check out?

Interestingly, I couldn't find any direct references to scrollbars or scrolling accessibility in WCAG 2.1, which is odd, but there are plenty of articles that reference WCAG when making recommendations about scrollbar accessibility:

  • https://www.w3.org/WAI/WCAG21/quickref
  • https://adrianroselli.com/2019/01/baseline-rules-for-scrollbar-usability.html

I suppose the concerns about scrollbar-accessibility could come under the "Pointer gestures" section?

  • https://www.w3.org/WAI/WCAG21/quickref/?currentsidebar=%23col_overview#pointer-gestures

Are you willing to work on this yourself?

Yes - provided there's a well-documented onboarding process.

What is the motivation or use case for changing this?

My personal motivation is that my Windows desktop computer's mouse scroll wheel suddenly stopped working recently (I assume some lint got stuck inside the wheel rotation sensor), which meant I could only scroll webpages by dragging a scrollbar. I came across a few webpages that had hidden their scrollbars entirely. (I contacted the author of one website, who said he did it because he felt it made his already minimalist site design look even cleaner; when I explained that it was making it impossible for me to read his blog's content, he agreed to bring the scrollbars back.)

How is this beneficial to Lighthouse?

It makes Lighthouse more comprehensive.

Created at 5 days ago
issue comment
Proposal: Lighthouse accessibility report should check for inaccessible scrolling pages and regions

If there's renewed interest in this, pinging the axe-core issue is the right place to bring this back.

Created at 5 days ago
issue comment
CLI, Node: Performance Score null if running headless

I would also suggest folks running into this try --headless=chrome, which is a new headless mode that has much more headful behavior while still being headless. :)

Created at 5 days ago
SPA with manifest.json console.log Best practices error

Provide the steps to reproduce

  1. Run LH on https://www.google.com/

What is the current behavior?

Under "Best practices" the following error is present when running audits in incognito mode in Chrome "Browser errors were logged to the console". This is not something that is controllable by the website, and shouldn't be punished for it in the audit.

What is the expected behavior?

LH will not count the console message generated by the browser itself, i.e. the "Site cannot be installed: Page is loaded in an incognito window" error.

Environment Information

  • Affected Channels: DevTools
  • Lighthouse version: 5.7.0
  • Chrome version: 81.0.4044.138
  • Node.js version:
  • Operating System: MacOS Mojave 10.14.6
Created at 5 days ago
issue comment
Get `yarn test-devtools` to work on linux

While we've completely changed the devtools tests we run... I believe this still isn't the case. We run our smoketests with the devtools runner, but we do not run the devtools repo's LH e2e tests in our CI. Though we should, and this should be significantly easier now than it was before.

Created at 5 days ago