Automated auditing, performance metrics, and best practices for the web.
https://chromiumdash.appspot.com/releases
Originally posted by @BreakerZoeBurma in https://github.com/GoogleChrome/lighthouse/issues/14728#issuecomment-1407169404
@robpaveza on the linked crbug.. I'll get back to you, the issue creator is OOO and I don't want to flip it to public without his consent. :) But the gist is that we're going to add basic metadata to traces saved from Perf Panel, and thus will almost certainly use the json format described above.
Aye, it's your call on the naming. I am just attracted to the idea that tools like traceviewer and perfetto could view these enhanced traces as well. I'm less familiar with the 3 memory profiles, but after looking at them, I don't see an obvious solution that'd net us some 'free' compatibility anywhere.
core(trace-processor): refactor processEvents and frameEvents (#14287)
As discovered in #14264 we were severely excluding events in our `processEvents` array, and also `frameEvents`, `frameTreeEvents`, and `mainThreadEvents`.
The primary changes here are in trace-processor, and this PR contains mostly bug fixes. I envision a second PR which refactors the trace-processor functions and flow a bit (and wouldn't adjust test results). I can do that here, too, but I'll leave that to the reviewer to opt in to.
`findMainFrameIds()` returns the "starting" pid, but that will not necessarily be the pid of the primary navigation; that needs to be found afterwards. The `FrameCommittedInBrowser` event is great and spells out the potentially-new `processId` of the renderer in the new navigation.
`args.data.frame` or `args.frame`. No consistency. Where I saw our code depending on one location, I expanded it to both.
edit: filed trace processor - cleanup and fixes ☂️ · #14708
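For illustration, the "expanded it to both" pattern amounts to something like this (a hypothetical helper, not the PR's actual code):

```ts
// Hypothetical helper illustrating the "check both locations" fix described
// above: the frame id on a FrameCommittedInBrowser event may live under
// args.data.frame or args.frame depending on the trace.
function getCommittedFrameId(
  evt: {args?: {frame?: string; data?: {frame?: string}}}
): string | undefined {
  return evt.args?.data?.frame ?? evt.args?.frame;
}
```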
core: delete util.cjs (#14709)
deps: upgrade axe-core
to 4.6.3 (#14690)
deps(lighthouse-stack-packs): upgrade to 1.9.0 (#14713)
lint
Merge remote-tracking branch 'origin/main' into processEventsfix
deps: upgrade puppeteer to 19.6.0 (#14706)
core: remove util.cjs (#14703)
Merge remote-tracking branch 'origin/main' into processEventsfix
Would there be an easy way to make sure we don't fall back to this in new traces? If somehow the `thread_name` event changed format, it would be good if the change wasn't masked by the fallback still working.
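One possible shape for that (hypothetical names, not the actual trace-processor code): prefer the `thread_name` metadata event and make the fallback loud, so a format change can't be silently masked.

```ts
// Sketch only. `thread_name` metadata events carry the thread's name in
// args.name; `legacyGuessMainThreadId` stands in for whatever fallback
// heuristic exists today.
function findRendererMainTid(
  events: Array<{name: string; tid: number; args?: {name?: string}}>,
  legacyGuessMainThreadId: (events: unknown[]) => number,
): number {
  const threadName = events.find(
    (e) => e.name === 'thread_name' && e.args?.name === 'CrRendererMain');
  if (threadName) return threadName.tid;
  // Fail loudly (or at least visibly) so new traces that reach this path are
  // noticed rather than masked by the fallback continuing to work.
  console.warn('thread_name metadata event missing; using legacy fallback');
  return legacyGuessMainThreadId(events);
}
```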
The question of whether `frameEvents` is wrong moved to #14708. That resolves the only remaining open threads on this PR.
core(fr): preserve scroll position in gatherers (#14660)
core(full-page-screenshot): remove audit, move to top-level (#14657)
core(user-flow): passively collect full-page screenshot (#14656)
core(legacy): convert some base artifacts to regular gatherers (#14680)
misc: exclude core/util.cjs from code coverage (#14688)
core(scoring): rebalance perf metric weightings for v10 (#14667)
core(scoring): update expected perf score for flow fixtures (#14692)
core(processed-navigation): computed directly from trace (#14693)
core: restructure types for direct import and publishing (#14441)
core: use `performance.now` in isolation (#14685)
report(thumbnails): increase res and display, reduce number (#14679)
core(viewport): support interactive-widget (#14664)
report: fix compat for older lighthouse reports (#14617)
core(bf-cache): link to chrome developer docs (#14699)
misc(assets): update logo (#13919)
report: rename i18n to i18n-formatter, move strings to Util (#13933)
core(trace-processing): add backport for pubads (#14700)
report: specify in tooltip that cpu/memory power is unthrottled (#14704)
core: add entity classification of origins to the LHR (#14622)
Merge remote-tracking branch 'origin/main' into processEventsfix
Forking off from #14697 …
`trace.traceEvents`, as there's potentially a mistake handling pids/frames.
`isOutermostMainFrame`?
The `pid` reuse case (however unlikely that is while tracing): having just a map of pid->tid says nothing about the timing. Seems like we might need a temporal aspect to the tracking as well? Or just step through the trace, subsetting in chunks between any `FrameCommittedInBrowser` events.
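A rough sketch of that "subset in chunks between `FrameCommittedInBrowser` events" idea (hypothetical types and function, not Lighthouse's implementation):

```ts
// Partition a trace so pid/tid lookups only apply between navigation commits,
// which sidesteps pid reuse across navigations.
interface TraceEvent {
  name: string;
  pid: number;
  tid: number;
  ts: number;
  args?: Record<string, unknown>;
}

function chunkByFrameCommits(events: TraceEvent[]): TraceEvent[][] {
  const sorted = [...events].sort((a, b) => a.ts - b.ts);
  const chunks: TraceEvent[][] = [[]];
  for (const evt of sorted) {
    // A new FrameCommittedInBrowser starts a fresh window; any pid->tid map
    // built afterwards is only trusted within that window.
    if (evt.name === 'FrameCommittedInBrowser') chunks.push([]);
    chunks[chunks.length - 1].push(evt);
  }
  return chunks;
}
```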
This feature is a great idea and I really like the experience. 👍
WDYT about naming the property holding the trace events `traceEvents` rather than `payload`? That'd be consistent with the typical trace JSON format, which is ~canonically `{traceEvents: Array<TraceEvent>}`.
about:tracing and Performance Insights both use `{traceEvents: [], metadata: {}}`, whereas the Chromium Perf panel saves `Array<TraceEvent>` traces, but we're soon going to make it match.
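For concreteness, here's a sketch of the two shapes being compared and how a consumer could accept either (the `TraceEvent` type below is just the minimal canonical shape, not an exhaustive definition):

```ts
// Minimal trace-event shape, for illustration only.
interface TraceEvent {
  name: string;
  ph: string;
  pid: number;
  tid: number;
  ts: number;
}

// about:tracing / Performance Insights: {traceEvents, metadata}
// Perf panel (today): a bare Array<TraceEvent>
type SavedTrace =
  | TraceEvent[]
  | {traceEvents: TraceEvent[]; metadata?: Record<string, unknown>};

function getTraceEvents(saved: SavedTrace): TraceEvent[] {
  return Array.isArray(saved) ? saved : saved.traceEvents;
}
```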
An example use case: currently https://trace.cafe doesn't work on enhanced traces, but with the above change it'd work (minus the enhancements ;)
I'm not sure exactly what the impact is for the memory enhanced traces.. but a property that's specific to them would help tools ingesting this data. (I can see the `.meta.type` prop differentiates, but the object shape being distinctive would assist.)
> depend on the native impl that supports `document-policy: unsized-media=?0`

Seems compelling. But we quickly looked into the impl and are not sure it has a more robust check than ours:
https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/renderer/core/html/media/media_element_parser_helpers.cc;l=20-21;drc=fef0781f2b4b0694ba56a0d688802c3c47f12dc9
https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/renderer/platform/geometry/length.h;l=246-249;drc=fef0781f2b4b0694ba56a0d688802c3c47f12dc9
Also, this header will affect the size of unsized images, and AFAIK you can't engage it in report-only mode. AFAICT that was never implemented. So... nevermind on this thread. :/
MDN states % should not be there. Is it allowed by the HTML spec?
@brendankenny touched base with Domenic to get to the bottom of the spec issue. Domenic says it's allowed in the user-agent requirements (but not really on the authoring side). https://github.com/whatwg/html/issues/8589
(Apparently 3 of us wrote a comment at the same time for this. Here's my overlapping take...)
First, we could drop our LH unsized-images impl and depend on the native impl that supports `document-policy: unsized-media=?0`. See https://github.com/WICG/document-policy/blob/main/document-policy-explainer.md and https://wicg.github.io/document-policy/#reporting. We know that the reporting API has CDP support, so it should be possible to set a `document-policy-report-only` response header and slurp up the findings via the CDP events.
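A sketch of that flow, assuming Puppeteer and the experimental CDP Reporting API surface (`Network.enableReportingApi` / `Network.reportingApiReportAdded`). The method/event names, and the premise that the page is served with a `Document-Policy-Report-Only: unsized-media=?0` response header, are assumptions to verify, not confirmed here:

```ts
import puppeteer from 'puppeteer';

// Sketch only: assumes the page responds with
// `Document-Policy-Report-Only: unsized-media=?0`, and that the experimental
// CDP Reporting API methods/events named below are available.
async function collectDocumentPolicyReports(url: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const session = await page.createCDPSession();

  const reports: unknown[] = [];
  session.on('Network.reportingApiReportAdded', (evt) => {
    reports.push(evt);
  });
  await session.send('Network.enableReportingApi', {enable: true});

  await page.goto(url, {waitUntil: 'networkidle0'});
  await browser.close();
  return reports;
}
```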
Now.. a few cases we discussed:
`width: min-content`. Not good, and LH fails. 👍
`width`, `min-width`, `max-width` and the combinations...
We also discussed the influence of the surrounding elements. A) A fixed-height parent has significance for a percent-height child. B) If there's no content in the flow following an image, there's no layout shift. We (without adam....) decided that LH would not check either of these cases, as.. it's just a lot of complication. :)
LargestImagePaint::Candidate is available and gives us image URL and coordinates. But doesn't help on the GC front.
The new CDP events in the above crbug give us the nodes at runtime. We can collect their nodeDetails as those events come in to ensure they're not GC'd before the TraceElements gatherer runs. For this to be worth it, we'd probably have to collect these nodeDetails right when the events come in. But... overhead is unknown and that may be unwise.
Something to look into, especially if we see this problem happening often.
Related to the FCP-can-be-faster-than-TTFB issue.
Also related: should TBT not be dependent on TTI.
Feature request summary
Using information such as the network transmission time and CPU time of each resource, gathered during a test page load, Lighthouse could in fact calculate the best order of resource loading and task execution to reach the (nearly) optimal Speed Index (SI) value for the web page. I am willing to develop this feature myself.
What is the motivation or use case for changing this?
The page load performance metric Speed Index reflects how quickly web page contents are populated onto the screen during the entire page load, which makes it more meaningful than simpler metrics based on a single time instant.
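For reference, Speed Index is commonly defined as the area above the visual-progress curve for the load (lower is better):

```latex
SI = \int_{0}^{t_{end}} \bigl(1 - VC(t)\bigr)\, dt
```

where `VC(t)` is the fraction of the final above-the-fold content painted at time `t`; an "SI-optimal" loading order is one that drives `VC(t)` up as early as possible.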
Given a web page, if we know its SI-optimal loading order as well as the corresponding SI lower bound, we can decide whether we should change the priorities of resources or reduce the page complexity. In this way, we can provide users with a better visual experience during page loading.
How is this beneficial to Lighthouse?
Although Lighthouse reports the SI of a page load, currently it seems to provide limited suggestions on how to optimize the page for a better SI. It would be helpful to provide more informative content about SI in the report.
This never worked because there was a crash, and it was decided to just focus on Mac. However, @adamraine needs to run this on GCP to collect data at scale, so let's revisit.
Currently `yarn test-devtools` errors loudly, complaining that "crashpad_handler does not exist". This is easy to silence by passing `--disable-breakpad` to the webserver python program. However, the crash is still there; this just removes the complaining. (Note: unnecessary, but if we wanted to we could download the crashpad handler from `chrome-PLATFORM.zip` and place it next to the content shell.)
So with that said, here's the true error. Still investigating.
[501216:501216:1124/122624.392648:WARNING:vaapi_wrapper.cc(534)] VAAPI video acceleration not available for disabled
[501132:501132:1124/122624.384668:FATAL:platform_font_skia.cc(97)] Check failed: InitDefaultFont(). Could not find the default font
#0 0x5633328ce399 base::debug::CollectStackTrace()
#1 0x56333284af13 base::debug::StackTrace::StackTrace()
#2 0x56333285a3f0 logging::LogMessage::~LogMessage()
#3 0x56333285afbe logging::LogMessage::~LogMessage()
#4 0x56333345a3af gfx::PlatformFontSkia::PlatformFontSkia()
#5 0x56333345be4b gfx::PlatformFont::CreateDefault()
#6 0x56333344a3ce gfx::Font::Font()
#7 0x56333237931a content::RenderViewHostImpl::GetPlatformSpecificPrefs()
#8 0x5633324acf96 content::WebContentsImpl::SyncRendererPrefs()
#9 0x5633328390a3 content::Shell::CreateShell()
#10 0x563332839727 content::Shell::CreateNewWindow()
#11 0x56333617a66e content::WebTestControlHost::PrepareForWebTest()
#12 0x56333614b383 content::WebTestBrowserMainRunner::RunBrowserMain()
#13 0x563332835e50 content::ShellMainDelegate::RunProcess()
#14 0x563331f1090c content::ContentMainRunnerImpl::RunServiceManager()
#15 0x563331f10533 content::ContentMainRunnerImpl::Run()
#16 0x563331275d74 content::RunContentProcess()
#17 0x56333127675c content::ContentMain()
#18 0x5633305d44cc main
#19 0x7f2c44825cca __libc_start_main
#20 0x5633305d436a _start
Received signal 6
#0 0x5633328ce399 base::debug::CollectStackTrace()
#1 0x56333284af13 base::debug::StackTrace::StackTrace()
#2 0x5633328cdf3b base::debug::(anonymous namespace)::StackDumpSignalHandler()
#3 0x7f2c45ee1140 (/lib/x86_64-linux-gnu/libpthread-2.31.so+0x1413f)
#4 0x7f2c4483adb1 gsignal
#5 0x7f2c44824537 abort
#6 0x5633328ccec5 base::debug::BreakDebugger()
#7 0x56333285a862 logging::LogMessage::~LogMessage()
#8 0x56333285afbe logging::LogMessage::~LogMessage()
#9 0x56333345a3af gfx::PlatformFontSkia::PlatformFontSkia()
#10 0x56333345be4b gfx::PlatformFont::CreateDefault()
#11 0x56333344a3ce gfx::Font::Font()
#12 0x56333237931a content::RenderViewHostImpl::GetPlatformSpecificPrefs()
#13 0x5633324acf96 content::WebContentsImpl::SyncRendererPrefs()
#14 0x5633328390a3 content::Shell::CreateShell()
#15 0x563332839727 content::Shell::CreateNewWindow()
#16 0x56333617a66e content::WebTestControlHost::PrepareForWebTest()
#17 0x56333614b383 content::WebTestBrowserMainRunner::RunBrowserMain()
#18 0x563332835e50 content::ShellMainDelegate::RunProcess()
#19 0x563331f1090c content::ContentMainRunnerImpl::RunServiceManager()
#20 0x563331f10533 content::ContentMainRunnerImpl::Run()
#21 0x563331275d74 content::RunContentProcess()
#22 0x56333127675c content::ContentMain()
#23 0x5633305d44cc main
#24 0x7f2c44825cca __libc_start_main
#25 0x5633305d436a _start
r8: 0000000000000000 r9: 00007fffb5beaf20 r10: 0000000000000008 r11: 0000000000000246
r12: 00003249fade1000 r13: aaaaaaaaaaaaaaaa r14: 00003249fade1010 r15: 00007fffb5beb9c0
di: 0000000000000002 si: 00007fffb5beaf20 bp: 00007fffb5beb170 bx: 00007f2c43d25b80
dx: 0000000000000000 ax: 0000000000000000 cx: 00007f2c4483adb1 sp: 00007fffb5beaf20
ip: 00007f2c4483adb1 efl: 0000000000000246 cgf: 002b000000000033 erf: 0000000000000000
trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
[end of stack trace]
Calling _exit(1). Core file will not be generated.
It runs on linux now \o/
https://github.com/GoogleChrome/lighthouse/actions/runs/3992078236/workflow#L81-L118
Feature request summary
Cloudflare Polish converts images to WebP from its CDN. However, if Cloudflare finds that converting to WebP results in a larger size, it delivers the original image instead. Lighthouse still gives the warning to serve these images as WebP (next-gen format).
Cloudflare provides a response header: `cf-polished: origSize=70188, status=webp_bigger`.
What is the motivation or use case for changing this?
Many customers believe either that Polish is not working or that converting to WebP would increase the site's speed even more when analyzed with Lighthouse.
How is this beneficial to Lighthouse?
Not every image, when converted to WebP, will result in a smaller size; sometimes it will be larger. It would be great if Lighthouse could check these headers and drop that warning when the status is `webp_bigger`.
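As a sketch, the check could be as simple as parsing that header (illustrative only, not an actual Lighthouse audit hook):

```ts
// Example header from above: `cf-polished: origSize=70188, status=webp_bigger`.
// Returns true when Cloudflare Polish reports that the WebP version would be
// larger, in which case the "serve images in next-gen formats" advice is moot.
function polishSaysWebpIsBigger(headers: Record<string, string>): boolean {
  const polished = headers['cf-polished'];
  if (!polished) return false;
  return /(^|[,\s])status=webp_bigger\b/i.test(polished);
}
```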
This audit would inspect a page for any scrollable elements where browser/system-provided scrolling mechanisms have been disabled, replaced or restyled in a way that hinders the accessibility and ease-of-use for a site.
I am proposing three related audits:
Two "Best practices" audits that inspect the page's CSS styling and rendered element content boxes:
The first BP audit is rather strict, and would recommend against making any changes to browser-provided scrolling mechanisms.
It would fail for any element `el` where `el.scrollHeight > el.clientHeight` AND `getComputedStyle(el).overflow === 'none'`, or if any CSS rule applied to any element targets any of the `::-webkit-scrollbar` pseudo-elements.
The second BP audit would permit the restyling of scrollbars provided their new styles meet necessary ease-of-use criteria:
Fitts's Law: The scrollbar track and thumb must be large enough to easily target with a mouse cursor or touch input device (ideally based on a CSS media query).
If the scrollbar thumb is variable-height then the minimum size of the thumb must still be large enough to comfortably target.
Ideally the scrollbar track and thumb must have sufficient visual contrast (incl. contrast for colour-blind users) from each other, in addition to having sufficient contrast from the document body.
A third audit would be a simple "Additional item to manually check" notice to remind users to ensure their website is accessible to users without scroll input devices (such as touchpads, mice scrollwheels, touchscreens) or users who otherwise can only use on-screen scrollbars to scroll content.
Green?
If the audit fails due to a style rule hiding scrolling UI or altering `::-webkit-scrollbar`, then a jump-to-source link would be useful, as well as a list of elements that failed the audit.
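A rough in-page sketch of the first audit's check. The proposal above says `overflow === 'none'`; since `none` isn't a valid computed `overflow` value, this sketch assumes `hidden` was the intent, and the `::-webkit-scrollbar` scan only sees same-origin stylesheets:

```ts
// Elements that overflow vertically but have their scroll UI hidden.
function findHiddenScrollContainers(): Element[] {
  return [...document.querySelectorAll('*')].filter((el) => {
    const style = getComputedStyle(el);
    return el.scrollHeight > el.clientHeight && style.overflowY === 'hidden';
  });
}

// Whether any readable stylesheet restyles the ::-webkit-scrollbar pseudo-elements.
function pageStylesScrollbars(): boolean {
  for (const sheet of document.styleSheets) {
    let rules: CSSRuleList;
    try {
      rules = sheet.cssRules; // cross-origin sheets throw
    } catch {
      continue;
    }
    for (const rule of rules) {
      if (rule instanceof CSSStyleRule &&
          rule.selectorText.includes('::-webkit-scrollbar')) {
        return true;
      }
    }
  }
  return false;
}
```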
Red? Orange?
None
N/A
I briefly defined a test criteria above.
I checked the audit-list and searched the GitHub Issues page and couldn't find any current audits that cover the accessibility of scrolling.
This audit will affect 100% of pages that think they're cool by hiding the ugly Windows scrollbar because their webpages were designed by cool kids running macOS who think mouse-draggable scrollbars are for losers.
...while my above remark is firmly facetious, I note that because macOS remains the platform of choice for web-designers it is common for those web-designers to want to export macOS's interaction concepts to the web (because they're genuinely cool, or novel, or work well... on Apple hardware), but they often are inaccessible or otherwise much more difficult to use without the requisite hardware, and this extends far beyond scrollbars.
Ultimately the root cause is web-designers not testing their designs fully on other platforms: testing in Chrome and Safari on macOS might seem like good coverage for differences in browsers' default styles, or via the responsive-mode and mobile-device emulation modes, but unfortunately Chrome/Chromium doesn't offer any kind of desktop emulation mode (e.g. using Windows' default font sizes/metrics, always-visible wide scrollbars, etc.) for users on other platforms.
It would mean more web-pages will be accessible to users who cannot use - or do not possess - scrollwheels or touchpads.
I need to earn some karma - if it isn't too much work (say more than 20 hours?) I'm happy to implement this audit myself.
Interestingly I couldn't find any direct references or mentions of scrollbars or scrolling-accessibility in WCAG 2.1 - which is odd, but there are plenty of articles that reference WCAG when making recommendations about scrollbar accessibility:
I suppose the concerns about scrollbar-accessibility could come under the "Pointer gestures" section?
Yes - provided there's a well-documented onboarding process.
My personal motivation is that my Windows desktop computer's mouse's scrollwheel suddenly stopped working recently (I assume some lint got stuck inside the wheel rotation sensor) which meant I could only scroll webpages by dragging a scrollbar. I came across a few webpages that had hidden their scrollbars entirely (I contacted the author of one website who said he did it because he felt it made his already minimalist site design look even cleaner - but when I explained to him it was making it impossible for me to read his blog's content he agreed to bring back scrollbars to his website).
How is this beneficial to Lighthouse?
It makes Lighthouse more comprehensive.
If there's renewed interest in this, pinging the axe-core issue is the right place to bring this back.
I would also suggest folks running into this try `--headless=chrome`, which is a new mode for headless that has much more headful behavior while still being headless. :)
Under "Best practices" the following error is present when running audits in incognito mode in Chrome "Browser errors were logged to the console"
. This is not something that is controllable by the website, and shouldn't be punished for it in the audit.
LH should not count console messages generated by the browser itself, i.e. the "Site cannot be installed: Page is loaded in an incognito window" error.
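A sketch of what that could look like (the field names are an assumption, not Lighthouse's actual artifact shape):

```ts
// Ignore browser-generated noise, such as the incognito install warning,
// when deciding whether the errors-in-console audit should fail.
const BROWSER_GENERATED_NOISE = [
  /Page is loaded in an incognito window/,
];

function isActionableConsoleError(entry: {text: string}): boolean {
  return !BROWSER_GENERATED_NOISE.some((pattern) => pattern.test(entry.text));
}
```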
While we completely changed the devtools tests we run.... I believe this still isn't the case. We run our smoketests with the devtools runner. But we do not run the devtools repo's LH e2e tests in our CI. Though we should. This should be significantly easier now than it was before.