ABOUT THE AUTHOR

Yoav Weiss (@yoavweiss) has been working on mobile web performance for longer than he cares to admit, on the server side as well as in browsers. He now works as part of the Google Chrome developer relations team, helping to fix web performance once and for all.

He takes image bloat on the web as a personal insult, which is why he joined the Responsive Images Community Group and implemented the various responsive images features in Blink and WebKit. That was his gateway drug into the wonderfully complex world of browsers and standards.

When he's not writing code, he's probably slapping his bass, mowing the lawn in the French countryside, or playing board games with his family.

Back in October, the Web Performance Working Group held its annual two-day Face-to-Face meeting (or F2F, as the kids call it) as part of TPAC, the W3C's annual gathering.

We had a very productive meeting, preceded by the plenary day, where we also discussed various related issues in two breakout sessions.

If you're highly curious (and have some time on your hands), detailed minutes from the meeting are available. But to save you from having to read them, I've summarized the discussions below.

Highlights

  • We had attendees from four large browser vendors (Edge, Firefox, Safari and Chrome), a CDN/analytics provider (Akamai), as well as large web properties (Facebook, Baidu, Wikimedia and 360.cn).

  • We discussed and reached consensus on 33 issues on the different specifications that the working group covers:

    • Subresource Integrity (SRI) and responsive resource loading with preload.

    • Processing model for prefetch, and one that works well with service workers and privacy-preserving caching models.

    • Unifying the buffering paradigms for Performance Timeline.

    • Buffering model as well as tracking in-flight requests and service-worker annotations for Resource Timing.

    • More precise Paint Timing definitions.

  • Presented many new and renewed proposals for upcoming metrics:

    • New metrics: Event Timing, Element Timing and Layout stability.

    • JS runtime improvements: scheduling APIs, task worklet, hasPendingUserInput(), as well as JS self-profiling.

  • Had two related breakout sessions where we discussed Client Hints as well as the security implications of exposing URLs of CSS subresources.

  • Had a productive collaboration session with the Distributed Tracing WG.

Issue discussions

I'm not gonna lie: this section discusses the various issues open on the different specs the group handles, and as such it can be a bit tedious if you're not following those specifications up close. If you are, read on! If not, feel free to skim this section or hop over to the proposals section. I won't judge.

Preload

We discussed a few of preload’s current deficiencies and how to fix them:

SRI support

The lack of preload support for resources that are later fetched using SubResource Integrity (or SRI, for short) has been a long-standing issue. Such resources today get double-downloaded, which everyone agrees is awful. But up until this meeting, we had trouble reaching agreement on what the solution should be. Finally at this F2F, we reached consensus!

We decided that we should:

  • As a short-term solution, enable a full integrity attribute on <link rel=preload> links (see the sketch after this list)

    • That would enable developers to include the full integrity hashes on preload links, and enable the browser to properly match those in its cache when a resource with the integrity attribute is requested.

  • In the longer term, we want to add a new attribute that will enable developers to communicate the integrity hash algorithm to the browser, without specifying the full hash.

    • That would enable browsers to calculate the required integrity for the resource without having to store the entire preloaded resource in memory until it is reused.

    • Developers would then be able to include SRI-enabled preloads without having to specify the same hash in multiple places.
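Here's a minimal sketch of the short-term solution mentioned above, assuming the integrity attribute is honored on preload links as agreed (the hash value is a placeholder):

```js
// Preload a script and declare its expected SRI hash up front, so the
// browser can match this response in its cache when the real
// <script integrity="..."> request arrives, instead of fetching twice.
const link = document.createElement('link');
link.rel = 'preload';
link.as = 'script';
link.href = '/static/app.js';
// Placeholder hash; it must match the one on the consuming <script> tag.
link.setAttribute('integrity', 'sha384-EXAMPLEPLACEHOLDERHASH');
document.head.appendChild(link);
```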

srcset/sizes equivalents

Another deficiency we talked about was the lack of a proper way to preload images that are later loaded using a srcset attribute: there is currently no way to preload responsive images for viewport-based or device-pixel-ratio-based image selection.

A proposal to tackle this is to add imagesrcset and imagesizes attributes to preload links, enabling developers to use the same syntax in preload links as they do in their <img> tags.
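Here's a sketch of what that could look like, using the attribute names from the proposal (the final spelling could still change):

```js
// Preload a responsive hero image: the browser applies the same source
// selection logic it would use for <img srcset sizes>.
const link = document.createElement('link');
link.rel = 'preload';
link.as = 'image';
link.setAttribute('imagesrcset',
    'hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w');
link.setAttribute('imagesizes', '(max-width: 600px) 100vw, 50vw');
document.head.appendChild(link);
```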

There was general agreement in the room that this use-case is worth tackling. Members of Wikimedia's performance team said that they currently work around this missing functionality, and would definitely use the feature once it is available in browsers. Turns out, having actual users of your standards in the room is extremely valuable!

A related discussion then sparked around the inefficiency of preloading responsive resources via Link headers: the browser doesn't know the exact dimensions of the viewport until a later phase, when the HTML has arrived and the meta tags containing the viewport information have been parsed. While this problem is not unique to imagesrcset (it's equally present when combining preloads with media attributes), we agreed that a mechanism enabling developers to define the viewport as a header would improve the current state of affairs. Following that discussion, we now also have a proposal for a Viewport header to do just that.

<picture> equivalent

Following up on the previous discussion, we tackled the art-direction use case for preloading responsive images. While on the surface it seems like a similar issue, it would require a new mechanism that groups multiple <link> tags together, and we cannot solve that with a new element, since introducing new elements into the HTML <head> is not backwards compatible: existing HTML parsers would immediately close the <head> and move the unknown element into the <body>.

The use-case here seemed less urgent than viewport-based selection, and workarounds exist for most cases, even if they require rewriting picture's media queries in a mutually exclusive way.

Due to the added complexity and lack of clear demand for the use-case, the group decided to punt on this issue for now, until demand becomes clearer.

Prefetch

Prefetch is a part of the larger resource-hints specification, but how it should behave in double-key-caching browsers or when service workers are present is not so well-defined (read: not at all).

By double-key-caching, I mean browsers that cache subresources keyed on both the resource's URL and the top-level page's origin. That way, resources cannot be shared between pages on different domains, and cannot leak cookie-like information to third-party hosts, which is important in browsers that block third-party cookies by default. Safari has been using that technique for a few years now, as part of its Intelligent Tracking Prevention. Firefox recently announced that they may follow suit.

If we want prefetch to gain better adoption, we need to better define how it can co-exist with such protections against third-party information leaks. For that purpose, we discussed a proposal to restrict cross-origin prefetches to navigation requests, which will make sure they cannot breach privacy. We also concluded that for privacy purposes, we would need prefetches to skip the service worker.

We then continued to discuss details, such as whether we can use the current Fetch Request keepalive flag for prefetches, or if we'd need to define a new concept of "speculative fetches". We also discussed whether we'd need as=document to tell navigation prefetches apart from subresource ones (so that double-key-caching browsers would know to skip cross-origin subresource prefetches, which could compromise their privacy restrictions). We concluded that a "speculative fetch" concept is probably the way to go, and that as=document is most probably required.
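For illustration, here's a sketch of what a navigation prefetch might look like under that conclusion, treating as=document as the annotation that marks it safe for double-key-caching browsers:

```js
// A cross-origin prefetch explicitly marked as a future navigation, so
// privacy-preserving browsers know it won't be reused as a subresource.
const link = document.createElement('link');
link.rel = 'prefetch';
link.as = 'document';
link.href = 'https://news.example/next-article.html';
document.head.appendChild(link);
```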

Otherwise, we agreed that bundling all the different hints as part of resource-hints is likely to keep the spec from moving forward, as the different hints enjoy different levels of support in the various browsers. So we decided to split the hints into their own specifications, or to integrate them directly into HTML and Fetch.

Performance Timeline

We discussed making PerformanceObserver easier to use for developers by adding a way to enumerate the supported entry types. We also talked about the ideal behavior in case someone mixes the old parameters with the new, entry-type-specific ones, and decided that it's best to throw an exception in those cases, as the alternatives are likely to cause developer confusion.

We then talked about unifying the buffering paradigm for the different specifications that produce PerformanceEntries, and making sure they allow easy access to entries without encouraging developers to run early blocking JS. We also agreed that for some metrics which introduce measurement overhead, we would need to provide a declarative mechanism that enables developers to turn them on and buffer them initially. Finally, since Resource-Timing’s “clear” semantics are being abused in the wild, we should try to avoid them here for this new buffering model.
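Here's a minimal sketch of how that enumeration combines with the new per-type observe() parameters, assuming the supportedEntryTypes static that was discussed:

```js
// Feature-detect an entry type before observing it, and use the new
// per-type `buffered` flag to pick up entries recorded before this
// observer was registered. Mixing `type` with the old `entryTypes`
// parameter is the case the group agreed should throw.
if ((PerformanceObserver.supportedEntryTypes || []).includes('resource')) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.name, entry.duration);
    }
  }).observe({ type: 'resource', buffered: true });
}
```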

Resource Timing

We talked about and agreed on a buffering model for Resource Timing, which has since landed. We agreed that clarifying exact times when entries are added to the timeline will require Fetch integration, so will probably be postponed to Level 3 of the specification. Lastly, we concluded that negative times for navigation preload fetches are probably fine, as these fetches can be triggered before the navigation starts and the time origin gets initialized.
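For context, here's roughly how developers interact with that buffer today (sendToAnalytics stands in for your own reporting code):

```js
// Grow the Resource Timing buffer beyond its default size, and react
// when it fills up anyway.
performance.setResourceTimingBufferSize(500);
performance.addEventListener('resourcetimingbufferfull', () => {
  // Hand the buffered entries to your own reporting code before
  // clearing to make room for new ones.
  sendToAnalytics(performance.getEntriesByType('resource')); // stand-in
  performance.clearResourceTimings();
});
```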

A related breakout session discussed exposing the subresource URLs of no-CORS CSS in both Resource Timing and service workers. There is a long-standing debate on whether the current behavior (where such URLs are exposed) is fine from a security perspective. The conclusion was to gather implementer interest in blocking such resources from appearing in the Resource Timing timeline as well as in service workers. Once we have that, we'd need to gather data on the level of content breakage this would introduce.

The practical implication, if this goes through, is that service workers and Resource Timing may stop seeing such subresource URLs (e.g. font and background image URLs fetched by a no-CORS cross-origin stylesheet).

We also discussed a couple of Resource Timing related proposals:

FetchEvent Worker Timing

Ben Kelly presented his proposal to enable service worker code to add User-Timing-like annotations to the request’s loading process and have those annotations appear as part of the main page’s Resource Timing performance timeline.

There was general agreement that the use-case is worth tackling, followed by some discussion about the right shape of the API, which has since resulted in updates to the proposal.
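Since the API's shape was still being discussed, the following is purely illustrative; the workerTiming name is hypothetical:

```js
// Inside a service worker: annotate this request's processing with
// marks that would show up on the page's Resource Timing entry for it.
self.addEventListener('fetch', (event) => {
  event.respondWith((async () => {
    if (event.workerTiming) event.workerTiming.mark('sw-start'); // hypothetical
    const cached = await caches.match(event.request);
    if (event.workerTiming) event.workerTiming.mark('cache-checked'); // hypothetical
    return cached || fetch(event.request);
  })());
});
```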

In-flight resource requests

The use-case of exposing in-flight requests in Resource Timing is a long-standing one, and Nicolás presented multiple options for resolving it.

The group discussed the different options and decided to expose a method that returns an array of the in-progress resource requests. With that input, we have enough to draft yet-another-proposal on that front 🙂
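As a rough illustration of that direction (the method name below is invented for this example):

```js
// Inspect requests that are still in flight, e.g. right before the
// user leaves the page.
const pending = performance.getPendingResourceEntries(); // hypothetical name
for (const entry of pending) {
  console.log('still loading:', entry.name,
              'for', performance.now() - entry.startTime, 'ms');
}
```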

Paint Timing

For Paint Timing, we agreed that in order for different implementations to reach the same results given identical pages, we need to better specify what constitutes "invisible text", and that first paint should be defined as the first paint after the time origin is set. We also agreed to reduce the complexity of First Contentful Paint by ignoring empty SVGs and clipped canvases, and by only counting non-white canvas paints, as white canvases are likely to show up in the wild before any content arrives.
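For reference, these definitions back the paint entries that developers can already observe:

```js
// Log First Paint and First Contentful Paint, including entries that
// were recorded before this script ran (hence `buffered: true`).
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.name is 'first-paint' or 'first-contentful-paint'.
    console.log(entry.name, entry.startTime);
  }
}).observe({ type: 'paint', buffered: true });
```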

New and exciting proposals

While most of the first day was dedicated to issue discussions, we reserved the second day to discuss various proposals for upcoming metrics and APIs that would make it easier to create and measure faster experiences on the web.

New Metrics

Event Timing

Nicolás Pena presented a proposal for a new performance entry type called Event Timing (we hesitated calling it Input Timing, but decided to keep it as “Event Timing” to enable extending it to non-input events in the future).

Its purpose is to enable developers to measure input latency in the wild: from the moment the user's input gesture is registered by the hardware, to the point the event fires, and all the way to the next paint that occurs as a result.

Due to privacy restrictions, as well as a desire to avoid having too many entries, it will mostly be limited to "long" events, that is, events that take over 52ms. One exception is the first input event, which is planned to be exposed even when it is fast, and is destined to be an "in-the-wild" companion to the Time-To-Interactive lab metric.

As far as browser vendor reception went, Microsoft and Mozilla seemed fairly enthusiastic about tackling the use case, and Apple folks commented that they have no particular objections.
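Here's a sketch of how observing that first input could look, based on the proposal (entry-type and field names were still settling at the time):

```js
// Report the input delay of the user's first interaction: the time from
// the hardware timestamp to when event handlers started running.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const inputDelay = entry.processingStart - entry.startTime;
    console.log('first input:', entry.name, inputDelay, 'ms of delay');
  }
}).observe({ type: 'first-input', buffered: true });
```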

Element timing

With Element Timing, the goal is to create a metric that enables developers to better know when various DOM elements are rendered to screen. The metric will target image elements as a first step, and will enable timing of both developer-annotated elements and elements that occupy a large chunk of the user's viewport. The group agreed that this would be a good first step to tackle that use-case. The next step is probably an origin trial to get more developer feedback.
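A sketch of the proposed shape, where a developer annotates an image to opt it into being timed (attribute and entry names per the proposal, and subject to change):

```js
// Markup: <img src="hero.jpg" elementtiming="hero-image">
// Observe when annotated (or viewport-filling) images are rendered.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.identifier, 'rendered at', entry.renderTime || entry.loadTime);
  }
}).observe({ type: 'element', buffered: true });
```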

Layout stability

This proposal's goal is to enable the measurement of page re-layouts: cases where, after the initial layout and paint, page elements "jump around" following image downloads, font downloads, new components injected into the page, scroll bars appearing, etc.

That experience is user-hostile, so we want to enable developers to measure it in the wild and detect if it happens to their users. (And since this can happen as a result of third party behavior, testing for this in the lab may not be enough.)

The group discussed the performance overhead of measuring this, how the Chrome implementation was optimized to minimize that overhead, and whether the measurement would require an opt-in because of it. We concluded that there's sufficient interest in the metric for Chrome to come back to the group with a concrete proposal for standardization.
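To make the idea concrete, here's a sketch of how such a metric could be consumed from the performance timeline; the 'layout-shift' entry type and its fields follow the direction the proposal later took, so treat them as illustrative:

```js
// Accumulate a layout-instability score over the page's lifetime,
// ignoring shifts that closely follow user input (those are expected).
let instabilityScore = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) instabilityScore += entry.value;
  }
}).observe({ type: 'layout-shift', buffered: true });
```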

JS runtime improvements

Scheduling API

Shubhie presented a proposal to enable better scheduling in the browser, and avoid cases where web apps are blocking the main thread for long periods when executing scripts. Her proposal is split into two parts:

  1. Enable JS-based schedulers to better perform their duties, by providing lower-level primitives that give them tighter integration with the browser's event loop and more awareness of the user and their actions. Examples are APIs that expose the fact that there's pending user input, provide the time remaining until the next animation frame, notify that a paint has happened, etc.

  2. Provide a higher-level scheduling API that will enable developers to post tasks, notify the browser of their priority, and let the browser do the rest, without the developer needing to know the intricacies of tasks, microtasks and the event loop (see the sketch below this list).
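As a hypothetical illustration of that second, higher-level part (the global scheduler object, the priority names, and the callbacks are all assumptions here):

```js
// Post tasks with hints about their priority and let the browser
// interleave them with input handling and rendering.
scheduler.postTask(() => renderSidebar(), { priority: 'user-visible' });   // hypothetical
scheduler.postTask(() => prefetchNextPage(), { priority: 'background' });  // hypothetical
```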

hasPendingUserInput

One of the primitives Shubhie mentioned is hasPendingUserInput (formerly shouldYield), which was presented by Tim Dresser.

This small API can indicate to interested developers that the browser has pending user input, so they should terminate their current task and yield to it as soon as possible. There was general consensus in the group that this makes sense, followed by a discussion on whether the API would also need to expose how long the input has been pending, to enable developers to complete "render-critical" tasks before handling that input.
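Here's a sketch of the cooperative task loop this primitive enables, using the proposal's method name (hanging it off navigator.scheduling is an assumption here):

```js
// Run queued work, but yield back to the browser as soon as user input
// is waiting to be handled.
function runTasks(taskQueue) {
  while (taskQueue.length > 0) {
    if (navigator.scheduling.hasPendingUserInput()) { // name per the proposal
      setTimeout(() => runTasks(taskQueue), 0); // yield, then resume
      return;
    }
    const task = taskQueue.shift();
    task();
  }
}
```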

Task Worklet

Another aspect of properly scheduling JS tasks is being able to schedule them off the main thread. Shubhie presented ideas on that front for Task Worklets. The concept would let the browser manage a thread pool, have the worklet code register specific tasks that it can fulfill, and have code on the main thread post those tasks to the worklet with significantly lower overhead and boilerplate than doing the same with a worker.
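Since no concrete API had been settled on, the following sketch is entirely hypothetical, just to illustrate the register-and-post model:

```js
// task-worklet.js — runs in the worklet, off the main thread:
registerTask('compress', class {
  process(payload) {
    return doHeavyCompression(payload); // hypothetical helper
  }
});

// main thread (inside an async function) — post work to the pool
// with minimal boilerplate:
await TaskWorklet.addModule('task-worklet.js');              // hypothetical
const result = await TaskWorklet.postTask('compress', data); // hypothetical
```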

JS self profiling

Andrew Comminos of Facebook presented their proposal to enable developers to profile their site's JS performance in the wild. Currently they instrument their JS code to get a similar effect, which adds some overhead and can break caching, as they only profile a portion of their users. Concerns were raised regarding the potential performance implications of having such a sampling profiler enabled, as well as the size of the traces it may produce. It is also not clear how profiling can be securely enabled in the presence of third-party JS, whose responses are often opaque.
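A sketch of what a page-initiated sampling profiler could look like; the Profiler constructor and its options follow the proposal's direction and may well change:

```js
// Sample the main thread's JS stacks while expensive work runs, then
// upload the resulting trace for offline analysis.
async function profileExpensiveWork() {
  const profiler = new Profiler({ sampleInterval: 10, maxBufferSize: 10000 });
  await doExpensiveWork(); // stand-in for the code under test
  const trace = await profiler.stop();
  navigator.sendBeacon('/traces', JSON.stringify(trace)); // your collection endpoint
}
```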

As next steps, it was decided that we need to gather interest from other web properties, precisely define the stack-frame structure in the spec, and prototype an implementation in order to gather data regarding the performance implications of such a profiler.

Client Hints

On Wednesday, I ran a breakout session regarding Client Hints and privacy. The goal of the session was to discuss the privacy implications of client hints as a mechanism, rather than discuss the various different hints that can use it.

Client Hints mechanics were recently changed to reveal less information to third-party hosts by default, in order to avoid increasing the passive fingerprinting surface. We also have a few ideas on how we can use Client Hints in order to actively reduce the passive fingerprinting surface browsers currently expose. So, I wanted to use that opportunity to present the current mechanism to various privacy folks, make sure that everyone has a shared understanding of it, and discuss the various enhancements we’re considering.

We had a good discussion about the mechanism and the features it enables. Folks in the room were mostly supportive of the mechanism, even if some of them still have objections to some of the specific hints that are currently supported (e.g. the Network Information API related hints, as well as its JS API).
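For reference, the opt-in mechanics under discussion look roughly like this on the server side (a Node.js sketch using hints that exist today):

```js
const http = require('http');

http.createServer((req, res) => {
  // Opt this origin into receiving a few hints on subsequent requests,
  // and remember the opt-in for a day.
  res.setHeader('Accept-CH', 'DPR, Width, Viewport-Width');
  res.setHeader('Accept-CH-Lifetime', '86400');
  // On later requests, the browser sends e.g. a `DPR: 2` request header.
  res.end(`DPR: ${req.headers['dpr'] || 'not sent yet'}`);
}).listen(8080);
```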

Finally, Mike West presented his ideas on enhancing user privacy with Client Hints—by using them and their third party delegation mechanisms in order to freeze current UA strings and let sites ask for specific UA information they need (e.g. in order to avoid bugs) instead of sending it to everyone by default. The idea is to have an opt-in-based “well-lit path” through which this information is exposed, instead of the situation today where this information is exposed by default. That can help to reduce passive fingerprinting (by requiring an opt-in) and make it easier for browsers to block or lie about that information if they see fit for privacy reasons.

Distributed Tracing

On Thursday we also had a joint session between the Web Performance Working Group and the Distributed Tracing Working Group. The distributed tracing group is working on standards that will enable backend developers to track the different processing components a request goes through, so that when things go wrong, they'd be able to pinpoint the component where the issue originated.

They have a proposal to do that for backend components, by sending a Trace-Parent-ID request header with upstream backend requests, so that they would be able to reconstruct the backend call tree when they collect logs from all the backend components.

The Distributed Tracing folks also had some ideas on how to extend the same concept to the front-end and correlate the server-side measurements with the browser’s measurements.

Turns out that by using Network-Error-Logging and Server-Timing they can already achieve a large part of that correlation:

  • Network-Error-Logging can be extended to also mirror certain request and response headers in its reports. Distributed tracing can use that by sending Trace-Parent-ID response headers to the browser, getting the Trace-Parent-ID values back in NEL reports, and using those to connect the client-side analytics with the server-side request tree.

  • For successful requests that may have been slower than desired, Server-Timing headers can communicate that same information and enable analytics code to upload it when collecting Resource Timing entries. Site operators can then use that information to correlate a resource's load timings with its server-side processing logs (see the sketch below).
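To make that second bullet concrete, here's a sketch of the round trip; the header payload and the 'trace' metric name are illustrative:

```js
// Server side (Node.js-style handler): attach the backend trace id.
function addTraceHeader(res, traceId) {
  res.setHeader('Server-Timing', `trace;desc="${traceId}"`);
}

// Client side: read the id back off the matching Resource Timing entry
// and correlate client-observed timings with server-side logs.
for (const entry of performance.getEntriesByType('resource')) {
  for (const metric of entry.serverTiming || []) {
    if (metric.name === 'trace') {
      console.log(entry.name, 'server trace:', metric.description,
                  'total duration:', entry.duration);
    }
  }
}
```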

Summary

It’s hard to summarize two days of intensive technical meetings, but I’ll give it a try regardless.

I believe we made a lot of progress on the different issues, and reviewed many exciting new metrics and proposals.

But to me, there were two things about this meeting in particular that make me optimistic about the future:

  • For the first time since I became involved in the WebPerfWG, we had presence from all major browser vendors, for the full duration of the meetings.

  • We had more developer involvement than before, which enabled us to get immediate, high quality feedback regarding some of the proposals. Having developers in the room who actually need the APIs we’re discussing, and have had to work around the lack of them, is so much better than just having browser folks speculating about potential use-cases.

If you made it this far, congrats! And if you think this kind of work is interesting and want to get involved with the Web Performance Working Group, feel free to ping me for more details on how to do that.

This was also my first time acting as chair in those F2F meetings, which turns out to be way more work than it seems from the outside. But I’m super happy about how the meetings turned out and with the progress we made, and hope that next year I’ll be able to sum up how all those great proposals shipped in browsers! 🙂