I’ve read two pieces recently that I’d recommend to you. Both of them get into fairly technical matters, but they’re each written with a very clear, approachable voice; even if you don’t muck around with code very much, there’s a good chance you’ll be able to follow along. But if you’re busy, or disinclined to read, I’ll try to sum them up as best I can.

The first item is a Twitter thread by Steve Workman you might find interesting. In his thread, Steve describes his process of fixing some work he’d done. He noticed that one software library was responsible for huge file sizes in the components he was building, generating considerably more code than he felt they actually needed. And since users would have to literally pay to download that extra code, he spent hours trimming it down: by tweaking his build setup, and by applying no small amount of forensic cleverness, Steve managed to optimize the code, saving his users from downloading megabytes of cruft.
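
Steve’s thread doesn’t spell out the exact tooling he used, but if you’re curious what that kind of audit can look like, here’s a minimal sketch of one common first step: measuring the raw and gzipped size of every JavaScript bundle your build produces. (The `dist` folder name is my assumption, not something from Steve’s thread.)

```ts
// Hypothetical sketch: walk a build output directory and report the size of
// each JavaScript bundle, both raw and gzipped. Large gzipped numbers are a
// hint that a dependency or build setting deserves a closer look.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { extname, join } from "node:path";
import { gzipSync } from "node:zlib";

function reportBundleSizes(dir: string): void {
  for (const name of readdirSync(dir)) {
    const filePath = join(dir, name);
    if (statSync(filePath).isDirectory()) {
      reportBundleSizes(filePath); // recurse into nested output folders
    } else if (extname(filePath) === ".js") {
      const raw = readFileSync(filePath);
      const gzipped = gzipSync(raw);
      console.log(
        `${filePath}: ${(raw.length / 1024).toFixed(1)} KB raw, ` +
          `${(gzipped.length / 1024).toFixed(1)} KB gzipped`
      );
    }
  }
}

reportBundleSizes("dist"); // assumes your bundler writes its output to dist/
```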

The second item is this article by Tim Kadlec. In it, Tim shares in-depth research he conducted across thousands of websites, assessing the performance costs of sites powered by Angular, React, and Vue, the most popular front-end frameworks in use today. And the results aren’t pretty: even in the best cases, there’s more JavaScript delivered to the browser than is ideal; in the most extreme cases, it’s many times worse. But across the board, Tim’s data shows that we’re asking our users to pay entirely too much, both in time and in money, to access the sites we build with these frameworks.

I’ve been a bit haunted by both of these pieces. Because Tim and Steve are, I think, telling the same story: they’re just standing at opposite ends of the telescope.

Steve’s piece is, of course, a success story. He noticed a problem, and worked to cleverly correct it. But I can’t help but think: what about the engineers who didn’t or couldn’t make similar optimizations? Maybe their organization didn’t provide them with enough time or resources to do so, or perhaps they didn’t realize their site was so expensive.

And to follow that thread out a bit: why would they? If they’re working with some of the most popular, most widely deployed frameworks and software libraries in use today, why would they assume the result would be anything but optimized? Why wouldn’t their output be a small, lightweight, performant site? As Tim’s research shows us, it rarely is. We’re dealing with the results of bad defaults, deployed at a terrible scale.

Frankly, it’s hard for me not to see this as a failure of governance. Our industry is, by and large, self-regulated. And right now, producing quality work relies on teams electing to adopt best practices: establishing a performance budget; designing and building their sites in a layered, progressively enhanced way; testing their work on older hardware instead of their own devices.
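
None of those practices requires exotic tooling, either. A performance budget, for instance, can be enforced right in the build. Here’s a minimal sketch using webpack’s built-in performance hints; the choice of webpack and the 170 KB figures are mine, purely for illustration, and a real budget should come from measuring your own users and their devices.

```ts
// Hypothetical webpack.config.ts: fail the build when bundles exceed a budget.
import type { Configuration } from "webpack";

const config: Configuration = {
  entry: "./src/index.ts",
  performance: {
    hints: "error",                // treat a blown budget as a build error
    maxEntrypointSize: 170 * 1024, // total bytes allowed for an entry point
    maxAssetSize: 170 * 1024,      // bytes allowed for any single emitted asset
  },
};

export default config;
```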

In other words, they have to give a damn. But giving a damn doesn’t scale. Time and again, our industry chooses frameworks that weren’t designed for accessibility or for performance. And in the middle of a global health crisis, I think it’s time we talked about whether the tech industry, an industry responsible for providing life-saving information and services now more than ever, should still be allowed to self-regulate. If other industries have to follow building codes and regulations, why shouldn’t we?