When JavaScript Fails

Jason Godesky on 2023-02-15

JavaScript failure can be a difficult thing to study. Most of the analytics software that we’ve come to rely on relies on JavaScript itself, so when JavaScript fails, our analytics packages are unaware that anything happened at all. We tend to discount the incidence of JavaScript failures, as well as the toll that they take, because they’re difficult to measure — a prime example of the quantitative fallacy at work.

This October will mark 10 years exactly since the UK Government Digital Service released its study on just how many of its users were missing out on JavaScript enhancements. The study distinguished those who had turned off JavaScript from those for whom it was enabled but simply failed to load for one reason or another. There are endless reasons why this might happen, from bad networks to meddling ISPs to browser plugins and extensions. The GDS found that JavaScript was failing for their users in 1.1% of all visits, but only 0.2% of their users had JavaScript turned off. The majority of the visits that weren’t receiving JavaScript — 0.9% of all visits — came from users who had not disabled it. They’d simply had the misfortune of experiencing some problem loading it.
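The GDS ran its experiment with tracking images. A sketch of that technique (the beacon paths here are invented for illustration, not GDS’s actual filenames) looks like this:

```html
<!-- Requested by every visitor, JavaScript or not: the baseline. -->
<img src="/beacon/base.gif" alt="" hidden>

<!-- Requested only when JavaScript is deliberately turned off. -->
<noscript><img src="/beacon/no-js.gif" alt="" hidden></noscript>

<script>
  // Requested only when JavaScript actually loaded and ran.
  new Image().src = '/beacon/js-ran.gif';
</script>
```

Comparing server logs for the three beacons gives you the split: requests for base.gif minus requests for js-ran.gif is everyone who didn’t receive JavaScript, and no-js.gif tells you how much of that was deliberate.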

I’ve had the chance to duplicate this experiment on other websites, and I’ve generally found that the number is closer to 3%, though with even fewer users actually turning JavaScript off. This aligns with numbers that are often passed around colloquially within the web development industry. We often picture this situation like this:

Obviously, we’d love to be able to offer the same service to all of our users, but in the real world, a 97% success rate is quite good. Many developers have lost weekends and long nights to fixing a problem for a browser used by an even smaller share of users, but ultimately, an organization might reasonably think this an acceptable state of affairs, especially if fixing it would be a major undertaking.

But this doesn’t accurately capture the situation. We’re not talking about 3% of users being affected, but 3% of visits. Remember, the problem isn’t that people have turned JavaScript off; it’s that JavaScript isn’t reliable. That means it affects all of us, about 3% of the time. We should be picturing it more like this:

If you’ve ever waited for a page to load, then finally given up, hit the “X,” and then reloaded the page to see it load immediately, congratulations: you’ve experienced what it’s like when JavaScript fails. How many times have you waited for a page to load and simply given up on it? How many of those might simply have been because of a JavaScript failure?

Users come to your website with a certain “reservoir of goodwill,” as Steve Krug put it. It varies from user to user, and even for the same user from one time of day to another: how they’re feeling that day, whether or not they’ve eaten recently, et cetera ad infinitum. That reservoir can be replenished when we help them accomplish a task or give them a delightful experience, but we withdraw from it every time we make their experience difficult, irritating, or frustrating — which is what happens every time our JavaScript fails to load. Do that often enough (say, 3% of the time), and they’ll cease to be our users in short order. So really we should be picturing it more like this:

You’re not going to get a survey back that will tell you that users are leaving your site because your JavaScript is failing. If you’re doing some great user research, you might hear that they think your site is slow, or clunky, or “janky,” or not user-friendly, or it’s just difficult to use. This isn’t the sort of problem that users are good at putting their finger on and accurately describing, but just because they usually can’t describe it to you doesn’t mean that they don’t notice it — or that it doesn’t affect their behavior.

If you’re trying to make Figma, there might not be much you can do about this. You should know that the <noscript> tag will only display to users who have actively turned off JavaScript, so it won’t help most of the people who need it. Instead, think about how your page loads. Consider whether the base HTML can express your error state, so that JavaScript can replace it with the intended experience when (and if) it loads. But if you’re making a complex app in the browser, providing better error messages may be all that you can really do.
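A minimal sketch of that pattern (the `startEditor` boot function and the copy are invented for illustration): ship the failure state in the base HTML, and let JavaScript replace it only once it has actually arrived and run.

```html
<div id="app">
  <!-- This is what users see if the JavaScript never loads. -->
  <p>
    Sorry, the editor couldn't load.
    Check your connection and <a href="">try reloading the page</a>.
  </p>
</div>

<script>
  // If we got this far, the script loaded; swap in the real experience.
  var app = document.getElementById('app');
  app.textContent = '';
  startEditor(app); // hypothetical function that boots the full app
</script>
```

Note that this is the opposite of the usual loading spinner: the default state is the honest one, and success, not failure, is what requires JavaScript to announce itself.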

But most of us aren’t making Figma. Most of us are still making documents — the things that the web was originally designed for. Does it make sense for an article on a news site to be an all-or-nothing affair that presents users with a blank page if the comments fail to load?

Progressive enhancement is an approach to building websites that delivers the greatest possible value to your users at all times — even when things go wrong. We often express the basic idea of it with a joke by the late, great Mitch Hedberg: “An escalator can never break; it can only become stairs.”
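Applied to the comments example above, the idea might look like this (the URL and markup are invented): the base HTML is a plain link that always works, and JavaScript, when it loads, upgrades it in place.

```html
<!-- The stairs: a plain link to a server-rendered comments page. Always works. -->
<a id="comments-link" href="/article/123/comments">Read the comments</a>

<script>
  // The escalator: if this script runs at all, upgrade the link to load
  // comments inline. If it never runs, the link still works as a link.
  var link = document.getElementById('comments-link');
  link.addEventListener('click', function (event) {
    event.preventDefault();
    fetch(link.href)
      .then(function (response) { return response.text(); })
      .then(function (html) { link.outerHTML = html; })
      .catch(function () { window.location.href = link.href; }); // fall back to navigation
  });
</script>
```

Either way, the user gets the comments; the only question is how smooth the ride is.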

The major JavaScript frameworks of the moment — Angular, React, Next.js, and so on¹ — are elevators. You can create a great experience with them, but they can be rather fragile, expensive to keep in operation, and when they break, it’s all or nothing. Progressive enhancement doesn’t require any new technologies, frameworks, or libraries. In fact, it arguably works best with vanilla JavaScript, tight, semantic HTML, and beautiful CSS (though I’m partial to Sass, myself).

I once launched a new website with all of the JavaScript written to progressively enhance. As it turned out, there was a rather critical bug in the code for an older version of Internet Explorer — which about 6% of our users were still using. And since JavaScript is generally an all-or-nothing thing where one failure makes all of the JavaScript on the page stop, this meant that none of our scripts were working at all, anywhere in this browser. And yet, it took six months before a QA specialist found it and brought it to our attention. None of our users ever said anything or sent a complaint — because they never realized that anything was broken. They thought that was just how the site was supposed to look. They were able to accomplish everything they wanted to. They were happy and satisfied. And when we fixed the bug, they were even happier, because now the site looked even better.

When Apple came out with the Apple Watch, my boss asked me what it would take to make our website ready for it. I told him it already was. He didn’t believe me, so I sent him an email with the link to our site to open on his watch, and there it was. How had I designed our website to work on a device that didn’t even exist at the time? “Because I’m a powerful sorcerer,” I told him, but the real answer was progressive enhancement. By delivering a core experience and treating everything else as an enhancement that some users might (or might not) receive, you create a page that’s future-proof. It’s ready for any new device or technology from the moment it appears in the world.

These are examples of progressive enhancement acting as a kind of technical credit — the opposite of the (much) more common technical debt.

Ultimately, if you’re deciding how to approach building a website, any business is going to put that question in terms of ROI. What has become the “standard” approach may ultimately cost you all of your users in the end, but if that process is slow enough, and if the “standard” approach is cheap enough compared to building a website that progressively enhances, does it still make sense?

The best estimates widely available on the cost of progressive enhancement vs. the “standard” approach come from Aaron Gustafson, who estimated that it generally costs about 40% of the original budget to redo a website with progressive enhancement. Does this mean that progressive enhancement costs only 40% of what a “standard” development effort would? Well, maybe not; after all, these are redesign projects, so the original project likely included some work that made it much easier to do the second time around. The same would be true for redoing a progressively enhanced site in the “standard” approach (so that it can break 3% of the time, I guess). What most engineers who’ve worked with both approaches estimate (and I include myself in this category) is that it’s no more work to build a website to progressively enhance than it is to build it in the “standard” way. You have to approach the work differently — you may have to question some assumptions and try some new approaches — but it’s not harder, just different.

Those who haven’t developed with progressive enhancement often argue that it’s twice the effort, and thus, twice the cost. After all, you need to make a whole website that works without JavaScript, and then you have to do it again with JavaScript, right? But this isn’t how progressive enhancement works — at least, not when it’s done intelligently. All the other rules of good software development still apply, including “Don’t Repeat Yourself.” You shouldn’t be doing anything twice, but you may need to think about how you do things and where.
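A sketch of what “no repetition” looks like in practice (the endpoint and field names are invented): one form, one server endpoint, and a thin script that reuses both rather than duplicating them.

```html
<!-- Works with no JavaScript at all: a normal POST and a full-page response. -->
<form id="signup" method="post" action="/subscribe">
  <label>Email <input type="email" name="email" required></label>
  <button>Subscribe</button>
</form>

<script>
  // The enhancement reuses the form's own method and action, so the
  // server-side handler is written exactly once.
  var form = document.getElementById('signup');
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    fetch(form.action, { method: form.method, body: new FormData(form) })
      .then(function () {
        form.outerHTML = '<p>Thanks for subscribing!</p>';
      })
      .catch(function () { form.submit(); }); // fall back to the full-page POST
  });
</script>
```

The non-JavaScript path isn’t a second implementation; it’s the same endpoint the script already depends on.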

So, if progressive enhancement is no more expensive to create, future-proof, provides us with technical credit, and ensures that our users always receive the best possible experience under any conditions, why has it fallen by the wayside?

Because before, when you clicked on a link, the browser would go white for a moment.

JavaScript frameworks broke the browser to avoid that momentary loss of control. They then had to recreate everything that the browser had provided for free: routing, history, the back button, accessibility features, the ability for search engines to read the page, et cetera iterum ad infinitum. Coming up with solutions to these problems has been the fixation of the JavaScript community for years now, and we do have serviceable solutions for all of these — but all together, they create the incredibly complex ecosystem of modern-day JavaScript that so many JavaScript developers bemoan and lament.

All to avoid having a browser refresh for a moment.

The designer in me can certainly relate to the desperate desire to avoid that loss of control, but is it really commensurate with how much we’ve lost to avoid it? In terms of the experience we provide our users, pages that stall or simply don’t load at all 3% of the time are clearly far worse than pages that refresh the browser window when you click on a link.

Convinced that progressive enhancement is a great idea, but not sure how you’re supposed to do it, especially with modern JavaScript? Don’t worry, I’ve written a follow-up to this article just for you: “A Practical Guide to Progressive Enhancement in 2023.”

¹ I’ve been told that Vue is unique in this regard, and actually works really well with progressive enhancement. I’d love to try it out for myself, but I haven’t really had a good project to try it out with.

The animations above are inspired by those used by Stuart Langridge in his talk, “You don’t need all that JavaScript, I promise” (also linked above). If you like, you can play with the Codepen I used to create them.