Stop Breaking the Web

The year is 2014, a ninja rockstar band goes up against the now long-forgotten progressive enhancement technique, forsaking the origins of the web and everything they once stood for. This article is where I rant about how we are breaking the web, the not-immediately-obvious reasons why we should stop doing this, and how not breaking the web would be a great thing.

TL;DR We are crushing the web. Dedicated client-side rendering sucks. Polyfills are used for all the wrong reasons. Those hideous-looking hash routers are bad and we should feel bad. We have been telling each other for years that progressive enhancement is great, and yet we’re doing very little about it!

Here’s hoping the screenshot below corresponds merely to a publicity stunt, attempting to grab the tech media world by surprise. That being said, the fact that we’re not certain about whether this is a ruse or a permanent decision makes me cringe for the future of the web.


Taco Bell #onlyintheapp — is it a clever publicity stunt to drive app downloads or a symptom of the profusely bleeding web?

Disclaimer: This article is not a rant about Angular 2.0. I started forming these thoughts a while ago, before the Angular 2.0 revelations. The roadmap for Angular merely happened to coincide with the posting of this article. If anything, that news reinforces the points others have made against it, but the statement behind this article goes far beyond the myriad of breaking changes in Angular’s public API.

It makes me sad to point out that we as a community have failed the web. Whatever happened to progressive enhancement? You know, that simple rule where you are supposed to put content at the forefront. Everything else is secondary to content, right? People want to see your content first. Once the content is in place, maybe they’ll want to be able to interact with it. However, if content isn’t there first, because your page is too slow, or because you load fonts synchronously before humans can read anything, or because you decide to use client-side rendering exclusively, then humans are pretty much screwed. Right?

Sure, humans have faster Internet connections now. Or do they? A lot of humans access the web on mobile connections such as 2G and 3G, and they expect your site to be just as fast as on desktop. That’ll hardly be the case if you’re blocking content on a JavaScript download.

Increasingly, this is becoming the norm. Fuck humans, we need all these awesome frameworks to make the web great. Wait, we do need humans. They have money, metadata, and stuff. Oh I know, let’s give them client-side routing even if they’re on IE6. That’s bound to make them happy, right? Oh, stupid IE6 doesn’t support the history API. Well, screw IE6. What? IE9 doesn’t support the history API either? Well, I’ll just support IE10. Wait, that’s bad, I’ll use a hash router and support IE all the way down to IE6! Yes, what a wonderful world. Let’s make our site accessible through routes like /#/products/nintendo-game-cube and then require JavaScript to be enabled for our view router to work, and let’s also render the views on the client side alone. Yes, that will do it!

Meanwhile, we add tons of weight to our pages, levelling the field and making the experience in modern browsers worse as a result of attempting to make the experience in older browsers better. There’s a flaw in this reasoning, though. People using older browsers are not expecting the newest features. They’re content with what they have. That’s the whole reason they’re using an older browser in the first place. Instead of attempting to give those users a better experience (and usually failing miserably), you should enable features only if they’re currently available on the target browser, instead of creating hacks around those limitations.

Humans using older browsers would be more than fine with your site if you only kept the server-side rendering part, so they don’t really need your fancy-and-terribly-complex maintainability-nightmare of a hash router. But no, wait! Hash routing is oh-so-awesome, right? Who needs server-side rendering!

Okay, fine, let’s assume you agree with me. Hash routing sucks. It does nothing to help modern browsers (except slowing down the experience; it does do that!) and everything to complicate development and confuse humans who are using older browsers.

Do we even care about the web as much as we say we do?

Recently, someone published an article on Medium titled “What’s wrong with Angular.js”. It infuriates me that we don’t seem to care at all about server-side rendering, as long as we are able to develop applications using our favorite framework. While every single other point was refuted in some way or another, the point about server-side rendering went almost unnoticed. As if nobody cared or even understood the implications.

6. No server side rendering without obscure hacks. Never. You can’t fix broken design. Bye bye isomorphic web apps.

The only place where I would conceive of using a framework that relies solely on client-side rendering is for developing prototypes or internal backend apps (just like how we use Bootstrap mostly for internal stuff). In these cases, these negligent frameworks are great because they boost productivity at virtually no cost, since no humans get harmed in the process. Outside the few use cases where neglecting server-side rendering isn’t going to affect any human beings, doing so is indisputably slow, unacceptable, backwards, and negligent.

It is slow because the human now has to download all of your markup, your CSS, and your JavaScript before the JavaScript is able to render the view the user expected you to deliver in the first place. When did we agree to trade performance for frameworks?

It is backwards because you should be delivering the content in human-viewable form first, and not after every single render-blocking request out there finishes loading. This means that a human-ready HTML view should be rendered on the server side and served to the human; then you can add your fancy JavaScript magic on top of that, while the user is busy making sense of the information you’ve presented them with.

Always keep humans busy, or they’ll get uneasy.

It is negligent because we have been telling each other to avoid this same situation for years, but using other words. We’ve been telling ourselves about the importance of deferring script loading by pushing <script> tags to the bottom of the page, and maybe even tacking on an async attribute, so that they load last. Using client-side rendering without backing it up with server-side rendering means that those scripts you’ve pushed to the bottom of your page are now harming your experience, because loading is delayed and without JavaScript you won’t have any content to show for it. Don’t start moving your <script> tags to the <head> just yet. Just understand how far-reaching the negative implications of using client-side rendering are, when server-side rendering isn’t in the picture.
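As a refresher, the conventional setup looks something like the minimal sketch below. The file name is made up for illustration; the point is that the markup is complete and readable before any script runs.

```html
<body>
  <h1>Content rendered on the server</h1>
  <p>Humans can read this before a single script finishes downloading.</p>

  <!-- Scripts at the bottom, and async, so they never block content -->
  <script src="/js/app.js" async></script>
</body>
```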

But don’t take my word for it; here’s what Twitter had to say. Remember Twitter? Yeah, they switched to shared rendering back in mid-2012 and never looked back.

Looking at the components that make up [the time to first tweet] measurement, we discovered that the raw parsing and execution of JavaScript caused massive outliers in perceived rendering speed. In our fully client-side architecture, you don’t see anything until our JavaScript is downloaded and executed. The problem is further exacerbated if you do not have a high-specification machine or if you’re running an older browser. The bottom line is that a client-side architecture leads to slower performance because most of the code is being executed on our users’ machines rather than our own.

There are a variety of options for improving the performance of our JavaScript, but we wanted to do even better. We took the execution of JavaScript completely out of our render path. By rendering our page content on the server and deferring all JavaScript execution until well after that content has been rendered, we’ve dropped the time to first Tweet to one-fifth of what it was.

We are worrying about the wrong things. Yesterday, Henrik Joreteg raised a few valid concerns about the dire future of AngularJS. These things are disputable, though. You may like the changes, you may think they’re for the best, but what are you really getting out of the large refactor in the road ahead of you? Truth be told, Angular is an excellent framework in terms of developer productivity, and it is “sponsored by Google”, as in, they maintain the thing. On the flip side, Angular’s barrier to entry is tremendously high, and you have nothing to show for it when you have to jump ship.

We are doing things backwards. We are treating modern browsers as “the status quo”, and logically, if someone doesn’t conform to the status quo, we’ll be super helpful and add our awesome behavioral polyfills. This way, at least they get a hash router!

We are worrying about the wrong things.

Emphasize Content First

What we should be doing instead is going back to basics. Get content down the wire as quickly as possible, using server-side rendering. Then add any extra functionality through JavaScript once the page has already loaded, and the content is viewable and usable for the human. If you want to include a feature older browsers don’t have access to, such as the history API, first consider whether it makes sense to do it at all. Maybe your users are better off without it. In the history API case, maybe it’s best to let older browsers stick to the request-response model, rather than trying to cram a history API mock onto them by means of a hash router.
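In code, that decision boils down to plain feature detection rather than a hash-based workaround. A rough sketch:

```html
<script>
  // Feature-detect instead of polyfilling: if the history API exists,
  // enhance navigation on top of real URLs. If it doesn't, do nothing
  // at all, and every link keeps triggering the regular
  // request-response cycle the older browser already understands.
  if (window.history && window.history.pushState) {
    // enhance: client-side routing layered on top of real URLs
  }
  // no else branch: older browsers simply keep the default behavior
</script>
```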

The same principle applies to other aspects of web development. Need people to be able to post a comment? Provide a <form> and use AJAX in those cases where JavaScript is enabled and XMLHttpRequest is well supported. Want to defer style loading so as to avoid render-blocking, and inline the critical CSS instead? That’s awesome, but please use a <noscript> tag as a fallback for those who disabled JavaScript. Otherwise you’ll risk breaking the styles for those humans!
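A sketch of the form-first approach follows. The endpoint, the field names, and `addEventListener` support are assumptions made for illustration; the essential part is that the form works on its own, and JavaScript only upgrades it.

```html
<form action="/comments" method="post" id="comment-form">
  <textarea name="text" required></textarea>
  <button type="submit">Post comment</button>
</form>
<script>
  // Without JavaScript (or before it finishes loading), the form
  // submits normally and the server renders the response. With
  // JavaScript and XMLHttpRequest available, we upgrade in place.
  var form = document.getElementById('comment-form');
  if (form && window.XMLHttpRequest) {
    form.addEventListener('submit', function (e) {
      e.preventDefault();
      var xhr = new XMLHttpRequest();
      xhr.open('POST', form.action);
      xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
      xhr.onload = function () {
        // append the freshly posted comment to the page here
      };
      xhr.send('text=' + encodeURIComponent(form.elements.text.value));
    });
  }
</script>
```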

Did I mention the obviously broken aspect of hash routing where you can’t do server-side rendering, as you won’t know the hash part of the request on the server? That’s right, Twitter has to maintain dedicated client-side rendering for the foreseeable future, as long as hash-banged requests are still hitting their servers.

Progressively Enhance All The Things!

In summary, we should stop devising immensely clever client-side rendering solutions that are simply unable to conjure up any sort of server-side rendering. Besides vomit-inducing “strategies” such as using PhantomJS to render the client-side view on the server side, that is. I’m sure nobody is in love with the “sneak peek” for Angular 2.0 anyway; so many breaking changes for virtually no benefit. Oh wait, there are benefits, you say? I couldn’t hear you through the sound of browser support being cut down.

I guess that’s what you get when you don’t care about progressive enhancement.

The next time you pick up a project, don’t just blindly throw AngularJS, Bootstrap and jQuery at it, and call it a day. Figure out ways to do shared rendering, use React or Taunus, or something else that allows you to do shared rendering without repeating yourself. Otherwise don’t do client-side rendering at all.

Strive for simplicity. Use progressive enhancement. Don’t do it for people who disable JavaScript. Don’t do it for people who use older browsers. Don’t even do it just to be thorough. Do it because you acknowledge the importance of delivering content first. Do it because you acknowledge that your site doesn’t have to look the same on every device and browser ever. Do it because it improves user experience. Do it because people on mobile networks shouldn’t have to suffer the painful experience of a broken web.

Build up from the pillars of the web, instead of doing everything backwards and demanding that your fancy web 2.0 JavaScript frameworks be loaded, parsed, and executed before you can even begin to consider rendering human-digestible content.

Here’s a checklist you might need.

  • HTML first, get meaningful markup to the human being as soon as possible
  • Deliver some CSS, inline critical path CSS (hey, that one came from Google, too!)
  • Defer the rest of the CSS until onload through JavaScript, but provide a fallback using <noscript>
  • Defer below-the-fold images
  • Defer font loading
  • Defer all the JavaScript
  • Never again rely on client-side rendering alone
  • Prioritize content delivery
  • Cache static assets
  • Experiment with caching dynamic assets
  • Cache database queries
  • Cache all the things
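The CSS items on that list could be wired up along these lines. The stylesheet path is hypothetical, and this is only a sketch of the idea, not a battle-tested loader:

```html
<head>
  <style>
    /* inlined critical-path CSS for above-the-fold content */
  </style>
  <script>
    // Defer the full stylesheet until after onload, so it never
    // blocks the initial render.
    window.addEventListener('load', function () {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '/css/all.css';
      document.head.appendChild(link);
    });
  </script>
  <noscript>
    <!-- JavaScript disabled or broken: load the stylesheet normally -->
    <link rel="stylesheet" href="/css/all.css">
  </noscript>
</head>
```

Note the `<noscript>` fallback: without it, humans with JavaScript turned off (or still downloading) would get unstyled content forever.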

Also, use <form> elements first, then build some AJAX on top of that. No! It’s not for the no-JavaScript crazies. If the JavaScript is still loading, your site will be useless unless you have <form> elements in place to guarantee the functionality will be available. Some people just have to deal with slow mobile connections; embrace that. You can use Google Chrome to emulate mobile connections, for example.

Don’t lock yourself into a comprehensive technology that may just die within the next few months and leave you stranded. With progressive enhancement you’ll never go wrong. Progressive enhancement means your code will always work, because you’ll always focus on providing a minimal experience first, and then adding features, functionality, and behavior on top of the content.

Do use a framework, but look into frameworks that are progressive-enhancement-friendly, such as Taunus, hyperspace, React, or Backbone with Rendr. All of these somehow allow you to do shared rendering, and while the Rendr solution is kind of awkward, it does work. Both Taunus and hyperspace let you do things “the modular way”, as they are surrounded by small modules you can take advantage of. React has its own kind of awful, but at least you can use it for server-side rendering, and at least Facebook does use it.

Do look into ways of developing more modular architectures. Progressive enhancement doesn’t mean you’ll get a monolithic application as a result. Quite the contrary really. Progressive means that you’ll get an application that builds upon the principles of the web. It means that your application will work even when JavaScript is disabled, for the most part. It may even be missing a few core aspects of its functionality, say if you forget to add a <form> fallback in an important part of your site’s experience.

Even that would be okay, because you’d have learned the value of progressive enhancement, and you could add that <form>, having your site be a little more lenient with older browsers and humans using mobile phones. You’d have learned not to render your application on the client-side alone, and you’d use shared-rendering or even server-side rendering instead. You’d have learned the value of the little things, like using <noscript> tags or setting up OpenSearch. You’d have learned to respect the web. You’d have gotten back on the road of those who truly care about the web.

You’d have learned to stop breaking the web.


Comments (48)

TheSisb wrote

Wow @TacoBell. And I totally agree with this post!

FlavorScape wrote

Shouldn’t y’all be dumping out a server-side flat SEO version with a redirect anyway, so routers don’t matter?

Adam wrote

There is one thing I don’t understand yet. What is the problem with Bootstrap?

Nicolas Bevacqua wrote

Bootstrap is great, but too bloated to be worthwhile in customer-facing apps. Of course, you could use uncss to solve the “whole bunch of unused styles” issue.

Even then, Bootstrap-based sites tend to all look alike, which is why I wouldn’t recommend using it for customer-facing apps.

Kevin wrote

I’m going to be making some client side framework decisions in the near future, do you think you could expand on

React has its own kind of awful

a little? Or point to any articles? Thanks for the blog, really enjoy it!

Tim Douglas wrote

+1 to Kevin’s question.

Nicolas Bevacqua wrote

In the case of React it’s mostly a matter of personal preference, which is why I didn’t spend my time going over my complaints. Mostly, I dislike mixing my views with behavior. I also dislike placing them inside JavaScript code, and having to resort to JSX or using a terribly verbose syntax if I don’t buy into JSX.

Other than that, which I would say is mostly stylistic, I love the DOM diffing algorithm, the fact that they do use it in production systems, even at their scale, and the fact that it allows for shared-rendering.

In conclusion React isn’t that bad, because it enables progressive enhancement, whether I like the framework itself or not.

Abhas wrote

If you really like the DOM diffing algorithm alone in React, like me, you might like diffDom. It might not be as good as React, but the performance was more than acceptable for me. I used it with Backbone (which is again pretty awesome, as it’s highly flexible), replacing their default rendering engine with a custom one (which is literally 10 lines of code) that re-renders on model change (1), diffs with the current one (2), and pushes the changes (3).

(1) - done by backbone

(2,3)- done by diffDom

Rob wrote

I’m a fan of SoC as well, and can’t bring myself to write anything with JSX. I know it’s largely semantics, but Wix’s React-Templates was pretty much all I needed to assuage my feelings about using React (in fact, I don’t think I’d be using React without it). It looks and acts like HTML, smells faintly of the nice parts of Angular, and fits nicely into how I like to construct my code.

Sylvain wrote

In many cases, client-side rendering is faster than server-side rendering because we can cache templates and only request serialized data, which is much less massive than generated markup. Also, it allows you to provide immediate feedback after user actions, refresh views without requesting the server, and bring smooth and flexible page transitions. That’s why the speech about humans does not make any sense to me, as from my own experience, users love single page applications as much as developers do. Finally, is there any human left browsing the Web with JavaScript disabled?

John Doe wrote

The problem is that the ‘humans’ and ‘users’ you’re talking about browse your sites under perfect conditions, including modern browsers and fast broadband connections. And one of the reasons your users ‘love’ these sites may be because users who don’t love them simply cannot browse them properly at all.

Sure, some of the benefits you list are unarguably true (immediate feedback, updating views without requesting the server, etc.). Nobody said they didn’t exist. And nobody said you couldn’t have them. The thing is that these benefits don’t rely on the basic ‘layers’ of the Web. They should be added as progressive enhancement, not considered as a hard requirement for everybody.

Nothing is black and white. Using progressive enhancement doesn’t mean going back to the 2000’s. It’s perfectly possible to use the latest cool and shiny technologies while still caring about humans first and ensure that everybody will get the best experience she can get according to its browsing conditions (browser, connection speed, any physical handicap, etc.)

And, yep, lots of people don’t have JavaScript. And it’s not always a choice of their own. They may also use a browser with JavaScript enabled but that is not recent enough for the parts you use. There are countless scenarios in which people don’t have JavaScript, be it by choice or not.

P.S.: the first link after the ‘Do we even care […]’ heading has the wrong URL.

Nicolas Bevacqua wrote

Client-side rendering can never be faster than server-side rendering on first page load. That’s all I’m advocating. Of course you should be using client-side rendering after the first page load. But don’t use client-side rendering for the first page load, because that’s just terrible.

Sylvain wrote

And, yep, lots of people don’t have JavaScript

Nope, below 1%. I know, small percentages of big numbers are still big numbers. But when IE6 dropped below 1%, Microsoft proclaimed its death, and the entire Web celebrated. Why can’t we do the same for JS-disabled users? JavaScript became a basic layer of the Web. We’re not breaking the Web by asking our users to keep an up-to-date browser and enable JavaScript: we’re moving it forward.

It’s perfectly possible to use the latest cool and shiny technologies while still caring about humans first and ensure that everybody will get the best experience she can get according to its browsing conditions (browser, connection speed, any physical handicap, etc.)

A quote from Stuart Langridge @ http://www.kryogenix.org/ :

It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.

So yes, it is possible, but is it worth it? For example, you talk a lot about client-side rendering. How do you switch from server-side rendering to client-side rendering, while keeping the advantage of client-side rendering, which is not having to download huge bunches of generated HTML? There is no transparent solution I am aware of to do this. So you want us to invest time and money, and sacrifice the user experience of the 99% of users with JS enabled, just to provide a 1995-era browsing experience to the last percent? No way!

I’m not saying progressive enhancement is a bad concept; it is great in some cases and a bad idea in others. Isomorphic websites take this concept to the extreme, which is why I think it is a nebulous concept that will not yield any conclusive result.

Nicolas Bevacqua wrote

It’s not just two percent of people, you need to factor in mobile networks that are simply slow, meaning JavaScript may take seconds to kick in. There are transparent solutions, any shared-rendering solution will be way more transparent than relying on client-side rendering alone.

React is one of them, Rendr is another, and Taunus is yet another. I’m sure there’s others I didn’t get to play with, too.

Sylvain wrote

Client-side rendering can never be faster than server-side rendering on first page load. That’s all I’m advocating.

Never say never. It is not hard to find examples where loading a lightweight JS templating lib + a template + JSON data is faster than loading generated markup. Especially when you have to repeat a list of many elements, like galleries, product lists, or lists with individual action buttons…

Robert wrote

I don’t have any witty quotes - or even any quotes from people we might recognize - to support my arguments, but I will appeal to my own authority as having been one of the lead web developers on the PayPal checkout products for many years, and just because research is not published does not mean it wasn’t done.

I’m going to jump in here first to address an earlier point. While the actual percentage of users browsing the Internet with JavaScript disabled may be low (numbers I’ve seen at large firms are also around 1%), that number does not count all the garbage that’s broken by error-infested, untested code, and it also does not count users who don’t have access to specific libraries because their ISP decided they were potentially dangerous… and so far all we’ve talked about are people who are not using accessibility software.

Granted, most accessibility software works OK with JavaScript, but without paying special attention to making your JavaScript-enhanced page accessible, e.g., by using aria-live or shifting focus to updated content, users with accessibility issues are still left with a broken experience. Because of the way in which accessibility software works with user-agents, the user-agents are not technically JavaScript-disabled, but the experience is.

That does not address the current topic though, which is that because of the way user-agents are built, client-side rendering is not faster than server-side rendering on first page load. I suppose it might be if you were delivering a lot of duplicated content, but in practice we’re typically not delivering a much smaller payload because most content isn’t duplicated and what we’ve really done is break up the content into separate documents. If we deliver a JS templating lib + a template + JSON data, there are issues of network latency for each of those 3 requests, and - specifically in your example - potentially disastrous re-flow issues as you load containers with HTML. The only time that it’s significantly faster is when you’re delivering partial in-document updates, but again, in those cases you have to take special care to make sure your page/app is still playing well with accessibility software or you’re not only shutting out users you may be violating the law.

Gabe Luethje wrote

I’m just going to jump in here real quick and say that the javascript disabled argument is a straw man fallacy. That hasn’t been a relevant issue for some time, but people love to jump on that so they can say, “Blarghity-balrgh! Nobody turns off javascript anymore, and if they do they should expect a broken experience!”

You know what? I totally agree with that. If you’re consciously disabling javascript or in some other fashion have it completely disabled all the time, I feel bad for you, but I’m not going to spend time developing my sites for that case.

But that’s not what progressive enhancement is saying about JavaScript. It’s saying: what if you’re on a slow connection, or you suddenly go into a tunnel, or a CDN request hangs, or your supercomputer smartphone drops down to 2G, or any of the other thousands of things that happen on network connections every minute of every day, all the time?

If your site or app shows a blank screen when something like that happens, because everything including content depends on JavaScript loading first, then you’re doing it wrong. Period.

Gabe Luethje wrote

Also, typing and proofreading before hitting publish are things…sheesh.

Holger wrote

I agree with you that progressive enhancement is important and makes a web page ready for older and upcoming browsers.

As for the hash routers, you could have normal URLs and enhance the links by JS later so that the page is not reloaded every time and feels like a single page application.

<a href="/pony/foo" class="js-enhanced">Link</a>

If JS is disabled, the page will reload as normal and the server renders the whole template. If JS is enabled, you can set a parameter so that the server renders just the selected part of the template and JS replaces it on the current page.

Nicolas Bevacqua wrote

Of course, that’s the proper way to do client-side routing. Use regular URLs. Then when your JavaScript code loads hijack links and use AJAX together with the history API to turn the application into an SPA. The big difference is that, as you’ve said, if JavaScript is turned off, people will still be able to navigate through your site without any issues.
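A rough sketch of that hijacking step is below. The partial-fetching part is left as a comment, since it depends on your view layer, and the history API check means older browsers never see any of this:

```html
<a href="/pony/foo">Link</a>
<script>
  // Only hijack same-origin links when the history API is available;
  // otherwise links keep navigating the old-fashioned way.
  if (window.history && window.history.pushState) {
    document.addEventListener('click', function (e) {
      var a = e.target;
      while (a && a.nodeName !== 'A') { a = a.parentNode; }
      if (!a || a.host !== location.host) { return; }
      e.preventDefault();
      history.pushState(null, '', a.href);
      // fetch the partial view for a.pathname via AJAX and swap it in
    });
  }
</script>
```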

Tim Ruffles wrote

Funny that you didn’t mention your site was previously an SPA, which was always a completely inappropriate choice for a blog (i.e., it’s absurd to have a loading screen for 14kb of HTML/CSS).

Seems a little hypocritical.

Nicolas Bevacqua wrote

I did mention it in that article, do I need to mention it in every single article that I publish? In that article I explain how it was a terrible mistake, so I don’t see how I’m being a hypocrite.

Here’s what I had to say in the article you’ve linked to.

The largest contributor to the painful Pony Foo experience was rendering views exclusively on the client-side. That was a terrible mistake, and one that troubled me for a long time, until I got around to fixing it. This is not an easy issue to resolve, and I think we’re really missing the target here. We, as web workers, should be doing better. We’ve been relying on client-side rendering for far too long, and fancy frameworks shouldn’t be a valid excuse.

Tim Ruffles wrote

Yes, you must wear the albatross of shame forever ;)

I think it’s the slightly preachy tone of the article combined with the decision not to say “yes, I got overexcited too and made my blog a SPA. I have realised my mistake and repent” that stuck in my craw.

Jonathan Hollin wrote

I mostly agree, with the caveat that it depends very much on the website you’re building whether you will render mostly on the client or server.

I would argue that most web-applications require more client-side rendering. Whereas web-sites should generally be rendered on the server.

That being said. I always, always make sure I have base functionality working without JavaScript. Then, as you suggest, enhance that functionality inline with browser support and facilities. I would be very wary of building a project that relied on client-side rendering - the risk of excluding users is, to me, too great.

Progressive enhancement is the only way to go in my humble opinion, unless you’re building something like this.

Stomme poes wrote

Every developer is convinced their web site, even those whose existence is almost entirely to give some content (for example, newspaper articles) or do a simple HTTP transaction (fill in a form to get train times), is in fact an “app”… I guess because somewhere people can click on something, and society has become deathly allergic to page refreshes. That seems to be the running definition.

And that’s why they all agree with their mouths “progressive enhancement is good” but continue to build things that ignore that principle entirely. They can always hide behind the “but it’s an app!” argument, in which they can happily encompass their strongly-held and absolutely true views that when a developer breaks the web, it’s really the users’ fault, because the users didn’t buy the latest hardware or the users aren’t running the latest Blink or Aurora nightly or the users “chose” to purchase a shitty data plan. Javascript not loading because you have a foobar connection in the train? Well that’s the user’s fault, we know this because Yahoo! once did a study and they said only 1% of people had Javascript actually turned off, therefore slow connections don’t exist— oh, well they exist, but it’s only like 2 people so they don’t count. Don’t exist, don’t count, don’t care, same message over and over again.

(this is also why a lot of the web is inaccessible to disabled users-- no real reason, just a lot of BS, misinformation, and reliance on 3rd-party code that was built by [some creature here] and of course it’s the user’s fault they have a disability of some sort, and besides there’s only one of them anyway so don’t count and aren’t worth it)

(PS while posted as a “reply” this isn’t directed at Jonathan Hollin :P)

Olle Törnström wrote

Thank. You.

Finally some well put words to the subject. I’ve been sold on progressive enhancement since http://www.yuiblog.com/blog/2012/03/19/video-zakas-progressive-enhancement/ (a great talk btw). And I’m trying to get the message out there.

It’s sad though, the many blank stares and open mouths, when I mention that JS-framework X, Y or Z might not be the best place to start when we simply want to get “Information” served to the users.

Also, very well put about server-side rendering. It simply mustn’t be forgotten or discarded as a dated technology. I think there’s a lot of room for innovation or improvement. I’m trying to keep the light burning here: https://github.com/olle/serverside-todomvc

Just My opinion wrote

Cry me a river; things change, and others have to adapt. Sometimes there has to be a big jump. People can only design with what’s available currently and speculate about the future. The web is constantly growing; newer and bigger things are coming. There is so much more potential, and it’s a different environment than, say, C++ land. For things to be stable at this point in time isn’t possible, not until we reach a point where things start to plateau.

JGG wrote

Oh, the sweet irony of this web page causing my CPU to spike (latest Chrome on latest Linux Mint).

Vincent wrote

I build my sites so that they will be usable by 100% of the population. Don’t worry about slow connections; they will be a thing of the past in 20 years. Build for tomorrow, people!

Philip wrote

I agree with a lot of your points, but I can’t get on board with the idea that we shouldn’t evolve simply because a small percentage of people use old browsers. “Humans” also need to be educated. If they’re using an outdated browser, they’re using technology that isn’t even supported by its vendor and probably has a bunch of security exploits (oops… there go my credit card details!). The kindest thing we can do as developers is eventually force their hand into upgrading their browsers (take them to browsehappy.com), not bend over and allow users who don’t know any better to continue making the same mistakes. If they’re not in control of their browsers, give them the knowledge to petition the admins who are.

There are also great use cases for Angular-style frameworks in productivity web apps that aim to replace what was traditionally done on the desktop. Depending on the type of app, the absence of a scripting language is a complete deal-breaker. For the title of your article to assume that the entire “web” consists of “pages” that demand complete bottom-up progressive enhancement is a failure to tell the whole story.

In saying all of that, please remember that I do agree with a lot of your points.

Olly wrote

I agree with a lot of your points, but I can’t get on board with the idea that we shouldn’t evolve simply because a small percentage of people use old browsers.

That’s not what’s being said, though. You could be using Chrome Canary or Firefox Nightly with JS, WebGL and hardware acceleration enabled, but if you’re on an intermittent GPRS connection, or for some reason your employer’s firewall filters out some script, or your CDN of choice falls off the face of the web, that amazing client-side rendering technique breaks horribly.

That is, unless you put some nice, simple foundations in place that work almost everywhere.

Hector Virgen wrote

Are you suggesting we architect our web applications around a corner case that most likely affects less than 0.1% of our users?

There are definitely types of sites that benefit from progressive enhancement, but not all of them.

The Web is evolving. It’s not just about consuming text and image content anymore. Web apps are slowly replacing native apps, and while we’re in this state of transition we’ll continue to see more and more advances in web technology. Embrace the change, because whether you like it or not, it’s happening.

Olly wrote

If flakey mobile internet connections only affect 0.1% of your users I’m exceedingly jealous! :-)

There are definitely types of sites that benefit from progressive enhancement, but not all of them.

I’m intrigued to know, where do you draw the line? I understand that if you’re editing 3D models in WebGL or gaming on Canvas it’ll be hard to find a way to progressively enhance. That said (and maybe I’m in the minority here), most of the sites and apps I use are all about editing and consuming text, numbers and images. They seem like prime candidates for P.E. to me.

Philip wrote

most of the sites and apps I use are all about editing and consuming text, numbers and images

A spreadsheet, perhaps? No fancy WebGL or Canvas there, but still impossible without a scripting language. If a spreadsheet cannot function because its required dependencies haven’t been met, being told as a developer that you’ve “broken the web” is too strong a statement. Progressive enhancement isn’t always the answer, and when it isn’t, the user should be kept informed, with the app reporting why it is unable to function (one might argue this is itself a form of PE).

The web as a whole is so much more these days and will continue to diversify. We as developers can’t tar everything viewed via a browser with the same brush and this article fails to acknowledge that.

Nicolas Bevacqua wrote

It’s all about making your core experience accessible. You might be interested in reading this article: Be Progressive

lakhdar wrote

Thank you for this post; just before reading it, I thought I was the only one in the world with this idea! I agree with you: those frameworks are a solution for some cases, mainly small apps. You can also add some important criticisms of SPAs: what about client-side resource management and memory leaks? A lot of JS loaded at once makes the browser slow. Not everyone works at Google with machines of infinite capacity; lots of people run machines with 2 GB of RAM. And what about security and authorization in those SPAs? You know, I saw some people storing menus in their DB just to avoid server-side rendering; that is the summit of stupidity!

When we use server-side rendering, we can scale out our server capacity in case there is a lot of load! Now, with client-side rendering, if some clients don’t have enough resources, what do we do?

Thanks a lot for your post.

Tony wrote

Excellent article, Nick. I agree that progressive enhancement is the way to go. Screw all the bells and whistles; they come second. Remember, content is king.

Luke wrote

I’d be interested in hearing more about these so-called “obscure hacks”. There’s work involved in getting any isomorphic project going, and I know Twitter is all about shared rendering, but not only does that sound complex, I’m willing to bet their codebase isn’t small either.

Nicolas Bevacqua wrote

Well, Taunus is around 15kb (minified and gzipped) and it not only does shared rendering but also provides a pretty fast caching and prefetching engine, all the while letting you write code in MVC style and share code between the server and the client.

The “obscure hacks” they referred to in that article mean that since Angular relies on HTML attributes you can’t do server-side rendering unless you do something like using PhantomJS on every request, or some such. But I bet doing that would be even slower than waiting on all the JavaScript to download.

Exirel wrote

Thanks for this post! It sums up all the issues I’ve had with client-side rendering so far.

Also, it reminds me of another aspect of progressive enhancement: instead of building robust things, we should try to build anti-fragile things.

For example, if your JavaScript fails to work (it happens, for good or bad reasons, even in production), and all your “post comment” or “add to cart” buttons cannot work without JavaScript, then your application is just not usable any more. With a simple HTML form that still works even without JavaScript, sure, your website will be less “shiny”, but at least the feature will still be there!
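
A minimal sketch of that pattern, assuming a hypothetical /comments endpoint and field names: the plain `<form>` works entirely on its own, and the script only takes over when it actually manages to run.

```javascript
// URL-encode [name, value] pairs into a request body the server can
// parse the same way it parses a plain form submission.
function encodeFormBody(pairs) {
  return pairs
    .map(([name, value]) =>
      encodeURIComponent(name) + '=' + encodeURIComponent(value))
    .join('&');
}

// Browser-only enhancement: if this never runs (blocked CDN, flaky
// connection, NoScript), <form action="/comments" method="post">
// still submits the old-fashioned way.
function enhance(form) {
  form.addEventListener('submit', (event) => {
    event.preventDefault(); // take over only when JS is actually alive
    fetch(form.action, {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: encodeFormBody([...new FormData(form).entries()]),
    });
  });
}
```

The point of the sketch is that the fetch path and the no-JS path hit the same endpoint with the same payload, so the server never needs to know which one was used.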

Ricardo Matias wrote

Is Meteor capable of this?

Nicolas Bevacqua wrote

Haven’t had a chance to use Meteor yet. Taunus is capable of not breaking the web :)

Marius Schulz wrote

It very much depends on the site you’re building, but in general, you’re right: You shouldn’t implement client-side rendering just for the sake of it.

With content rendered via JavaScript, you need to manually take care of things like setting the correct top offset on the target page when using the back button — a problem long solved for traditional pages rendered on the server.
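
A sketch of the manual bookkeeping that implies; the per-URL store is plain JavaScript, and the browser hookup is shown only as comments:

```javascript
// Per-URL scroll positions: the bookkeeping a server-rendered page
// gets from the browser for free.
const scrollPositions = new Map();

function savePosition(url, y) {
  scrollPositions.set(url, y);
}

function restorePosition(url) {
  return scrollPositions.get(url) ?? 0; // default to top if never visited
}

// Browser hookup (sketch): save before navigating away, restore on back.
// window.addEventListener('popstate', () => {
//   window.scrollTo(0, restorePosition(location.pathname));
// });
```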

Sam Armstrong wrote

When did we agree to trade performance for frameworks?

Seriously? Since the dawn of programming.

William Whitacre wrote

Don’t want to necro-post, but I do have ideas relevant to this article. Not sure if anyone will pick this up, but at the company my partner and I started, we are taking a very different approach that doesn’t quite match up with any of the approaches mentioned here. Yes, we are 100% client-side rendered and using a websockets-based architecture, but we have a novel solution to the issue of routing and backwards compatibility that some might find interesting, and also a nice solution for the issue of client-side rendering.

For the fully client-side rendering, we use riot.js, which I highly recommend for its simplicity and elegance as an MVC. We wrote a nifty message-passing bus that takes care of all the view updates and turns riot.js tags into reactive components, so we get modularity, clean markup, clean bindings to controller code, reusability, unit testing, and all that lovely stuff.

To solve the issue of backwards compatibility with things like search spiders and browser bookmarks, we simply use the HTML 5 history API to keep a friendly tree-like encoding of the restorable view state in the browser bar which is then used to initialize the application’s state when the URL is recalled.
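
The encoding described above might look like this sketch; the state shape (section/item) is hypothetical, and the History API calls are shown only as comments:

```javascript
// Serialize a restorable view state into a friendly, tree-like path
// that can live in the browser bar, a bookmark, or a crawler's index.
function stateToPath(state) {
  return '/' + [state.section, state.item]
    .filter((part) => part !== undefined)
    .map(encodeURIComponent)
    .join('/');
}

// Recover the view state when the URL is recalled, so the application
// can initialize itself to the same view.
function pathToState(path) {
  const [section, item] = path.split('/').filter(Boolean)
    .map(decodeURIComponent);
  return { section, item };
}

// Browser hookup (sketch):
// history.pushState(state, '', stateToPath(state));
// window.addEventListener('popstate', (e) => render(e.state));
```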

Overall, this approach seems to give applications that feel wickedly fast, that are easy to maintain, and which don’t “break the web”. I’ll just leave this here in case someone comes along who never thought about this approach before. I found this article, so I guess it’s still relevant!

Zan Lynx wrote

I usually browse with NoScript. If a site shows nothing at all when I hit it from a search result why would I bother allowing it to run script? After all there are plenty more sites in the search result list. I’ll try one of those.

And the ones that try to detect NoScript and cover or blank the mostly functional page are extra annoying. They get ignored or have CSS overrides applied.