
The Controversial State of JavaScript Tooling

There have been many blog posts of late on the “State of the Web”, the problems people face around tooling, and similar musings. This article summarizes my thoughts on recent events. Rather than yet another rant about the state of web development, this piece aims to pour some thought into what we can do to fix the situation we find ourselves in.

Remy Sharp wrote an excellent piece recently about his love for the web. It immediately stuck with me – and many others, I imagine – because of how close to home it hit. He wasn’t the only one to publish his thoughts recently, though.

Unfortunately, the spirit and praise of the web in Remy’s post isn’t shared by many of these articles. On the contrary, the web development community is drowning in skepticism, negativity, and pessimism.

The web community is as opinionated as it is large.

These days, however, we don’t see as much constructive thinking as we see articles about Angular 2 being too complex, React being “bad”, or Babel (or, even worse, its maintainers) being lambasted – just about anything that’s popular within the community is wide open for criticism. When it comes to providing constructive criticism, though, we’re sorely lacking.

The problems, however, are real. By the end of 2015 many – myself included – agreed that one of the hardest problems the front-end community faces right now is tooling consolidation. That is, arriving at easier-to-use tools without sacrificing firepower.

Earlier this month, there was a Medium rant making the rounds about how using a sledgehammer to crack a nut has become a difficult chore, basically repeating the premise of an older Medium article but dropping the constructive bits (and personally attacking Babel authors – which is why I refuse to link to it). Around the same time, hundreds of open-source advocates signed a petition to GitHub asking for better workflows in large open-source projects. There have also been responses to “Dear GitHub”, Dear “Dear GitHub”, and so on. You get the point. People like to complain about things.

Cracking a nut with a sledgehammer

In this article I hope to look down the negativity rabbit hole and see how deep it goes, while asking how the hell we got into this mess – and, more importantly, how we could fix it.

A Hard Place

The front-end development community as a whole has put itself in a hard place. We’ve collectively overlooked issues that arise from using tons of purpose-specific tools, and for good reason. Other languages and ecosystems are victims of all-encompassing standard libraries, but the web development community takes pride in not having that problem. In the past we fell prey to large utility libraries that did just about anything – your jQuery, Underscore, etc.

Large libraries had their benefits – as well as their drawbacks. It was nice being able to drop in jQuery, do “all the things”, and forget about it. As applications grew in size and complexity, though, we couldn’t just drop any library we found on the Internet into our codebases anymore. That’d result in much-larger-than-necessary websites, something we didn’t want. Thus, over time, the community understood how we could benefit from smaller modules, and started working on micro libraries – at a time when most “libraries” were developed as jQuery plugins. Along came Node.js, and its small modules philosophy turned many of us into firm believers that, indeed, writing small modules is the way to go.

Followers of the small module movement started breaking down the libraries they built into components. Out came a more modular approach to popular libraries like jQuery and Lodash. True, jQuery had been “modular” for a long time, but the approach they took didn’t lend itself to adoption: it relied on consumers going through a complicated online build tool where you had to pick – in advance – which parts of jQuery your application was going to use. Great in theory, and they certainly could boast about being modular, but not the most user-friendly approach.

In practice, consumers either used the raw, full jQuery library, or – at best – they stripped out a few obviously irrelevant parts and called it a day. jQuery UI was in a similar place, but in an even weaker position due to the fact that it also depended on jQuery.

When Bower came along – it took a while before jQuery was mirrored onto npm as well – it became even more obvious that the custom build wasn’t a serious option, as you’d have to jump through a considerable number of hoops before you could even arrive at a custom bundle.

Lodash took a different path, one that favored adoption even though their approach was detrimental to their own build processes (but not to their consumers). Starting with v2, they published hundreds of modules to npm – lodash.find, lodash.flatten, etc. – where each module represented one of the utility functions in Lodash. Later on, starting in v3, they improved upon that and allowed you to pull specific functions as CommonJS modules like require('lodash/function/bind'). Even though there’s a tree composed of hundreds of small modules in the Lodash codebase, they’re still available as a single package on npm. That is a good thing.
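
To make those styles concrete, here’s a rough sketch of how each flavor gets consumed – the package and module names are the real ones mentioned above, but treat the snippet as illustrative rather than a recommendation:

var _ = require('lodash');                        // the full library, one package
var find = require('lodash.find');                // v2 style: one npm package per function
var bind = require('lodash/function/bind');       // v3 style: one CommonJS module per function

_.keys({ pony: 'foo' });                          // ['pony']
find([1, 2, 3], function (n) { return n > 1; });  // 2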

Hypermodularization

Around the same time came hypermodularization. This is basically the same, in principle, as Lodash splitting its functionality into hundreds of small modules – each with its own bits of documentation, tests, and npm package. A major difference in a hypermodularized scenario is that you don’t see comprehensive distributions anymore. A counterexample is the original lodash package itself, which to this day contains the whole of their utility functions, even though they’re now also available as individual pieces. When it comes to lodash, you can take the whole thing or a method at a time.

For library authors, hypermodularization makes a lot of sense, as it comes with a wealth of benefits.

  • You only take what you need
  • You can reuse functionality across several packages
  • Semantic versioning applies to individual modules – not just to packages as a whole
  • Each module gets its own documentation and front-facing API surface tests
  • There’s less friction integrating server-side modules into client-side applications

The problem with hypermodularization, though, is that adoption becomes trickier. Undoubtedly, people will immediately point at small modules as the culprit. “Too many API touchpoints” – some say. “Too much plumbing” – others point out. Some might even say that larger bundles of things were better, as you didn’t have to spend time forming an opinion as to how half a dozen modules should be plumbed together, or deal with boilerplate generators such as Yeoman (another non-solution – code generators are hardly ever the answer).

When I first ran into tree-shaking I quickly dismissed it as a nice-to-have that would prove hardly more beneficial than bundle-collapser, which lets you save a few hundred bytes in bulky browserify bundles. Nice – sure. Necessary? Hardly. Or so I thought.
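
For reference, bundle-collapser runs as a browserify plugin. A minimal sketch using browserify’s programmatic API – assuming both packages are installed locally – might look like this:

var browserify = require('browserify');

var b = browserify('./main.js');
b.plugin('bundle-collapser/plugin'); // rewrites require paths into shorter numeric indices
b.bundle().pipe(process.stdout);     // writes the collapsed bundle to stdout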

Tree-shaking is a game changer

At the time, I misunderstood its use cases. Tree-shaking is a feature available in modern module bundlers – rollup, namely – where ES6 modules are statically analyzed for the exports that are actually used, and those that aren’t are left out of the resulting bundle.

Suppose we had the following piece of code:

import _ from 'lodash';
_.keys({ pony: 'foo' });

How cool would it be if a compiler could turn that into something like this?

var _ = {
  keys: function (o) {
    return Object.keys(o);
  }
};
_.keys({ pony: 'foo' });

Disclaimer: actual lodash code is not this contrived; this is merely an illustration.

I had hoped rollup would do that, but it interprets lodash as an external dependency, presumably because it’s written in ES5, or maybe just because the package is in node_modules. Nevertheless, if we were able to take code like the first snippet and turn it into the second – smaller – one, we’d eliminate one of the biggest drawbacks of large distributions such as the lodash package and similar utility libraries: people.

People take libraries like lodash – or jQuery, as we analyzed earlier – and insert the whole thing into their codebases. If a simple bundler plugin could get rid of everything in lodash they aren’t using, footprint would be one less thing to worry about.

The other set of drawbacks in large distributions can’t be solved by consumers, and should be resolved by implementers instead. Incidentally, these are problems hypermodularization already solves: maintainability, documentation, ease of contribution, etc. As modules get smaller, they also become easier to maintain, document, test, and contribute to, lowering the barrier of entry. A large monolith, on the other hand, usually involves some sort of learning curve, makes people scared of breaking undocumented functionality, and so on.

One drawback when it comes to code contributions and support requests – however (and amusingly) – is that maintaining a hypermodularized ecosystem is a hassle when the modules are highly related yet kept in several different repositories. Babel has an excellent document that outlines their monorepo culture and how it has allowed them to contain issues arising from dealing with support requests against their many hypermodular, interconnected components.

Consolidating Opinions

In a hypermodular ecosystem, it becomes increasingly hard to plumb pieces of code together. In a monolithic ecosystem, it becomes increasingly annoying to deal with large chunks of code that you don’t need. We need to consolidate the two.

I believe in hypermodular components. They are great at what they do, they follow the “do one specific thing very well” philosophy and are primed to thrive in an open-source community such as ours.

I never much liked comprehensive libraries like jQuery in terms of API surface, but a lot of that dislike goes away when a tool such as tree-shaking is effectively applied across the board.

Moreover, when we take a look at application logic that is mostly concerned with plumbing discrete libraries together at the application level, we begin to see how hypermodular components should be packaged in larger distributions. In one of the opening lines of this article I stated that “the web community is as opinionated as it is large”. I opine we need to become more opinionated as far as library authorship goes.

Screenshot showing code plumbing hypermodular libraries at the application level

One such example of plumbing at the application level, extracted from a Medium article.

Not only would we be getting rid of obnoxious plumbing, but we’d also steer more users towards best practices, we’d spend less time arguing about what approach is correct, and we’d spend more time being productive in day-to-day development. While flexibility and hyper-modularity are great features, compromising on a set of opinions and reducing noise at the implementation level are similarly honorable objectives to arrive at.

Nothing prevents us from keeping the lower levels of our architectures hypermodularized. This is all well and good – it is one of the fundamental pillars of modern JavaScript development. It is also true that the crux of our problems today stems not from one library or another being hypermodular, but from the ecosystem as a whole being developed this way.

In that sense, we needn’t cry about babel@6 asking us to install a couple more packages. If you dislike doing that every time, build a wrapper around it with some opinions on top. Do the same for React packages you use and are tired of plumbing over and over in all your applications. Avoid generators with a passion, but opinionate your way through the ocean of hypermodules we find ourselves swimming around in.
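
To sketch what such a wrapper might look like – assuming Babel 6 and a hypothetical in-house preset called babel-preset-myteam – a shareable preset is little more than a module exporting the presets and plugins you’ve settled on:

// index.js of the hypothetical babel-preset-myteam package
module.exports = {
  presets: [
    require('babel-preset-es2015') // the transforms your team agreed on
  ],
  plugins: [
    require('babel-plugin-transform-object-rest-spread')
  ]
};

Applications would then declare { "presets": ["myteam"] } in their .babelrc and be done with it – the opinions live in one place.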

Write wrappers and intermediate libraries that live outside of your application core while consuming hypermodules. Keep your opinions in those intermediate libraries, while keeping hypermodules discrete. In this sense, you could think of a hypermodule as lodash/function/bind and an intermediate library as lodash. I use lodash as an example because it’s one of the best representations out there today of what constitutes a hypermodular library – one that has hundreds of components, but opinions are not one of them.
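
To illustrate – with a hypothetical module name – an intermediate library might be nothing more than a handful of hypermodules plumbed together behind an opinionated API:

// utils.js – a hypothetical intermediate library for your applications.
// The lodash hypermodules underneath stay discrete and unopinionated;
// the opinions live here instead of in application code.
var bind = require('lodash/function/bind');
var flatten = require('lodash/array/flatten');

module.exports = {
  bind: bind,
  flatten: function (list) {
    return flatten(list, true); // opinion: flattening always recurses deeply
  }
};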

An ecosystem founded on intermediate libraries on top of hypermodules could fare much better. The bulk of implementation, testing, and documentation would fall on hypermodules, while opinions and plumbing could be kept in an intermediate layer we are only just starting to consider. Presumably, code in the intermediate layer could be bulkier, but that should be something that – with proper and better tooling – a feature like tree-shaking could take care of.


Comments (22)

john doe wrote

Look at simple, functional things that already exist, release an experimental npm module that seems to do an extremely important thing while providing the bare minimum behind the scenes, and act as if you’re building the “future”! hashtag JS. C’mon guys! It seemed very cool; it became awful.

Barney Carroll wrote

Hiya Nicolás, rollup can’t tree-shake Lodash because anything other than an ES6 module cannot be statically analyzed – many CommonJS & AMD modules make use of dynamic code execution to return different exports based on run-time logic, something that ES6 modules specifically forbid. In order to have third-party dependencies tree-shakeable by rollup, their package.json must specify an ES6-compatible module as jsnext:main. More on that in the rollup README.
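
For illustration, that opt-in is just a field in the library’s package.json pointing at an untranspiled ES6 build – the file names here are hypothetical:

{
  "name": "some-library",
  "main": "dist/some-library.js",
  "jsnext:main": "dist/some-library.es6.js"
}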

Nicolas Bevacqua wrote

Yeah, I assumed that was probably the culprit. Thanks for the README pointers, didn’t know about the jsnext:main standard.

Dwight Vietzke, Jr. wrote

Totally agree modules are good, but the ES6 module spec specifically inhibits dynamic loading of code too. Why load whole libraries when you could load core portions initially on page load, then load modules as needed for single-page apps with increasing functionality? I still do, but then I’m still using jQuery ajax methods to do it, so the use of ES6 imports isn’t a big “game changer” anyway.

That too was a missed opportunity in my opinion, but so is the amount of code duplication between frameworks and libraries – overlapping utilities and the general merge, forEach, isSomeType stuff that everyone seems to repeat over and over. AngularJS 1.x was a good example of a missed opportunity to just use jQuery as a dependency and skip the jqLite stuff. I know they felt it was beneficial at first (required?), but the truth now is that you are probably better off just using the version of jQuery you want (for compatibility issues, etc.) and skip wondering how jqLite is keeping up. It was just unnecessary duplication of effort, maintaining what was very stable code from jQuery to begin with.

In closing, please realize this isn’t a rant, but just a way to highlight some alternative thoughts on the whole modular code environment as it moves forward.

Barney wrote

ES6 doesn’t preclude dynamic loading by any means at all. In fact, the ES6 spec intentionally left out how modules are resolved and loaded entirely – this leaves us in the current situation where, somewhat ironically, there is no native method for resolving and including a module, and yet we have discovered and implemented tree-shaking.

Current solutions to dynamic loading are difficult because you need front-end code to do (at least partial) transpilation on the fly – resource intensive! But it is possible. Check out SystemJS for a current implementation.
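
As a rough sketch – the module path and init function are hypothetical – SystemJS resolves dynamic loads through a promise:

System.import('app/main').then(function (app) {
  app.init();
});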

Piotrek Koszuliński wrote

I’ve recently been doing a lot of research around the topics you mention, such as bundling ES6, dealing with high modularization, etc. We’re designing the architecture of the next major version of CKEditor (so a much bigger piece of code than jQuery, lodash and perhaps Babel too) and we wanted to address many issues that the old code base had. I won’t be able to go into details in this comment, but I’d like to add my 2 cents regarding two issues that we encountered.

  1. Bundling. There are great tools that you can use when you’re bundling code for your website, but it’s really tough to bundle a library. In general I also believe in tools like Rollup and that’s the thing we’ll definitely recommend. However, they have one major flaw: they require Node.js. All fine when you’re a front-end dev who can statically generate all the assets, but it doesn’t integrate well with e.g. CMSes written in other languages. Take Drupal for instance. It includes a standard CKEditor build, but it is possible to install Drupal addons and some of them must add features to CKEditor. Obviously, that means they need to be able to import CKEditor modules. So where’s the problem? Most of the bundlers (Rollup included) export just a single entry point. You lose the whole modularity, which makes your app completely inextensible… unless you either develop some internal module system (;/), or transpile your code into AMD and use named modules (here I should tell the story of how hard it is to switch from anonymous modules to named modules with RequireJS). In other words, you need to develop a huge build system able to generate the whole range of formats (ES6, AMD, bundled AMD, CJS, standalone), because that’s the only way to make your library useful in a wide range of workflows and use cases. As far as I can tell, looking at lodash and at what we’ve developed, you’ve got to deal with this by yourself and there’s a huge number of traps. I hope the JavaScript community will be able to define a proper toolset and workflow for library authors.

  2. NPM. We’ve made a decision that we want to keep each package in a separate repository. I know that Babel and some other projects recommend the monorepo setup, but in our opinion that doesn’t solve all the issues, because you still have to manage many packages (e.g. installing and upgrading their dependencies, deduping them, symlinking between node_modules and those packages, etc.). We also wanted to avoid a situation where the entire toolset we’d produce would work for our packages (because they are kept in the same repo) but would be cumbersome for other developers, who keep their packages in their own repos. Anyway, whichever option we chose, we would need to deal with multiple repos at some point. I think we’ve already spent a month discovering how to glue all those repositories together without totally bypassing NPM. As in the case of bundling, the easiest solution seems to be developing your own simple package management system, tailored to your needs, but that’s exactly what we wanted to avoid. Again, we haven’t yet created proper tools and a workflow.

So I totally agree with:

JavaScript community made some great improvements in 2015, but we need to consolidate tooling in 2016.

We have the toys, but we need to choose the best ones and complete the missing fragments of our workflows.

Jane Doe wrote

We have the same problem (re: combining modules at runtime vs. build-time) at a place I work. We used to use the (excellent) YUI module loader to dynamically register and load modules at runtime, but that’s deprecated and discontinued now.

SystemJS, however, is an excellent tool in this space, and with JSPM can do incremental build-time stuff.

Max Sysoev wrote

BTW, did you give webpack a try, with its require.ensure and code splitting features? I have experience with it; it worked very well for me (7 MB codebase, with React, jQuery and CKEditor – all used for an administration page).
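
For reference, a minimal require.ensure split point in webpack 1 looks roughly like this – the './editor' module and its attach method are hypothetical:

require.ensure(['./editor'], function (require) {
  // webpack emits './editor' into a separate chunk,
  // fetched over the network the first time this block runs
  var editor = require('./editor');
  editor.attach(document.body);
});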

Piotrek Koszuliński wrote

@Jane Doe: I started with SystemJS and had good results until I wanted to bundle the code, which ended with some errors. The project is still very fresh and implements a standard that doesn’t exist yet, so even if it worked perfectly, it would be risky for us to decide to totally stick with it. The setup with non-transpiled ES6 modules would be quite different from setups with transpilation to AMD or CJS, so when we realised that, we moved to more popular solutions (Babel).

@Max Sysoev: I stumbled upon the code splitting feature back then, but I thought (and still think, after checking it now) that it doesn’t fully meet our requirements. Webpack is a pretty complex project itself and that’s also something I’ve personally been afraid of. CKEditor builds will be used in various environments, so we must fully understand how the builder works. The more magic inside it and the bigger the effect it has on the project, the bigger the risk of having trouble in the future, when either it turns out that the builder doesn’t satisfy some odd needs or the project dies (we’re talking here about 10+ years). In our case it’s therefore safer to compose the building pipeline out of smaller modules, which perfectly matches Nicolas’s point.

drmabuse wrote

Wow, you are reporting exactly what I am thinking. Thanks for this!

Samuel Allen wrote

Ember + Tree-shaking = happiness

Barney wrote

I’d be interested in how you’ve managed to implement this. Ember currently does its own magic resolution, and lives in a bizarre world where it implements A) named exports but also B) attaches everything to the Ember god object and then, for bonus points, C) extends native prototypes, while also D) quietly injecting new global variables anyway.

Correction: I’d be very interested in your implementation. Does it get the boilerplate footprint under 400kb?

Samuel deHuszar Allen wrote

It was more an aspirational comment. I should have stated “will = happiness.” It is a feature on the roadmap and one that makes me excited, but it is not yet released.

I gather that you’re not a fan of Ember. I’m not 100% sure why A-D are necessarily bad, especially in light of what you get from the whole, but I’d be curious to know your thoughts on it in more detail.

ember.min.js weighs in at 452.9k, which is a little bit above your target, but with a good Service Worker and a Manifest everything past the initial load can be mitigated. Sites like Bustle pull a lot of content on top of that initial payload but are pretty quick to screen, and are then lightning fast once fully loaded.

But the promise of bringing tree-shaking to Ember is that only the pieces of Ember that you use in your components, routes, adapters, etc. would be compiled in when building for production. To my mind the only real downside of Ember compared to, say, React or vanilla JS is that if you’re building something simple, it may be overkill, but being able to shake out areas of the core library that don’t get used in any given implementation could reduce that concern quite a bit.

Andrew Myers wrote

Tree-shaking reminds me of UnCSS. Using monolithic CSS frameworks can really slow down a site, but UnCSS fixes it by loading the site with PhantomJS and eliminating CSS selectors that don’t match any element on the page. The result (with a little tweaking to fix false positives) is a CSS file that has all of the useful parts and none of the rest. I wish it had always been that simple with JS…
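
For reference, a minimal sketch of the uncss Node API – the URL is a placeholder:

var uncss = require('uncss');

// loads the page in PhantomJS and yields only the CSS that matched the DOM
uncss(['http://example.com'], function (error, output) {
  console.log(output);
});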

Todd Geist wrote

Nice article!

Your description of using intermediate libraries to wrap hypermodular smaller libraries with opinions reminds me of the container components and presentational components pattern in React.

Build small reusable things, then wrap them in things that express the more complex and opinionated aspects of your app. Those complex opinionated things aren’t as reusable, but they don’t need to be. They have more vertical concern.

Daniel Abraham wrote

The real problem is not hypermodularization, library smell, or open source workflows. It’s the language.

I love JavaScript, but let’s take off the rose-colored goggles for a minute and acknowledge that web devs lack the insanely great tooling available in other languages that were better engineered from the ground up to scale. Since JavaScript lacks reflection, and pretty much anything goes, we can only do so much with regard to tooling. That is, until the language really improves (beyond band-aids).

If you’re doing something trivial, you’re lucky. But projects are getting more and more massive, and a scripting language simply doesn’t scale very easily. Sooner or later, everyone will figure this out the hard way. When you get discouraged and go looking for an answer, you might then entertain the only thing that even comes close to addressing the real problems. And that is TypeScript, dude.

At the end of the day, we are all only human and each have limited cognitive resources. What’s truly frustrating you is hitting that wall repeatedly. Once you add genuine intellisense, refactoring, generics, structural typing, code navigation, and dare I say validation to your projects, all the whining and despair come to an abrupt halt, and hope is once again restored.

I will add one caveat, however; it is easier to pick up TypeScript if you have at least some experience in a typed language (C#, Java, etc). And also make sure you don’t assume you know how it works. Without good guidance it’s easy to go wrong. They had to bake in some idiomatic stuff to accommodate the entire ecosystem, unfortunately. But the project has great support and keeps getting better all the time.

I predict (publicly) that ECMAScript will resemble TypeScript eventually. As far as I am concerned, the future is already here, and myself and many others are living in it quite happily. In fact, let me go even further, to help remove any doubts about where this train is headed.

Every nascent technology has room for and accommodates amateurs (hacks), for a time. Sooner or later, if enough serious professionals emerge, the squeeze becomes real. So if you owe your admirable position to demand or desperation, then you may want to stop scripting (and whining) and start engineering, for a change.

hal9zillion wrote

I think this is probably the best of these sorts of articles thus far, and I agree with pretty much everything you said. I like that you touch on the monorepo thing and how modules/dependencies in version control just don’t seem to work yet.

I don’t think the solution to people’s problems is a retreat to big, opinionated, monolithic frameworks, but rather that we need to get better at scaffolding/generator-type tooling (e.g. Yeoman). By separating these “meta-concerns” – keeping the opinions separate from the tools – I think we can get the best of all worlds. I’ve got some ideas on how this might be improved that I’m working on, and I’m sure a few people will be looking at the same sort of thing in 2016.

Jim wrote

I think this post resonates with a lot of people. I really think System.js has a good future as far as tools go. The problem is no one fixes a library. Developers just create new ones when the old one could be fixed or a new feature could be added.

Also, I see a lot of people talking about a personal attack in the unmentionable article (haha). I found the original on an archive site when everyone was talking about it. I actually didn’t interpret it as an attack. That’s just me though.


hden wrote

Just supplementing some history: module bundling, tree-shaking, static code analysis, dead code elimination – all of these have been implemented by the Google Closure Compiler.

The majority of JavaScript developers have rejected Google Closure because of a major drawback: for dead-code elimination to work, they had to follow strict discipline about the JavaScript they write (in particular, it ended up looking a lot like Java). – ClojureScript FAQ (for JavaScript developers)

Zoltan wrote

I’ve been using ES6 modules in Ember.js development for two years now. Using Ember CLI has simplified the dev process enormously. App development is so simple and fun with Ember. I don’t understand why people cry about it; the solution exists, they should just use it. That’s all.