
Why I Write Plain JavaScript Modules


Our web needs better primitive libraries. We’ve been relying for too long – far too long – on jQuery. Most popular UI components are tied to jQuery, or are part of a comprehensive framework – and it’s usually hard to extract the component as a standalone library. Nowadays we may not develop as many jQuery plugins as we used to, but the situation is far more severe now.

Today, many popular libraries – UI components or otherwise shiny client-side JavaScript things – are bound to the author’s preferred coding style. Thus, we create things like react-dnd, angular-dragdrop, or backbone-draganddrop-delegation. Out of the three, none are backed by a library that provides the drag and drop primitives without the tight framework bindings – something like dragula.

It all comes down to composability and portability.

While dragula needs to be integrated into each one of those frameworks (React, Angular, Backbone), doing so usually takes a few lines of code. Here’s one such example using React, dragula, and plain JavaScript, which someone posted on a GitHub issue for dragula.

dragula([...], {
  direction: 'horizontal'
}).on('cloned', function (clone) {
  // strip React's internal data-reactid attribute from the clone and all of its descendants
  clone.removeAttribute('data-reactid');
  var descendants = clone.getElementsByTagName('*');
  Array.prototype.slice.call(descendants).forEach(function (child) {
    child.removeAttribute('data-reactid');
  });
});

Could dragula get rid of data-reactid attributes when it clones things? Allow me to answer that question using a meme.

This is JavaScript!

That being said, there’s no good reason for dragula to do so. Why should dragula know about React? Instead, we introduced a feature where whenever dragula clones a DOM element, it emits an event. In practical terms, this is about the same as getting rid of data-reactid ourselves, but we’ve moved the responsibility of knowing about that particular attribute to whoever uses React.

As I wrote this post, I made react-dragula into a library. It’s just a wrapper – a mighty thin wrapper – around dragula.

var dragula = require('dragula');
var atoa = require('atoa');

function reactDragula () {
  return dragula.apply(this, atoa(arguments)).on('cloned', cloned);
  function cloned (clone) {
    // whenever dragula clones an element, scrub data-reactid
    // from the clone and every one of its descendants
    rm(clone);
    atoa(clone.getElementsByTagName('*')).forEach(rm);
  }
  function rm (el) {
    el.removeAttribute('data-reactid');
  }
}

* atoa casts array-like objects into true arrays.

Going through the trouble is worth it because if somebody wants to write an Angular directive for dragula, it’s just as easy for them to do so – and they seldom have to do anything at all to integrate it with Backbone. Backbone isn’t that “smart”, so we don’t have to rewrite our code to fit its awkward architecture.
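
To give you an idea, here’s a minimal sketch of what such a directive could look like – purely illustrative, not the actual angular-dragula code; the plainDragula name is made up, and dragula is assumed to be loaded separately.

// a purely illustrative sketch, not the angular-dragula implementation
app.directive('plainDragula', function () {
  return {
    restrict: 'A',
    link: function (scope, element) {
      // hand the raw DOM node to dragula – it needs no knowledge of Angular
      dragula({ containers: [element[0]] });
    }
  };
});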

Portability across Frameworks

Without lower level libraries like dragula or xhr we’ll end up reinventing the wheel for an entire afterlife of eternity in hell. Don’t get me wrong – I’m a big fan of reinventing the wheel. I’ve reinvented my fair share of wheels, Twitter reinvented RSS, etc. But reinventing the wheel as a pointless exercise in porting a library from one framework to another is just wasteful.

When I wrote react-dragula, I didn’t have to fork dragula and repurpose it for React. When I wrote angular-dragula, I didn’t have to fork dragula either. I guess at this point you might argue that “nobody seems to be forking things and repurposing them for other frameworks”, but that’s beside the point.

The point in question is that developing a library that specifically targets a framework is a waste of your time, because when you eventually move on to the next framework (this is JavaScript, you will) you’ll kick yourself over tightly coupling the library to the framework.

Sure, it involves a bit more work and design thinking. You have to figure out how the library would work without your framework of choice (or any framework, for that matter) first – and then wrap that in another module that molds it into the framework’s paradigm.

The react-dragula example was too easy, right? All I did was add an event listener and get rid of data-reactid attributes. Even though it “looks easy” in hindsight, the naïve approach would’ve been to write the library to conform to React right off the bat – skipping the vanilla implementation entirely. We would’ve ignored the opportunity to provide hooks that let us later adjust the library to play nice with React, and we would’ve ended up with the painful experience of maintaining multiple libraries that do essentially the same thing.

In the case of angular-dragula, I could’ve come up with a directive that just passed the options onto dragula, but that wouldn’t have been very “Angular way”-ish. Thus, I came up with the idea of replicating dragula’s simple API in an Angular way. Instead of defining the containers as an array passed to dragula, you can also use directives to group different containers under the same angular-dragula instance.

The example below uses angular-dragula to create two instances of drake, identified as 'foo' and 'bar'.

<div ng-controller='ExampleCtrl'>
  <div dragula='"foo"'></div>
  <div dragula='"foo"'></div>
  <div dragula='"foo"'></div>
  <div dragula='"bar"'></div>
  <div dragula='"bar"'></div>
</div>

If you wanted to listen for events emitted by one of these drake instances, you could do so on the $scope, prefixing the event with the “bag name” and a dot. Here again, I conformed to the Angular style by propagating drake events across the $scope chain, allowing the consumer to leverage Angular’s event engine. While events in dragula are raised with raw DOM elements, the events emitted across the $scope chain wrap them in angular.element calls, staying consistent with what you’ve come to expect of Angular components.

app.controller('ExampleCtrl', ['$scope',
  function ($scope) {
    $scope.$on('foo.over', function (e, el, container) {
      container.addClass('dragging');
    });
    $scope.$on('foo.out', function (e, el, container) {
      container.removeClass('dragging');
    });
  }
]);

To configure the instances, you’d use the dragulaService in the controller for these containers. The example below makes it so that items in foo containers are copied instead of moved.

app.controller('ExampleCtrl', ['$scope', 'dragulaService',
  function ($scope, dragulaService) {
    dragulaService.options($scope, 'foo', {
      copy: true
    });
  }
]);

In the future, I might add more directives, moving away from the native dragula implementation and towards a more Angular way of handling things. For example, one such directive could be dragula-accepts='method', and it could configure the accepts callback in such a way that the container the directive is added to only accepts elements that return true when method(item, source) is invoked. A similar dragula-moves='method' directive could determine whether an item can be dragged away from a container, based on the result of calling method(item).
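
In the meantime, the same behavior can be expressed through dragulaService.options, using the accepts and moves callbacks dragula already supports. Here’s a rough sketch – canDrop and canDrag are hypothetical methods you’d define on the scope yourself.

app.controller('ExampleCtrl', ['$scope', 'dragulaService',
  function ($scope, dragulaService) {
    dragulaService.options($scope, 'foo', {
      // only accept items into a container when canDrop(item, source) approves
      accepts: function (item, target, source) {
        return $scope.canDrop(item, source);
      },
      // only allow dragging an item away when canDrag(item) returns true
      moves: function (item) {
        return $scope.canDrag(item);
      }
    });
  }
]);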

A few more aspects of dragula can be “molded into Angular” in this way.

While dragula doesn’t have a native way of treating containers individually – even when they’re part of the same logical unit in the underlying implementation (a drake can have as many containers as needed) – we can build that functionality into angular-dragula. That helps us achieve the “Angular way” of writing directives that affect containers individually, rather than writing directives on a container that have knowledge of a series of unrelated DOM elements. Or, even worse, creating a directive where every immediate child element is a dragula container, constraining the use cases for the consumer.

It might involve some extra work, but being able to reuse the code in any future projects makes plain JavaScript modules well worth your time.

Portability across Platforms

Portability isn’t just a matter of writing vanilla client-side JavaScript libraries. An equivalent case may be made for writing libraries that work well in both Node.js and the browser. Consider async: an amazing piece of software in Node.js that’s just garbage on the client-side. Granted, it was written well before ES6 modules (or even Browserify) became a thing. A similar story can be told about fast-url-parser, a URL parser which underlies many server-side routers but is insanely large for the client-side. Speaking of insane, I’ve used sanitize-html on countless occasions to sanitize HTML on the server-side, but again – repeat after me: freaking huge for the client-side (it depends on htmlparser2).

I’ve worked on reimplementing a few of those to work well on the client-side. Naturally, their server-side counterparts are more comprehensive, as they should be. Use cases for server-side JavaScript far outnumber what you need to do on a given site on the client-side for a single visitor. On the client-side, we can get away (should get away) with much smaller libraries and modules.

Here are some examples.

  • contra (2k) is like async for the browser – it’s modular, too: you can just require individual methods (à la lodash); see the usage sketch after this list
  • omnibox (1.6k) is like fast-url-parser for the browser
  • insane (2k) is like sanitize-html (100k) for the browser
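
To give a sense of what that looks like in practice, here’s a minimal sketch using contra in the browser – the tasks are made up for illustration, and the series signature mirrors what you’d expect from async.

var contra = require('contra');

// run tasks one after another, collecting their results
contra.series([
  function (next) { next(null, 'first'); },
  function (next) { setTimeout(function () { next(null, 'second'); }, 100); }
], function (err, results) {
  if (err) { return console.error(err); }
  console.log(results); // ['first', 'second']
});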

Then again – huge JavaScript libraries are only worrisome if we actually care about performance when it comes to serving images in the first place – right?

We’re a far way from the “universal JavaScript” fairytale we keep telling ourselves.

Have any questions or thoughts you’d like me to write about? Send an email to [email protected]. Remember to subscribe if you got this far!


Comments (22)

Nicolas Bevacqua wrote

Another good example would be dominus, a micro library I wrote that mimics the DOM selection and manipulation aspects of jQuery, for only a tenth of the size.

qgustavor wrote

Is this atoa function defined as const atoa = Array.from || (e => Array.prototype.slice.call(e));?

Nicolas Bevacqua wrote

Close!

module.exports = function atoa (a, n) { return Array.prototype.slice.call(a, n); }
qgustavor wrote

With that second argument it’s a slice function with support for array-likes: turns out the “casts array-like objects into true arrays” bit is just a side effect.
By the way, the second argument of Array.from is a mapping function, for example: Array.from({length:17},e=>e+1).join('')+' Batman'

Jason wrote

First off, great article. I agree wholeheartedly.

And now I’m going to nitpick: why does insane have a bundled parser? Isn’t that the type of module that should exist separately? The purpose of insane is to remove blacklisted subtrees from a DOM structure. Why not just accept a DOM as input and operate off of that? Then users could pick a parser they like (and end users wouldn’t end up with duplicated parser code if they already have a 3rd party parser in their bundle).

Regardless, great post!

Nicolas Bevacqua wrote

I didn’t have any use case where I needed a synchronous stream parser so I just bundled them together. It’d be very easy to decouple them as the parser takes a handler that does the sanitizing. If, instead, the parser worked as an event emitter, then they’d make even more sense as a decoupled pair.

Example insane pseudo-code where the parser is completely external:

parser(html)
  .on('start', start)
  .on('end', end)
  .on('content', content)
  .on('comment', comment)
  .start();
ncreensame wrote

React-DND is backed by https://github.com/gaearon/dnd-core, which does not depend on react

Nicolas Bevacqua wrote

That’s not a standalone module by any stretch of the imagination.

  • It lacks API documentation
  • Subscribes to the Flux architecture model
  • Specifically tailored for React, React Native, and friends
  • It still needs a ridiculous amount of work on top, in react-dnd, to do anything useful
  • It lacks API documentation
Cameron wrote

Great article – I’ve come to similar conclusions myself lately. It would be nice to see the JS community move towards this more. I’m always worried when I see things like “native angular” touted on libraries.

John Doe wrote

This is the most sane article about Jazzcript around these days, because Jazzcript’s powers come from its expressivity, which favors convenience over corporate overengineering, and it has been possible since forever. ES5 is good enough for everything, but syntactic sugar noise is poured on our heads in order to cast a shadow over all the possibilities one can harness to summon the arcane function of Jazzcript domination right now.

Richard Kennard wrote

Hi Nicolas,

Really enjoying your blog articles lately - thanks for writing them. I’ve become a daily reader :)

Perhaps I could trouble you to take a look at Metawidget? It’s a pretty advanced form of what you describe. It has a pure JavaScript core, then a bunch of lightweight wrappers for frameworks like AngularJS, jQuery Mobile, Web Components etc.

To achieve high fidelity with each framework, the core had to be pluggable in lots of different dimensions. Therefore it implements a pipeline of plugins so we can hook in lots of native framework features like data binding, validation, layouts - and thus appear as much as possible to be a native framework component. For example, compare this pure JavaScript example with this Angular example.

Regards,

Richard.

David wrote

Great article there. We are developing a huge Rails/React project, where we use a few 3rd party libraries like iScroll or GSAP, and this is exactly how we do it. We’ve released one npm package like that so far, react-iscroll :)

JohnDubya wrote

How do you get around browser incompatibilities and inconsistencies without some form of a framework? Maybe jQuery isn’t as useful in this arena as when I started with it back in 2008 or so, but it was a lifesaver back then. It did all the work of making all browsers perform the same.

Nicolas Bevacqua wrote

With small and focused modules. If you need to get around a particular browser quirk, the fix for that quirk is probably somewhere around 20 LOC, and not over 30kb minified and gzipped.
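
As a purely hypothetical example of what one of those quirk-sized modules might look like – this isn’t a published package – consider a few lines papering over the vendor-prefixed variants of Element#matches.

// matches.js – a made-up single-purpose module, roughly 10 LOC
module.exports = function matches (el, selector) {
  var fn = el.matches ||
    el.msMatchesSelector ||
    el.webkitMatchesSelector ||
    el.mozMatchesSelector;
  return fn.call(el, selector);
};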

John-David Dalton wrote

Minor nit, the uses of atoa and Array.prototype.slice.call can be avoided. The arguments don’t need to be converted to an array to be used with apply and you can leverage the generic this support of Array#forEach via Array.prototype.forEach.call(nodeList, ...).
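
Sketched out – this is an illustration of that suggestion, not code from either library – the reactDragula wrapper from earlier could drop atoa entirely.

function reactDragula () {
  // Function#apply accepts any array-like, so arguments can be passed straight through
  return dragula.apply(this, arguments).on('cloned', cloned);
  function cloned (clone) {
    rm(clone);
    // Array#forEach is generic, so it can iterate the array-like collection directly
    Array.prototype.forEach.call(clone.getElementsByTagName('*'), rm);
  }
  function rm (el) {
    el.removeAttribute('data-reactid');
  }
}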

Andrew Ingram wrote

I tend to think of these types of libraries as being engines, though that’s just my preference. Essentially a library that provides mathematical modelling of a pattern without caring about rendering or event handling.

kirilloid wrote

You have a link to fast-url-parser on the omnibox entry. As for the size, I think there’s a trade-off between size and edge-case coverage.

E.g. these functions are enough for most cases of URL parsing.

function parseUrl(url) {
  var loc = {};
  url.replace(/^https?:/, function (protocol) {
    loc.protocol = protocol.replace(/:$/, '');
    return '';
  }).replace(/^\/\/(.*?)(?=\/|$)/, function (_, host) {
    var parts = host.split(':');
    loc.host = host;
    loc.port = parts[1] || '';
    loc.hostname = parts[0];
    return '';
  }).replace(/^\/.*?(?=[?#]|$)/, function (absPath) {
    loc.pathname = absPath;
    return '';
  }).replace(/^.*?(?=[?#]|$)/, function (relPath) {
    loc.pathname = location.pathname + relPath;
    return '';
  }).replace(/\?.*?(?=[#]|$)/, function (search) {
    loc.search = search;
    return '';
  }).replace(/#.*/, function (hash) {
    loc.hash = hash;
    return '';
  });
  return loc;
}

function parseQuery(query) {
  var hash = {};
  query.replace(/^\?/, '').split(/[&;]/).forEach(function (pair) {
    let [key, value] = pair.split('=').map(decodeURIComponent);
    hash[key] = value;
  });
  return hash;
}

Or if you don't care much about performance, but care about size, you can use native parsing ability.

var parseUrl = (function () {
  var a = document.createElement('a'); // one doesn't even need to add it to DOM
  var fields = ["hash", "search", "pathname", "port", "hostname", "host", "protocol"];
  return function (url) {
    a.setAttribute('href', url);
    return fields.map(name => a[name]);
  };
}());

Should work even in IE6.
Benjamin wrote

I really like the essence of the article, and I do often use a few small modules instead of instantly buying a pig in a poke. It’s so much easier to stay in control this way. But I often struggle with how to structure and implement the code bits that can’t be reusable, but need to be site-specific. Do you have any insights on that?

William wrote

If you named atoa something like toArray, then you wouldn’t need a comment explaining what it does. Just a thought.

Cristian wrote

First, let me say congratulations for an awesome library!

You were asking why Dragula should know about React. Let me tell you why: because if dragula doesn’t know about React, it can’t return a React component and needs to do hacks like stripping the data-reactid attributes so it can be used with React. I know that with Angular it was easier to integrate, but it doesn’t seem to me that it provides a real alternative to react-dnd (which is painfully hard to learn) when we’re speaking about React. See: https://github.com/bevacqua/react-dragula/issues/14

Keep up the good work!