
Mastering Modular JavaScript

Module thinking, principles, design patterns and best practices — Modular JavaScript Book Series
O’Reilly Media · 164 pages · ISBN 978-1-4919-5568-0

Tackle two aspects of JavaScript development: modularity and ES6. With this practical guide, front-end and back-end Node.js developers alike will learn how to scale out JavaScript applications by breaking codebases into smaller modules. We’ll specifically cover features in ES6 from the vantage point of how we can leverage them to improve modularity in our programs. Learn how to manage complexity, how to drive it down, and how to write simpler code by building small modules.

If you’re a frontend developer or backend Node.js developer with a working knowledge of JavaScript, this book is for you. The book is ideal for semi-senior developers, senior developers, technical leaders, and software architects.

This book is part of the Modular JavaScript series.

🗞 Start with the book series launch announcement on Pony Foo
💳 Participate in the crowdfunding campaign on Indiegogo
🌩 Amplify the announcement on social media via Thunderclap
🐤 Share a message on Twitter or within your social circles
👏 Contribute to the source code repository on GitHub
🦄 Read the free HTML version of the book on Pony Foo
📓 Purchase the book from O’Reilly on Amazon

Chapter 3

Module Design

Thinking in terms of API-driven and documentation-driven design will yield more usable modules than not doing so. You might argue that internals are not that important: “as long as the interface holds, we can put anything we want in the mix!” A usable interface is only one side of the equation, though; it will do little to keep the maintainability of your applications in check. Properly designed module internals help keep our code readable and its intent clear. In this chapter, we’ll debate what it takes to write modules with scalability in mind, without getting too far ahead of our current requirements. We’ll discuss the CRUST constraints in more depth, and finally elaborate on how to prune modules as they become larger and more complex over time.

Growing a Module

Small, single-purpose functions are the lifeblood of clean module design. Purpose-built functions scale well because they introduce little organizational complexity into the module they belong to, even when that module grows to 500 lines of code. Small functions are not necessarily less powerful than large functions, but their power lies in composition.

Suppose that instead of implementing a single function with 100 lines of code, we break it up into three or more smaller functions. We might later be able to reuse one of those smaller functions somewhere else in our module, or it might prove a useful addition to its public interface.
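
As a quick sketch of what that decomposition might look like (the product-filtering functions below are invented for illustration and aren’t taken from the book):

  // Hypothetical decomposition: rather than one function that filters and
  // sorts a product list in a single long body, we compose three
  // single-purpose functions.
  function getVisibleProducts(products, query) {
    return sortByPrice(filterByQuery(products, query))
  }

  function filterByQuery(products, query) {
    return products.filter(product => product.name.includes(query))
  }

  function sortByPrice(products) {
    return [...products].sort((a, b) => a.price - b.price)
  }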

In this chapter, we’ll discuss design considerations aimed at reducing complexity at the module level. While most of the concerns we’ll discuss here have an effect on the way we write functions, it is in the next chapter where we’ll be specifically devoting our time to the development of simple functions.

Composability and Scalability

Cleanly composed functions are at the heart of effective module design. Functions are the fundamental unit of our code. We could get away with writing the smallest possible number of functions required, the ones that are invoked by consumers or need to be passed for other interfaces to consume, but that wouldn’t get us much in the way of maintainability.

We could rely solely on intuition to decide what deserves to be its own function and what is better left inlined as part of a larger body of code, but this might leave us with inconsistencies that depend on our frame of mind, as well as on how each member of a team thinks functions should be sliced. As we’ll see in the next chapter, pairing a few rules of thumb with our own intuition is an effective way of keeping functions simple and limiting their scope.

At the module level, we need to implement features with the API surface in mind. When we plan out new functionality, we have to consider whether the abstraction is right for our consumers, how it might evolve and scale over time, and how narrowly or broadly it can support the use cases of its consumers.

When considering whether the abstraction is right, suppose we have a function that’s a draggable object factory for DOM elements. Draggable objects can be moved around and then dropped in a container, but consumers often have to impose different limitations on the conditions under which the object can be moved, some of which are outlined in the following list:

  • Draggable elements must have a parent with a draggable-list class.

  • Draggable elements mustn’t have a draggable-frozen class.

  • Dragging must initiate from a child with a drag-handle class.

  • Elements may be dropped into containers with a draggable-dropzone class.

  • Elements may be dropped into containers with at most six children.

  • Elements may not be dropped into the container they’re being dragged from.

  • Elements must be sortable in the container they’re dragged from, but they can’t be dropped into other containers.

We’ve now spent quite a bit of time thinking about use cases for a drag-and-drop library, so we’re well equipped to come up with an API that will satisfy most or maybe even every one of these use cases, without dramatically broadening our API surface.

Consider, in contrast, the situation if we were to go off and implement a way of checking off each use case in isolation without taking into account similar use cases, or cases that might arise but are not an immediate need. We would end up with seven ways of introducing specific restrictions on how elements are dragged and dropped. Since we’ve designed their interfaces in isolation, each of these solutions is likely to be at least slightly different from the rest. Maybe they’re similar enough that each of them is an option flag, but consumers still can’t help but wonder why we have seven flags for such similar use cases, and they can’t shake the feeling that we’ve designed the interface poorly. But there wasn’t much in the way of design; we’ve mostly tacked requirement upon requirement onto our API surface as each came along, never daring to look at the road ahead and envision how the API might evolve in the future. If we had designed the interfaces with scalability in mind, we might’ve grouped many similar use cases under the same feature, and would’ve avoided an unnecessarily large API surface in the process.

Now let’s go back to the case where we do spend some time thinking ahead and create a collection of similar requirements and use cases. We should be able to find a common denominator that’s suitable for most use cases. We’ll know when we have the right abstraction because it’ll cater to every requirement we have, and a few we didn’t even have to fulfill but that the abstraction satisfies anyhow. In the case of draggable elements, once we’ve taken all the requirements into account, we might choose to define a few options that impose restrictions based on a few CSS selectors. Alternatively, we might introduce a callback whereby the user can determine whether an element can be dragged and another whereby they can determine whether the element can be dropped. These choices also depend on how heavily the API is going to be used, how flexible we want it to be, and how frequently we intend to make changes to it.
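
To make this concrete, here is a minimal sketch of what such an abstraction could look like for a hypothetical draggable factory; every option name below is invented for illustration and isn’t the book’s implementation:

  // Static restrictions are expressed as CSS selectors, while dynamic
  // decisions go through callbacks. All option names are hypothetical.
  function draggable(element, options = {}) {
    const {
      handleSelector = '.drag-handle',          // dragging must start on a handle
      dropzoneSelector = '.draggable-dropzone', // containers that accept drops
      frozenSelector = '.draggable-frozen',     // elements that can't be dragged
      canDrag = el => !el.matches(frozenSelector),
      canDrop = (el, container) =>
        container !== el.parentElement &&       // not the container it came from
        container.children.length <= 6          // at most six children
    } = options
    // ...the actual event wiring and DOM bookkeeping would go here...
    return { element, handleSelector, dropzoneSelector, canDrag, canDrop }
  }

Most of the restrictions from the earlier list collapse into a handful of options, rather than seven unrelated flags.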

Sometimes we won’t have the opportunity to think ahead. We might not be able to foresee all possible use cases. Our forecasts may fail us, or requirements may change, pulling the rug from under our feet. Granted, this never is the ideal situation to find ourselves in, but we certainly wouldn’t be better off if we hadn’t paid attention to the use cases for our module in aggregate. On the other hand, extra requirements may fit within the bounds of an abstracted solution, provided the new use case is similar enough to what we expected when designing the abstraction.

Abstractions aren’t free, but they can shield portions of code from complexity. Naturally, we could boldly claim that an elegant interface such as fn => fn() solves all problems in computing; the consumer needs to provide only the right fn callback. The reality is that we wouldn’t be doing anything but offloading the problem onto our consumers, who would have to implement the right solution themselves while still going through our API in the process.

When we’re weighing whether to offer an interface like CSS selectors or callbacks, we’re deciding how much we want to abstract, and how much we want to leave up to the consumer. When we choose to let the user provide CSS selectors, we keep the interface short, but the use cases will be limited as well. Consumers won’t be able, for example, to decide dynamically whether the element is draggable, beyond what a CSS selector can offer. When we choose to let users provide callbacks, we make it harder for them to use our interface, since they now have to provide bits and pieces of the implementation themselves. However, that expense buys them great flexibility in deciding what is draggable and what is not.
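
Continuing with the hypothetical draggable factory sketched earlier, consumer code under the two approaches might look like this:

  const element = document.querySelector('.card')

  // Selector-based restriction: short and declarative, but limited to
  // whatever a CSS selector can express.
  draggable(element, { frozenSelector: '.draggable-frozen' })

  // Callback-based restriction: more work for the consumer, but the
  // decision can be fully dynamic.
  draggable(element, {
    canDrag: el => !el.classList.contains('draggable-frozen') && !el.dataset.saving
  })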

Like most things in program design, API design is a constant trade-off between simplicity and flexibility. For each particular case, it is our responsibility to decide how flexible we want the interface to be, but at the expense of simplicity. We can also decide how simple we want an interface to be, but at the expense of flexibility. Going back to jQuery, it’s interesting to note how it always favors simplicity, by allowing you to provide as little information as needed for most of its API methods. Meanwhile, it avoids sacrificing flexibility by offering countless overloads for each of its API methods. The complexity lies in its implementation, balancing arguments by figuring out whether they’re a NodeList, a DOM element, an array, a function, a selector, or something else (not to mention optional parameters) before even starting to fulfill the consumer’s goal when making an API call. Consumers observe some of the complexity at the seams when sifting through documentation and finding out about all the ways of accomplishing the same goals. And yet, despite all of jQuery’s internal complexity, code that consumes the jQuery API manages to stay ravishingly simple.
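
For instance, assuming a page where jQuery is loaded as $, all of the following lean on those overloads to accomplish closely related goals with very little consumer-side code:

  // jQuery accepts several input shapes for the same kind of task; the
  // complexity of telling them apart lives inside the library.
  $('.ball').css('color', 'red')                  // a property name and a value
  $('.ball').css({ color: 'red', opacity: 0.5 })  // an object map of properties
  $(document.body).css('color')                   // wrap a DOM element, read a value back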

Design for Today

Before we go off and start pondering the best ways of abstracting a feature that we need to implement so that it caters to every single requirement that might come in the future, it’s necessary to take a step back and consider simpler alternatives. A simple implementation means we pay smaller up-front costs, but it doesn’t necessarily mean that new requirements will result in breaking changes.

Interfaces don’t need to cater to every conceivable use case from the outset. As we’ve analyzed in Modularity Principles, sometimes we may get away with first implementing a solution for the simplest or most common use case, and then adding an options parameter through which newer use cases can be configured. As we get to more-advanced use cases, we can make decisions as outlined in the previous section, choosing which use cases deserve to be grouped under an abstraction and which are too narrow for an abstraction to be worthwhile.
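
A hypothetical example of that progression (the slugify function below is invented for illustration):

  // First iteration: only the simplest, most common use case.
  // function slugify(text) {
  //   return text.trim().toLowerCase().replace(/\s+/g, '-')
  // }

  // Later iteration: new use cases folded into an options bag with safe
  // defaults, so existing call sites keep working unchanged.
  function slugify(text, { separator = '-', maxLength = Infinity } = {}) {
    return text.trim().toLowerCase().replace(/\s+/g, separator).slice(0, maxLength)
  }

  slugify('Module Design')                      // 'module-design'
  slugify('Module Design', { separator: '_' })  // 'module_design'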

Similarly, the interface could start off supporting only one way of receiving its inputs, and as use cases evolve, we might bake polymorphism into the mix, accepting multiple input types in the same parameter position. Grandiose thinking may lead us to believe that, in order to be great, our interfaces must be able to handle every input type and be highly configurable with dozens of configuration options. This might well be true for the most advanced users of our interface, but if we don’t take the time to let the interface evolve and mature as needed, we might code our interface into a corner that can then be repaired only by writing a different component from the ground up with a better thought-out interface, and later replacing references to the old component with the new one.
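
Sticking with the hypothetical draggable factory, baking that kind of polymorphism into a parameter position might look like this:

  // The factory initially accepted only a DOM element; a selector string
  // is later accepted in the same parameter position.
  function draggable(target, options = {}) {
    const element = typeof target === 'string'
      ? document.querySelector(target)  // resolve a selector string
      : target                          // assume a DOM element was passed in
    // ...drag-and-drop wiring would follow...
    return { element, options }
  }

  draggable('.card')                          // by selector
  draggable(document.querySelector('.card'))  // by element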

A larger interface is rarely better than a smaller interface that accomplishes the job consumers need it to fulfill. Elegance is of the essence here: if we want our interface to remain small but predict that consumers will eventually need to hook into different pieces of our component’s internal behavior so that they can react accordingly, we’re better off waiting until this requirement materializes than building a solution for a problem we don’t yet have.

Not only will we be focusing development hours on functionality that’s needed today, but we’ll also avoid creating complexity that can be dispensed with for the time being. It might be argued that the ability to react to internal events of a library won’t introduce a lot of complexity. Imagine, however, that the requirement never materializes. We’d have burdened our component with increased complexity to satisfy functionality we never needed. Worse yet, say the requirement changes between the moment we’ve implemented a solution and the time it’s actually needed. We’d now have functionality we never needed, which clashes with different functionality that we do need.

Suppose we don’t just need hooks to react to events; we also need those hooks to be able to transform internal state. How would the event hooks’ interface change? Chances are, someone might’ve found a use for the event listeners we implemented earlier, and so we can’t dispose of them with ease. We might be forced to change the event listener API to support internal state transformations, which would result in a cringe-worthy interface that’s bound to frustrate implementors and consumers alike.
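
A minimal sketch of the tension between those two kinds of hooks, using invented names throughout (createDragger, on, transform, drop):

  function createDragger() {
    const listeners = []
    const transforms = []
    let state = {}
    return {
      // notification-only hook: consumers can observe, but not interfere
      on(fn) { listeners.push(fn) },
      // transforming hook: the return value replaces internal state, a much
      // stronger contract that's harder to change once consumers rely on it
      transform(fn) { transforms.push(fn) },
      drop(position) {
        state = transforms.reduce((current, fn) => fn(current), { ...state, position })
        listeners.forEach(fn => fn(state))
      }
    }
  }

  const dragger = createDragger()
  dragger.on(state => console.log('dropped at', state.position))
  dragger.transform(state => ({ ...state, droppedAt: Date.now() }))
  dragger.drop({ x: 10, y: 20 })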

Falling into the trap of implementing features that consumers don’t yet need might be easy at first, but it’ll cost us dearly in terms of complexity, maintainability, and wasted developer hours. The best code is no code at all. This means fewer bugs, less time spent writing code, less time writing documentation, and less time fielding support requests. Latch onto that mentality and strive to keep functionality to exactly the absolute minimum that’s required.
