Mastering Modular JavaScript

Module thinking, principles, design patterns and best practices — Modular JavaScript Book Series
O’Reilly Media · 164 pages · ISBN 978-1-4919-5568-0

Tackle two aspects of JavaScript development: modularity and ES6. With this practical guide, front-end and back-end Node.js developers alike will learn how to scale out JavaScript applications by breaking codebases into smaller modules. We’ll specifically cover features in ES6 from the vantage point of how we can leverage them to improve modularity in our programs. Learn how to manage complexity, how to drive it down, and how to write simpler code by building small modules.

If you’re a front-end developer or back-end Node.js developer with a working knowledge of JavaScript, this book is for you. The book is ideal for semi-senior developers, senior developers, technical leaders, and software architects.

This book is part of the Modular JavaScript series.

🗞 Start with the book series launch announcement on Pony Foo
💳 Participate in the crowdfunding campaign on Indiegogo
🌩 Amplify the announcement on social media via Thunderclap
🐤 Share a message on Twitter or within your social circles
👏 Contribute to the source code repository on GitHub
🦄 Read the free HTML version of the book on Pony Foo
📓 Purchase the book from O’Reilly on Amazon

Chapter 2

Modularity Principles

Modularity can be the answer to complexity, but what exactly do we mean when we’re talking about complexity?

Complexity is a loaded term for a nuanced topic. What does complex mean? A dictionary defines complex as something that’s “composed of many interconnected parts” but that’s not the problem we generally refer to when we speak of complexity in the context of programming. A program may have hundreds or thousands of files and still be considered relatively simple.1

The next two definitions, offered by that same dictionary, might be more revealing in the context of program design.

  • “Characterized by a very complicated or involved arrangement of parts, units, etc.”

  • “So complicated or intricate as to be hard to understand or deal with”

The first definition indicates that a program can become complex when its parts are arranged in a complicated manner; the interconnections among parts become a pain point. This could stem from convoluted interfaces or a lack of documentation, and it’s one of the aspects of complexity that we’ll tackle in this book.

We can interpret the second definition as the other side of the complexity coin. Components can be so complicated that their implementation is hard to understand, debug, or extend. Most of the book is devoted to counterbalancing and avoiding this aspect of complexity.

In broad terms, something is complex when it becomes hard to grasp or fully understand. By that definition, anything in a typical program can be complex: a block of code, a single statement, the API layer, its documentation, tests, the directory structure, coding conventions, or even a variable’s name.

Measuring complexity by lines of code is misleading: a file with thousands of lines of code can be simple if it’s just a list of constants such as country codes or action types. Conversely, a file with two dozen lines of code could be insurmountably complex, not only in its interface but particularly in its implementation. Add together a few complex components and soon you’ll want nothing to do with the codebase.

Cyclomatic complexity is the number of linearly independent paths through a program’s code, and it may be a better metric for measuring the complexity of a component. On its own, however, cyclomatic complexity only tells us how complex a component has become; tracking this metric does little to significantly reduce complexity across our codebase or improve our coding style.
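
As a rough illustration, consider the hypothetical function below (it isn’t taken from the book’s examples): each if statement and the ternary add an independent path through the code, for a cyclomatic complexity of 4. Linters such as ESLint’s complexity rule can track this number and warn once it grows past a threshold.

function describeDiscount (user) {
  // one base path through the function
  if (!user) { // +1
    return 'no discount'
  }
  if (user.isSubscriber) { // +1
    return user.yearsSubscribed > 5 // +1 for the ternary
      ? 'loyalty discount'
      : 'subscriber discount'
  }
  return 'no discount'
}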

We must acknowledge that codebases are not fixed in time. Codebases typically grow over time, much like the products we build with them. There is no such thing as a finished product or a perfect codebase. We should design application architecture that embraces the passage of time by being able to adjust to new conditions.

A significant body of changes to an implementation should be able to leave the API in front of that implementation unmodified. It should be possible to extend the API surface of a component with ease, and ironing out the wrinkles of an outdated API shouldn’t be fraught with confusion or frustration. Scaling our program horizontally, beyond single components, should be straightforward rather than requiring us to modify several existing components to accommodate each new one. How can modular design help us manage complexity both at the component level and at scale?

Modular Design Essentials

Modularity tackles the complexity problem in program design by opting for small modules with a clear-cut and well-tested API that’s also documented. Defining a precise API attacks interconnection complexity, while small modules aim to make programs easier to understand and work with.

Single Responsibility Principle

The single responsibility principle (SRP) is perhaps the most widely agreed upon principle of successful modular application design. Components are said to follow SRP when they have a single, narrow objective.

Modules that follow SRP do not necessarily have to export a single function as their API. As long as the methods and properties we export from a component are related, we aren’t breaking SRP.
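
As a minimal sketch (the module and its function names are hypothetical, not taken from the book), the following component exports several bindings yet still follows SRP, because every export serves the same narrow purpose of turning dates into other representations:

// every export is concerned with one thing: converting Date objects
export function toShortDate (date) {
  return date.toISOString().slice(0, 10) // e.g. '2019-05-01'
}

export function toLongDate (date) {
  return date.toUTCString() // e.g. 'Wed, 01 May 2019 00:00:00 GMT'
}

export function toUnixTimestamp (date) {
  return Math.floor(date.getTime() / 1000)
}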

When thinking in terms of SRP, it’s important to figure out what the responsibility is. Consider, as an example, a component used to send emails through the Simple Mail Transfer Protocol (SMTP). The choice to send emails using SMTP could be considered an implementation detail. If we later want the ability to render the HTML to be sent in those emails by using a template and a model, would that also pertain to the email-sending responsibility?

Imagine we developed email sending and templating in the same component. These concerns would be tightly coupled. Furthermore, if we later wanted to switch from SMTP to the API offered by a transactional email provider, we’d have to be careful not to interfere with the templating capability that lives in the same module.

The following code snippet represents a tightly coupled piece of code that mixes templating, sanitization, email API client instantiation, and email sending:

import insane from 'insane'
import mailApi from 'mail-api'
import { mailApiSecret } from './secrets'

function sanitize (template, ...expressions) {
  return template.reduce((result, part, i) =>
    result + insane(expressions[i - 1]) + part
  )
}

export default function send (options, done) {
  const {
    to,
    subject,
    model: { title, body, tags }
  } = options
  const html = sanitize`
    <h1>${ title }</h1>
    <div>${ body }</div>
    <div>
    ${
      tags
        .map(tag => `<span>${ tag }</span>`)
        .join(` `)
    }
    </div>
  `
  const client = mailApi({ mailApiSecret })
  client.send({
    from: `hello@mjavascript.com`,
    to,
    subject,
    html
  }, done)
}

It might be better to create a separate component that’s in charge of rendering HTML based on a template and a model, instead of adding templating directly in the email-sending component. We could then add a dependency on the email module so that we can send that HTML, or we could create a third module where we’re concerned only with the wiring.

Provided its consumer-facing interface remained the same, an independent SMTP email component would be interchangeable with a component that sent emails some other way, such as via an API, logging to a data store, or writing to standard output. In this scenario, the way in which emails are sent would be an implementation detail, while the interface becomes more rigid as it’s adopted by more modules. An inflexible interface gives us flexibility in the way the task is performed, while allowing implementations to be replaced with ease according to the use case at hand.

The following example shows an email component that’s concerned only with configuring the API client and adhering to a thoughtful interface that receives the to recipient, the email subject, and its html body, and then sends the email. This component has the sole purpose of sending email:

import mailApi from 'mail-api'
import { mailApiSecret } from './secrets'

export default function send(options, done) {
  const { to, subject, html } = options
  const client = mailApi({ mailApiSecret })
  client.send({
    from: `hello@mjavascript.com`,
    to,
    subject,
    html
  }, done)
}

It wouldn’t be hard to create a drop-in replacement by developing a module that adheres to the same send API but sends email in a different way. The following example uses a different mechanism, whereby we simply log to the console. Even though it doesn’t actually send any emails, this component could be useful for debugging purposes:

export default function send(options, done) {
  const { to, subject, html } = options
  console.log(`
    Sending email.
    To: ${ to }
    Subject: ${ subject }
    ${ html }`
  )
  done()
}

By the same token, a templating component could be developed orthogonally, with an implementation that’s not directly tied into email sending. The following example is extracted from our original coupled implementation, but is concerned only with producing a piece of sanitized HTML by using a template and the user-provided model:

import insane from 'insane'

function sanitize(template, ...expressions) {
  return template.reduce((result, part, i) =>
    result + insane(expressions[i - 1]) + part
  )
}

export default function compile(model) {
  const { title, body, tags } = model
  const html = sanitize`
    <h1>${ title }</h1>
    <div>${ body }</div>
    <div>
    ${
      tags
        .map(tag => `<span>${ tag }</span>`)
        .join(` `)
    }
    </div>
  `
  return html
}

Slightly modifying the API shouldn’t be an issue, as long as it remains consistent across the components we want to make interchangeable. For instance, a different implementation could take a template identifier, in addition to the model object, so that the template itself is also decoupled from the compile function.
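
One possible shape for that variant is sketched below. The template registry and its names are assumptions made for illustration; the point is only that template definitions live outside the compile function:

// templates are registered by identifier, decoupling them from compile
const templates = new Map([
  ['article', model => `<h1>${ model.title }</h1><div>${ model.body }</div>`],
  ['plain', model => `${ model.title }\n\n${ model.body }`]
])

export default function compile (templateId, model) {
  if (!templates.has(templateId)) {
    throw new Error(`Unknown template: ${ templateId }`)
  }
  return templates.get(templateId)(model)
}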

When we keep the API consistent across implementations,2 using the same signature across every module, it’s easy to swap out implementations depending on context such as the execution environment (development versus staging versus production) or any other dynamic context that we need to rely upon.
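
For instance, a small wiring module along the following lines could pick the email provider based on the environment. The ./email/api-provider path and the NODE_ENV check are assumptions used for illustration:

import apiProvider from './email/api-provider'
import logProvider from './email/log-provider'

// hit the real provider in production; everywhere else, just log the email
const send = process.env.NODE_ENV === 'production'
  ? apiProvider
  : logProvider

export default send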

As mentioned earlier, a third module could plumb together different components that handle separate concerns, such as templating and email sending. The following example leverages the logging email provider and the static templating function to join both concerns together. Interestingly, this module doesn’t break SRP either, as its only concern is to plumb other modules together:

import sendEmail from './email/log-provider'
import compile from './templating/static'

export default function send (options, done) {
  const { to, subject, model } = options
  const html = compile(model)
  sendEmail({ to, subject, html }, done)
}

We’ve been discussing API design in terms of responsibility, but something equally interesting is that we’ve hardly worried about the implementation of those interfaces. Is there merit to designing an interface before digging into its implementation?

API First

A module is only as good as its public interface. A poor implementation may hide behind an excellent interface. More important, a great interface means we can swap out a poor implementation as soon as we find time to introduce a better one. Since the API remains the same, we can decide whether to replace the existing implementation altogether or whether both should coexist while we upgrade consumers to use the newer one.

A flawed API is a lot harder to repair. Several implementations may follow the interface we intend to modify, meaning that we’d have to change the API calls in each consumer whenever we want to make changes to the API itself. The number of API calls that potentially have to adapt increases with time, entrenching the API as the project grows.

Having a mindful design focus on public interfaces is paramount to developing maintainable component systems. Well-designed interfaces can stand the test of time by introducing new implementations that conform to that same interface. A properly designed interface should make it simple to access the most basic or common use cases for the component, while being flexible enough to support other use cases as they arise.

An interface often doesn’t need to support multiple implementations, but we must nonetheless think in terms of the public API first. Abstracting the implementation is only a small part of the puzzle. The answer to API design lies in figuring out which properties and methods consumers will need, while keeping the interface as small as possible.

When we need to implement a new component, a good rule of thumb is drawing up the API calls we’d need to make against that new component. For instance, we might want a component to interact with the Elasticsearch REST API. Elasticsearch is a database engine with advanced search and analytics capabilities, and its documents are stored in indices and arranged by type.

In the following piece of code, we’re imagining an ./elasticsearch component that has a public createClient binding, which returns an object with a client#get method that returns a Promise. Note how detailed the query is, making up what could be a real-world keyword search for blog articles tagged modularity and javascript:

import { createClient } from './elasticsearch'
import { elasticsearchHost } from './secrets'

const client = createClient({
  host: elasticsearchHost
})
client
  .get({
    index: `blog`,
    type: `articles`,
    body: {
      query: {
        match: {
          tags: [`modularity`, `javascript`]
        }
      }
    }
  })
  .then(response => {
    // ...
  })

Using the createClient method, we could create a client, establishing a connection to an Elasticsearch server. If the connection is dropped, the component we’re envisioning will seamlessly reconnect to the server, but on the consumer side, we don’t necessarily want to worry about that.

Configuration options passed to createClient might tweak how aggressively the client attempts to reconnect. A backoff setting could toggle whether an exponential back-off mechanism should be used: the client waits for increasing periods of time if it’s unable to establish a connection.
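
A back-off delay along those lines is easy to sketch. The function name and the specific numbers below are assumptions rather than part of any real client:

// wait 1s, 2s, 4s, 8s, … between reconnection attempts, capped at 30s
function reconnectDelay (attempt) {
  const base = 1000
  const cap = 30000
  return Math.min(cap, base * 2 ** attempt)
}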

An optimistic setting that’s enabled by default could prevent queries from settling in rejection when a server connection isn’t established, by having them wait until a connection is established before they can be made.
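
Internally, that could amount to holding each query behind a connection promise. The following is a rough sketch under that assumption, with stand-in connect and request functions instead of real networking code:

function createClient ({ host, optimistic = true } = {}) {
  // stand-ins for a real handshake and a real HTTP request
  const connect = () => new Promise(resolve => setTimeout(resolve, 100))
  const request = query => Promise.resolve({ host, query, hits: [] })
  const connected = connect()

  return {
    async get (query) {
      if (optimistic) {
        await connected // hold the query until the connection is ready
      }
      return request(query)
    }
  }
}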

Even though the only setting explicitly outlined in our imagined API usage example is host, it would be simple for the implementation to support new settings in its API without breaking backward compatibility.

The client#get method returns a promise that’ll settle with the results of asking Elasticsearch about the provided index, type, and query. When the query results in an HTTP error or an Elasticsearch error, the promise is rejected. To construct the endpoint, we use the index, type, and the host that the client was created with. For the request payload, we use the body field, which follows the Elasticsearch Query DSL.3 Adding more client methods, such as put and delete, would be trivial.
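
A sketch of how such a get method could build its request is shown below, using the /index/type/_search URL convention from older Elasticsearch versions and a fetch-based request; both choices are assumptions about one possible implementation:

function search ({ host, index, type, body }) {
  const endpoint = `${ host }/${ index }/${ type }/_search`
  return fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body)
  }).then(response => {
    if (!response.ok) {
      // reject on HTTP errors, as the envisioned API promises
      throw new Error(`Elasticsearch request failed with ${ response.status }`)
    }
    return response.json()
  })
}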

Following an API-first methodology is crucial in understanding how the API might be used. By placing our foremost focus on the interface, we are purposely avoiding the implementation until there’s a clear idea of what interface the component should have. Then, once we have a desired interface in mind, we can begin implementing the component. Always write code against an interface.

Note how the focus is not only on what the example at hand addresses directly but also on what it doesn’t address: room for improvement, corner cases, how the API might change going forward, and whether the existing API can accommodate more uses without breaking backward compatibility.

1. Further details of the dictionary definition might help shed light on this topic.
2. For example, one implementation might merely compile an HTML email by using inline templates, another might use HTML template files, another could rely on a third-party service, and yet another could compile emails as plain-text instead.
4. The options parameter is an optional configuration object that’s relatively new to the web API. We can set flags such as capture, which has the same behavior as passing a useCapture flag; passive, which suppresses calls to event.preventDefault() in the listener; and once, which indicates that the event listener should be removed after being invoked for the first time.
5. You can find request on GitHub.
6. For a given set of inputs, an idempotent function always produces the same output.
7. When a function has overloaded signatures which can handle two or more types (such as an array or an object) in the same position, the parameter is said to be polymorphic. Polymorphic parameters make functions harder for compilers to optimize, resulting in slower code execution. When this polymorphism is in a hot path—that is, a function that gets called very often—the performance implications have a larger negative impact. Read more about the compiler implications in “What’s Up with Monomorphism” by Vyacheslav Egorov.
9. Assuming we have a createButton(size = 'normal', type = 'primary', color = 'red') method and we want to change its color, we’d have to use createButton('normal', 'primary', 'blue') to accomplish that, only because the API doesn’t have an options object. If the API ever changes its defaults, we’d have to change any function calls accordingly as well.