“I am delighted to support Nicolás’ endeavor because his book looks exactly like what people who are coming to JavaScript with fresh eyes need.”
– Brendan Eich
Ideal for professional software developers with a basic understanding of JavaScript, this practical book shows you how to build small, interconnected ES6 JavaScript modules that emphasize reusability. You’ll learn how to face a project with a modular mindset, and how to organize your applications into simple pieces that work well in isolation and can be combined to create a large, robust application.
This book focuses on two aspects of JavaScript development: modularity and ES6 features. You’ll learn how to tackle application development by following a scale-out approach. As pieces of your codebase grow too big, you can break them up into smaller modules.
The book can be read online for free or purchased through Amazon.
This book is part of the Modular JavaScript series.
Chapter 9
Practical Considerations
JavaScript is an ever-evolving language. Its development rhythm has had different paces throughout the years, entering a high-velocity phase with the introduction of ES5. Thus far, this book has taught you about dozens of language features and syntax changes introduced in ES6, and a few that came out afterwards, in ES2016 and ES2017.
Reconciling all of these new features with our existing ES5 knowledge may seem like a daunting task: what features should we take advantage of, and how? This chapter aims to rationalize the choices we have to make when considering whether to use specific ES6 features.
We’ll take a look at a few different features, the use cases where they shine, and the situations where we might be better off using features that were already available in the language. Let’s go case by case.
Variable Declarations
When developing software, most of our time is spent reading code instead of writing it. ES6 offers `let` and `const` as new flavors of variable declaration, and part of the value in these statements is that they can signal how a variable is used. When reading a piece of code, others can take cues from these signals in order to better understand what we did. Cues like these are crucial to reducing the amount of time someone spends interpreting what a piece of code does, and as such we should try to leverage them whenever possible.
A `let` statement indicates that a variable can't be used before its declaration, due to the Temporal Dead Zone (TDZ) rule. This isn't a convention, it is a fact: if we tried accessing the variable before its declaration statement was reached, the program would fail. These statements are block-scoped and not function-scoped; this means we need to read less code in order to fully grasp how a `let` variable is used.
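To see the TDZ rule in action, consider the following sketch (not from the book; the `readBeforeDeclaration` name is made up for illustration), where a `let` binding is read before its declaration statement is reached:

```js
function readBeforeDeclaration() {
  let result
  try {
    result = flag // `flag` is still in its Temporal Dead Zone here
  } catch (error) {
    result = error.name
  }
  let flag = true // the declaration places `flag` in a TDZ from the top of the block
  return result
}
console.log(readBeforeDeclaration()) // <- 'ReferenceError'
```

The access fails at runtime even though `flag` is declared in the same block, which is exactly why the TDZ is a guarantee and not a convention.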
The `const` statement is block-scoped as well, and it follows TDZ semantics too. The upside is that a `const` binding can only be assigned during declaration.

Note that this means that the variable binding can't change, but it doesn't mean that the value itself is immutable or constant in any way. A `const` binding that references an object can't later reference a different value, but the underlying object can indeed mutate.
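The following short sketch (an illustration of the point above, with made-up names) shows how a `const` binding prevents reassignment but not mutation of the referenced object:

```js
const person = { name: 'Ada' }
person.name = 'Grace' // allowed: the underlying object can mutate
person.pets = ['cat'] // also allowed: we're changing the object, not the binding
console.log(person) // <- { name: 'Grace', pets: ['cat'] }
// person = {} // would throw TypeError: Assignment to constant variable
```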
In addition to the signals offered by `let`, the `const` keyword indicates that a variable binding can't be reassigned. This is a strong signal. You know what the value is going to be; you know that the binding can't be accessed outside of its immediately containing block, due to block scoping; and you know that the binding is never accessed before declaration, because of TDZ semantics.

You know all of this just by reading the `const` declaration statement, without scanning for other references to that variable.
Constraints such as those offered by `let` and `const` are a powerful way of making code easier to understand. Try to accrue as many of these constraints as possible in the code you write. The more declarative constraints that limit what a piece of code could mean, the easier and faster it is for humans to read, parse, and understand that code in the future.
Granted, there are more rules to a `const` declaration than to a `var` declaration: block scoping, TDZ, assignment at declaration, and no reassignment, whereas `var` statements only signal function scoping. Rule-counting, however, doesn't offer a lot of insight. It is better to weigh these rules in terms of complexity: does the rule add or subtract complexity? In the case of `const`, block scoping means a narrower scope than function scoping, TDZ means that we don't need to scan the scope backward from the declaration in order to spot usage before declaration, and the assignment rules mean that the binding will always preserve the same reference.
The more constrained statements are, the simpler a piece of code becomes. As we add constraints to what a statement might mean, code becomes less unpredictable. This is one of the reasons why statically typed programs are, generally speaking, a bit easier to read than their dynamically typed counterparts. Static typing places a big constraint on the program writer, but it also places a big constraint on how the program can be interpreted, making its code easier to understand.
With these arguments in mind, it is recommended that you use `const` where possible, as it's the statement that gives us the fewest possibilities to think about.

```js
if (condition) {
  // can't access `isReady` before declaration is reached
  const isReady = true
  // `isReady` binding can't be reassigned
}
// can't access `isReady` outside of its containing block scope
```
When `const` isn't an option, because the variable needs to be reassigned later, we may resort to a `let` statement. Using `let` carries all the benefits of `const`, except that the variable can be reassigned. This may be necessary in order to increment a counter, flip a Boolean flag, or defer initialization.
Consider the following example, where we take a number of megabytes and return a string such as `1.2 GB`. We're using `let`, as the values need to change if a condition is met.
```js
function prettySize(input) {
  let value = input
  let unit = 'MB'
  if (value >= 1024) {
    value /= 1024
    unit = 'GB'
  }
  if (value >= 1024) {
    value /= 1024
    unit = 'TB'
  }
  return `${ value.toFixed(1) } ${ unit }`
}
```
Adding support for petabytes would involve a new `if` branch before the `return` statement.

```js
if (value >= 1024) {
  value /= 1024
  unit = 'PB'
}
```
If we were looking to make `prettySize` easier to extend with new units, we could consider implementing a `toLargestUnit` function that computes the `unit` and `value` for any given `input` and its current unit. We could then consume `toLargestUnit` in `prettySize` to return the formatted string.
The following code snippet implements such a function. It relies on a list of supported `units` instead of using a new branch for each unit. When the input `value` is at least `1024` and there are larger units, we divide the input by `1024` and move to the next unit. Then we call `toLargestUnit` with the updated values, which will continue recursively reducing the `value` until it's small enough or we reach the largest unit.
```js
function toLargestUnit(value, unit = 'MB') {
  const units = ['MB', 'GB', 'TB']
  const i = units.indexOf(unit)
  const nextUnit = units[i + 1]
  if (value >= 1024 && nextUnit) {
    return toLargestUnit(value / 1024, nextUnit)
  }
  return { value, unit }
}
```
Introducing petabyte support used to involve a new `if` branch and repeating logic, but now it's only a matter of adding the `'PB'` string at the end of the `units` array.
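As a quick sanity check, this is what `toLargestUnit` looks like with petabyte support; the only change from the version shown earlier is the `'PB'` entry in the `units` array:

```js
function toLargestUnit(value, unit = 'MB') {
  const units = ['MB', 'GB', 'TB', 'PB'] // 'PB' appended: the only change needed
  const i = units.indexOf(unit)
  const nextUnit = units[i + 1]
  if (value >= 1024 && nextUnit) {
    return toLargestUnit(value / 1024, nextUnit)
  }
  return { value, unit }
}
console.log(toLargestUnit(1024 ** 3)) // <- { value: 1, unit: 'PB' }
```

No branches were added or modified, which is the payoff of the scale-out structure: each new unit is data, not logic.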
The `prettySize` function becomes concerned only with how to display the string, as it can offload its calculations to the `toLargestUnit` function. This separation of concerns is also instrumental in producing more readable code.
```js
function prettySize(input) {
  const { value, unit } = toLargestUnit(input)
  return `${ value.toFixed(1) } ${ unit }`
}
```
Whenever a piece of code has variables that need to be reassigned, we should spend a few minutes thinking about whether there’s a better pattern that could resolve the same problem without reassignment. This is not always possible, but it can be accomplished most of the time.
Once you’ve arrived at a different solution, compare it to what you used to have. Make sure that code readability has actually improved and that the implementation is still correct. Unit tests can be instrumental in this regard, as they’ll ensure you don’t run into the same shortcomings twice. If the refactored piece of code seems worse in terms of readability or extensibility, carefully consider going back to the previous solution.
Consider the following contrived example, where we use array concatenation to generate the `result` array. Here, too, we could change from `let` to `const` by making a simple adjustment.
```js
function makeCollection(size) {
  let result = []
  if (size > 0) {
    result = result.concat([1, 2])
  }
  if (size > 1) {
    result = result.concat([3, 4])
  }
  if (size > 2) {
    result = result.concat([5, 6])
  }
  return result
}
makeCollection(0)
// <- []
makeCollection(1)
// <- [1, 2]
makeCollection(2)
// <- [1, 2, 3, 4]
makeCollection(3)
// <- [1, 2, 3, 4, 5, 6]
```
We can replace the reassignment operations with `Array#push`, which accepts multiple values. If we had a dynamic list, we could use the spread operator to push as many `...items` as necessary.
```js
function makeCollection(size) {
  const result = []
  if (size > 0) {
    result.push(1, 2)
  }
  if (size > 1) {
    result.push(3, 4)
  }
  if (size > 2) {
    result.push(5, 6)
  }
  return result
}
makeCollection(0)
// <- []
makeCollection(1)
// <- [1, 2]
makeCollection(2)
// <- [1, 2, 3, 4]
makeCollection(3)
// <- [1, 2, 3, 4, 5, 6]
```
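As an aside, the dynamic-list case could be sketched like this, spreading an `items` array into a single `Array#push` call (the `items` name is made up for illustration):

```js
const result = [1, 2]
const items = [3, 4, 5]
result.push(...items) // equivalent to result.push(3, 4, 5)
console.log(result) // <- [1, 2, 3, 4, 5]
```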
When you do need to use `Array#concat`, you might prefer to use `[...result, 1, 2]` instead, to make the code shorter.
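A short comparison of both forms might look like this; note that, unlike `push`, both `concat` and the spread form produce a new array instead of mutating `result`:

```js
const result = [1, 2]
const withConcat = result.concat([3, 4]) // new array, result untouched
const withSpread = [...result, 3, 4] // same outcome, shorter to write
console.log(withConcat) // <- [1, 2, 3, 4]
console.log(withSpread) // <- [1, 2, 3, 4]
console.log(result) // <- [1, 2]
```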
The last case we’ll cover is one of refactoring. Sometimes, we write code like the next snippet, usually in the context of a larger function.
```js
let completionText = 'in progress'
if (completionPercent >= 85) {
  completionText = 'almost done'
} else if (completionPercent >= 70) {
  completionText = 'reticulating splines'
}
```
In these cases, it makes sense to extract the logic into a pure function. This way we avoid the initialization complexity near the top of the larger function, while clustering all the logic about computing the completion text in one place.
The following piece of code shows how we could extract the completion text logic into its own function. We can then move `getCompletionText` out of the way, making the code more linear in terms of readability.
```js
const completionText = getCompletionText(completionPercent)
// …
function getCompletionText(progress) {
  if (progress >= 85) {
    return 'almost done'
  }
  if (progress >= 70) {
    return 'reticulating splines'
  }
  return 'in progress'
}
```