A very early draft was published last week by ECMAScript editor Brian Terlson. When I say very early, I mean it’s considered a “stage -1” proposal: not even a formal proposal yet, just an early draft.
That being said, I’m always excited about new Array.prototype methods, so I decided to write an article nonetheless. These kinds of methods were popularized in JavaScript by libraries like Underscore and, later, Lodash – and some of them, such as .includes, have eventually found their way into the language.
Shall we take a look?

Array.prototype.flatten
The .flatten proposal takes an array and returns a new array where the original array is flattened recursively. The following bits of code illustrate the Array.prototype.flatten API.
[1, 2, 3, 4].flatten() // <- [1, 2, 3, 4]
[1, [2, 3], 4].flatten() // <- [1, 2, 3, 4]
[1, [2, [3]], 4].flatten() // <- [1, 2, 3, 4]
So far, one could implement a polyfill for .flatten like the one below. I separated the implementation of flatten from the polyfill itself, so that you can use the method on its own if you’d rather not change Array.prototype.
Array.prototype.flatten = function () {
return flatten(this)
}
function flatten (list) {
return list.reduce((a, b) => (Array.isArray(b) ? a.push(...flatten(b)) : a.push(b), a), [])
}
Keep in mind that the code above might not be the most efficient approach to array flattening, but it accomplishes recursive array flattening in a few lines of code. Here’s how it works.
- A consumer calls x.flatten()
- The x list is reduced using .reduce into a new array [] named a
- Each item b in x is evaluated through Array.isArray
- Items that aren’t an array are pushed to a
- Items that are an array are flattened into a new array
  - Those items are spread over a .push call for a
- This eventually results in a flat array
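The steps above can be restated without the comma operator; this expanded sketch should behave identically to the one-liner (the expanded form is mine, for illustration):

```javascript
// Expanded restatement of the flatten one-liner shown earlier.
function flatten (list) {
  return list.reduce((a, b) => {
    if (Array.isArray(b)) {
      a.push(...flatten(b)) // recurse first, then spread the flat result into `a`
    } else {
      a.push(b)
    }
    return a
  }, [])
}

console.log(flatten([1, [2, [3]], 4])) // <- [1, 2, 3, 4]
```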
The proposal also comes with an optional depth parameter – that defaults to Infinity – which can be used to determine how deep the flattening should go.
[1, [2, [3]], 4].flatten() // <- [1, 2, 3, 4]
[1, [2, [3]], 4].flatten(2) // <- [1, 2, 3, 4]
[1, [2, [3]], 4].flatten(1) // <- [1, 2, [3], 4]
[1, [2, [3]], 4].flatten(0) // <- [1, [2, [3]], 4]
Adding the depth option to our polyfill wouldn’t be that hard: we pass it down to the recursive flatten calls and ensure that, when the bottom is reached, we stop flattening and recursing.
Array.prototype.flatten = function (depth=Infinity) {
return flatten(this, depth)
}
function flatten (list, depth) {
if (depth === 0) {
return list
}
return list.reduce((accumulator, item) => {
if (Array.isArray(item)) {
accumulator.push(...flatten(item, depth - 1))
} else {
accumulator.push(item)
}
return accumulator
}, [])
}
Alternatively – for Internet points – we could fit the whole of flatten in a single expression.
function flatten (list, depth) {
return depth === 0 ? list : list.reduce((a, b) => (Array.isArray(b) ?
a.push(...flatten(b, depth - 1)) :
a.push(b), a), [])
}
Then there’s .flatMap.
Array.prototype.flatMap
This method is convenient because use cases where it’s appropriate come up often, and at the same time it provides a small performance boost, as we’ll see next.
Taking into account the polyfill we created earlier for Array.prototype.flatten, the .flatMap method can be implemented as shown below. Note how you can provide a mapping function fn and its ctx context as usual, but the flattening is fixed at a depth of 1.
Array.prototype.flatMap = function (fn, ctx) {
return this.map(fn, ctx).flatten(1)
}
Typically, the code shown above is how you would implement .flatMap in user code, but the native .flatMap trades a bit of readability for performance by mapping items directly inside the internal flatten procedure, avoiding the two passes that are necessary if we first .map and then .flatten an Array.
A possible example of using .flatMap can be found below.
[{ x: 1, y: 2 }, { x: 3, y: 4 }, { x: 5, y: 6 }].flatMap(c => [c.x, c.y])
// <- [1, 2, 3, 4, 5, 6]
The above is syntactic sugar for doing .map(c => [c.x, c.y]).flatten(1), while providing a small performance boost by avoiding the aforementioned two passes of first mapping and then flattening.
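To make the equivalence concrete, here’s a sketch using standalone helpers rather than prototype patches (flatMap here is my own name for the helper, not the native method):

```javascript
// Standalone helpers: flatMap is just map followed by a depth-1 flatten.
function flatten (list, depth) {
  return depth === 0 ? list : list.reduce((a, b) => (Array.isArray(b) ?
    a.push(...flatten(b, depth - 1)) : a.push(b), a), [])
}
function flatMap (list, fn) {
  return flatten(list.map(fn), 1)
}

const points = [{ x: 1, y: 2 }, { x: 3, y: 4 }, { x: 5, y: 6 }]
console.log(flatMap(points, c => [c.x, c.y])) // <- [1, 2, 3, 4, 5, 6]
```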
Note that our previous polyfill doesn’t cover the performance boost. Let’s fix that by changing our internal flatten function and adjusting Array.prototype.flatMap accordingly. We’ve added a couple more parameters to flatten that allow each item to be mapped into a different value right before flattening, avoiding the extra loop over the array.
Array.prototype.flatMap = function (fn, ctx) {
return flatten(this, 1, fn, ctx)
}
function flatten (list, depth, mapperFn, mapperCtx) {
if (depth === 0) {
return list
}
return list.reduce((accumulator, item, i) => {
if (mapperFn) {
item = mapperFn.call(mapperCtx || list, item, i, list)
}
if (Array.isArray(item)) {
accumulator.push(...flatten(item, depth - 1))
} else {
accumulator.push(item)
}
return accumulator
}, [])
}
Since the mapperFn and mapperCtx parameters of flatten are entirely optional, we could still use this same internal flatten function to polyfill both .flatten and .flatMap.
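Putting it together, a sketch of that shared setup might look like this, with both prototype methods delegating to the same internal flatten:

```javascript
// One internal flatten backing both prototype methods.
function flatten (list, depth, mapperFn, mapperCtx) {
  if (depth === 0) {
    return list
  }
  return list.reduce((accumulator, item, i) => {
    if (mapperFn) {
      item = mapperFn.call(mapperCtx || list, item, i, list)
    }
    if (Array.isArray(item)) {
      accumulator.push(...flatten(item, depth - 1))
    } else {
      accumulator.push(item)
    }
    return accumulator
  }, [])
}

Array.prototype.flatten = function (depth = Infinity) {
  return flatten(this, depth) // no mapper: plain flattening
}
Array.prototype.flatMap = function (fn, ctx) {
  return flatten(this, 1, fn, ctx) // map and flatten in a single pass
}

console.log([1, [2, [3]], 4].flatten()) // <- [1, 2, 3, 4]
console.log([1, 2, 3].flatMap(x => [x, x * 2])) // <- [1, 2, 2, 4, 3, 6]
```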
Comments (10)
Depth argument does not feel right: if you expect one-dimensional arrays, just use .flatten. If you know the exact structure of a and want a b array flattened to depth c, you can do b = a.flatten(c). The API for .flatMap would become very confusing with a depth larger than 1.

Sorry, ctx…

Now we can re-implement map! :)

I think I figured out how to turn this into an ES3-compatible polyfill with all the special spec-related code-checking, but it definitely is less elegant to use Function#apply than to use the spread operator, and it also relies on earlier polyfills for things like Array.isArray and the spec-related utility toObjectCoercible (easy) and Array#reduce (hard).

Is it coming from RxJS?
This is stupid. Why do we need to standardize so many “basic” methods, which can be written in a few lines of our own code? To make specifications longer and browsers bigger, I get it.
Well, I understand standardizing “indexOf”, “concat” or “sort”, but the methods above are too exotic. Such methods are implemented inside a browser also using JS, so do not expect them to be faster than your own method.
While I agree that flatten and flatMap are getting (at the least) pretty edgy, I could argue that I could probably have written my own version of the C standard library strpbrk as well–but I’m glad I didn’t have to. One half of my brain wants to applaud those who wish to make JavaScript more of a general-purpose language rather than just a web-page language, but at the same time, when I see all the newfangled additions and straw-man proposals for additions to the language, the other half wonders exactly what class of problems people expect to solve with JavaScript over the next five years. If Array.prototype.flatMap ends up getting a lot of use, I’ll see that as a sign that a lot of people need to go back to Data Structures 101 for a refresher (YMMV). (Or maybe that they really wanted to use another language more suited to the task.)
The problem is that Arrays are a native type, so extending them is dangerous for anyone to actually do on their own, lest their prototype extension later turns out to differ from the spec. Once they’re officially extended in the spec though, it opens up a lot of native functionality (not to mention making them Monadic, which is far far from an edge case). Not sure why this strawman wouldn’t also seek to standardize/implement .ap, since it’s basically “for free” at this point.