
How to bundle your library and why

Preface

This article is part 6 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

Publishing formats – do you even need a bundle?

At this point in our setup we deliver our library as separate modules. ES Modules to be exact. Let’s discuss what we achieve with that and what could be missing.

Remember, we are publishing a library to be used within other applications. Depending on your concrete use case the library will be used in web applications in browsers or in Node.js applications on servers or locally.

Web applications (I)

In the case of web applications we can assume that they will get bundled with any of the current solutions, Webpack for example. These bundlers understand ES Module syntax, and since we deliver our code as several modules, the bundler can optimize which code needs to be included and which doesn’t (tree-shaking). In other words, for this use case we already have everything we need. In fact, bundling our modules together into one blob could defeat our goal of letting end users end up with only the code they need: the application’s bundler might no longer be able to differentiate which parts of the library code are actually being used.

Conclusion: No bundle needed.

Node.js applications

What about Node.js? Typically, Node.js applications consist of several independent files; source files and their dependencies (node_modules). The modules will get imported during runtime when they are needed. But does it work with ES Modules? Sort of.

Node.js v12 has experimental support for ES Modules. “Experimental” means we must “expect major changes in the implementation including interoperability support, specifier resolution, and default behavior.” But yes, it works and it will work even better and smoother in future versions.

Since Node.js has to support CommonJS modules for the time being and since the two module types are not 100% compatible, there are a few things we have to respect if we want to support both ways of usage. First of all, things will change. The Node.js team even warns not to “publish any ES module packages intended for use by Node.js until [handling of packages that support CJS and ESM] is resolved.”

That means: if your library is intended only for Node.js (so no browser optimization is necessary), if you don’t want to rely on your library’s users reading your installation notes (they would have to know what to import/require), or if you are not interested in investing in features that are only almost there, please just don’t publish ES Modules. Change the configuration of Babel’s env preset to { modules: 'commonjs' } and ship only CommonJS modules.

But with a bit of work we can make sure everything will be fine. For now the ESM support is behind a flag (--experimental-modules). When the implementation changes, I will update this post as soon as possible.

Node.js uses a combination of declaring a module type inside of package.json and filename extensions. I won’t lay out every detail and combination of these variants but rather show the (in my opinion) most future-proof and easiest approach.

Right now we have created .js files that are in ES Module syntax. Therefore, we will add the type key to our package.json and set it to "module". This is the signal to Node.js (if run with the --experimental-modules command line flag) that it should parse every .js file in this package scope as ES Module:

{
  // ...
  "type": "module",
  // ...
}

Note that you will often come across the advice to use the *.mjs file extension. Don’t do that. *.js is the extension for JavaScript files and probably always will be. Let’s use the default naming for current standards like ESM syntax. If, for whatever reason, you have files inside your package that must use CommonJS syntax, give them another extension: *.cjs. Node.js will know what to do with them.

There are a few caveats:

  1. Using third party dependencies
    1. If the external module is (only) in CommonJS syntax, you can import it only as a default import. Node.js says that will hopefully change in the future, but for now you can’t have named imports on a CommonJS module (see the sketch below this list).
    2. If the external module is published in ESM syntax, check if it follows Node.js’ rules: If there is ESM syntax in a *.js file and there is no "type": "module" in the package.json, the package is broken and you can not use it with ES Modules. (Example: react-lifecycles-compat). Webpack would make it work but not Node.js. An example for a properly configured package is graphql-js. It uses the *.mjs extension for ESM files.
  2. Imports need file extensions. You can import from a package name (import _ from 'lodash') like before but you can not import from a file (or a folder containing an index.(m)js) without the complete path: import x from './otherfile.js' will work but import x from './otherfile' won’t. import y from './that-folder/index.js' will work but import y from './that-folder' won’t.
  3. There is a way around the file extension rule but you have to force your users to do it: They must run their program with a second flag: --es-module-specifier-resolution=node. That will restore the resolution pattern Node.js users know from CommonJS. Unfortunately that is also necessary if you have Babel runtime helpers included by Babel. Babel will inject default imports which is good, but it omits the file extensions. So if your library depends on Babel transforms, you have to tell your users that they will have to use the second flag. (Not too bad because they already know how to pass ESM related flags when they want to opt into ESM.)
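
To illustrate the first caveat, here is a minimal sketch. 'some-cjs-lib' is a hypothetical dependency that is published in CommonJS syntax and exports parse and stringify:

// an ESM file in our package, run with --experimental-modules
import cjsLib from 'some-cjs-lib'       // works: the whole exports object as a default import
const { parse } = cjsLib                // destructure afterwards

// import { parse } from 'some-cjs-lib' // does not work (yet): no named imports from CommonJS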

For all other users that are not so into experimental features, we also publish in CommonJS. To support CommonJS we do something, let’s say, non-canonical in the Node.js world: we deliver a single-file bundle. Normally, people don’t bundle for Node.js because it isn’t necessary. But since we need a second compile one way or the other, it’s the easiest path. Also note that, unlike on the web, we don’t have to care too much about size because everything is installed beforehand and executed locally.

Conclusion: Bundle needed if we want to ship both CommonJS and ESM.

Web applications (II)

There is another use case regarding web applications. Sometimes people want to be able to include a library by dropping a <script> tag into their HTML and refer to the library via a global variable. (There are also other scenarios that may need such a kind of package.) To make that possible without additional setup by the user, all of your library’s code must be bundled together in one file.
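
For example, usage of such a hypothetical UMD build could look like this (the file URL and the global name myLib are whatever you configure in your bundler; this is only a sketch):

<script src="https://unpkg.com/my-lib/dist/my-lib.umd.js"></script>
<script>
  // the bundle registered a global variable; no build step on the consumer's side
  myLib.doSomething()
</script>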

Conclusion: Bundle needed to make usage as easy as possible.

Special “imports”

There is a class of use cases that came up mainly with the rise of Webpack and its rich “loader” landscape, and that is: importing every file type you can imagine into your JavaScript. It probably started with requiring accompanying CSS files into JS components and went on to images and whatnot. If you do something like that in your library, you have to use a bundler. Otherwise the consumers of your library would have to use a bundler themselves, configured in exactly the way that handles all the strange (read: non-JS) imports in your library. Nobody wants to do that.

If you deliver styling alongside your JS code, you should do it as a separate CSS file that ships with the rest of the code. And if you write a whole UI library like Bootstrap, you probably don’t want to ask your users to import hundreds of CSS files but one compiled file. The same goes for other non-JS file types.

Conclusion: Bundle needed.

Ok, ok, now tell me how to do it!

Alright. Now you can decide if you really need to bundle your library. Also, you have an idea of what the bundle should “look” like from outside: For classic usage with Node.js, it should be a big CommonJS module, consumable with require(). For further bundling in web applications it may be better to have a big ES module that is tree-shakable.

And here is the cliffhanger: Each of the common bundling tools will get their own article in this series. This post is already long enough.

Next up: Use Webpack for bundling your library.

Check types and emit type declarations

Preface

This article is part 5 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

Getting the types out of TypeScript

Ok, this is a quick one. When we build our library, we want two things from TypeScript: First we want to know that there are no type errors in our code (or types missing, e.g. from a dependency). Second, since we are publishing a library for other fellow coders to use, not an application, we want to export type declarations. We will start with type checking.

Type-checking

Type-checking can be seen as a form of testing. Take the code and check if certain assertions hold. Therefore, we want to be able to execute it as a separate thing that we can add to our build chain or run it in a pre-commit hook for example. You don’t necessarily want to generate type definition files every time you (or your CI tool) run your tests.

If you want to follow along with my little example library, be sure to check out one of the typescript branches.

The TypeScript Compiler always checks the types of a project it runs on. And it will fail and report errors if there are any. So in principle we could just run tsc to get what we want. Now, to separate creating output files from the pure checking process, we must give tsc a handy option:

tsc --noEmit

Regardless of whether we use Babel or TSC for transpiling, for checking types there is just this one way.

Create type declaration files

This is something pretty library-specific. When you build an application in TypeScript, you only care about correct types and an executable output. But when you provide a library, your users (i.e. other programmers) can directly benefit from the fact that you wrote it in TypeScript. When you provide type declaration files (*.d.ts) the users will get better auto-completion, type-hints and so on when they use your lib.
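
For example, a generated declaration file might look roughly like this (the exported function is purely hypothetical):

// dist/index.d.ts (sketch)
export declare function formatDate(date: Date, pattern?: string): string;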

Maybe you have heard about DefinitelyTyped. Users can get types from there for libraries that don’t ship with their own types. So, in our case we won’t need to do anything with or for DefinitelyTyped. Consumers of our library will have everything they need when we deliver types directly with our code.

Again, because these things are core functionality of TypeScript, we use tsc. But this time the calls are slightly different depending on how we transpile – with Babel or TSC.

With Babel

As you probably remember, to create our output files with Babel, we call the Babel command line interface, babel. To also get declaration files we add a call to tsc:

tsc --declaration --emitDeclarationOnly

The --declaration flag ensures that TSC generates the type declaration files and, since we defined outDir in tsconfig.json, they land in the correct folder, dist/.

The second flag, --emitDeclarationOnly, prevents TSC from outputting transpiled JavaScript files. We use Babel for that.

You may ask yourself why we effectively transpile all of our code twice, once with Babel and once with TSC. It looks like a waste of time if TSC can do both. But I discussed the advantages of Babel before. And having a very fast transpile step separate from a slower declaration-generation step can translate to a much better developer experience. Emitting declarations can happen just once, shortly before publishing; transpiling is something you do all the time.

With TSC

When we use TSC to generate the published library code, we can use it in the same step to spit out the declarations. Instead of just tsc, we call:

tsc --declaration

That is all.

Alias All The Things

To make it easier to use and less confusing to find out what our package can do, we will create NPM scripts for all steps that we define. Then we can glue them together so that for example npm run build will always do everything we want from our build.

In the case of using Babel, in our package.json we make sure that "scripts" contains at least:

{
  ...
  "scripts": {
    "check-types": "tsc --noEmit",
    "emit-declarations": "tsc --declaration --emitDeclarationOnly",
    "transpile": "babel -d dist/ --extensions .ts,.tsx src/",
    "build": "npm run emit-declarations && npm run transpile"
  },
  ...
}

And if you are just using TSC, it looks like this:

{
  ...
  "scripts": {
    "check-types": "tsc --noEmit",
    "build": "tsc --declaration"
  },
  ...
}

Note that we don’t add check-types to build. First of all, building and testing are two very different things that we don’t want to mix. And second, in both cases the types are checked on build anyway because, as I said, that happens every time you call tsc. So even if you are slightly pedantic about type-checking on build, you don’t have to call check-types within the build script.

One great advantage of aliasing every action to an NPM script is that everyone working on your library (including you) can just run npm run and will get a nice overview of which scripts are available and what they do.

That’s it for using types.

Next up: All about bundling.

Building your library: Part 1

Preface

This article is part 4 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

Note: I have promised in part 2 of this series that the next post would be about exporting types. But bear with me. First we will use what we have. Types are coming up next.

Our first build

Up until now we have discussed how to set up Babel or the TypeScript Compiler, respectively, for transpiling our thoughtfully crafted library code. But we didn’t actually use them. After all, the goal for our work here should be a fully functioning build chain that does everything we need for publishing our library.

So let’s start this now. As you can tell from the title of this article, we will refine our build with every item in our tool belt that we installed and configured. While the “normal” posts each focus on one tool for one purpose, these “build” articles will gather all configurations of our various tool combinations that we have at our disposal.

We will leverage NPM scripts to kick off everything we do. For JavaScript/TypeScript projects it’s the natural thing to do: You npm install and npm test and npm start all the time, so we will npm run build also.

For today we will be done with it relatively quickly. We only have the choice between Babel and TSC and transpiling is the only thing that we do when we build.

Build JavaScript with Babel

You define a build script, as you may know, in the package.json file in the root of your project. The relevant keys are scripts and module, and we change the file so that it contains at least the following:

{
  // ...
  "module": "dist/index.js",
  "scripts": {
    "build": "babel -d dist/ src/"
  }
  // ...
}

Using module

The standard key to point to the entry file of a package is main. But we are using module here. This goes back to a proposal by the bundler Rollup. The idea is that the entry point under the main key is valid ES5 only, especially regarding module syntax: the code there should use things like CommonJS, AMD or UMD but not ESModules. While bundlers like Webpack and Rollup can deal with legacy modules, they can’t tree-shake them. (Read the article on Babel again if you forgot why that is.)

Therefore the proposal states that you can provide an entry point under module to indicate that the code there uses modern ESModules. Bundlers will always look first for a module key in your package.json and, if it exists, just use it. Only when they don’t find it will they fall back to main.
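
A package that ships entry points for both worlds might declare them like this (the file names are only an example; at this point in the series we only generate the module entry):

{
  "main": "dist/index.cjs.js",
  "module": "dist/index.esm.js"
}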

Call Babel

The “script” under the name build is just a single call to the Babel command line interface (CLI) with one option, -d dist/, which tells Babel where to put the transpiled files (-d is short for --out-dir). Finally we tell it where to find the source files. When we give it a directory like src, Babel will transpile every file it understands, that is, every file with an extension from the following list: .es6, .js, .es, .jsx, .mjs.

Build TypeScript with Babel

This is almost the same as above. The only difference is the options we pass to the Babel CLI. The relevant parts in package.json look like this:

{
  // ...
  "module": "dist/index.js",
  "scripts": {
    "build": "babel -d dist/ --extensions .ts,.tsx src/"
  }
  // ...
}

As I mentioned above, Babel wouldn’t know on its own that it should transpile the .ts and .tsx files in src. We have to tell it explicitly with the --extensions option.

Build TypeScript with TSC

For using the TypeScript Compiler we configure our build in the package.json like this:

{
  // ...
  "module": "dist/index.js",
  "scripts": {
    "build": "tsc"
  }
  // ...
}

We don’t have to tell TSC where to find the files and where to put the output because it’s all in tsconfig.json. The only thing our build script has to do is call tsc.

Ready to run

And that is it. All you have to do now to get production-ready code is typing

npm run build

And you have your transpiled library code inside the dist directory. It may not seem like much but, believe me, if you were to npm publish that package or install it in one of the other ways besides the registry, it could already be used in an application. And it would not be that bad: it may have no exported types, no tests, no contribution helpers, no semantic versioning and no build automation, BUT it ships modern code that is tree-shakable, which is more than many others can say.

Be sure to check out the example code repository that I set up for this series. There are currently three branches: master, typescript and typescript-tsc. Master reflects my personal choice of tools for JS projects, typescript is my choice in TS projects and the third one is an alternative to the second. The README has a table with branches and their features.

Next up: Type-Checking and providing type declarations (and this time for real ;) )

Compiling modern language features with the TypeScript compiler

Preface

This article is part 3 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

How to use the TypeScript compiler tsc to transpile your code

If you are not interested in the background and reasoning behind the setup, jump directly to the conclusion

In the last article we set up Babel to transpile modern JavaScript or even TypeScript to a form which is understood by our target browsers. But we can also instead use the TypeScript compiler tsc to do that. For illustrating purposes I have rewritten my small example library in TypeScript. Be sure to look at one of the typescript- prefixed branches. The master is still written in JavaScript.

I will assume that you already know how to set up a TypeScript project. How else would you have been able to write your library in TS? Rather, I will focus only on the best possible configuration for transpiling for the purpose of delivering a library.

As you already know, the configuration is done via a tsconfig.json in the root of your project. It should contain the following options, which I will discuss further below:

{
  "include": ["./src/**/*"],
  "compilerOptions": {
    "outDir": "./dist",
    "target": "es2017",
    "module": "esnext",
    "moduleResolution": "node",
    "importHelpers": true
  }
}

include and outDir

These options tell tsc where to find the files to compile and where to put the result. When we discuss how to emit type declaration files along with your code, outDir will be used also for their destination.

Note that these options allow us to just run tsc on the command line without anything else and it will find our files and put the output where it belongs.

Target environment

Remember when we discussed browserslist in the “Babel” article? (If not, check it out here.) We used an array of queries to tell Babel exactly which environments our code should be able to run in. Not so with tsc.

If you are interested, read this intriguing issue in the TypeScript GitHub repository. Maybe some day in the future we will have such a feature in tsc but for now, we have to use “JavaScript versions” as targets.

As you may know, since 2015 every year the TC39 committee ratifies a new version of ECMAScript consisting of all the new features that have reached the “Finished” stage before that ratification. (See The TC39 process.)

Now tsc allows us (only) to specify which version of ECMAScript we are targeting. To reach a more or less similar result as with Babel and my opinionated browserslist config, I decided to go with es2017. I used the ECMAScript compatibility table and checked up to which version it would be “safe” to assume that the last 2 versions of Edge/Chrome/Firefox/Safari/iOS can handle it. Your mileage may vary here! You basically have at least three options:

  • Go with my suggestion and use es2017.
  • Make your own decision based on the compatibility table.
  • Go for the safest option and use es5. This will produce code that can also run in Internet Explorer 11, but it will also be much bigger in size, for all browsers.

Just like with my browserslist config, I will discuss in a future article how to provide more than one bundle: one for modern environments and one for older ones.

Another thing to note here: the target does not directly set which module syntax will be used in the output! You may think it does, because if you don’t explicitly set module (see next section), tsc will choose it depending on your target setting. If your target is es3 or es5, module will implicitly be set to CommonJS. Otherwise it will be set to es6. To make sure you don’t get surprised by what tsc chooses for you, always set module explicitly as described in the following section.

module and moduleResolution

Setting module to "esnext" is roughly the same as the modules: false option of the env preset in our babel.config.js: We make sure that the module syntax of our code stays as ESModules to enable treeshaking.

If we set module: "esnext", we have to also set moduleResolution to "node". The TypeScript compiler has two modes for finding non-relative modules (i.e. import {x} from 'moduleA' as opposed to import {y} from './moduleB'): These modes are called node and classic. The former works similar to the resolution mode of NodeJS (hence the name). The latter does not know about node_modules which is strange and almost never what you want. But tsc enables the classic mode when module is set to "esnext" so you have to explicitly tell it to behave.

In the target section above I mentioned that tsc will set module implicitly to es6 if target is something other than es3 or es5. There is a subtle difference between es6 and esnext. According to the answers in this GitHub issue esnext is meant for all the features that are “on the standard track but not in an official ES spec” (yet). That includes features like dynamic import syntax (import()) which is definitely something you should be able to use because it enables code splitting with Webpack. (Maybe a bit more important for applications than for libraries, but just that you know.)
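
A quick sketch of dynamic import syntax; the imported module path is hypothetical:

// only possible with module set to "esnext"; bundlers can split this into a separate chunk
async function loadHeavyFeature() {
  const { heavyFeature } = await import('./heavy-feature.js')
  return heavyFeature()
}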

importHelpers

You can compare importHelpers to Babel’s transform-runtime plugin: Instead of inlining the same helper functions over and over again and making your library bigger and bigger, tsc now injects imports to tslib which contains all these helpers just like @babel/runtime. But this time we will install the production dependency and not leave it to our users:

npm i tslib

The reason for that is that tsc will not compile without it. importHelpers creates imports in our code and if tsc does not find the module that gets imported it aborts with an error.

Should you use tsc or Babel for transpiling?

This is a bit opinion-based. But I think that you are better off with Babel than with tsc.

TypeScript is great and can have many benefits (even if I personally think JavaScript as a language is more powerful without it and the hassle you get with TypeScript outweighs its benefits). And if you want, you should use it! But let Babel produce the final JavaScript files that you are going to deliver. Babel allows for a better configuration and is highly optimized for exactly this purpose. TypeScript’s aim is to provide type-safety so you should use it (separately) for that. And there is another issue: Polyfills.

With a good Babel setup you get everything you need to run your code in the target environments. Not with tsc! It’s now your task to provide all the polyfills that your code needs and, first of all, to figure out which ones those are. Even if you don’t agree with my opinion about the different use cases of Babel and TypeScript, the polyfill issue alone should be enough to follow me on this.

There is a wonderful blog post about using Babel instead of tsc for transpiling: TypeScript With Babel: A Beautiful Marriage. And it also lists the caveats of using Babel for TS: there are four small things that are possible in TypeScript but are not understood correctly by Babel: namespaces (don’t use them, they are outdated), type casting with angle brackets (use the as syntax instead), const enum (use normal enums by omitting const) and legacy-style import/export syntax (it’s legacy, let it go). I think the only important constraint here is const enum because using standard enums leads to a little more code in the output. But unless you introduce enums with hundreds and hundreds of members, that problem should be negligible.

Also, it’s way faster to just discard all type annotations than to check the types first. This enables, for example, a faster compile cycle in development/watch mode. The example project I use for this series is maybe too small to serve as a good compile-time benchmark. But in another library project of mine, which consists of ~25 source files and several third-party dependencies, Babel is five times faster than tsc. That is annoying enough when you are coding and have to wait after every save to see the results.
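
If you want to take advantage of that speed during development, a watch script is a possible addition to the "scripts" in package.json (shown here for the Babel-with-TypeScript variant; -w is Babel’s watch flag):

{
  "scripts": {
    "watch": "babel -w -d dist/ --extensions .ts,.tsx src/"
  }
}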

Conclusion and final notes for the tsc setup

(If you really want to use tsc for this task (see the last paragraphs above): )

Install tslib:

npm i tslib

Make sure your tsconfig.json contains at least the following options:

{
  "compilerOptions": {
    "outDir": "./dist",            // where tsc should put the transpiled files
    "target": "es2017",            // set of features that we assume our targets can handle themselves
    "module": "esnext",            // emit ESModules to allow treeshaking
    "moduleResolution": "node",    // necessary with 'module: esnext'
    "importHelpers": true          // use tslib for helper deduplication
  },
  "include": ["./src/**/*"]        // which files to compile
}

If you are sure you want or need to support older browsers like Android/Samsung 4.4 or Internet Explorer 11 with only one configuration, replace the es2017 target with es5. In a future article I will discuss how to create and publish more than one package: One as small as possible for more modern targets and one to support older engines with more helper code and therefore bigger size.

And remember: In this article I talked only about using tsc as transpiler. We will of course use it for type-checking, but this is another chapter.

Next up: Type-Checking and providing type declarations

Transpile modern language features with Babel

Preface

This article is part 2 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

Why Babel and how should you use it in a library?

If you are not interested in the background and reasoning behind the setup, jump directly to the conclusion

Babel can transpile JavaScript as well as TypeScript. I would argue that it’s even better to use Babel instead of the TypeScript compiler for compiling the code (down) to compatible JavaScript because it is faster. When Babel compiles TypeScript, it simply discards everything that isn’t JavaScript. Babel does no type checking, which we don’t need at this point anyway.

To use Babel you have to install it first: run npm install -D @babel/core @babel/cli @babel/preset-env. This will install the core files, the preset you are always going to need, and the command line interface so that you can run Babel in your terminal. Additionally, you should install @babel/preset-typescript and/or @babel/preset-react, according to your needs. I will explain in a bit what each of them is used for, but you can guess from their names in which situations you need them.

So, setup time! Babel is configured via a configuration file. (For details and special cases see the documentation.) The project-wide configuration file should be babel.config.js. It will look at least very similar to this one:

module.exports = {
  presets: [
    [
      '@babel/env',
      {
        modules: false,
      }
    ],
    '@babel/preset-typescript',
    '@babel/preset-react'
  ],
  plugins: [
    [
      '@babel/plugin-transform-runtime',
      { corejs: 3 }
    ]
  ],
  env: {
    test: {
      presets: ['@babel/env']
    }
  }
};

Let’s go through it because there are a few assumptions used in this config which we will need for other features in our list.

module.exports = {…}

The file is treated as a CommonJS module and is expected to return a configuration object. It is possible to export a function instead but we’ll stick to the static object here. For the function version look into the docs.

presets

Presets are (sometimes configurable) sets of Babel plugins so that you don’t have to manage yourself which plugins you need. The one you should definitely use is @babel/preset-env. You have already installed it. Under the presets key in the config you list every preset your library is going to use along with any preset configuration options.

In the example config above there are three presets:

  1. env is the mentioned standard one.
  2. typescript is obviously only needed to compile files that contain TypeScript syntax. As already mentioned it works by throwing away anything that isn’t JavaScript. It does not interpret or even check TypeScript. And that’s a Good Thing. We will talk about that point later. If your library is not written in TypeScript, you don’t need this preset. But if you need it, you have to install it of course: npm install -D @babel/preset-typescript.
  3. react is clearly only needed in React projects. It brings plugins for JSX syntax and transforming. If you need it, install it with: npm i -D @babel/preset-react. Note: With the config option pragma (and probably pragmaFrag) you can transpile JSX to other functions than React.createElement. See documentation.

Let us look at the env preset again. Notable is the modules: false option for preset-env. The effect is this: As per default Babel transpiles ESModules (import / export) to CommonJS modules (require() / module.export(s)). With modules set to false Babel will output the transpiled files with their ESModule syntax untouched. The rest of the code will be transformed, just the module related statements stay the same. This has (at least) two benefits:

First, this is a library. If you publish it as separate files, users of your library can import exactly the modules they need. And if they use a bundler that can treeshake (that is, remove unused modules on bundling), they will end up with only the code bits they need from your library. With CommonJS modules that would not be possible and they would have your whole library in their bundle.
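
As a small illustration (the package and function names are made up):

// ESM build: a bundler can keep only what is actually imported
import { formatDate } from 'my-lib'

// CommonJS build: the bundler has to include the whole library object
// const myLib = require('my-lib')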

Furthermore, if you are going to provide your library as a bundle (for example a UMD bundle that one can use via unpkg.com), you can make use of treeshaking and shrink your bundle as much as possible.

There is another, suspiciously absent option for preset-env and that is the targets option. If you omit it, Babel will transpile your code down to ES5. That is most likely not what you want, unless you live in the dark, medieval times of JavaScript (or you know someone who uses IE). Why transpile something (and generate much more code) if the runtime environment can handle your modern code? What you could do is provide said targets key and give it a Browserslist-compatible query (see the Babel documentation), for example something like "last 2 versions" or even "defaults". In that case Babel would use the browserslist tool to find out which features it has to transpile so that the code can run in the environments given in targets.

But we will put this configuration in another place than the babel.config.js file. You see, Babel is not the only tool that can make use of browserslist. And any tool, including Babel, will find the configuration if it’s in the right place. The documentation of browserslist recommends putting it inside package.json, so we will do that. Add something like the following to your library’s package.json:

"browserslist": [
  "last 2 Chrome versions",
  "last 2 Firefox versions",
  "last 2 Edge versions",
  "last 2 Opera versions",
  "last 2 FirefoxAndroid versions",
  "last 2 iOS versions",
  "last 2 Safari versions"
]

I will admit this query is a bit opinionated, maybe not even good for you. You can of course roll your own, or if you are unsure, just go with this one:

"browserslist": "defaults" // alias for "> 0.5%, last 2 versions, Firefox ESR, not dead"; contains ie 11

The reason I propose the query array above is that I want an optimized build for modern browsers. "defaults", "last 2 versions" (without specific browser names) and the like will include browsers like Internet Explorer 11 and Samsung Internet 4. These ancient browsers do not support much even of ES2015. We would end up with a much, much bigger deliverable than modern browsers need. But there is something you can do about it: you can deliver modern code to modern browsers and still support The Ancients™. We will go into further detail in a future section, but as a little cliffhanger: browserslist supports multiple configurations. For now we will target only modern browsers.

plugins

The Babel configuration above defines one extra plugin: plugin-transform-runtime. The main reason to use this is deduplication of helper code. When Babel transpiles your modules, it injects little (or not so little) helper functions. The problem is that it does so in every file where they are needed. The transform-runtime plugin replaces all those injected functions with require statements to the @babel/runtime package. That means in the final application there has to be this runtime package.
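
Roughly, the effect looks like this; _classCallCheck is just one of Babel’s internal helpers, shown for illustration, and the exact import path depends on your module settings:

// without the plugin: the helper is defined inline in every transpiled file that needs it
// with the plugin: it is imported from the shared runtime package instead
import _classCallCheck from '@babel/runtime/helpers/classCallCheck'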

To make that happen you could just add @babel/runtime to the prod dependencies of your library (npm i @babel/runtime). That would definitely work. But here we will add it to the peerDependencies in package.json instead. That way the users of your library have to install it themselves, but on the other hand they have more control over the version (and you don’t have to update the dependency too often). And maybe they have it installed already anyway. So we just push it out of our way and make sure that it is there when needed.

Back to the Babel plugin. To use that plugin you have to install it: npm i -D @babel/plugin-transform-runtime. Now you’re good to go.

Before we go on to the env key, this is the right place to talk about polyfills and how to use them with Babel.

How to use polyfills in the best way possible

It took me a few hours reading and understanding the problem, the current solutions and their weaknesses. If you want to read it up yourself, start at Babel polyfill, go on with Babel transform-runtime and then read core-js@3, babel and a look into the future.

But because I already did, you don’t have to if you don’t want to. Ok, let’s start with the fact that there are two standard ways to get polyfills into your code. Wait, one step back: why polyfills?

If you already know, skip to Import core-js. When Babel transpiles your code according to the target environment you specified, it only changes syntax. Code that the target (the browser) does not understand is changed into (probably longer and more complicated) code that does the same and is understood. But there are things beyond syntax that are possibly not supported: features. Like, for example, Promises. Or certain features of other builtin types like Object.is or Array.from, or whole new types like Map or Set. Therefore we need polyfills that recreate those features for targets that do not support them natively.

Also note that we are talking here only about polyfills for ES-features or some closely related Web Platform features (see the full list here). There are browser features like for instance the global fetch function that need separate polyfills.

Import core-js

Ok, so there is a Babel package called @babel/polyfill that you can import at the entry point of your application and it adds all needed polyfills from a library called core-js as well as a separate runtime needed for async/await and generator functions. But since Babel 7.4.0 this wrapper package is deprecated. Instead you should install and import two separate packages: core-js/stable and regenerator-runtime/runtime.

Then, we can get a nice effect from our env preset from above. We change the configuration to this:

[
  '@babel/env',
  {
    modules: false,
    corejs: 3,
    useBuiltIns: 'usage'
  }
],

This will transform our code so that the import of the whole core-js gets removed and instead Babel injects specific polyfills into each file where they are needed, and only those polyfills that are needed in the target environments we defined via browserslist. So we end up with the bare minimum of additional code.
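
A rough sketch of what useBuiltIns: 'usage' might emit for a file that calls Array.from; the exact injected module names depend on your code and your browserslist targets:

// injected by Babel because Array.from is used and a target lacks it
import "core-js/modules/es.array.from"

const unique = Array.from(new Set([1, 2, 3]))
export default unique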

Two additional notes here: (1) You have to explicitly set corejs to 3. If the key is absent, Babel will use version 2 of core-js and you don’t want that. Much has changed for the better in version 3, especially feature-wise. But bugs have also been fixed and the package size is dramatically smaller. If you want, read it all up here (overview) and here (changelog for version 3.0.0).

And (2), there is another possible value for useBuiltIns and that is entry. This variant will not figure out which features your code actually needs. Instead, it will just add all polyfills that exist for the given target environment. It works by looking for core-js imports in your source (like import 'core-js/stable'), which should only appear once in your codebase, probably in your entry module. Then it replaces this “meta” import with all of the specific imports of polyfills that match your targets. This approach will likely result in a much, much larger package with a lot of unneeded code. So we just use usage. (With corejs@2 there were a few problems with usage that could lead to wrong assumptions about which polyfills you need, so in some cases entry was the safer option. But these problems are apparently fixed with version 3.)

Tell transform-runtime to import core-js

The second way to get the polyfills that your code needs is via the transform-runtime plugin from above. You can configure it to inject not only imports for the Babel helpers but also for the core-js modules that your code needs:

plugins: [
  [
    '@babel/plugin-transform-runtime',
    {
      corejs: 3
    }
  ]
],

This tells the plugin to insert import statements for core-js version 3. I have mentioned the reason for this version above.

If you configure the plugin to use core-js, you have to change the runtime dependency: The peerDependencies should now contain not @babel/runtime but @babel/runtime-corejs3!

Which way should you use?

In general, the combination of a manual import and the env preset is meant for applications, and the way via transform-runtime is meant for libraries. One reason for this is that the first way of using core-js imports polyfills that “pollute” the global namespace. And if your library defines a global Promise, it could interfere with other helper libraries used by your library’s users. The imports that are injected by transform-runtime are contained: they import from core-js-pure, which does not set globals.

On the other hand, using the transform plugin does not account for the environment you are targeting. Probably in the future it could also use the same heuristics as preset-env but at the moment it just adds every polyfill that is theoretically needed by your code. Even if the target browsers would not need them or not all of them. For the development in that direction see the comment from the corejs maintainer and this RFC issue at Babel.

So it looks like you have to choose between a package that adds as little code as possible and one that plays nicely with unknown applications around it. I have played around with the different options a bit, bundled the resulting files with webpack, and this is my result:

You get the smallest bundle with the core-js globals from preset-env. But it’s too dangerous for a library to mess with the global namespace of its users. Besides that, in the (hopefully very near) future the transform-runtime plugin will also use the browserslist target environments, so the size issue is going to go away.

The env key

With env you can add configuration options for specific build environments. When Babel executes, it will look for process.env.BABEL_ENV. If that’s not set, it will look up process.env.NODE_ENV, and if that’s not found either, it will fall back to the string 'development'. After doing this lookup it will check if the config has an env object and if there is a key in that object that matches the previously found env string. If there is such a match, Babel applies the configuration under that env name.
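
In simplified pseudo-JavaScript, the lookup works roughly like this:

// simplified sketch of how Babel determines the active env name
const envName = process.env.BABEL_ENV || process.env.NODE_ENV || 'development'
// if the config has env[envName], that configuration is merged on top of the base config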

We use it, for example, for our test runner Jest. Because Jest cannot use ESModules, we need a Babel config that transpiles our modules to CommonJS modules. So we just add an alternative configuration for preset-env under the env name 'test'. When Jest runs (we will use babel-jest for this; see a later part of this series), it sets process.env.NODE_ENV to 'test'. And so everything will work.

Conclusion and final notes for Babel setup

Install all needed packages:

npm i -D @babel/core @babel/cli @babel/preset-env @babel/plugin-transform-runtime

Add a peerDependency to your package.json that your users should install themselves:

...
"peerDependencies": {
  "@babel/runtime-corejs3": "^7.4.5" // at least version 7.4; your users have to provide it
}
...

Create a babel.config.js that contains at least this:

// babel.config.js

module.exports = {
  presets: [
    [
      '@babel/env', // transpile for targets
      {
        modules: false, // don't transpile module syntax
      }
    ],
  ],
  plugins: [
    [
      '@babel/plugin-transform-runtime', // replace helper code with runtime imports (deduplication)
      { corejs: 3 } // import corejs polyfills exactly where they are needed
    ]
  ],
  env: {
    test: { // extra configuration for process.env.NODE_ENV === 'test'
      presets: ['@babel/env'] // overwrite env-config from above with transpiled module syntax
    }
  }
};

If you write TypeScript, run npm i -D @babel/preset-typescript and add '@babel/preset-typescript' to the presets.

If you write React code (JSX), run npm i -D @babel/preset-react and add '@babel/preset-react' to the presets.

Add a browserslist section in your package.json:

...
"browserslist": [
  "last 2 Chrome versions",
  "last 2 Firefox versions",
  "last 2 Edge versions",
  "last 2 Opera versions",
  "last 2 FirefoxAndroid versions",
  "last 2 iOS versions",
  "last 2 Safari versions"
]
...

If you use another browserslist query, one that includes targets without support for generator functions and/or async/await, there is something you have to tell your users:

Babel’s transform-runtime plugin will import regenerator-runtime. That library depends on a globally available Promise constructor. But Babel will not include a Promise polyfill for regenerator-runtime, probably because it adds polyfills only for things genuinely belonging to your code, not external library code. That means, if your use case meets these conditions, you should mention in your README or installation instructions that the users of your lib have to make sure there is a Promise available in their application.

And that is it for the Babel setup.

Next up: Compiling with the TypeScript compiler

Publish a modern JavaScript (or TypeScript) library

Did you ever put some library code together and then want to publish it as an NPM package, but realize you have no idea what the technique du jour is to do so?

Did you ever wonder “Should I use Webpack or Rollup?”, “What about ES modules?”, “What about any other package format, actually?”, “How to publish Types along with the compiled code?” and so on?

Perfect! You have found the right place. In this series of articles I will try to answer every one of these questions. With example configurations for most of the possible combinations of these tools and wishes.

Technology base

This is the set of tools and their respective version range for which this tutorial is tested:

  • ES2018
  • Webpack >= 4
  • Babel >= 7.4
  • TypeScript >= 3
  • Rollup >= 1
  • React >= 16.8
    ( code aimed at other libraries like Vue or Angular should work the same )

Some or even most of what follows could be applied to older versions of these tools, too. But I will not guarantee or test it.

Creation

The first thing to do before publishing a library is obviously to write one. Let’s say we have already done that. In fact, it’s this one. It consists of several source files and therefore, modules. We have provided our desired functionality, used our favorite, modern JavaScript (or TypeScript) features and crafted it with our beloved editor settings.

What now? What do we want to achieve in this tutorial?

  1. Transpile modern language features so that every browser in one of the last 2 versions can understand our code.
  2. Avoid duplicating compile-stage helpers to keep the library as small as possible.
  3. Ensure code quality with linting and tests.
  4. Bundle our modules into one consumable, installable file.
  5. Provide ES modules to make the library tree-shakable.
  6. Provide typings if we wrote our library in TypeScript.
  7. Improve collaborating with other developers (from our team or, if it is an open source library, from the public).

Wow. That’s a whole lot of things. Let’s see if we can make it.

Note that some of these steps can be done with different tools or maybe differ depending on the code being written in TypeScript or JavaScript. We’ll cover all of that. Well, probably not all of that, but I will try to cover the most common combinations.

The chapters of this series will not only show configurations I think you should use, but I will also explain the reasoning behind them and how they work. If you aren’t interested in this background, there will be a link right at the top of each post, down to the configurations and steps to execute without much around them.

Go!

We will start with the first points on our list above. As new articles arrive, I will add them here as links and I will also try to keep the finished articles updated when the tools they use get new features or change APIs. If you find something that’s not true anymore, please give me a hint.

  1. Transpile modern language features – With Babel.
  2. Compiling modern language features with the TypeScript compiler.
  3. Building your library: Part 1

Oh and one last thing™: I’ll be using npm throughout the series because I like it. If you like yarn better, just exchange the commands.

Resize LVM on LUKS partition without messing everything up

I have run my work computers’ operating systems on a full-disk-encrypted partition since forever. Currently this is Manjaro Linux. When I set up my current machine I created the following partition scheme:

sda                238,5G  disk
├─sda1               260M  part   /boot/efi
├─sda2               128M  part   /boot
└─sda3               237G  part
  └─tank             237G  crypt

Somewhere, I can’t even remember when, I read that 128M for /boot would be sufficient. And it was, for a few years. But kernel images and/or initram disks grew bigger and bigger until I could not upgrade to a newer kernel anymore. The last kernel I ran was Linux 4.16; the files in /boot took around 75M of space, so mhwd-kernel -i linux417 had too little space left on the device.

What I needed to do was to shrink /dev/sda3, move it to the end of the SSD and grow /dev/sda2 as necessary.

But I did not know if this was even possible with my setup. Inside the encrypted partition there is an LVM container with 5 logical volumes, including /. I pushed the task into the future again and again because most of the time I am working on running projects and can not afford to have a non-functioning machine.

But in the end it was relatively easy. I had feared that in the worst case I would have to set up my whole machine from scratch and restore backups for the data and system partitions, which then maybe would have needed endless tweaking until everything ran again. (No, I never had a hard disk failure or similar, so I never actually had to do anything like that.)

So, here are the things I needed to do:

1. Backup

List all logical volumes:

# lvs
  LV      VG    Attr        LSize    Pool  Origin  Data%  Meta%  Move  Log  Cpy%Sync  Convert
  docker  tank  -wi-ao----    5,00g
  home    tank  -wi-ao----  100,00g
  mongo   tank  -wi-ao----    1,00g
  root    tank  -wi-ao----   25,00g
  swap    tank  -wc-ao----   32,00g

For each lv do the following:

# lvcreate -s -n <name>snap /dev/tank/<name>
# dd if=/dev/tank/<name>snap of=/path/to/external/storage/<name>.img

Where <name> must be replaced by the actual names of the lvs. Then I backed up both the /boot and the /boot/efi partitions, also with dd.
Finally I made a backup of the LUKS header for the crypto-partition:

# cryptsetup luksHeaderBackup /dev/sda3 --header-backup-file /path/to/external/storage/luks-header.bkp

2. Boot into a live system from a USB stick and decrypt the device

# cryptsetup open /dev/sda3 tank --type luks

3. Resize the physical volume

Note: I have free space inside my LVM container. As you can see from the output of lvs above, I currently use only 163GB out of roughly 238GB. That means I do not have to resize logical volumes before I resize the containing physical volume. If you use all of the available space for logical volumes, look into lvresize(8) first, for example in the Arch Wiki.

I generously shrank the volume from 238,07G to 236G with:

# pvresize --setphysicalvolumesize 236G /dev/mapper/tank

4. Resize the crypto-device

Find out the current size in sectors (note that my crypto device has the same name as my volume group: tank. That could be different in your setup):

# cryptsetup status tank
...
sector size: 512
size: 499122176
...

In the end I want to add about 1G to the /boot partition. That is 1024 * 1024 * 1024 / 512 = 2097152 sectors.

# cryptsetup -b 497025024 resize tank

5. Resize the GUID partition

You see we go from innermost to outermost: LVM -> crypto -> GUID. I use parted to resize the partition /dev/sda3:

# parted
(parted) unit s
(parted) print
...
Number  Begin     End         Size        Name  Flags
...
 3      3100672s  500115455s  497014784s  TANK  lvm

These numbers were actually different; I am writing this blog post in hindsight. The point is that partition number 3 went all the way to the last sector of the disk and I now must calculate where it should end in the future, because resizepart takes not the future size but the future end sector of the partition as its argument. So I subtract the same sector count as calculated above for cryptsetup (2097152) from the end sector of partition 3 (500115455), which gives 498018303.

(parted) resizepart 3 498018303s

Now we have free space on the SSD after the main partition. But I want to grow partition 2.

6. Reorder partitions and resize partition 2

I did that with GParted instead of a command line tool. There is probably a way to do it with gdisk, but parted has removed its command to move partitions. And because I was in a graphical live system anyway and had also read that you can do it with GParted, I just went for it.
First I closed the crypto device because GParted would not let me move the partition otherwise:

# vgchange -an tank
# cryptsetup close tank

Then I opened GParted and right-clicked on the crypto partition. I chose “Change size|Move” and moved the free space from after the partition to before it. Then I opened the same dialog for the /boot partition and extended it to cover all of the free space. Finally I committed the changes.

Handle lid closing correctly in XFCE power settings

This is mainly just a note for my future self. I always had problems with the power management settings on my laptop, which runs Manjaro Linux (an Arch derivative). Regardless of what I set in the XFCE power settings, the actions that should happen on lid closing didn’t work as expected. I wanted the machine to suspend-to-RAM when I close the lid while the power cable is plugged in, and to suspend-to-disk (hibernate) when it is not plugged in.

At some point I just disabled everything in /etc/systemd/logind.conf (set it to ignore lid actions) and lived with that.

Today, while Googling™, I came across two things. First, a mention of the file ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-power-manager.xml: there, all the settings you can set in the graphical power settings tool are saved as XML. Second, a forum post (https://bbs.archlinux.org/viewtopic.php?pid=1690134#p1690134) pointing out that in this XML file there is a setting you can’t set graphically: “logind-handle-lid-switch”, which is set to true for reasons that are beyond me.

Probably you can do all sorts of things with acpid and/or systemd to control the actions on lid-close and lid-open. But you can also just issue:

xfconf-query -c xfce4-power-manager -p /xfce4-power-manager/logind-handle-lid-switch -s false

on the shell, and then your settings in XFCE’s power settings are used by the system and work. Of course I also set the content of logind.conf back to the defaults.

Fix speed issue when writing to NAS system

I just fixed an issue with my FreeBSD home server. It is set up as a file server for Mac (AFP) and Linux clients (NFS). My local network is Gigabit-based, so the limiting factor on read/write speeds should be the hard disk drives in the server.

The server has a Core i3-6100T CPU @ 3.20GHz, 8GB RAM, a ZFS setup with two mirror vdevs each consisting of two disks connected to the board via SATA3. And of course the onboard Gbit NIC (Realtek).

I knew very well that write speed used to be around 50–60MB/sec, which is what I would expect. But lately it had dropped, amazingly, to ~1MB/sec. And I just couldn’t figure out why. I suspected the cable, AFP, the RAM, anything.

What I didn’t suspect (until today, that is) was the network interface. But I had time today for some googling, and even though I didn’t find the solution directly, I stumbled across something related to the output of ifconfig. So I hacked that into the console and stared at it.

re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
	ether 4c:cc:6a:b3:3c:f5
	hwaddr 4c:cc:6a:b3:3c:f5
	inet6 fd23:16:7:7::1 prefixlen 64
	inet6 fe80::4ecc:6aff:feb3:3cf5%re0 prefixlen 64 scopeid 0x1
	inet 192.168.10.118 netmask 0xffffff00 broadcast 192.168.10.255
	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
	media: Ethernet autoselect (10baseT/UTP <full-duplex>)
	status: active

Do you spot it?

media: Ethernet autoselect (10baseT/UTP <full-duplex>)

Well, that is … unfortunate. The output of ifconfig -m re0 gave me:

supported media:
	media autoselect mediaopt flowcontrol
	media autoselect
	media 1000baseT mediaopt full-duplex,flowcontrol,master
	media 1000baseT mediaopt full-duplex,flowcontrol
	media 1000baseT mediaopt full-duplex,master
	media 1000baseT mediaopt full-duplex
	media 100baseTX mediaopt full-duplex,flowcontrol
	media 100baseTX mediaopt full-duplex
	media 100baseTX
	media 10baseT/UTP mediaopt full-duplex,flowcontrol
	media 10baseT/UTP mediaopt full-duplex
	media 10baseT/UTP
	media none

So I ran sudo ifconfig re0 media 1000baseTX mediaopt full-duplex and it worked. After that I also ran sudo ifconfig re0 media autoselect, which also set the media type to 1000baseT full-duplex. I have no idea why the system got that wrong (or when), but I will monitor what happens after the next reboot. Maybe I have to add some configuration, but maybe it was just a hiccup.

Speeds are up to 60MB/sec again.

Execute Promise-based code in order over an array

The problem

I recently faced a problem: I had a list (an array) of input data and wanted to execute a function for every item in that list.

No problem, you say, take Array.prototype.map, that’s what it’s for. BUT the function in question returns a Promise and I want to be able to only continue in the program flow when all of these Promises are resolved.

No problem, you say, wrap it in Promise.all, that's what it's for. BUT the function in question is very expensive. So expensive that it spawns a child process (the whole thing runs in Node.js on my computer), and that child process uses so much CPU power that my computer comes to a grinding halt when my input list is longer than a few elements.

And that's because, effectively, all the heavy child processes get started nearly in parallel. Strictly speaking they are started in order, but each one does not wait for the previous one to finish.
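
To illustrate the problem, here is a sketch of the naive approach (not taken from the original script; heavyAsyncComputation stands in for the expensive, Promise-returning function that spawns a child process). list.map invokes the function for every element right away, so all child processes end up running at the same time:

// Naive: every call starts immediately, so all child processes compete for the CPU
const naiveMap = list => Promise.all(list.map(item => heavyAsyncComputation(item)))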

The first solution

So what I need is a way to traverse the array, execute the function for the current element, wait until the Promise resolves, and only then move on to the next element and call the function with it. That means map will not work because I have no control over the execution flow. So I will have to build my own map. And while I am at it, I will implement it a bit more nicely, as a stand-alone function that takes the mapper function first and then the data array:

const sequentialMap = fn =>
  function innerSequentialMap([head, ...tail]) {
    // Terminate the recursion: an empty list resolves to an empty result array
    if (!head) {
      return Promise.resolve([])
    }
    // Map the first element, then recurse over the rest of the list and
    // prepend the first result once the tail results are in
    return fn(head).then(headResult =>
      innerSequentialMap(tail).then(tailResult => [headResult, ...tailResult])
    )
  }

So, what does this do? It takes the function fn that should be applied to all values in the array and returns a new function. This new function expects an array as input. You can see that the function is curried: it only ever takes one argument, and the real execution starts once all arguments are provided. That allows us, for example, to "preload" sequentialMap with a mapper function and reuse it on different input data:

// preloading
const mapWithHeavyComputations = sequentialMap(heavyAsyncComputation)

// execution
const result = mapWithHeavyComputations([…])

But in this case the currying enables (or simplifies) another technique: recursion.

We say a function is recursive when it calls itself. Recursion is the functional equivalent of looping in imperative programming. You can refactor one into the other as long as the programming language allows both. Or so I thought.
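
As a simple illustration (this example has nothing to do with the original problem), summing an array can be written either way:

// Looping version
const sumLoop = numbers => {
  let total = 0
  for (const n of numbers) {
    total += n
  }
  return total
}

// Equivalent recursive version
const sumRecursive = ([head, ...tail]) =>
  head === undefined ? 0 : head + sumRecursive(tail)

sumLoop([1, 2, 3]) // 6
sumRecursive([1, 2, 3]) // 6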

I used a recursive function here because I could not think of a way to wait for a Promise to resolve inside a loop. How would I use .then() and jump to the next iteration step within that then?

Anyway, let's go further through the code. In the body of the inner (second) function, I first define a condition to terminate the recursion: I check whether the first element is falsy, and if it is, I just return a Promise that resolves to an empty array. That is because the main path of the function returns its data as an array wrapped in a Promise, so if we return the same type of data when we terminate, everything fits together nicely.

Next, if we don't terminate (which means the first element of the given list is truthy), we apply the mapper function to it. That returns a Promise, and we wait for it to resolve with .then. Once it resolves, the whole thing gets a bit magical, but not too much.

What we do then is to build a nested Promise. Normally, when you work with Promises and want to apply several functions to the inner values you would build a “Promise chain”:

const result = firstPromise
  .then(doSomethingWithIt)
  .then(doSomethingElseAfterThat)

The problem we have here is that in order to build the final result (the mapped array), we need the result from the first resolved Promise and also the result values from all the other Promises, which do not build on each other but are independent.

So we use two features to solve that: nested scope and Promise-flattening (did someone say Monad?).

The nested scope first: when we define a function inside another function, the inner function can access variables that are not defined within itself but in the outer function (the outer or surrounding scope):

function outer(arg1) {
  const outerValue = arg1 + 42

  function inner() {
    return outerValue + 23
  }

  console.log(inner())
}

outer(666) // logs 731

And Promise-flattening means essentially that if you have a Promise of a Promise of a value that is the same as if you just had a Promise of the value.

const p2 = Promise.resolve(Promise.resolve(1))
const p1 = Promise.resolve(1)

p2.then(console.log) // logs 1
p1.then(console.log) // logs 1

To recall, here is what the code we are talking about looks like:

return fn(head).then(headResult =>
  innerSequentialMap(tail).then(tailResult => [headResult, ...tailResult])
)

We keep headResult in scope and then generate the next Promise by calling the inner function recursively again, but with a shorter list that no longer contains the first element. We wait again with .then for its result, and only then do we build our result array.

This is done by spreading tailResult after headResult: we know we get a single value from calling fn(head), but we get a list of values from calling innerSequentialMap(tail). So with the spread operator we get a nice flat array of result values.
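
In isolation, the spread does this:

// One head value plus an array of tail values becomes a single flat array
const headResult = 'a'
const tailResult = ['b', 'c']
const combined = [headResult, ...tailResult] // ['a', 'b', 'c']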

Note that the function inside the first then, which receives headResult as a parameter, immediately returns the next Promise (chain). And that is essentially where we use Promise-flattening: .then itself returns a Promise, and now we are returning another Promise inside of it. But the result looks like an ordinary Promise, with no nesting visible.

The better way

While that works perfectly and my computer now remains usable when I run my script, all these nested thens do not look very nice. We can fix that if we have async functions at our disposal:

const sequentialMap = fn =>
  async function innerSequentialMap([head, ...tail]) {
    if (!head) {
      return Promise.resolve([])
    }
    const headResult = await fn(head)
    const tailResult = await innerSequentialMap(tail)
    return [headResult, ...tailResult]
  }

Yes, that is much better. Now the execution is paused until headResult is there, paused again until tailResult is there, and only then do we build our result array and finish.

The shortest way

Wait. Did I just say I can pause execution with await? Wouldn't that also work within a loop?

const loopVersion = fn =>
  async list => {
    const result = []
    for (const elem of list) {
      result.push(await fn(elem))
    }
    return result
  }

See, this is what happens to people like me who are too deep into functional programming paradigms. Yes, you should generally avoid loops because they are not declarative: you end up telling the machine (and your coworkers) not what you want to happen but how you want it to happen. That is, generally speaking, not good practice. But in this case it is exactly what we wanted: a step-by-step prescription of how to execute our code, in order to optimize for resource usage.
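
To round things off, here is a minimal usage sketch (heavyAsyncComputation again stands in for the real Promise-returning function that spawns a child process; the input values are made up):

// Only one heavyAsyncComputation call is in flight at any time
const mapSequentially = sequentialMap(heavyAsyncComputation)

mapSequentially(['task-1', 'task-2', 'task-3']).then(results => {
  console.log(results) // one result per input, in the original order
})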