tobias-barth.net

Modern web development from Cologne

Bundling your library with Webpack

Preface

This article is part 7 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

If you are not interested in the background and reasoning behind the setup, jump directly to the conclusion.

Intro

In the last post we established in which cases we may need to bundle our library instead of just delivering transpiled files/modules. There are a few tools that help us do so, and we will look at the most important ones one after another.

As promised, I will start with Webpack. Most of you have probably already had contact with Webpack, likely in the context of website or application bundling. Anyway, here is a short intro to what it is and does. It is a very versatile tool that was originally built around the concept of code-splitting. Of course it can do (and does) many more things than that, but that was the initial, essential idea: make it possible, and easy, to split all of your application code into chunks of code that belong together, so that the browser (the user) does not have to download, parse and execute all of the app code before anything works, but instead loads only the amount of code needed at the moment. Webpack is awesome at that.

The thing is, we don’t want to do that. We do not have an application, we have a library. Either there is no need for splitting because our code really does only one thing (even if it is a complex thing), or we provide rather independent code blocks, but then it’s the application’s job to put the right things in the right chunks. We cannot assume anything about the library user’s needs, so they get to decide about splitting.

Then, what can Webpack do for us? It can take all of our carefully crafted modules, walk through their dependency tree and put them all together in one module – a bundle. Plus, it adds a tiny bit of runtime code to make sure everything is consumable as we expect it to be.

Webpack, like all bundlers I can think of right now, can work directly with the source code. It’s not like you have to, say, transpile it first before Webpack starts its thing. But for Webpack to be able to understand your code, and to apply any transformation you may want, you need to use so-called loaders. There is a babel-loader that we can use for transpiling, there are TypeScript loaders, and even things like SVG or CSS loaders which allow us to import things into our JS/TS files that aren’t even related to JavaScript.

This article neither wants to nor can cover all the possibilities of what you can achieve with Webpack. If you want to learn more, consult the official documentation. It’s really good these days. (Back in my time … but anyway.)

Our goal

We have library code, written in plain JavaScript or TypeScript, with no fancy imports. It needs to get transpiled according to our rules and result in one consumable file which people can import into their applications. Also, we want people to be able to just drop it into their HTML in the form of a script tag. That is, we want to get a UMD module.


What are UMD modules?

(If you already know, or if you don’t want to know more than I mentioned in the paragraph before, feel free to skip to Starting with Webpack or even to the Conclusion and final config.)

UMD stands for Universal Module Definition. It combines the module systems Asynchronous Module Definition (AMD) and CommonJS with exposure via a global variable for cases where no module system is in place. You can read the specification and its variants here. Basically, a UMD module wraps the actual library code in a thin detection layer that tries to find out whether it is currently being executed in the context of one of the two mentioned module systems. If it is, it exposes the library within that system (with define or module.exports). If not, it assigns the library’s exports to a global variable.

Starting with Webpack

This will be roughly the same as in the official documentation of Webpack. But I will try to provide the complete configuration including optimizations and comments. Also note that I will omit many possibilities Webpack offers or simplify a few things here and there. That’s because this is not a deep dive into Webpack but a what-you-should-know-when-bundling-a-library piece.

First we install Webpack and its command line interface:

npm install -D webpack webpack-cli

Now we create a file called webpack.config.js within the root directory of our library. Let’s start with the absolute basics:

// webpack.config.js
const path = require('path')

module.exports = {
  entry: './src/index.js', // or './src/index.ts' if TypeScript
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'library-starter.js'
  }
}

With entry we are defining the entry point into our library. Webpack will load this file first and build a tree of dependent modules from that point on. Also, together with a few other options that we will see in a bit, Webpack will expose all exports from that entry module to the outside world – our library’s consumers. The value is, as you can see, a string with a path that is relative to the config file location.

The output key allows us to define what files Webpack should create. The filename prop makes running Webpack result in a bundle file with this name. The path is the folder where that output file will be put. The dist folder we defined here is also Webpack’s default, but you could change it, e.g. to path.resolve(__dirname, 'output') or something completely different. Just make sure to provide an absolute path – it will not get expanded like the entry value.

Problem 1: custom syntax like JSX

When we now run npx webpack on the command line, we expect it to result in a generated dist/library-starter.js file. Instead it fails with an error. In my library-starter example code I use React’s JSX, and as configured now, Webpack refuses to bundle it because it encounters an “unexpected token” when it tries to parse the code. You see, Webpack needs to understand your code. We help it by configuring an appropriate “loader”.

If you use Babel for transpiling, install the Babel loader:

npm install -D babel-loader

The rest of the Babel setup we need is already installed in our project.

If you instead are using TSC you’ll need ts-loader:

npm install -D ts-loader

Note: I know there is also the Awesome TypeScript Loader, but that repository has been archived by the author and has not seen any updates for two years (at the time of writing). Even the author writes in the README: “The world is changing, other solutions are evolving and ATL may work slower for some workloads.” These days ts-loader seems to be faster and is the default choice for most users. More information on “Parallelising Builds” can also be found in the README of ts-loader.

We now add the following to the webpack.config.js file:

// webpack.config.js (Babel)
...
module.exports = {
  ...
  module: {
    rules: [
      {
        test: /\.jsx?$/, // If you are using TypeScript: /\.tsx?$/
        include: path.resolve(__dirname, 'src'),
        use: [
          {
            loader: 'babel-loader',
            options: {
              cacheDirectory: true
            }
          }
        ]
      }
    ]
  }
}

Or:

// webpack.config.js (TSC)
...
module.exports = {
  ...
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        include: path.resolve(__dirname, 'src'),
        use: [
          {
            loader: 'ts-loader',
            options: {
              transpileOnly: true
            }
          }
        ]
      }
    ]
  }
}

Problem 2: Babel’s runtime helpers

In case we are using Babel for transpiling, Webpack now runs into the next error. It tries to resolve the helper and polyfill imports that Babel created for us, but since we only declared them as a peerDependency, we haven’t installed them yet, and so Webpack can’t put them into the bundle.

Bundling helpers?

As you remember, we deliberately defined @babel/runtime-corejs3 as a peer dependency to make sure our delivered library is as small as possible and to allow the user to have ideally only one version of it installed, keeping their application bundle smaller. Now, if we install it ourselves and bundle it with Webpack, all that benefit is gone. Yes, that’s right. We can of course tell Webpack that certain imports should be treated as “external”, and we will in fact do that later on for the “react” dependency that our specific library has. But not for the runtime helpers.

Because remember why we are bundling: one of the reasons was to make it possible for a user to drop the bundle into their page in a script tag. To be able to do that with dependencies declared as external, those also have to be available as separate UMD packages. This is the case for many things like React or Lodash, but not for this runtime package. That means we have to bundle it together with our code. We could build a very sophisticated setup with several Webpack configs, one resulting in a bigger bundle for that specific use case and one for the usual importing in an application. But we have already reached the second goal with our non-bundled build.

If your library uses non-JS/TS imports like CSS or SVGs, then of course you can think about how much it will save the users of your library if you go that extra mile. I am not going to cover that in this article. Maybe at a later point when we have all of our foundations in place.

Bundling helpers!

Install @babel/runtime-corejs3 as a development dependency:

npm install -D @babel/runtime-corejs3

Problem 3: Externals

The next thing we will cover is dependencies that we really don’t want to have in our bundle but instead should be provided by the using environment. The next error Webpack throws is about the 'react' dependency. To solve this we make use of the externals key:

// webpack.config.js
module.exports = {
  ...
  externals: {
    react: {
      root: 'React',
      commonjs: 'react',
      commonjs2: 'react',
      amd: 'react',
    }
  }
}

Because some libraries expose themselves differently depending on the module system being used, we can (and must) declare the name under which the external can be found for each of these systems. root denotes the name of a globally accessible variable. A deeper explanation can be found in the Webpack docs.

Problem 4: File extensions

This is of course only an issue if you are writing TypeScript or if you name files containing JSX *.jsx instead of *.js (which we don’t in the example library). Do you remember when we had to tell the Babel CLI which file extensions it should accept? If not, read again about building our library. Now, Webpack has to find all the files we are trying to import in our code. And like Babel, by default it looks for files with a .js extension. If we want Webpack to find other files as well, we have to give it a list of valid extensions:

// webpack.config.js
module.exports = {
  ...
  resolve: {
    extensions: ['.tsx', '.ts', '.jsx', '.js']
  },
  ...
}

If you are not writing TypeScript the list of extensions can be as short as ['.jsx', '.js']. We didn’t need to specify the *.jsx extension for the normal Babel call because Babel recognizes it already (as opposed to *.tsx for example).

Mode

Now when we run npx webpack our bundle is made without errors and put into /dist. But there is still a warning from Webpack about the fact that we didn’t set the mode option in our config. The mode can be 'development' or 'production' and will default to the latter. (There is also the value 'none' but we will not cover it here.) It’s kind of a shorthand for several settings and activation of plugins. 'development' will keep the output readable (besides other things) while 'production' will compress the code as much as possible.

Since we mainly bundle for users to be able to use the library in script tags, i.e. in addition to providing single module files, we will not bother to differentiate between the two modes. We only use 'production':

// webpack.config.js

module.exports = {
  mode: 'production',
  ...
}

And thus the warning is gone.

Library

Everything is fine now. Or, is it?

# node repl

> const lib = require('./dist/library-starter')
> lib
{}
>

We get only an empty module. That is because Webpack by default creates application bundles that are meant to get executed. If we want to get a module with exports, then we have to tell it explicitly:

// webpack.config.js

module.exports = {
  ...
  output: {
    ...
    library: 'libraryStarter',
  }
}

But this is still not enough because we now get an executable script that creates a global variable named libraryStarter which contains our library. Actually, this would be enough to drop it into a <script> tag. We could use it on a web page like this:

<script src="/library-starter.js"></script>
<script>
  ...
  libraryStarter.usePropsThatChanged...
  ...
</script>

But come on, we wanted a real UMD module. If we do this, we do it right. So back in our webpack.config.js we add two more options:

// webpack.config.js

output: {
  ...
  library: 'libraryStarter',
  libraryTarget: 'umd',
  globalObject: 'this',
}

Let’s run npx webpack again and try it out:

# node repl

> const lib = require('./dist/library-starter.js')
> lib
Object [Module] {
  ExampleComponent: [Getter],
  usePropsThatChanged: [Getter]
}

Finally. If you wonder why we added the globalObject key: it makes sure that, when the bundle file is used without a module system like AMD or CommonJS, it works in the browser as well as in a Node context. The return value of the entry point gets assigned to the current this object, which is window in browsers and the global object in Node.

There are more nuanced ways to set libraryTarget than explained here. If you are interested please read the documentation. But for our purposes this should set a solid base.

Build and expose

We are done with the configuration part. (Unbelievable, right?!) The only thing that’s left is changing the package.json so that the bundle can be imported from outside as an addition to our ES modules and that users can get it automatically from unpkg.com as well.

Right now both the main and the module key point to dist/index.js, but only the latter is correct. As I mentioned before, main should point to an ES5-compatible file, not to an ES module. Now we can safely change it to our new bundle file.

Of course we also have to actually build the bundle. For this we add an npm script named “bundle” to our scripts section and include it in the “build” script.

// package.json
{
  ...
  "main": "dist/library-starter.js",
  "module": "dist/index.js",
  "scripts": {
    ...
    "bundle": "webpack",
    "build": "<our build commands up until now> && npm run bundle"
  }
  ...
}

Conclusion

Install webpack:

npm install -D webpack webpack-cli

Install babel-loader or ts-loader:

npm install -D babel-loader # or ts-loader

If using Babel, install its runtime helpers:

npm install -D @babel/runtime-corejs3

Create a webpack.config.js:

const path = require("path");

module.exports = {
  mode: "production",
  entry: "./src/index.js", // or './src/index.ts' if TypeScript
  output: {
    filename: "library-starter.js", // Desired file name. Same as in package.json's "main" field.
    path: path.resolve(__dirname, "dist"),
    library: "libraryStarter", // Desired name for the global variable when using as a drop-in script-tag.
    libraryTarget: "umd",
    globalObject: "this"
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/, // If you are using TypeScript: /\.tsx?$/
        include: path.resolve(__dirname, "src"),
        use: [
          // If using babel-loader
          {
            loader: "babel-loader",
            options: {
              cacheDirectory: true
            }
          }
          // If _instead_ using ts-loader:
          // {
          //   loader: "ts-loader",
          //   options: {
          //     transpileOnly: true
          //   }
          // }
        ]
      }
    ]
  },
  // If using TypeScript
  resolve: {
    extensions: [".tsx", ".ts", ".jsx", ".js"]
  },
  // If using an external dependency that should not get bundled, e.g. React
  externals: {
    react: {
      root: "React",
      commonjs2: "react",
      commonjs: "react",
      amd: "react"
    }
  }
};

Change the package.json:

// package.json
{
  ...
  "main": "dist/library-starter.js",
  "module": "dist/index.js",
  "scripts": {
    ...
    "bundle": "webpack",
    "build": "<our build commands up until now> && npm run bundle"
  }
  ...
}

That’s all there is to bundling libraries with Webpack.
Next article’s topic: Rollup.

How to bundle your library and why

Preface

This article is part 6 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

Publishing formats – do you even need a bundle?

At this point in our setup we deliver our library as separate modules. ES Modules to be exact. Let’s discuss what we achieve with that and what could be missing.

Remember, we are publishing a library to be used within other applications. Depending on your concrete use case the library will be used in web applications in browsers or in Node.js applications on servers or locally.

Web applications (I)

In the case of web applications we can assume that they will get bundled with one of the current solutions, Webpack for example. These bundlers understand ES Module syntax, and since we deliver our code as several modules, the bundler can optimize which code needs to be included and which doesn’t (tree-shaking). In other words, for this use case we already have everything we need. In fact, bundling our modules together into one blob could defeat our goal of letting end users end up with only the code they need: the application’s bundler might no longer be able to differentiate which parts of the library code are being used.

Conclusion: No bundle needed.

Node.js applications

What about Node.js? Typically, Node.js applications consist of several independent files; source files and their dependencies (node_modules). The modules will get imported during runtime when they are needed. But does it work with ES Modules? Sort of.

Node.js v12 has experimental support for ES Modules. “Experimental” means we must “expect major changes in the implementation including interoperability support, specifier resolution, and default behavior.” But yes, it works and it will work even better and smoother in future versions.

Since Node.js has to support CommonJS modules for the time being, and since the two module types are not 100% compatible, there are a few things we have to respect if we want to support both ways of usage. First of all, things will change. The Node.js team even warns against publishing “any ES module packages intended for use by Node.js until [handling of packages that support CJS and ESM] is resolved.”

That means: if your library is intended only for Node.js (so no browser optimization is necessary), if you don’t want to rely on your library’s users reading your installation notes (they will have to know what to import/require), or if you are not interested in investing in features that are only almost there, please just don’t publish ES Modules. Change the configuration of Babel’s env preset to { modules: 'commonjs' } and ship only CommonJS modules.

But with a bit of work we can make sure everything will be fine. For now the ESM support is behind a flag (--experimental-modules). When the implementation changes, I will update this post as soon as possible.

Node.js uses a combination of declaring a module type inside of package.json and filename extensions. I won’t lay out every detail and combination of these variants but rather show the (in my opinion) most future-proof and easiest approach.

Right now we have created .js files that are in ES Module syntax. Therefore, we will add the type key to our package.json and set it to "module". This is the signal to Node.js (if run with the --experimental-modules command line flag) that it should parse every .js file in this package scope as ES Module:

{
  // ...
  "type": "module",
  // ...
}

Note that you will often come across the advice to use the *.mjs file extension. Don’t do that. *.js is the extension for JavaScript files and probably always will be. Let’s use the default naming for the current standards like ESM syntax. If, for whatever reason, you have files inside your package that must use CommonJS syntax, give them a different extension: *.cjs. Node.js will know what to do with it.

There are a few caveats:

  1. Using third party dependencies
    1. If the external module is (only) in CommonJS syntax, you can import it only as default import. Node.js says that will hopefully change in the future but for now you can’t have named imports on a CommonJS module.
    2. If the external module is published in ESM syntax, check if it follows Node.js’ rules: If there is ESM syntax in a *.js file and there is no "type": "module" in the package.json, the package is broken and you can not use it with ES Modules. (Example: react-lifecycles-compat). Webpack would make it work but not Node.js. An example for a properly configured package is graphql-js. It uses the *.mjs extension for ESM files.
  2. Imports need file extensions. You can import from a package name (import _ from 'lodash') like before but you can not import from a file (or a folder containing an index.(m)js) without the complete path: import x from './otherfile.js' will work but import x from './otherfile' won’t. import y from './that-folder/index.js' will work but import y from './that-folder' won’t.
  3. There is a way around the file extension rule but you have to force your users to do it: They must run their program with a second flag: --es-module-specifier-resolution=node. That will restore the resolution pattern Node.js users know from CommonJS. Unfortunately that is also necessary if you have Babel runtime helpers included by Babel. Babel will inject default imports which is good, but it omits the file extensions. So if your library depends on Babel transforms, you have to tell your users that they will have to use the second flag. (Not too bad because they already know how to pass ESM related flags when they want to opt into ESM.)

For all other users who are not so into experimental features, we also publish in CommonJS. To support CommonJS we do something, let’s say, non-canonical in the Node.js world: we deliver a single-file bundle. Normally people don’t bundle for Node.js because it’s not necessary. But since we need a second compile one way or the other, it’s the easiest path. Also note that, unlike on the web, we don’t have to care too much about size, as everything executes locally and is installed beforehand.

Conclusion: Bundle needed if we want to ship both CommonJS and ESM.

Web applications (II)

There is another use case regarding web applications. Sometimes people want to be able to include a library by dropping a <script> tag into their HTML and refer to the library via a global variable. (There are also other scenarios that may need such a kind of package.) To make that possible without additional setup by the user, all of your library’s code must be bundled together in one file.

Conclusion: Bundle needed to make usage as easy as possible.

Special “imports”

There is a class of use cases that came up mainly with the rise of Webpack and its rich “loader” landscape. And that is: importing every file type you can imagine into your JavaScript. It probably started with requiring accompanying CSS files into JS components and went on to images and what not. If you do something like that in your library, you have to use a bundler. Because otherwise the consumers of your library would have to use a bundler themselves, configured in exactly the way that handles all the strange (read: non-JS) imports in your library. Nobody wants to do that.

If you deliver stylings alongside your JS code, you should do it with a separate CSS file that comes with the rest of the code. And if you write a whole UI library like Bootstrap, you probably don’t want to ask your users to import hundreds of CSS files but one compiled file. The same goes for other non-JS file types.

Conclusion: Bundle needed.

Ok, ok, now tell me how to do it!

Alright. Now you can decide if you really need to bundle your library. Also, you have an idea of what the bundle should “look” like from outside: For classic usage with Node.js, it should be a big CommonJS module, consumable with require(). For further bundling in web applications it may be better to have a big ES module that is tree-shakable.

And here is the cliffhanger: each of the common bundling tools will get its own article in this series. This post is already long enough.

Next up: Use Webpack for bundling your library.

Check types and emit type declarations

Preface

This article is part 5 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

Getting the types out of TypeScript

Ok, this is a quick one. When we build our library, we want two things from TypeScript: First we want to know that there are no type errors in our code (or types missing, e.g. from a dependency). Second, since we are publishing a library for other fellow coders to use, not an application, we want to export type declarations. We will start with type checking.

Type-checking

Type-checking can be seen as a form of testing: take the code and check if certain assertions hold. Therefore, we want to be able to execute it as a separate task that we can add to our build chain or run in a pre-commit hook, for example. You don’t necessarily want to generate type declaration files every time you (or your CI tool) run your tests.
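As an example of the pre-commit idea, here is a sketch using the popular husky package (hypothetical setup: husky must be installed separately, and the exact shape of the configuration depends on the husky version):

```
// package.json (sketch)
{
  ...
  "husky": {
    "hooks": {
      "pre-commit": "tsc --noEmit"
    }
  }
  ...
}
```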

If you want to follow along with my little example library, be sure to check out one of the typescript branches.

The TypeScript Compiler always checks the types of a project it runs on. And it will fail and report errors if there are any. So in principle we could just run tsc to get what we want. Now, to separate creating output files from the pure checking process, we must give tsc a handy option:

tsc --noEmit

Regardless of whether we use Babel or TSC for transpiling, for checking types there is just this one way.

Create type declaration files

This is something pretty library-specific. When you build an application in TypeScript, you only care about correct types and an executable output. But when you provide a library, your users (i.e. other programmers) can directly benefit from the fact that you wrote it in TypeScript. When you provide type declaration files (*.d.ts) the users will get better auto-completion, type-hints and so on when they use your lib.

Maybe you have heard about DefinitelyTyped. Users can get types from there for libraries that don’t ship with their own types. So, in our case we won’t need to do anything with or for DefinitelyTyped. Consumers of our library will have everything they need when we deliver types directly with our code.

Again, because these things are core functionality of TypeScript, we use tsc. But this time the calls are slightly different depending on how we transpile – with Babel or TSC.

With Babel

As you probably remember, to create our output files with Babel, we call the Babel command line interface, babel. To also get declaration files we add a call to tsc:

tsc --declaration --emitDeclarationOnly

The --declaration flag ensures that TSC generates the type declaration files, and since we defined the outDir in tsconfig.json, they land in the correct folder, dist/.

The second flag, --emitDeclarationOnly, prevents TSC from outputting transpiled JavaScript files. We use Babel for that.

You may ask yourself why we effectively transpile all of our code twice, once with Babel and once with TSC. It looks like a waste of time if TSC can do both. But I have discussed the advantages of Babel before. And having a very fast transpile step separate from a slower declaration-generation step can translate to a much better developer experience. Emitting declarations may need to happen only once, shortly before publishing – transpiling is something you do all the time.

With TSC

When we use TSC to generate the published library code, we can use it in the same step to spit out the declarations. Instead of just tsc, we call:

tsc --declaration

That is all.

Alias All The Things

To make our package easier to use and its capabilities easier to discover, we will create NPM scripts for all the steps we define. Then we can glue them together so that, for example, npm run build will always do everything we want from our build.

In the case of using Babel, in our package.json we make sure that "scripts" contains at least:

{
  ...
  "scripts": {
    "check-types": "tsc --noEmit",
    "emit-declarations": "tsc --declaration --emitDeclarationOnly",
    "transpile": "babel -d dist/ --extensions .ts,.tsx src/",
    "build": "npm run emit-declarations && npm run transpile"
  },
  ...
}

And if you are just using TSC, it looks like this:

{
  ...
  "scripts": {
    "check-types": "tsc --noEmit",
    "build": "tsc --declaration"
  },
  ...
}

Note that we don’t add check-types to build. First of all, building and testing are two very different things, and we don’t want to mix them. And second, in both cases the types do get checked on build anyway, because as I said: that happens every time you call tsc. So even if you are slightly pedantic about type-checking on build, you don’t have to call check-types within the build script.

One great advantage of aliasing every action to an NPM script is that everyone working on your library (including you) can just run npm run and get a nice overview of what scripts are available and what they do.

That’s it for using types.

Next up: All about bundling.

Building your library: Part 1

Preface

This article is part 4 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

Note: I have promised in part 3 of this series that the next post would be about exporting types. But bear with me. First we will use what we have. Types are coming up next.

Our first build

Up until now we have discussed how to set up Babel or the TypeScript Compiler, respectively, for transpiling our thoughtfully crafted library code. But we didn’t actually use them. After all, the goal for our work here should be a fully functioning build chain that does everything we need for publishing our library.

So let’s start this now. As you can tell from the title of this article, we will refine our build with every item in our tool belt that we installed and configured. While the “normal” posts each focus on one tool for one purpose, these “build” articles will gather all configurations of our various tool combinations that we have at our disposal.

We will leverage NPM scripts to kick off everything we do. For JavaScript/TypeScript projects it’s the natural thing to do: You npm install and npm test and npm start all the time, so we will npm run build also.

For today we will be done with it relatively quickly. We only have the choice between Babel and TSC and transpiling is the only thing that we do when we build.

Build JavaScript with Babel

You define a build script, as you may know, in the package.json file in the root of your project. The relevant keys are scripts and module, and we change them so that they contain at least the following:

{
  // ...
  "module": "dist/index.js",
  "scripts": {
    "build": "babel -d dist/ src/"
  }
  // ...
}

Using module

The standard key to point to the entry file of a package is main. But we are using module here. This goes back to a proposal by the bundler Rollup. The idea is that the entry point under the main key is valid ES5 only, especially regarding module syntax: the code there should use things like CommonJS, AMD or UMD, but not ES Modules. While bundlers like Webpack and Rollup can deal with legacy modules, they can’t tree-shake them. (Read the article on Babel again if you forgot why that is.)

Therefore the proposal states that you can provide an entry point under module to indicate that the code there is using modern ESModules. The bundlers will always look first if there is a module key in your package.json and in that case just use it. Only when they don’t find it they will fall back to main.
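A minimal sketch of how both entry points can coexist in a package.json (the file names here are illustrative, not prescribed):

```json
{
  "main": "dist/index.cjs.js",
  "module": "dist/index.js"
}
```

A bundler that understands the proposal picks `module` and can tree-shake; older tooling falls back to `main`.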

Call Babel

The “script” under the name of build is just a single call to the Babel command line interface (CLI). The option -d dist/ (short for --out-dir) tells Babel where to put the transpiled files. Finally we tell it where to find the source files. When we give it a directory like src/, Babel will transpile every file it understands, that is, every file with an extension from the following list: .es6, .js, .es, .jsx, .mjs.

Build TypeScript with Babel

This is almost the same as above. The only difference is the options we pass to the Babel CLI. The relevant parts in package.json look like this:

{
  // ...
  "module": "dist/index.js",
  "scripts": {
    "build": "babel -d dist/ --extensions .ts,.tsx src/"
  }
  // ...
}

As I mentioned above, Babel wouldn’t know that it should transpile the .ts and .tsx files in src. We have to explicitly tell it to with the --extensions option.

Build TypeScript with TSC

For using the TypeScript Compiler we configure our build in the package.json like this:

{
  // ...
  "module": "dist/index.js",
  "scripts": {
    "build": "tsc"
  }
  // ...
}

We don’t have to tell TSC where to find files and where to put them because it’s all in the tsconfig.json. The only thing our build script has to do is call tsc.

Ready to run

And that is it. All you have to do now to get production-ready code is typing

npm run build

And you have your transpiled library code inside the dist directory. It may not seem like much, but if you were to npm publish that package, or install it in one of the other ways besides the registry, it could already be used in an application. And it would not be that bad. It may have no exported types, no tests, no contribution helpers, no semantic versioning and no build automation, BUT it ships modern code that is tree-shakable, which is more than many others can say.

Be sure to check out the example code repository that I set up for this series. There are currently three branches: master, typescript and typescript-tsc. Master reflects my personal choice of tools for JS projects, typescript is my choice in TS projects and the third one is an alternative to the second. The README has a table with branches and their features.

Next up: Type-Checking and providing type declarations (and this time for real ;) )

Compiling modern language features with the TypeScript compiler

Preface

This article is part 3 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

How to use the TypeScript compiler tsc to transpile your code

If you are not interested in the background and reasoning behind the setup, jump directly to the conclusion

In the last article we set up Babel to transpile modern JavaScript or even TypeScript to a form which is understood by our target browsers. But we can also use the TypeScript compiler tsc for that instead. For illustration purposes I have rewritten my small example library in TypeScript. Be sure to look at one of the typescript-prefixed branches; the master is still written in JavaScript.

I will assume that you already know how to set up a TypeScript project. How else would you have been able to write your library in TS? Rather, I will focus only on the best possible configuration for transpiling for the purpose of delivering a library.

As you already know, the configuration is done via a tsconfig.json in the root of your project. It should contain the following options, which I will discuss further below:

{
  "include": ["./src/**/*"],
  "compilerOptions": {
    "outDir": "./dist",
    "target": "es2017",
    "module": "esnext",
    "moduleResolution": "node",
    "importHelpers": true
  }
}

include and outDir

These options tell tsc where to find the files to compile and where to put the result. When we discuss how to emit type declaration files along with your code, outDir will be used also for their destination.

Note that these options allow us to just run tsc on the command line without anything else and it will find our files and put the output where it belongs.

Target environment

Remember when we discussed browserslist in the “Babel” article? (If not, check it out here.) We used an array of queries to tell Babel exactly which environments our code should be able to run in. Not so with tsc.

If you are interested, read this intriguing issue in the TypeScript GitHub repository. Maybe some day in the future we will have such a feature in tsc but for now, we have to use “JavaScript versions” as targets.

As you may know, since 2015 every year the TC39 committee ratifies a new version of ECMAScript consisting of all the new features that have reached the “Finished” stage before that ratification. (See The TC39 process.)

Now tsc allows us (only) to specify which version of ECMAScript we are targeting. To reach a more or less similar result as with Babel and my opinionated browserslist config, I decided to go with es2017. I have used the ECMAScript compatibility table and checked up to which version it would be “safe” to assume that the last 2 versions of Edge/Chrome/Firefox/Safari/iOS can handle it. Your mileage may vary here! You basically have at least three options:

  • Go with my suggestion and use es2017.
  • Make your own decision based on the compatibility table.
  • Go for the safest option and use es5. This will produce code that also runs in Internet Explorer 11, but it will be much bigger in size, for all browsers.

Just like with my browserslist config, I will discuss in a future article how to provide more than one bundle: one for modern environments and one for older ones.

Another thing to note here: The target does not directly set which module syntax will be used in the output! You may think it does, because if you don’t explicitly set module (see next section), tsc will choose it depending on your target setting: if your target is es3 or es5, module will be set implicitly to CommonJS, otherwise to es6. To make sure you don’t get surprised by what tsc chooses for you, you should always set module explicitly as described in the following section.

module and moduleResolution

Setting module to "esnext" is roughly the same as the modules: false option of the env preset in our babel.config.js: We make sure that the module syntax of our code stays as ESModules to enable treeshaking.

If we set module: "esnext", we have to also set moduleResolution to "node". The TypeScript compiler has two modes for finding non-relative modules (i.e. import {x} from 'moduleA' as opposed to import {y} from './moduleB'): These modes are called node and classic. The former works similar to the resolution mode of NodeJS (hence the name). The latter does not know about node_modules which is strange and almost never what you want. But tsc enables the classic mode when module is set to "esnext" so you have to explicitly tell it to behave.

In the target section above I mentioned that tsc will set module implicitly to es6 if target is something other than es3 or es5. There is a subtle difference between es6 and esnext. According to the answers in this GitHub issue esnext is meant for all the features that are “on the standard track but not in an official ES spec” (yet). That includes features like dynamic import syntax (import()) which is definitely something you should be able to use because it enables code splitting with Webpack. (Maybe a bit more important for applications than for libraries, but just that you know.)
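To make this concrete, here is a small sketch of dynamic import syntax. With module: "esnext", tsc leaves the import() call untouched, so a bundler can split the imported module into a separate chunk. (Loading Node’s built-in path module here is just a stand-in for some heavy dependency; the function name is made up.)

```javascript
// Sketch: lazily load a module only when it is first needed.
// tsc with module: "esnext" keeps this import() call as-is, which
// lets bundlers like Webpack create a separate chunk for it.
async function lazyJoin(...parts) {
  const path = await import('path'); // stand-in for a heavy module
  return path.posix.join(...parts);
}

lazyJoin('a', 'b').then((p) => console.log(p)); // prints "a/b"
```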

importHelpers

You can compare importHelpers to Babel’s transform-runtime plugin: Instead of inlining the same helper functions over and over again and making your library bigger and bigger, tsc now injects imports to tslib which contains all these helpers just like @babel/runtime. But this time we will install the production dependency and not leave it to our users:

npm i tslib

The reason for that is that tsc will not compile without it. importHelpers creates imports in our code and if tsc does not find the module that gets imported it aborts with an error.
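As an illustration, consider a class hierarchy like the following (a hypothetical example). When compiled with target "es5", every file containing an extends clause needs an __extends helper. Without importHelpers, tsc inlines that helper into each such output file; with importHelpers, each file just gets a single import { __extends } from "tslib" instead.

```javascript
// Hypothetical source. With target "es5" and without importHelpers,
// this file would receive its own inlined __extends helper; with
// importHelpers, tsc emits an import from "tslib" instead,
// deduplicating the helper across the whole library.
class Shape {
  constructor(name) { this.name = name; }
}

class Circle extends Shape {
  constructor(radius) {
    super('circle');
    this.radius = radius;
  }
  area() { return Math.PI * this.radius ** 2; }
}

console.log(new Circle(2).name); // prints "circle"
```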

Should you use tsc or Babel for transpiling?

This is a bit opinion-based. But I think that you are better off with Babel than with tsc.

TypeScript is great and can have many benefits (even if I personally think JavaScript as a language is more powerful without it and the hassle you get with TypeScript outweighs its benefits). And if you want, you should use it! But let Babel produce the final JavaScript files that you are going to deliver. Babel allows for a better configuration and is highly optimized for exactly this purpose. TypeScript’s aim is to provide type-safety so you should use it (separately) for that. And there is another issue: Polyfills.

With a good Babel setup you get everything you need for running your code in the target environments. Not with tsc! It’s now your task to provide all the polyfills that your code needs, and first, to figure out which ones those are. Even if you don’t agree with my opinion about the different use cases of Babel and TypeScript, the polyfill issue alone should be enough to follow me on this.

There is a wonderful blog post about using Babel instead of tsc for transpiling: TypeScript With Babel: A Beautiful Marriage. And it lists also the caveats of using Babel for TS: There are four small things that are possible in TypeScript but are not understood correctly by Babel: Namespaces (Don’t use them. They are outdated.), type casting with angle brackets (Use as syntax instead.), const enum (Use normal enums by omitting const.) and legacy style import/export syntax (It’s legacy — let it go). I think the only important constraint here is the const enum because it leads to a little bit more code in the output if you use standard enums. But unless you introduce enums with hundreds and hundreds of members, that problem should be negligible.
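For example, the const enum limitation can be sidestepped by using a regular enum (a sketch; the enum name is made up):

```typescript
// A regular enum compiles to a small runtime lookup object, which is
// why Babel can handle it: no type information is needed. A `const
// enum` would instead be inlined at its use sites by tsc, which Babel
// cannot do because it transpiles file by file.
enum Direction {
  Up,
  Down,
}

console.log(Direction.Up);  // prints 0
console.log(Direction[0]);  // prints "Up" (reverse mapping)
```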

Also, it’s way faster to just discard all type annotations than to check the types first. This enables, for example, a faster compile cycle in development/watch mode. The example project that I use for this series is maybe not big enough to serve as a good compile-time benchmark. But in another library project of mine, which consists of ~25 source files and several third-party dependencies, Babel is five times faster than tsc. That is annoying enough when you are coding and have to wait after every save to see the results.

Conclusion and final notes for the tsc setup

(This applies if you really want to use tsc for this task; see the last paragraphs above.)

Install tslib:

npm i tslib

Make sure your tsconfig.json contains at least the following options:

{
  "compilerOptions": {
    "outDir": "./dist", // where tsc should put the transpiled files
    "target": "es2017", // set of features that we assume our targets can handle themselves
    "module": "esnext", // emit ESModules to allow treeshaking
    "moduleResolution": "node", // necessary with 'module: esnext'
    "importHelpers": true // use tslib for helper deduplication
  },
  "include": ["./src/**/*"] // which files to compile
}

If you are sure you want or need to support older browsers like Android/Samsung 4.4 or Internet Explorer 11 with only one configuration, replace the es2017 target with es5. In a future article I will discuss how to create and publish more than one package: One as small as possible for more modern targets and one to support older engines with more helper code and therefore bigger size.

And remember: In this article I talked only about using tsc as transpiler. We will of course use it for type-checking, but this is another chapter.

Next up: Type-Checking and providing type declarations

Transpile modern language features with Babel

Preface

This article is part 2 of the series “Publish a modern JavaScript (or TypeScript) library”. Check out the motivation and links to other parts in the introduction.

Why Babel and how should you use it in a library?

If you are not interested in the background and reasoning behind the setup, jump directly to the conclusion

Babel can transpile JavaScript as well as TypeScript. I would argue that it’s even better to use Babel instead of the TypeScript compiler for compiling the code (down) to compatible JavaScript because it is faster. What Babel does when it compiles TypeScript is simply discard everything that isn’t JavaScript. Babel does no type checking, which we don’t need at this point.

To use Babel you have to install it first: Run npm install -D @babel/core @babel/cli @babel/preset-env. This will install the core files, the preset you are going to need always and the command line interface so that you can run Babel in your terminal. Additionally, you should install @babel/preset-typescript and/or @babel/preset-react, according to your needs. I will explain in a bit what each of them is used for, but you can tell from their names in which situations you need them.

So, setup time! Babel is configured via a configuration file. (For details and special cases see the documentation.) The project-wide configuration file should be babel.config.js. It looks at least very similar to this one:

module.exports = {
  presets: [
    [
      '@babel/env',
      {
        modules: false,
      }
    ],
    '@babel/preset-typescript',
    '@babel/preset-react'
  ],
  plugins: [
    [
      '@babel/plugin-transform-runtime',
      { corejs: 3 }
    ]
  ],
  env: {
    test: {
      presets: ['@babel/env']
    }
  }
};

Let’s go through it because there are a few assumptions used in this config which we will need for other features in our list.

module.exports = {…}

The file is treated as a CommonJS module and is expected to return a configuration object. It is possible to export a function instead but we’ll stick to the static object here. For the function version look into the docs.

presets

Presets are (sometimes configurable) sets of Babel plugins so that you don’t have to manage yourself which plugins you need. The one you should definitely use is @babel/preset-env. You have already installed it. Under the presets key in the config you list every preset your library is going to use along with any preset configuration options.

In the example config above there are three presets:

  1. env is the mentioned standard one.
  2. typescript is obviously only needed to compile files that contain TypeScript syntax. As already mentioned it works by throwing away anything that isn’t JavaScript. It does not interpret or even check TypeScript. And that’s a Good Thing. We will talk about that point later. If your library is not written in TypeScript, you don’t need this preset. But if you need it, you have to install it of course: npm install -D @babel/preset-typescript.
  3. react is clearly only needed in React projects. It brings plugins for JSX syntax and transforming. If you need it, install it with: npm i -D @babel/preset-react. Note: With the config option pragma (and probably pragmaFrag) you can transpile JSX to other functions than React.createElement. See documentation.

Let us look at the env preset again. Notable is the modules: false option for preset-env. The effect is this: As per default Babel transpiles ESModules (import / export) to CommonJS modules (require() / module.export(s)). With modules set to false Babel will output the transpiled files with their ESModule syntax untouched. The rest of the code will be transformed, just the module related statements stay the same. This has (at least) two benefits:

First, this is a library. If you publish it as separate files, users of your library can import exactly the modules they need. And if they use a bundler that has the ability to treeshake (that is: to remove unused modules on bundling), they will end up with only the code bits they need from your library. With CommonJS modules that would not be possible and they would have your whole library in their bundle.

Furthermore, if you are going to provide your library as a bundle (for example a UMD bundle that one can use via unpkg.com), you can make use of treeshaking and shrink your bundle as much as possible.

There is another, suspiciously absent option for preset-env and that is the targets option. If you omit it, Babel will transpile your code down to ES5. That is most likely not what you want, unless you live in the dark, medieval times of JavaScript (or you know someone who uses IE). Why transpile something (and generate much more code) if the runtime environment can handle your modern code? What you could do is provide said targets key and give it a Browserslist-compatible query (see the Babel documentation), for example something like "last 2 versions" or even "defaults". In that case Babel would use the browserslist tool to find out which features it has to transpile so the code can run in the environments given with targets.

But we will use another place to put this configuration than the babel.config.js file. You see, Babel is not the only tool that can make use of browserslist. But any tool, including Babel, will find the configuration if it’s in the right place. The documentation of browserslist recommends to put it inside package.json so we will do that. Add something like the following to your library’s package.json:

"browserslist": [
  "last 2 Chrome versions",
  "last 2 Firefox versions",
  "last 2 Edge versions",
  "last 2 Opera versions",
  "last 2 FirefoxAndroid versions",
  "last 2 iOS versions",
  "last 2 Safari versions"
]

I will admit this query is a bit opinionated, maybe not even good for you. You can of course roll your own, or if you are unsure, just go with this one:

"browserslist": "defaults" // alias for "> 0.5%, last 2 versions, Firefox ESR, not dead"; contains ie 11

The reason I propose the query array above is that I want an optimized build for modern browsers. "defaults", "last 2 versions" (without specific browser names) and the like will include browsers like Internet Explorer 11 and Samsung Internet 4. These ancient browsers do not even support much of ES2015. We would end up with a much, much bigger deliverable than modern browsers need. But there is something you can do about it: you can deliver modern code to modern browsers and still support The Ancients™. We will go into further details in a future section, but as a little cliffhanger: browserslist supports multiple configurations. For now we will target only modern browsers.
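As a small preview (a sketch; the environment names "modern" and "legacy" are arbitrary), browserslist lets you define several named environments in package.json:

```json
"browserslist": {
  "modern": [
    "last 2 Chrome versions",
    "last 2 Firefox versions"
  ],
  "legacy": [
    "defaults"
  ]
}
```

A build script could then select one of them, e.g. via the BROWSERSLIST_ENV environment variable, to produce a modern and a legacy build from the same source.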

plugins

The Babel configuration above defines one extra plugin: plugin-transform-runtime. The main reason to use this is deduplication of helper code. When Babel transpiles your modules, it injects little (or not so little) helper functions. The problem is that it does so in every file where they are needed. The transform-runtime plugin replaces all those injected functions with require statements to the @babel/runtime package. That means in the final application there has to be this runtime package.

To make that happen you could just add @babel/runtime to the prod dependencies of your library (npm i @babel/runtime). That would definitely work. But here we will add it to the peerDependencies in package.json. That way the user of your library has to install it themselves but on the other hand, they have more control over the version (and you don’t have to update the dependency too often). And maybe they have it installed already anyway. So we just push it out of our way and just make sure that it is there when needed.

Back to the Babel plugin. To use that plugin you have to install it: npm i -D @babel/plugin-transform-runtime. Now you’re good to go.

Before we go on to the env key, this is the right place to talk about polyfills and how to use them with Babel.

How to use polyfills in the best way possible

It took me a few hours reading and understanding the problem, the current solutions and their weaknesses. If you want to read it up yourself, start at Babel polyfill, go on with Babel transform-runtime and then read core-js@3, babel and a look into the future.

But because I already did, you don’t have to if you don’t want to. Ok, let’s start with the fact that there are two standard ways to get polyfills into your code. Wait, one step back: Why polyfills?

If you already know, skip to Import core-js. When Babel transpiles your code according to the target environment that you specified, it just changes syntax. Code that the target (the browser) does not understand is changed to (probably longer and more complicated) code that does the same and is understood. But there are things beyond syntax that are possibly not supported: features. Like for example Promises. Or certain features of other builtin types like Object.is or Array.from or whole new types like Map or Set. Therefore we need polyfills that recreate those features for targets that do not support them natively.

Also note that we are talking here only about polyfills for ES-features or some closely related Web Platform features (see the full list here). There are browser features like for instance the global fetch function that need separate polyfills.
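To make the idea concrete, here is a simplified sketch of what a polyfill for Object.is looks like (core-js ships a spec-compliant version; this hand-rolled one is only for illustration):

```javascript
// Simplified polyfill sketch: define Object.is only if the runtime
// doesn't already provide it. The two special cases below are exactly
// what the === operator gets "wrong".
if (typeof Object.is !== 'function') {
  Object.defineProperty(Object, 'is', {
    configurable: true,
    writable: true,
    value: function is(x, y) {
      if (x === y) {
        // distinguish +0 from -0
        return x !== 0 || 1 / x === 1 / y;
      }
      // NaN is the only value that is not === to itself
      return x !== x && y !== y;
    },
  });
}

console.log(Object.is(NaN, NaN)); // prints true
console.log(Object.is(0, -0));    // prints false
```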

Import core-js

Ok, so there is a Babel package called @babel/polyfill that you can import at the entry point of your application and it adds all needed polyfills from a library called core-js as well as a separate runtime needed for async/await and generator functions. But since Babel 7.4.0 this wrapper package is deprecated. Instead you should install and import two separate packages: core-js/stable and regenerator-runtime/runtime.

Then, we can get a nice effect from our env preset from above. We change the configuration to this:

[
  '@babel/env',
  {
    modules: false,
    corejs: 3,
    useBuiltIns: 'usage'
  }
],

This will transform our code so that the import of the whole core-js gets removed and instead Babel injects specific polyfills in each file where they are needed. And only those polyfills that are needed in the target environment which we have defined via browserslist. So we end up with the bare minimum of additional code.

Two additional notes here: (1) You have to explicitly set corejs to 3. If the key is absent, Babel will use version 2 of corejs and you don’t want that. Much has changed for the better in version 3, especially feature-wise. But also bugs have been fixed and the package size is dramatically smaller. If you want, read it all up here (overview) and here (changelog for version 3.0.0).

And (2), there is another possible value for useBuiltIns and that is entry. This variant will not figure out which features your code actually needs. Instead, it will just add all polyfills that exist for the given target environment. It works by looking for core-js imports in your source (like import 'core-js/stable') which should appear only once in your codebase, probably in your entry module. Then it replaces this “meta” import with all of the specific imports of polyfills that match your targets. This approach will likely result in a much larger package with a lot of unneeded code, so we just use usage. (With corejs@2 there were a few problems with usage that could lead to wrong assumptions about which polyfills you need, so in some cases entry was the safer option. But these problems are apparently fixed with version 3.)

Tell transform-runtime to import core-js

The second way to get the polyfills that your code needs is via the transform-runtime plugin from above. You can configure it to inject not only imports for the Babel helpers but also for the core-js modules that your code needs:

plugins: [
  [
    '@babel/plugin-transform-runtime',
    {
      corejs: 3
    }
  ]
],

This tells the plugin to insert import statements to core-js version 3. I have mentioned the reason for this version above.

If you configure the plugin to use core-js, you have to change the runtime dependency: The peerDependencies should now contain not @babel/runtime but @babel/runtime-corejs3!

Which way should you use?

In general, the combination of manual import and the env preset is meant for applications and the way with transform-runtime is meant for libraries. One reason for this is that the first way of using core-js imports polyfills that “pollute” the global namespace. And if your library defines a global Promise, it could interfere with other helper libraries used by your library’s users. The imports that are injected by transform-runtime are contained. They import from core-js-pure which does not set globals.

On the other hand, using the transform plugin does not account for the environment you are targeting. Probably in the future it could also use the same heuristics as preset-env but at the moment it just adds every polyfill that is theoretically needed by your code. Even if the target browsers would not need them or not all of them. For the development in that direction see the comment from the corejs maintainer and this RFC issue at Babel.

So it looks like you have to choose between a package that adds as little code as possible and one that plays nicely with unknown applications around it. I have played around with the different options a bit, bundled the resulting files with Webpack, and this is my result:

You get the smallest bundle with the core-js globals from preset-env. But it’s too dangerous for a library to mess with the global namespace of its users. Besides that, in the (hopefully very near) future the transform-runtime plugin will also use the browserslist target environments. So the size issue is going to go away.

The env key

With env you can add configuration options for specific build environments. When Babel executes, it will look for process.env.BABEL_ENV. If that’s not set, it will look up process.env.NODE_ENV, and if that’s not found either, it will fall back to the string 'development'. After doing this lookup it checks if the config has an env object and if there is a key in that object that matches the previously found env string. If there is such a match, Babel applies the configuration under that env name.
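The lookup order can be sketched like this (a simplified model, not Babel’s actual source; the function name is made up):

```javascript
// Simplified model of how Babel picks the active env name:
// BABEL_ENV wins, then NODE_ENV, then the default 'development'.
function resolveBabelEnv(env = process.env) {
  return env.BABEL_ENV || env.NODE_ENV || 'development';
}

console.log(resolveBabelEnv({}));                   // prints "development"
console.log(resolveBabelEnv({ NODE_ENV: 'test' })); // prints "test"
console.log(resolveBabelEnv({ BABEL_ENV: 'production', NODE_ENV: 'test' })); // prints "production"
```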

We use it for example for our test runner Jest. Because Jest can not use ESModules we need a Babel config that transpiles our modules to CommonJS modules. So we just add an alternative configuration for preset-env under the env name 'test'. When Jest runs (We will use babel-jest for this. See in a later part of this series.) it sets process.env.NODE_ENV to 'test'. And so everything will work.

Conclusion and final notes for Babel setup

Install all needed packages:

npm i -D @babel/core @babel/cli @babel/preset-env @babel/plugin-transform-runtime

Add a peerDependency to your package.json that your users should install themselves:

...
"peerDependencies": {
  "@babel/runtime-corejs3": "^7.4.5" // at least version 7.4; your users have to provide it
}
...

Create a babel.config.js that contains at least this:

// babel.config.js

module.exports = {
  presets: [
    [
      '@babel/env', // transpile for targets
      {
        modules: false, // don't transpile module syntax
      }
    ],
  ],
  plugins: [
    [
      '@babel/plugin-transform-runtime', // replace helper code with runtime imports (deduplication)
      { corejs: 3 } // import corejs polyfills exactly where they are needed
    ]
  ],
  env: {
    test: { // extra configuration for process.env.NODE_ENV === 'test'
      presets: ['@babel/env'] // overwrite env config from above with transpiled module syntax
    }
  }
};

If you write TypeScript, run npm i -D @babel/preset-typescript and add '@babel/preset-typescript' to the presets.

If you write React code (JSX), run npm i -D @babel/preset-react and add '@babel/preset-react' to the presets.

Add a browserslist section in your package.json:

...
"browserslist": [
  "last 2 Chrome versions",
  "last 2 Firefox versions",
  "last 2 Edge versions",
  "last 2 Opera versions",
  "last 2 FirefoxAndroid versions",
  "last 2 iOS versions",
  "last 2 Safari versions"
]
...

If you use another browserslist query that includes targets without support for generator functions and/or async/await, there is something you have to tell your users:

Babel’s transform-runtime plugin will import regenerator-runtime. This library depends on a globally available Promise constructor, but Babel will not include a Promise polyfill for regenerator-runtime, probably because it adds polyfills only for things genuinely belonging to your code, not external library code. That means, if your use case meets these conditions, you should mention in your README or installation instructions that the users of your lib have to make sure there is a Promise available in their application.

And that is it for the Babel setup.

Next up: Compiling with the TypeScript compiler

Publish a modern JavaScript (or TypeScript) library

Did you ever write some library code together and then wanted to publish it as an NPM package but realized you have no idea what is the technique du jour to do so?

Did you ever wonder “Should I use Webpack or Rollup?”, “What about ES modules?”, “What about any other package format, actually?”, “How to publish Types along with the compiled code?” and so on?

Perfect! You have found the right place. In this series of articles I will try to answer every one of these questions. With example configurations for most of the possible combinations of these tools and wishes.

Technology base

This is the set of tools and their respective version range for which this tutorial is tested:

  • ES2018
  • Webpack >= 4
  • Babel >= 7.4
  • TypeScript >= 3
  • Rollup >= 1
  • React >= 16.8
    ( code aimed at other libraries like Vue or Angular should work the same )

Some or even most of what follows may apply to older versions of these tools, too. But I will not guarantee or test it.

Creation

The first thing to do before publishing a library is obviously to write one. Let’s say we have already done that. In fact, it’s this one. It consists of several source files and therefore, modules. We have provided our desired functionality, used our favorite, modern JavaScript (or TypeScript) features and crafted it with our beloved editor settings.

What now? What do we want to achieve in this tutorial?

  1. Transpile modern language features so that every browser in one of the last 2 versions can understand our code.
  2. Avoid duplicating compile-stage helpers to keep the library as small as possible.
  3. Ensure code quality with linting and tests.
  4. Bundle our modules into one consumable, installable file.
  5. Provide ES modules to make the library tree-shakable.
  6. Provide typings if we wrote our library in TypeScript.
  7. Improve collaborating with other developers (from our team or, if it is an open source library, from the public).

Wow. That’s a whole lot of things. Let’s see if we can make it.

Note that some of these steps can be done with different tools or maybe differ depending on the code being written in TypeScript or JavaScript. We’ll cover all of that. Well, probably not all of that, but I will try to cover the most common combinations.

The chapters of this series will not only show configurations I think you should use, but will also explain the reasoning behind them and how they work. If you aren’t interested in these backgrounds, there will be a link right at the top of each post down to the configurations and steps to execute without much around.

Go!

We will start with the first points on our list above. As new articles arrive, I will add them here as links and I will also try to keep the finished articles updated when the tools they use get new features or change APIs. If you find something that’s not true anymore, please give me a hint.

  1. Transpile modern language features – With Babel.
  2. Compiling modern language features with the TypeScript compiler.
  3. Building your library: Part 1

Oh and one last thing™: I’ll be using npm throughout the series because I like it. If you like yarn better, just exchange the commands.

Resize LVM on LUKS partition without messing everything up

Since forever I have been running my work computers on a full-disk-encrypted partition. Currently this is Manjaro Linux. When I set up my current machine I created the following partition scheme:

sda              238,5G  disk
├─sda1    260M   part    /boot/efi
├─sda2    128M   part    /boot
└─sda3    237G   part
  └─tank  237G   crypt

Somewhere, I can’t even remember when, I read that 128M for /boot would be sufficient. And it was, for a few years. But kernel images and/or initramfs images grew bigger and bigger until I could not upgrade to a newer kernel anymore. The last kernel I ran was Linux 4.16; the files in /boot took around 75M of space, so mhwd-kernel -i linux417 failed because there was too little space left on the device.

What I needed to do was to shrink /dev/sda3, move it to the end of the SSD and grow /dev/sda2 as necessary.

But I did not know if this was even possible with my setup. Inside the encrypted partition there is an LVM container with 5 logical volumes, including /. I pushed the task into the future again and again because most of the time I am working on running projects and can not afford to have a non-functioning machine.

But in the end it was relatively easy. I had feared that in the worst case I would have to re-setup my whole machine and restore backups for the data and system partitions. Which then maybe would need endless tweaking until it runs again (No, I never had a hard disk failure or similar, so I never had to actually do anything like that).

So, here are the things I needed to do:

1. Backup

List all logical volumes:

# lvs
  LV     VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker tank -wi-ao----   5,00g
  home   tank -wi-ao---- 100,00g
  mongo  tank -wi-ao----   1,00g
  root   tank -wi-ao----  25,00g
  swap   tank -wc-ao----  32,00g

For each lv do the following:

# lvcreate -s -n <name>snap /dev/tank/<name>
# dd if=/dev/tank/<name>snap of=/path/to/external/storage/<name>.img

Where <name> must be replaced by the actual names of the lvs. Then I backed up both the /boot and the /boot/efi partitions, also with dd.
Finally I made a backup of the LUKS header for the crypto-partition:

# cryptsetup luksHeaderBackup /dev/sda3 --header-backup-file /path/to/external/storage/luks-header.bkp
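For five logical volumes, the snapshot-and-image steps are easy to script. Here is a dry-run sketch that only prints the commands; the names and the storage path are the ones from above. Drop the echo once the output looks right. Note that classic (non-thin) snapshots also need copy-on-write space reserved via -L, which I have added here as an assumption about your setup:

```shell
# Dry run: print the backup command pair for every logical volume.
# Remove "echo" to actually execute. -L 1G reserves snapshot
# copy-on-write space (required for classic, non-thin snapshots).
for name in docker home mongo root swap; do
  echo lvcreate -s -L 1G -n "${name}snap" "/dev/tank/${name}"
  echo dd if="/dev/tank/${name}snap" of="/path/to/external/storage/${name}.img" bs=4M
done
```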

2. Boot in a live system from an USB stick and decrypt the device

# cryptsetup open /dev/sda3 tank --type luks

3. Resize the physical volume

Note: I have free space inside my LVM container. As you can see from the output of lvs above, I currently use only 163GB out of roughly 238GB. That means I do not have to resize any logical volumes before I resize the containing physical volume. If you use all of the available space for logical volumes, look into lvresize(8) first, for example in the Arch Wiki.

I generously shrank the volume from 238,07G to 236G with:

# pvresize --setphysicalvolumesize 236G /dev/mapper/tank

4. Resize the crypto-device

Find out the current size in sectors (note that my crypto-device has the same name as my volume group: tank. That could be different in your setup):

# cryptsetup status tank
...
sector size: 512
size: 499122176
...

In the end I want to add about 1G to the /boot partition. That is 1024 * 1024 * 1024 / 512 = 2097152 sectors.

# cryptsetup -b 497025024 resize tank
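The arithmetic can be double-checked in the shell, using the numbers from the cryptsetup status output above:

```shell
# New device size in sectors = current size minus ~1 GiB worth of sectors.
CUR=499122176                            # "size:" from cryptsetup status
SHRINK=$(( 1024 * 1024 * 1024 / 512 ))   # 1 GiB in 512-byte sectors = 2097152
echo $(( CUR - SHRINK ))                 # prints 497025024
```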

5. Resize the GUID partition

You see we go from innermost to outermost: LVM -> crypto -> GUID. I use parted to resize the partition /dev/sda3:

# parted
(parted) unit s
(parted) print
...
Number  Begin     End         Size        Name  Flags
...
 3      3100672s  500115455s  497014784s  TANK  lvm

These numbers were actually different; I am writing this blog post in hindsight. The point is that partition number 3 went all the way to the last sector of the disk and I now had to calculate where it should end in the future, because resizepart takes not the future size but the future end sector of the partition as its argument. So I subtracted the same sector count as calculated above for cryptsetup (2097152) from the end sector of partition 3 (500115455), which gives 498018303.

(parted) resizepart 3 498018303s
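Again, the shell can do the subtraction, with the end sector taken from the parted print output above:

```shell
# Future end sector = old end sector minus the sectors given up to /boot.
END=500115455              # last sector of partition 3 from parted's print
SHRINK=2097152             # the same 1 GiB sector count as for cryptsetup
echo $(( END - SHRINK ))   # prints 498018303
```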

Now we have free space on the SSD after the main partition. But I want to grow partition 2.

6. Reorder partitions and resize partition 2

I did that with GParted instead of a command line tool. Probably there is a way to do it with gdisk but parted has removed its command to move partitions. And because I was in a graphical live system anyway and also read that you could do it with GParted I just went for it.
First I closed the crypto device because GParted would not let me move the partition otherwise:

# vgchange -an tank
# cryptsetup close tank

Then I opened GParted and right-clicked on the crypto partition. I chose “Change size|Move” and moved the free space from after the partition to before it. Then I opened the same dialog for the /boot partition and extended it to cover all of the free space. Finally I committed the changes.

Handle lid closing correctly in XFCE power settings

This is mainly just a note for my future self. I always had problems with the power management settings on my laptop. It’s running Manjaro Linux (an Arch derivative). Regardless of what I set in the XFCE power settings, the actions that should happen on closing the lid didn’t work as expected. I wanted the machine to suspend-to-RAM when I close the lid while the power cable is plugged in, and to suspend-to-disk (hibernate) when it is not plugged in.

At some point I just disabled everything in /etc/systemd/logind.conf (set it to ignore lid actions) and lived with it.

Today on Googling™ I came across two things. First, a mention of the file ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-power-manager.xml. There, all the settings you can set in the graphical power settings tool are saved as XML. Second, a forum post (https://bbs.archlinux.org/viewtopic.php?pid=1690134#p1690134) pointing out that this XML file contains a setting you can’t set graphically: “logind-handle-lid-switch”. Which is set to true for reasons that are beyond me.

Probably you can do all sorts of things with acpid and/or systemd to control the actions on lid-close and lid-open. But you can also just issue:

xfconf-query -c xfce4-power-manager -p /xfce4-power-manager/logind-handle-lid-switch -s false

on the shell, and from then on your settings in XFCE’s power settings are used by the system and work. Of course I also set the content of logind.conf back to its default.
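For reference, after running the command the entry ends up in said XML file looking roughly like this. This is a sketch; the exact surrounding markup may differ between XFCE versions:

```xml
<!-- Excerpt from xfce4-power-manager.xml (structure assumed, property name from the post) -->
<channel name="xfce4-power-manager" version="1.0">
  <property name="xfce4-power-manager" type="empty">
    <property name="logind-handle-lid-switch" type="bool" value="false"/>
  </property>
</channel>
```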

Fix speed issue when writing to NAS system

I just fixed an issue with my FreeBSD home server. It is set up as a file server for Mac (AFP) and Linux clients (NFS). My local network is Gigabit-based, so the limiting factor for read/write speeds should be the hard disk drives in the server.

The server has a Core i3-6100T CPU @ 3.20GHz, 8GB RAM, a ZFS setup with two mirror vdevs each consisting of two disks connected to the board via SATA3. And of course the onboard Gbit NIC (Realtek).

I knew very well that write speed used to be around 50–60MB/sec, which is what I would expect. But lately it had dropped dramatically to ~1MB/sec. And I just couldn’t figure out why. I suspected the cable, AFP, the RAM, anything.

What I didn’t suspect — until today, that is — was the network interface. But I had time today for some googling and even if I didn’t find the solution directly, I stumbled across something related to the output of ifconfig. So I hacked that into the console and stared at it.

re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
    ether 4c:cc:6a:b3:3c:f5
    hwaddr 4c:cc:6a:b3:3c:f5
    inet6 fd23:16:7:7::1 prefixlen 64
    inet6 fe80::4ecc:6aff:feb3:3cf5%re0 prefixlen 64 scopeid 0x1
    inet 192.168.10.118 netmask 0xffffff00 broadcast 192.168.10.255
    nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
    media: Ethernet autoselect (10baseT/UTP <full-duplex>)
    status: active

Do you spot it?

media: Ethernet autoselect (10baseT/UTP <full-duplex>)

Well, that is … unfortunate. The output of ifconfig -m re0 gave me:

supported media:
    media autoselect mediaopt flowcontrol
    media autoselect
    media 1000baseT mediaopt full-duplex,flowcontrol,master
    media 1000baseT mediaopt full-duplex,flowcontrol
    media 1000baseT mediaopt full-duplex,master
    media 1000baseT mediaopt full-duplex
    media 100baseTX mediaopt full-duplex,flowcontrol
    media 100baseTX mediaopt full-duplex
    media 100baseTX
    media 10baseT/UTP mediaopt full-duplex,flowcontrol
    media 10baseT/UTP mediaopt full-duplex
    media 10baseT/UTP
    media none

So I ran sudo ifconfig re0 media 1000baseT mediaopt full-duplex and it worked. After that I also ran sudo ifconfig re0 media autoselect, which also set the media type to 1000baseT full-duplex. I have no idea why the system got that wrong (or when) but I will monitor what happens after the next reboot. Maybe I have to add some configuration, but maybe it was just a hiccup.

Speeds are up to 60MB/sec again.
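In case the wrong media type comes back after a reboot, the usual FreeBSD way to pin it is a line in /etc/rc.conf. This is a sketch, assuming the interface keeps its name re0 and the static address from the ifconfig output above:

```shell
# /etc/rc.conf (sketch): pin the media type and speed for re0
ifconfig_re0="inet 192.168.10.118 netmask 255.255.255.0 media 1000baseT mediaopt full-duplex"
```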