<?xml version="1.0" ?>
  <rss version="2.0"
  xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
      <title>Blog-Articles by Tobias Barth, Freelance Web Person</title>
	  <link>https://tobias-barth.net/blog</link>
	  <description>Modern web development with HTML, CSS and JavaScript. Mostly ReactJS</description>
	  <language>en</language>
	  <pubDate>Fri, 08 May 2020 17:59:34 GMT</pubDate>
	  
      <item>
        <title>.htaccess-Spielereien</title>
        <link>https://tobias-barth.net/blog/-htaccess-Spielereien</link>
        <pubDate>Sun, 04 Dec 2011 13:47:00 GMT</pubDate>
        
        <description>With .htaccess files you can control access to individual files and documents on your (Apache) web server very conveniently. For example, you can open up certain directories only to users with a specific IP, or require passwords for access. A few days ago I found out that they are also great for querying arbitrary other server variables.</description>
        <content:encoded><![CDATA[<p>With .htaccess files you can control access to individual files and documents on your (Apache) web server very conveniently. For example, you can open up certain directories only to users with a specific IP, or require passwords for access. A few days ago I found out that they are also great for querying arbitrary other server variables.</p>
<p>Namely, with the module <a href="https://httpd.apache.org/docs/2.2/mod/mod_setenvif.html">mod_setenvif</a> and its two directives <em>BrowserMatch</em> and <em>SetEnvIf</em> (and their case-insensitive variants) you can inspect request and environment variables and control access to resources depending on them.</p>
<p>I used this to do the following. A PHP document includes an HTML file via <code>require</code>, depending on the return value of an if statement:</p>
<pre><code class="language-php">    if (…) { require &quot;eingebunden.html&quot;;}
</code></pre>
<p>Even if I pick a name for the HTML file that is hard to guess, I want to make sure that nobody can simply type the file&#39;s address into the browser and display its contents. That is now quite easy: I place an .htaccess file in the directory that contains the HTML file in question and give it this content:</p>
<pre><code>    SetEnvIf Request_URI &quot;eingebunden\.html$&quot; verboten
    Deny from env=verboten
</code></pre>
<p>With the SetEnvIf directive I set a variable named &quot;verboten&quot; exactly when the <abbr title="Uniform Resource Identifier">URI</abbr> string that the browser sent as its request matches the regular expression <code>&quot;eingebunden\.html$&quot;</code>. Next I tell the server to deny any request for which it has created this variable. If you now try to open the HTML file directly in the browser, the only thing you see is a 403.</p>
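<p>As an aside: on Apache 2.4 the <code>Deny</code> directive (mod_access_compat) is deprecated in favor of <code>Require</code> from mod_authz_core. A roughly equivalent rule for newer servers could look like the following – an untested sketch, so check it against your own setup:</p>
<pre><code>    SetEnvIf Request_URI &quot;eingebunden\.html$&quot; verboten
    &lt;RequireAll&gt;
        Require all granted
        Require not env verboten
    &lt;/RequireAll&gt;
</code></pre>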
]]></content:encoded>
      </item>
      <item>
        <title>Building your library: Part 1</title>
        <link>https://tobias-barth.net/blog/Building-your-library-Part-1</link>
        <pubDate>Wed, 24 Jul 2019 10:39:26 GMT</pubDate>
        
        <description>Part 4 of the series "Publish a modern JavaScript (or TypeScript) library". The goal for our work here should be a fully functioning build chain that does everything we need for publishing our library.</description>
        <content:encoded><![CDATA[<h3>Preface</h3>
<p>This article is part 4 of the series &quot;Publish a modern JavaScript (or TypeScript) library&quot;. Check out the motivation and links to other parts <a href="http://tobias-barth.net/blog/Publish-a-modern-JavaScript-or-TypeScript-library/">in the introduction</a>.</p>
<p><em>Note:</em> I have promised in <a href="http://tobias-barth.net/blog/Compiling-modern-language-features-with-the-TypeScript-compiler/">part 3 of this series</a> that the next post would be about exporting types. But bear with me. First we will use what we have. Types are coming up next.</p>
<h3>Our first build</h3>
<p>Up until now we have discussed how to set up Babel or the TypeScript Compiler, respectively, for transpiling our thoughtfully crafted library code. But we didn&#39;t actually use them. After all, the goal for our work here should be a fully functioning build chain that does everything we need for publishing our library.</p>
<p>So let&#39;s start this now. As you can tell from the title of this article, we will refine our build with every item in our tool belt that we installed and configured. While the &quot;normal&quot; posts each focus on one tool for one purpose, these &quot;build&quot; articles will gather all configurations of our various tool combinations that we have at our disposal.</p>
<p>We will leverage NPM scripts to kick off everything we do. For JavaScript/TypeScript projects it&#39;s the natural thing to do: You <code>npm install</code> and <code>npm test</code> and <code>npm start</code> all the time, so we will <code>npm run build</code> also.</p>
<p>For today we will be done with it relatively quickly. We only have the choice between Babel and TSC and transpiling is the only thing that we do when we build.</p>
<h3>Build JavaScript with Babel</h3>
<p>You define a <code>build</code> script, as you may know, in the <code>package.json</code> file in the root of your project. The relevant keys are <code>scripts</code> and <code>module</code>, and we change the file so that it contains at least the following:</p>
<pre><code class="language-javascript">{
  // ...
  &quot;module&quot;: &quot;dist/index.js&quot;,
  &quot;scripts&quot;: {
    &quot;build&quot;: &quot;babel -d dist/ src/&quot;
  }
  // ...
}
</code></pre>
<h4>Using <code>module</code></h4>
<p>The standard key to point to the entry file of a package is <code>main</code>. But we are using <code>module</code> here. This goes back to a <a href="https://github.com/rollup/rollup/wiki/pkg.module">proposal by the bundler Rollup</a>. The idea here is that the entry point under a <code>main</code> key is valid ES5 only. Especially regarding module syntax. The code there should use things like CommonJS, AMD or UMD but not ESModules. While bundlers like Webpack and Rollup can deal with legacy modules they can&#39;t tree-shake them. (Read <a href="http://tobias-barth.net/blog/Transpile-modern-language-features-with-Babel/">the article on Babel</a> again if you forgot why that is.)</p>
<p>Therefore the proposal states that you can provide an entry point under <code>module</code> to indicate that the code there uses modern ESModules. Bundlers will always look first for a <code>module</code> key in your package.json and, if it exists, just use it. Only when they don&#39;t find one will they fall back to <code>main</code>.</p>
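<p>To illustrate, a package that wants to serve both kinds of consumers simply declares both keys – the file names here are just an example, not a requirement:</p>
<pre><code class="language-javascript">{
  // Legacy entry point: ES5, non-ESModule format (e.g. CommonJS/UMD)
  &quot;main&quot;: &quot;dist/index.cjs.js&quot;,
  // Modern entry point: ESModules, tree-shakable
  &quot;module&quot;: &quot;dist/index.esm.js&quot;
}
</code></pre>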
<h4>Call Babel</h4>
<p>The &quot;script&quot; under the name of <code>build</code> is just a single call to the Babel command line interface (CLI) with one option <code>-d dist</code> which tells Babel where to put the transpiled files (<code>-d</code> : <code>--out-dir</code>). Finally we tell it where to find the source files. When we give it a directory like <code>src</code> Babel will transpile every file it understands. That is, every file with an extension from the following list: <code>.es6,.js,.es,.jsx,.mjs</code>.</p>
<h3>Build TypeScript with Babel</h3>
<p>This is almost the same as above. The only difference is the options we pass to the Babel CLI. The relevant parts in <code>package.json</code> look like this:</p>
<pre><code class="language-javascript">{
  // ...
  &quot;module&quot;: &quot;dist/index.js&quot;,
  &quot;scripts&quot;: {
    &quot;build&quot;: &quot;babel -d dist/ --extensions .ts,.tsx src/&quot;
  }
  // ...
}
</code></pre>
<p>As I mentioned above, Babel wouldn&#39;t know that it should transpile the <code>.ts</code> and <code>.tsx</code> files in <code>src</code>. We have to explicitly tell it to with the <code>--extensions</code> option.</p>
<h3>Build TypeScript with TSC</h3>
<p>For using the TypeScript Compiler we configure our build in the <code>package.json</code> like this:</p>
<pre><code class="language-javascript">{
  // ...
  &quot;module&quot;: &quot;dist/index.js&quot;,
  &quot;scripts&quot;: {
    &quot;build&quot;: &quot;tsc&quot;
  }
  // ...
}
</code></pre>
<p>We don&#39;t have to tell TSC where to find the files and where to put them because it&#39;s all in the <code>tsconfig.json</code>. The only thing our build script has to do is call <code>tsc</code>.</p>
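<p>Just for reference, a minimal <code>tsconfig.json</code> for such a build could look like this. Treat it as an illustrative sketch – the exact options depend on how you configured TSC earlier in this series:</p>
<pre><code class="language-json">{
  "compilerOptions": {
    "target": "es2017",   // or whatever your transpilation target is
    "module": "esnext",   // keep ESModules so bundlers can tree-shake
    "outDir": "dist",     // where tsc puts the transpiled files
    "strict": true
  },
  "include": ["src"]      // where tsc finds the source files
}
</code></pre>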
<h3>Ready to run</h3>
<p>And that is it. All you have to do now to get production-ready code is typing</p>
<pre><code>npm run build
</code></pre>
<p>And you have your transpiled library code inside of the <code>dist</code> directory. It may not seem to be much but I tell you, if you were to <code>npm publish</code> that package or install it in <a href="https://docs.npmjs.com/cli/install.html">one of the other ways aside from the registry</a> it could be used in an application. And it would not be that bad. It may have no exported types, no tests, no contribution helpers, no semantic versioning and no build automation, <strong>BUT</strong> it ships modern code that is tree-shakable – which is more than many others have.</p>
<p>Be sure to check out the <a href="https://github.com/4nduril/library-starter">example code repository</a> that I set up for this series. There are currently three branches: <code>master</code>, <code>typescript</code> and <code>typescript-tsc</code>. Master reflects my personal choice of tools for JS projects, <code>typescript</code> is my choice in TS projects and the third one is an alternative to the second. The README has a table with branches and their features.</p>
<p>Next up: Type-Checking and providing type declarations (and this time for real ;) )</p>
]]></content:encoded>
      </item>
      <item>
        <title>Bundling your library with Webpack</title>
        <link>https://tobias-barth.net/blog/Bundling-your-library-with-Webpack</link>
        <pubDate>Fri, 08 May 2020 17:59:34 GMT</pubDate>
        
        <description>Part 7 of the series "Publish a modern JavaScript (or TypeScript) library". In the last post we established in which cases we may need to bundle our library – instead of just delivering transpiled files/modules. There are a few tools that help us do so and we will look at the most important ones one after another.</description>
        <content:encoded><![CDATA[<h3>Preface</h3>
<p>This article is part 7 of the series &quot;Publish a modern JavaScript (or TypeScript) library&quot;. Check out the motivation and links to other parts <a href="http://tobias-barth.net/blog/Publish-a-modern-JavaScript-or-TypeScript-library/">in the introduction</a>.</p>
<p><strong>If you are not interested in the background and reasoning behind the setup, <a href="#bylww-conclusion">jump directly to the conclusion</a>.</strong></p>
<h3>Intro</h3>
<p>In the last post we established in which cases we may need to bundle our library – instead of just delivering transpiled files/modules. There are a few tools that help us do so, and we will look at the most important ones one after another.</p>
<p>As promised I will make the start with Webpack. Most of you have probably already had contact with Webpack, likely in the context of website/application bundling. Anyway, here is a short intro to what it is and does. It is a very versatile tool that was originally built around the concept of code-splitting. Of course it can do (and does) many more things than that, but that was the initial, essential idea: make it possible and easy to split all of your application code into chunks of code that belong together. That way the browser (the user) does not have to download, parse and execute <strong>all</strong> of the app code before anything works, but instead loads only the right amount of code needed at the moment. Webpack is awesome at that.</p>
<p>The thing is, we don&#39;t want to do that. We do not have an application, we have a library. Either there is no need for splitting because our code really does only one thing (even if it is a complex thing), or we provide rather independent code blocks – but then it&#39;s the <em>application&#39;s</em> job to put the right things in the right chunks. We cannot assume anything about the library-user&#39;s needs, so they get to decide about splitting.</p>
<p>Then, what can Webpack do for us? It can take all of our carefully crafted modules, walk through their dependency structure like a tree and put them all together in one module – a bundle. Plus, it adds a tiny bit of runtime code to make sure everything is consumable as we expect it to.</p>
<p>Webpack, like all bundlers I can think of right now, can work directly with the source code. It&#39;s not like you have to, say, transpile it first and then Webpack starts its thing. But for Webpack to be able to understand your code and also to apply any transformation you may want, you need to use so-called <em>loaders</em>. There is a <code>babel-loader</code> that we can use for transpiling, TypeScript-loaders, even things like SVG- or CSS-loaders which allow us to import things in our JS/TS files that aren&#39;t even related to JavaScript.</p>
<p>This article does not want and is not able to cover all the possibilities of what you can achieve with Webpack. If you want to learn more, consult the official <a href="https://webpack.js.org/">documentation</a>. It&#39;s really good these days. (Back in my time … but anyway.)</p>
<h3>Our goal</h3>
<p>We have library code, written in plain JavaScript or TypeScript, no fancy imports. It needs to get transpiled according to our rules and result in one consumable file which people can import in their applications. Also, we want people to be able to just drop it into their HTML in the form of a script tag. That is, we want to get a UMD module.</p>
<hr>
<h4>What are UMD modules?</h4>
<p>(If you already know, or if you don&#39;t want to know more than I mentioned in the paragraph before, feel free to skip to <a href="#bylww-starting">Starting with Webpack</a> or even to the <a href="#bylww-conclusion">Conclusion and final config</a>.)</p>
<p>UMD stands for Universal Module Definition. It combines the module systems Asynchronous Module Definition (AMD), CommonJS and exposure via a global variable for cases where no module system is in place. You can read the <a href="https://github.com/umdjs/umd">specification and its variants here</a>. Basically, a UMD module wraps the actual library code with a thin detection layer that tries to find out if it&#39;s currently being executed in the context of one of the two mentioned module systems. In case it is, it exposes the library within that system (with <code>define</code> or <code>module.exports</code>). If not, it will assign the library&#39;s exports to a global variable.</p>
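<p>To make that concrete, here is a stripped-down sketch of the UMD pattern – the names <code>libraryStarter</code> and <code>greet</code> are illustrative, and the real wrapper Webpack generates is a bit more thorough in its detection:</p>
<pre><code class="language-javascript">(function (root, factory) {
  if (typeof define === 'function') {
    // AMD: register the module via define()
    define([], factory)
  } else if (typeof module === 'object') {
    // CommonJS: assign the factory result to module.exports
    module.exports = factory()
  } else {
    // No module system: expose a global variable
    root.libraryStarter = factory()
  }
})(typeof self !== 'undefined' ? self : this, function () {
  // The actual library code; its return value becomes the exports
  return { greet: function (name) { return 'Hello, ' + name } }
})
</code></pre>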
<h3><a name="bylww-starting"></a> Starting with Webpack</h3>
<p>This will be roughly the same as in the <a href="https://webpack.js.org/guides/author-libraries/">official documentation</a> of Webpack. But I will try to provide the complete configuration including optimizations and comments. Also note that I will omit many possibilities Webpack offers or simplify a few things here and there. That&#39;s because this is not a deep dive into Webpack but a what-you-should-know-when-bundling-a-library piece.</p>
<p>First we install Webpack and its command line interface:</p>
<pre><code class="language-bash">npm install -D webpack webpack-cli
</code></pre>
<p>Now we create a file called <code>webpack.config.js</code> within the root directory of our library. Let&#39;s start with the absolute basics:</p>
<pre><code class="language-jsx">// webpack.config.js
const path = require(&#39;path&#39;)

module.exports = {
  entry: &#39;./src/index.js&#39;, // or &#39;./src/index.ts&#39; if TypeScript
  output: {
    path: path.resolve(__dirname, &#39;dist&#39;),
    filename: &#39;library-starter.js&#39;,
  },
}
</code></pre>
<p>With <code>entry</code> we are defining the entry point into our library. Webpack will load this file first and build a tree of dependent modules from that point on. Also, together with a few other options that we will see in a bit, Webpack will expose all exports from that entry module to the outside world – our library&#39;s consumers. The value is, as you can see, a string with a path that is relative to the config file location.</p>
<p>The <code>output</code> key allows us to define what files Webpack should create. The <code>filename</code> prop makes running Webpack result in a bundle file with this name. The <code>path</code> is the folder the output file will be put in. Webpack defaults to the <code>dist</code> folder that we defined here, but you could change it, e.g. to <code>path.resolve(__dirname, &#39;output&#39;)</code> or something completely different. But make sure to provide an absolute path – it will not get expanded like the <code>entry</code> value.</p>
<h3>Problem 1: custom syntax like JSX</h3>
<p>When we now run <code>npx webpack</code> on the command line, we expect it to result in a generated <code>dist/library-starter.js</code> file. Instead it fails with an error. In my <a href="https://github.com/4nduril/library-starter">library-starter example code</a> I use React&#39;s JSX. As it is configured now, Webpack will refuse to bundle it because it encounters an &quot;unexpected token&quot; when it tries to parse the code. You see that Webpack needs to understand your code. We help with configuring an appropriate &quot;loader&quot;.</p>
<p>If you use Babel for transpiling, install the Babel loader:</p>
<pre><code class="language-bash">npm install -D babel-loader
</code></pre>
<p>The rest of the Babel setup we need is already installed in our project.</p>
<p>If you instead are using TSC you&#39;ll need <code>ts-loader</code>:</p>
<pre><code class="language-bash">npm install -D ts-loader
</code></pre>
<p><strong>Note:</strong> I know there is also the <a href="https://github.com/s-panferov/awesome-typescript-loader">Awesome TypeScript Loader</a> but the repository has been archived by the author and has not seen any updates for two years (at the time of writing). Even the author writes in the README: &quot;The world is changing, other solutions are evolving and ATL may work slower for some workloads.&quot; These days <code>ts-loader</code> seems to be faster and is the default choice for most users. Also, more information on <a href="https://github.com/TypeStrong/ts-loader#parallelising-builds">&quot;Parallelising Builds&quot;</a> can be found in the README of <code>ts-loader</code>.</p>
<p>We now add the following to the <code>webpack.config.js</code> file:</p>
<pre><code class="language-jsx">// webpack.config.js (Babel)
...
module.exports = {
  ...
  module: {
    rules: [
      {
        test: /\.jsx?$/, // If you are using TypeScript: /\.tsx?$/
        include: path.resolve(__dirname, &#39;src&#39;),
        use: [
          {
            loader: &#39;babel-loader&#39;,
            options: {
              cacheDirectory: true
            }
          }
        ]
      }
    ]
  }
}
</code></pre>
<p>Or:</p>
<pre><code class="language-jsx">// webpack.config.js (TSC)
...
module.exports = {
  ...
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        include: path.resolve(__dirname, &#39;src&#39;),
        use: [
          {
            loader: &#39;ts-loader&#39;,
            options: {
              transpileOnly: true
            }
          }
        ]
      }
    ]
  }
}
</code></pre>
<h3>Problem 2: Babel&#39;s runtime helpers</h3>
<p>In case we are using Babel for transpiling, Webpack now runs into the next error. It tries to resolve the helper and polyfill imports that Babel created for us, but as <a href="http://tobias-barth.net/blog/Transpile-modern-language-features-with-Babel/">we only declared them</a> as a <code>peerDependency</code>, we haven&#39;t installed them yet and so Webpack can&#39;t put them into the bundle.</p>
<h4>Bundling helpers?</h4>
<p>As you remember, we deliberately did define <code>@babel/runtime-corejs3</code> as a peer dependency to make sure our delivered library is as small as possible and also to allow the user to have at best only one version of it installed, keeping their application bundle smaller. Now, if we install it by ourselves and bundle it with Webpack, then all the benefit is gone. Yes, that&#39;s right. We can of course tell Webpack that certain imports should be treated as &quot;external&quot; and we will in fact do that later on for the &quot;react&quot; dependency that our specific library has. But not for the runtime helpers.</p>
<p>Because remember why we are bundling: one of the reasons was to make it possible for a user to drop the bundle in a <code>script</code> tag into their page. To be able to do that with deps declared as external, <em>those</em>, too, have to be available as separate UMD packages. This is the case for many things like React or Lodash, but not for this runtime package. That means we have to bundle it together with our code. We could make a very sophisticated setup with several Webpack configs, one resulting in a bigger bundle for that specific use case and one for usual importing in an application. But we have <em>already reached</em> the second goal with our non-bundled build.</p>
<p>If your library uses non-JS/TS imports like CSS or SVGs, then of course you can think about how much it will save the users of your library if you go that extra mile. I am not going to cover that in this article. Maybe at a later point when we have all of our foundations in place.</p>
<h4>Bundling helpers!</h4>
<p>Install <code>@babel/runtime-corejs3</code> as a development dependency:</p>
<pre><code class="language-bash">npm install -D @babel/runtime-corejs3
</code></pre>
<h3>Problem 3: Externals</h3>
<p>The next thing we will cover is dependencies that we really don&#39;t want to have in our bundle but instead should be provided by the using environment. The next error Webpack throws is about the <code>&#39;react&#39;</code> dependency. To solve this we make use of the <code>externals</code> key:</p>
<pre><code class="language-jsx">// webpack.config.js
module.exports = {
  ...
  externals: {
    react: {
      root: &#39;React&#39;,
      commonjs: &#39;react&#39;,
      commonjs2: &#39;react&#39;,
      amd: &#39;react&#39;,
    },
  },
}
</code></pre>
<p>Because some libraries expose themselves differently depending on the module system that is being used, we can (and must) declare the name under which the external can be found for each of these systems. <code>root</code> denotes the name of a global accessible variable. Deeper explanation can be found in the <a href="https://webpack.js.org/configuration/externals/#object">Webpack docs</a>.</p>
<h3>Problem 4: File extensions</h3>
<p>This is of course only an issue if you are writing TypeScript or if you name files containing JSX <code>*.jsx</code> instead of <code>*.js</code> (which we don&#39;t in the example library). Do you remember when we had to tell the Babel CLI which file extensions it should accept? If not, read again <a href="http://tobias-barth.net/blog/Building-your-library-Part-1/">about building our library</a>. Now, Webpack has to find all the files we are trying to import in our code. And like Babel, by default it looks for files with a <code>.js</code> extension. If we want Webpack to find other files as well, we have to give it a list of valid extensions:</p>
<pre><code class="language-jsx">// webpack.config.js
module.exports = {
  ...
  resolve: {
    extensions: [&#39;.tsx&#39;, &#39;.ts&#39;, &#39;.jsx&#39;, &#39;.js&#39;]
  },
  ...
}
</code></pre>
<p>If you are not writing TypeScript the list of extensions can be as short as <code>[&#39;.jsx&#39;, &#39;.js&#39;]</code>. We didn&#39;t need to specify the <code>*.jsx</code> extension for the normal Babel call because Babel recognizes it already (as opposed to <code>*.tsx</code> for example).</p>
<h3>Mode</h3>
<p>Now when we run <code>npx webpack</code> our bundle is made without errors and put into <code>/dist</code>. But there is still a warning from Webpack about the fact that we didn&#39;t set the <code>mode</code> option in our config. The mode can be <code>&#39;development&#39;</code> or <code>&#39;production&#39;</code> and will default to the latter. (There is also the value <code>&#39;none&#39;</code> but we will not cover it here.) It&#39;s kind of a shorthand for several settings and activation of plugins. <code>&#39;development&#39;</code> will keep the output readable (besides other things) while <code>&#39;production&#39;</code> will compress the code as much as possible.</p>
<p>Since we mainly bundle for users to be able to use it in script tags, i.e. additionally to providing single module files, we will not bother to differentiate between the two modes. We only use <code>&#39;production&#39;</code>:</p>
<pre><code class="language-jsx">// webpack.config.js

module.exports = {
  mode: &#39;production&#39;,
  ...
}
</code></pre>
<p>And thus the warning is gone.</p>
<h3>Library</h3>
<p>Everything is fine now. Or, is it?</p>
<pre><code class="language-bash"># node repl

&gt; const lib = require(&#39;./dist/library-starter&#39;)
&gt; lib
{}
&gt;
</code></pre>
<p>We get only an empty module. That is because Webpack by default creates application bundles that are meant to be executed. If we want to get a module with exports, then we have to tell it explicitly:</p>
<pre><code class="language-jsx">// webpack.config.js

module.exports = {
  ...
  output: {
    ...
    library: &#39;libraryStarter&#39;,
  }
}
</code></pre>
<p>But this is still not enough because we now get an executable script that creates a global variable named <code>libraryStarter</code> which contains our library. Actually, this would be enough to drop it into a <code>&lt;script&gt;</code> tag. We could use it on a web page like this:</p>
<pre><code class="language-html">&lt;script src=&quot;/library-starter.js&quot;&gt;&lt;/script&gt;
&lt;script&gt;
  ...
  libraryStarter.usePropsThatChanged...
  ...
&lt;/script&gt;
</code></pre>
<p>But come on, we wanted a real UMD module. If we do this, we do it right. So back in our <code>webpack.config.js</code> we add two more options:</p>
<pre><code class="language-jsx">// webpack.config.js

output: {
  ...
  library: &#39;libraryStarter&#39;,
  libraryTarget: &#39;umd&#39;,
  globalObject: &#39;this&#39;,
}
</code></pre>
<p>Let&#39;s run <code>npx webpack</code> again and try it out:</p>
<pre><code class="language-bash"># node repl

&gt; const lib = require(&#39;./dist/library-starter.js&#39;)
&gt; lib
Object [Module] {
   ExampleComponent: [Getter],
   usePropsThatChanged: [Getter]
}
</code></pre>
<p>Finally. If you wonder why we added the <code>globalObject</code> key: it makes sure that, in the case of using the bundle file without a module system like AMD or CommonJS, it works in the browser as well as in a Node context. The return value of the entry point will get assigned to the current <code>this</code> object, which is <code>window</code> in browsers and the global object in Node.</p>
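<p>You can verify the Node part of that claim with a tiny CommonJS script – at the top level of a CommonJS module, <code>this</code> refers to <code>module.exports</code>, which is exactly why the assignment works:</p>
<pre><code class="language-javascript">// save as check-this.js and run with: node check-this.js
// In a CommonJS module, top-level `this` is module.exports
console.log(this === module.exports) // true

// So assigning to a property of `this` populates the exports,
// just like the UMD wrapper does when no module system is present
this.libraryStarter = { hello: 'world' }
console.log(module.exports.libraryStarter.hello) // world
</code></pre>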
<p>There are more nuanced ways to set <code>libraryTarget</code> than explained here. If you are interested please read the <a href="https://webpack.js.org/configuration/output/#outputlibrarytarget">documentation</a>. But for our purposes this should set a solid base.</p>
<h3>Build and expose</h3>
<p>We are done with the configuration part. (Unbelievable, right?!) The only thing that&#39;s left is changing the <code>package.json</code> so that the bundle can be imported from outside as an addition to our ES modules and that users can get it automatically from <a href="https://unpkg.com/">unpkg.com</a> as well.</p>
<p>Right now both the <code>main</code> and the <code>module</code> key point to <code>dist/index.js</code>, but only the latter is correct. As <a href="http://tobias-barth.net/blog/Building-your-library-Part-1/">I mentioned before</a>, <code>main</code> should point to an ES5-compatible file and not to an ES module. Now we can safely change it to our new bundle file.</p>
<p>Of course we also have to actually build the bundle. For this we add an npm script named &quot;bundle&quot; to our script section and add it to the &quot;build&quot; script.</p>
<pre><code class="language-json">// package.json
{
  ...
  &quot;main&quot;: &quot;dist/library-starter.js&quot;,
  &quot;module&quot;: &quot;dist/index.js&quot;,
  &quot;scripts&quot;: {
    ...
    &quot;bundle&quot;: &quot;webpack&quot;,
    &quot;build&quot;: &quot;&lt;our build commands up until now&gt; &amp;&amp; npm run bundle&quot;
  }
  ...
}
</code></pre>
<h3><a name="bylww-conclusion"></a> Conclusion</h3>
<p>Install webpack:</p>
<pre><code class="language-bash">npm install -D webpack webpack-cli
</code></pre>
<p>Install babel-loader or ts-loader:</p>
<pre><code class="language-bash">npm install -D babel-loader # or ts-loader
</code></pre>
<p>If using Babel, install its runtime helpers:</p>
<pre><code class="language-bash">npm install -D @babel/runtime-corejs3
</code></pre>
<p>Create a <code>webpack.config.js</code>:</p>
<pre><code class="language-jsx">const path = require(&quot;path&quot;);

module.exports = {
  mode: &quot;production&quot;,
  entry: &quot;./src/index.js&quot;, // or &#39;./src/index.ts&#39; if TypeScript
  output: {
    filename: &quot;library-starter.js&quot;, // Desired file name. Same as in package.json&#39;s &quot;main&quot; field.
    path: path.resolve(__dirname, &quot;dist&quot;),
    library: &quot;libraryStarter&quot;, // Desired name for the global variable when using as a drop-in script-tag.
    libraryTarget: &quot;umd&quot;,
    globalObject: &quot;this&quot;
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/, // If you are using TypeScript: /\.tsx?$/
        include: path.resolve(__dirname, &quot;src&quot;),
        use: [
          // If using babel-loader
          {
            loader: &quot;babel-loader&quot;,
            options: {
              cacheDirectory: true
            }
          }
          // If _instead_ using ts-loader:
          // {
          //   loader: &quot;ts-loader&quot;,
          //   options: {
          //     transpileOnly: true
          //   }
          // }
        ]
      }
    ]
  },
  // If using TypeScript
  resolve: {
    extensions: [&#39;.tsx&#39;, &#39;.ts&#39;, &#39;.jsx&#39;, &#39;.js&#39;]
  },
  // If using an external dependency that should not get bundled, e.g. React
  externals: {
    react: {
      root: &quot;React&quot;,
      commonjs2: &quot;react&quot;,
      commonjs: &quot;react&quot;,
      amd: &quot;react&quot;
    }
  }
};
</code></pre>
<p>Change the <code>package.json</code>:</p>
<pre><code class="language-json">// package.json
{
  ...
  &quot;main&quot;: &quot;dist/library-starter.js&quot;,
  &quot;module&quot;: &quot;dist/index.js&quot;,
  &quot;scripts&quot;: {
    ...
    &quot;bundle&quot;: &quot;webpack&quot;,
    &quot;build&quot;: &quot;&lt;our build commands up until now&gt; &amp;&amp; npm run bundle&quot;
  }
  ...
}
</code></pre>
<p>That&#39;s all there is to bundling libraries with Webpack.
Next article&#39;s topic: Rollup.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Check types and emit type declarations</title>
        <link>https://tobias-barth.net/blog/Check-types-and-emit-type-declarations</link>
        <pubDate>Thu, 31 Oct 2019 19:59:48 GMT</pubDate>
        
        <description>Part 5 of the series "Publish a modern JavaScript (or TypeScript) library". We want to know that there are no type errors in our code and we want to export type declarations.</description>
        <content:encoded><![CDATA[<h3>Preface</h3>
<p>This article is part 5 of the series &quot;Publish a modern JavaScript (or TypeScript) library&quot;. Check out the motivation and links to other parts <a href="http://tobias-barth.net/blog/Publish-a-modern-JavaScript-or-TypeScript-library/">in the introduction</a>.</p>
<h3>Getting the types out of TypeScript</h3>
<p>Ok, this is a quick one. When we build our library, we want two things from TypeScript: First we want to know that there are no type errors in our code (or types missing, e.g. from a dependency). Second, since we are publishing a library for other fellow coders to use, not an application, we want to export type declarations. We will start with type checking.</p>
<h3>Type-checking</h3>
<p>Type-checking can be seen as a form of testing: take the code and check whether certain assertions hold. Therefore, we want to be able to execute it as a separate step that we can add to our build chain or run in a pre-commit hook, for example. You don&#39;t necessarily want to generate type declaration files every time you (or your CI tool) run your tests.</p>
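<p>As an illustration of the pre-commit idea: the check can be wired into a Git hook, for example with husky. This is only a sketch and assumes husky (v4-style configuration) is installed as a dev dependency; husky is not otherwise part of this series:</p>

```json
// package.json (sketch; assumes husky v4 as a devDependency)
{
  "scripts": {
    "check-types": "tsc --noEmit"
  },
  "husky": {
    "hooks": {
      "pre-commit": "npm run check-types"
    }
  }
}
```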
<p>If you want to follow along with my <a href="https://github.com/4nduril/library-starter/tree/typescript">little example library</a>, be sure to check out one of the <code>typescript</code> branches.</p>
<p>The TypeScript Compiler always checks the types of a project it runs on. And it will fail and report errors if there are any. So in principle we could just run <code>tsc</code> to get what we want. Now, to separate creating output files from the pure checking process, we must give <code>tsc</code> a handy option:</p>
<pre><code>tsc --noEmit
</code></pre>
<p>Regardless of whether we use Babel or TSC for transpiling, for checking types there is just this one way.</p>
<h3>Create type declaration files</h3>
<p>This is something pretty library-specific. When you build an application in TypeScript, you only care about correct types and an executable output. But when you provide a library, your users (i.e. other programmers) can directly benefit from the fact that you wrote it in TypeScript. When you provide type declaration files (<code>*.d.ts</code>) the users will get better auto-completion, type-hints and so on when they use your lib.</p>
<p>Maybe you have heard about <a href="https://www.definitelytyped.org/">DefinitelyTyped</a>. Users can get types from there for libraries that don&#39;t ship with their own types. So, in our case we won&#39;t need to do anything with or for DefinitelyTyped. Consumers of our library will have everything they need when we deliver types directly with our code.</p>
<p>Again, because these things are core functionality of TypeScript, we use <code>tsc</code>. But this time the calls are slightly different depending on how we transpile – with Babel or TSC.</p>
<h4>With Babel</h4>
<p>As you probably remember, to create our output files with Babel, we call the Babel command line interface, <code>babel</code>. To also get declaration files we add a call to <code>tsc</code>:</p>
<pre><code>tsc --declaration --emitDeclarationOnly
</code></pre>
<p>The <code>--declaration</code> flag ensures that TSC generates the type declaration files, and since we defined <code>outDir</code> in <code>tsconfig.json</code>, they land in the correct folder, <code>dist/</code>.</p>
<p>The second flag, <code>--emitDeclarationOnly</code>, prevents TSC from outputting transpiled JavaScript files. We use Babel for that.</p>
<p>You may ask yourself why we effectively transpile all of our code twice, once with Babel and once with TSC. It looks like a waste of time if TSC can do both. But <a href="http://tobias-barth.net/blog/Compiling-modern-language-features-with-the-TypeScript-compiler/">I discussed before</a> the advantages of Babel. And having a very fast transpile step separate from a slower declaration-generation step can translate to a much better developer experience. Declarations only need to be emitted once, shortly before publishing; transpiling is something that you do all the time.</p>
<h4>With TSC</h4>
<p>When we use TSC to generate the published library code, we can use it <em>in the same step</em> to spit out the declarations. Instead of just <code>tsc</code>, we call:</p>
<pre><code>tsc --declaration
</code></pre>
<p>That is all.</p>
<h3>Alias All The Things</h3>
<p>To make it easier to use and less confusing to find out what our package can do, we will create NPM scripts for all steps that we define. Then we can glue them together so that for example <code>npm run build</code> will always do everything we want from our build.</p>
<p>In the case of using Babel, in our <code>package.json</code> we make sure that <code>&quot;scripts&quot;</code> contains at least:</p>
<pre><code class="language-javascript">{
  ...
  &quot;scripts&quot;: {
    &quot;check-types&quot;: &quot;tsc --noEmit&quot;,
    &quot;emit-declarations&quot;: &quot;tsc --declaration --emitDeclarationOnly&quot;,
    &quot;transpile&quot;: &quot;babel -d dist/ --extensions .ts,.tsx src/&quot;,
    &quot;build&quot;: &quot;npm run emit-declarations &amp;&amp; npm run transpile&quot;
  },
  ...
}
</code></pre>
<p>And if you are just using TSC, it looks like this:</p>
<pre><code class="language-javascript">{
  ...
  &quot;scripts&quot;: {
    &quot;check-types&quot;: &quot;tsc --noEmit&quot;,
    &quot;build&quot;: &quot;tsc --declaration&quot;
  },
  ...
}
</code></pre>
<p>Note that we don&#39;t add <code>check-types</code> to <code>build</code>. First of all, building and testing are two very different things and we don&#39;t want to mix them explicitly. And second, in both cases we <em>do</em> check the types on build, because as I said: that happens every time you call <code>tsc</code>. So even if you are slightly pedantic about type-checking on build, you don&#39;t have to call <code>check-types</code> within the build script.</p>
<p>One great advantage of aliasing every action to an NPM script is that everyone working on your library (including you) can just run <code>npm run</code> and get a nice overview of which scripts are available and what they do.</p>
<p>That&#39;s it for using types.</p>
<p>Next up: All about bundling.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Compiling modern language features with the TypeScript compiler</title>
        <link>https://tobias-barth.net/blog/Compiling-modern-language-features-with-the-TypeScript-compiler</link>
        <pubDate>Thu, 18 Jul 2019 13:24:59 GMT</pubDate>
        
        <description>Part 3 of the series "Publish a modern JavaScript (or TypeScript) library". Instead of Babel like in the last post we can use the TypeScript compiler `tsc` to transpile our code.</description>
        <content:encoded><![CDATA[<h3>Preface</h3>
<p>This article is part 3 of the series &quot;Publish a modern JavaScript (or TypeScript) library&quot;. Check out the motivation and links to other parts <a href="http://tobias-barth.net/blog/Publish-a-modern-JavaScript-or-TypeScript-library/">in the introduction</a>.</p>
<h3>How to use the TypeScript compiler <code>tsc</code> to transpile your code</h3>
<p><strong>If you are not interested in the background and reasoning behind the setup, <a href="#cmlfwttc-conclusion">jump directly to the conclusion</a></strong></p>
<p>In the last article we set up Babel to transpile modern JavaScript or even TypeScript to a form which is understood by our target browsers. But we can also instead use the TypeScript compiler <code>tsc</code> to do that. For illustrating purposes I have rewritten my small <a href="https://github.com/4nduril/library-starter/tree/rewrite-in-typescript">example library</a> in TypeScript. Be sure to look at one of the <code>typescript-</code> prefixed branches. The <code>master</code> is still written in JavaScript.</p>
<p>I will assume that you already know how to set up a TypeScript project. How else would you have been able to write your library in TS? Rather, I will focus only on the best possible configuration for transpiling for the purposes of delivering a library.</p>
<p>You already know, the configuration is done via a <code>tsconfig.json</code> in the root of your project. It should contain the following options that I will discuss further below:</p>
<pre><code class="language-javascript">{
  &quot;include&quot;: [&quot;./src/**/*&quot;],
  &quot;compilerOptions&quot;: {
    &quot;outDir&quot;: &quot;./dist&quot;,
    &quot;target&quot;: &quot;es2017&quot;,
    &quot;module&quot;: &quot;esnext&quot;,
    &quot;moduleResolution&quot;: &quot;node&quot;,
    &quot;importHelpers&quot;: true
  }
}
</code></pre>
<h3><code>include</code> and <code>outDir</code></h3>
<p>These options tell <code>tsc</code> where to find the files to compile and where to put the result. When we discuss how to emit type declaration files along with your code, <code>outDir</code> will be used also for their destination.</p>
<p>Note that these options allow us to just run <code>tsc</code> on the command line without anything else and it will find our files and put the output where it belongs.</p>
<h3>Target environment</h3>
<p>Remember when we discussed <code>browserslist</code> in the &quot;Babel&quot; article? (If not, <a href="http://tobias-barth.net/blog/Transpile-modern-language-features-with-Babel/">check it out here</a>.) We used an array of queries to tell Babel exactly which environments our code should be able to run in. Not so with <code>tsc</code>.</p>
<p>If you are interested, read this intriguing <a href="https://github.com/Microsoft/TypeScript/issues/19183">issue</a> in the TypeScript GitHub repository. Maybe some day in the future we will have such a feature in <code>tsc</code> but for now, we have to use &quot;JavaScript versions&quot; as targets.</p>
<p>As you may know, since 2015 every year the TC39 committee ratifies a new version of ECMAScript consisting of all the new features that have reached the &quot;Finished&quot; stage before that ratification. (See <a href="https://tc39.es/process-document/">The TC39 process</a>.)</p>
<p>Now <code>tsc</code> allows us (only) to specify which version of ECMAScript we are targeting. To reach a more or less similar result as with Babel and my opinionated <code>browserslist</code> config, I decided to go with <code>es2017</code>. I used the <a href="https://kangax.github.io/compat-table/es2016plus/">ECMAScript compatibility table</a> and checked up to which version it would be &quot;safe&quot; to assume that the last 2 versions of Edge/Chrome/Firefox/Safari/iOS can handle it. Your mileage may vary here! You basically have at least three options:</p>
<ul>
<li>Go with my suggestion and use <code>es2017</code>.</li>
<li>Make your own decision based on the compatibility table.</li>
<li>Go for the safest option and use <code>es5</code>. This will produce code that can also run in Internet Explorer 11, but it will also be much bigger in size, for all browsers.</li>
</ul>
<p>Just like with my <code>browserslist</code> config, I will discuss in a future article how to provide more than one bundle: one for modern environments and one for older ones.</p>
<p>Another thing to note here: the <code>target</code> does not directly set which module syntax will be used in the output! You may think it does, because if you don&#39;t explicitly set <code>module</code> (see next section), <code>tsc</code> will choose it depending on your <code>target</code> setting. If your <code>target</code> is <code>es3</code> or <code>es5</code>, <code>module</code> will be set implicitly to <code>CommonJS</code>. Otherwise it will be set to <code>es6</code>. To make sure you don&#39;t get surprised by what <code>tsc</code> chooses for you, you should always set <code>module</code> explicitly as described in the following section.</p>
<h3><code>module</code> and <code>moduleResolution</code></h3>
<p>Setting <code>module</code> to <code>&quot;esnext&quot;</code> is roughly the same as the <code>modules: false</code> option of the <code>env</code> preset in our <code>babel.config.js</code>: We make sure that the module syntax of our code stays as ESModules to enable treeshaking.</p>
<p>If we set <code>module: &quot;esnext&quot;</code>, we have to also set <code>moduleResolution</code> to <code>&quot;node&quot;</code>. The TypeScript compiler has two modes for finding non-relative modules (i.e. <code>import {x} from &#39;moduleA&#39;</code> as opposed to <code>import {y} from &#39;./moduleB&#39;</code>): These modes are called <code>node</code> and <code>classic</code>. The former works similar to the resolution mode of NodeJS (hence the name). The latter does not know about <code>node_modules</code> which is strange and almost never what you want. But <code>tsc</code> enables the <code>classic</code> mode when <code>module</code> is set to <code>&quot;esnext&quot;</code> so you have to explicitly tell it to behave.</p>
<p>In the <code>target</code> section above I mentioned that <code>tsc</code> will set <code>module</code> implicitly to <code>es6</code> if <code>target</code> is something other than <code>es3</code> or <code>es5</code>. There is a subtle difference between <code>es6</code> and <code>esnext</code>. According to the answers in <a href="https://github.com/Microsoft/TypeScript/issues/24082">this GitHub issue</a> <code>esnext</code> is meant for all the features that are &quot;on the standard track but not in an official ES spec&quot; (yet). That includes features like dynamic import syntax (<code>import()</code>) which is definitely something you should be able to use because it enables code splitting with Webpack. (Maybe a bit more important for applications than for libraries, but just that you know.)</p>
<h3><code>importHelpers</code></h3>
<p>You can compare <code>importHelpers</code> to Babel&#39;s <code>transform-runtime</code> plugin: Instead of inlining the same helper functions over and over again and making your library bigger and bigger, <code>tsc</code> now injects imports to <code>tslib</code> which contains all these helpers just like <code>@babel/runtime</code>. But this time we will install the production dependency and not leave it to our users:</p>
<p><code>npm i tslib</code></p>
<p>The reason for that is that <code>tsc</code> will not compile without it. <code>importHelpers</code> creates imports in our code and if <code>tsc</code> does not find the module that gets imported it aborts with an error.</p>
<h3>Should you use <code>tsc</code> or Babel for transpiling?</h3>
<p>This is a bit opinion-based. But I think that you are better off with Babel than with <code>tsc</code>.</p>
<p>TypeScript is great and can have many benefits (even if I <strong>personally</strong> think JavaScript as a language is more powerful without it and the hassle you get with TypeScript outweighs its benefits). And if you want, you should use it! But let Babel produce the final JavaScript files that you are going to deliver. Babel allows for a better configuration and is highly optimized for exactly this purpose. TypeScript&#39;s aim is to provide type-safety so you should use it (separately) for that. And there is another issue: Polyfills.</p>
<p>With a good Babel setup you get everything you need for running your code in the target environments. Not with <code>tsc</code>! It is now your task to provide all the polyfills your code needs, and, before that, to figure out which ones those are. Even if you don&#39;t agree with my opinion about the different use-cases of Babel and TypeScript, the polyfill issue alone should be enough to follow me on this.</p>
<p>There is a wonderful blog post about using Babel instead of <code>tsc</code> for transpiling: <a href="https://iamturns.com/typescript-babel/">TypeScript With Babel: A Beautiful Marriage</a>. It also lists the caveats of using Babel for TS: there are four small things that are possible in TypeScript but are not understood correctly by Babel: namespaces (don&#39;t use them, they are outdated), type casting with angle brackets (use <code>as</code> syntax instead), <code>const enum</code> (use normal enums by omitting <code>const</code>) and legacy-style import/export syntax (it&#39;s <strong>legacy</strong>, let it go). I think the only important constraint here is <code>const enum</code>, because using standard enums leads to a little more code in the output. But unless you introduce enums with hundreds and hundreds of members, that problem should be negligible.</p>
<p>Also, it is way faster to just discard all type annotations than to check the types first. This enables, for example, a faster compile cycle in development/watch mode. The example project that I use for this series is maybe too small to serve as a good compile-time benchmark. But in another library project of mine, which consists of ~25 source files and several third-party dependencies, Babel is five times faster than <code>tsc</code>. That is annoying enough when you are coding and have to wait after every save to see the results.</p>
<h3><a name="cmlfwttc-conclusion"></a>Conclusion and final notes for the <code>tsc</code> setup</h3>
<p>(If you really want to use <code>tsc</code> for this task despite the discussion above:)</p>
<p>Install <code>tslib</code>:</p>
<p><code>npm i tslib</code></p>
<p>Make sure your <code>tsconfig.json</code> contains at least the following options:</p>
<pre><code class="language-javascript">{
  &quot;compilerOptions&quot;: {
    &quot;outDir&quot;: &quot;./dist&quot;, // where should tsc put the transpiled files
    &quot;target&quot;: &quot;es2017&quot;, // set of features that we assume our targets can handle themselves
    &quot;module&quot;: &quot;esnext&quot;, // emit ESModules to allow treeshaking
    &quot;moduleResolution&quot;: &quot;node&quot;, // necessary with &#39;module: esnext&#39;
    &quot;importHelpers&quot;: true // use tslib for helper deduplication
  },
  &quot;include&quot;: [&quot;./src/**/*&quot;] // which files to compile
}
</code></pre>
<p>If you are sure you want or need to support older browsers like Android/Samsung 4.4 or Internet Explorer 11 with only one configuration, replace the <code>es2017</code> target with <code>es5</code>. In a future article I will discuss how to create and publish more than one package: One as small as possible for more modern targets and one to support older engines with more helper code and therefore bigger size.</p>
<p>And remember: In this article I talked only about using <code>tsc</code> as transpiler. We will of course use it for type-checking, but this is another chapter.</p>
<p>Next up: Type-Checking and providing type declarations</p>
]]></content:encoded>
      </item>
      <item>
        <title>A simple photo upload app with Node.js</title>
        <link>https://tobias-barth.net/blog/Einfache-Foto-Upload-App-mit-Node-js</link>
        <pubDate>Tue, 30 May 2017 19:08:00 GMT</pubDate>
        
        <description>We set up a Wi-Fi router that everyone connects to. It does not need an internet uplink. The course leader starts a Node.js web server on her MacBook that serves a page with a web form whose only job is to allow a file upload. The server stores the uploaded files on the laptop, and that's it.</description>
        <content:encoded><![CDATA[<p>For a project with children, my wife needed a simple way to get photos from smartphones onto her laptop. There are many ways to do that: Bluetooth, USB, swapping SD cards, and of course the usual internet solutions such as Dropbox.</p>
<p>With 20 ten- to fourteen-year-olds, one-to-one connections via Bluetooth or cable are not exactly practical. Internet-based solutions were not the right fit either, for various reasons. So I offered to build a small home-grown solution.</p>
<p>The concept: we set up a Wi-Fi router that everyone connects to. It does not need an internet uplink. The course leader starts a Node.js web server on her MacBook that serves a page with a web form whose only job is to allow a file upload. The server stores the uploaded files on the laptop, and that&#39;s it.</p>
<h3>Step 0: Init</h3>
<pre><code>$ mkdir -p upload-app/server &amp;&amp; cd upload-app
</code></pre>
<p>We initialize the project and install the first package:</p>
<pre><code>$ yarn init -y
$ yarn add express
</code></pre>
<h3>Step 1: The server</h3>
<p>First we create the file <code>server/index.js</code>:</p>
<pre><code class="language-javascript">// server/index.js
const express = require(&#39;express&#39;)
const path = require(&#39;path&#39;)
const app = express()

// For CSS and, if needed, JS
app.use(express.static(path.join(__dirname, &#39;static&#39;)))

app.get(&#39;/&#39;, (req, res) =&gt; {
  res.send(&#39;Hello\n&#39;)
})

app.listen(3000, err =&gt; {
  if (err) throw err
  console.log(&#39;Listening on port 3000&#39;)
})
</code></pre>
<p>Very nice. To make our developer lives easier, we install <code>nodemon</code> and then start the server via an npm script:</p>
<pre><code>$ yarn add -D nodemon
</code></pre>
<pre><code class="language-json">// package.json
{
  &quot;name&quot;: &quot;upload-app&quot;,
  &quot;version&quot;: &quot;1.0.0&quot;,
  &quot;scripts&quot;: {
    &quot;start&quot;: &quot;nodemon ./server/index.js&quot;
  },
  …
}
</code></pre>
<pre><code>$ npm start
Listening on port 3000
</code></pre>
<h3>Step 2: The start page</h3>
<p>So far our server only responds with the string <code>&#39;Hello&#39;</code>. Let&#39;s change that with a fancy low-budget template. We create the file <code>server/templates.js</code> with the following content:</p>
<pre><code class="language-javascript">// server/templates.js
const pageHeader = `&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset=&quot;utf-8&quot; /&gt;
    &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width&quot; /&gt;
    &lt;title&gt;Du und die Kamera – KKS&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;div class=&quot;wrapper&quot;&gt;
`

const pageFooter = `    &lt;/div&gt;
    &lt;script src=&quot;/upload.js&quot;&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;
`

const homepage = () =&gt; `${pageHeader}
      &lt;form action=&quot;/upload&quot; method=&quot;post&quot; enctype=&quot;multipart/form-data&quot;&gt;
        &lt;h1&gt;Du und die Kamera&lt;/h1&gt;
        &lt;label for=&quot;upload&quot;&gt;Bild auswählen&lt;/label&gt;
        &lt;input id=&quot;upload&quot; type=&quot;file&quot; name=&quot;datei&quot; accept=&quot;image/*&quot; /&gt;
        &lt;button type=&quot;submit&quot;&gt;Hochladen&lt;/button&gt;
      &lt;/form&gt;
${pageFooter}
`

module.exports = {
  homepage,
}
</code></pre>
<p>So we export a function that returns a template literal. We could have inlined the header and footer directly, but we will need them again later for a second page.</p>
<p>In <code>server/index.js</code> we now use the <code>homepage</code> function:</p>
<pre><code class="language-javascript">// server/index.js
const { homepage } = require(&#39;./templates.js&#39;)

app.get(&#39;/&#39;, (req, res) =&gt; {
  res.send(homepage())
})
</code></pre>
<p>We now have a form with a file-upload input on our website. It still looks a bit unspectacular. Let&#39;s throw some CSS at it! We create the file <code>server/static/style.css</code>:</p>
<pre><code class="language-css">/* server/static/style.css */
* {
  font-family: sans-serif;
  box-sizing: border-box;
}

img {
  max-width: 100%;
}

.wrapper &gt; form {
  display: flex;
  width: 20em;
  max-width: 100%;
  margin-left: auto;
  margin-right: auto;
  justify-content: center;
  flex-wrap: wrap;
}

#upload {
  display: none;
}

[for=&#39;upload&#39;] {
  display: block;
  width: 18em;
  padding: 1em;
  margin-top: 1em;
  margin-bottom: 2em;
  border-radius: 0.2em;
  text-align: center;
  color: white;
  background-color: deepskyblue;
  font-weight: bold;
}
</code></pre>
<p>Better. <em>Note:</em> I hide the actual input element here and use the fact that you can also click the associated label element to open the file picker dialog. I borrowed this idea from MDN: <a href="https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications#Using_a_label_element_to_trigger_a_hidden_file_input_element">Using a label element to trigger a hidden file input element</a>. The advantage is that you get rid of the ugly browser-styled file input and can simply style the label instead.</p>
<h3>Step 3: Accepting file uploads</h3>
<p>Now we should make sure that the <code>/upload</code> path is defined in the Express app and that the uploaded files are stored.</p>
<p>We use <code>multiparty</code> for that:</p>
<pre><code>$ yarn add multiparty
</code></pre>
<p>and first adapt our server:</p>
<pre><code class="language-javascript">// in server/index.js
const handleUpload = require(&#39;./handleUpload.js&#39;)

app.post(&#39;/upload&#39;, handleUpload)
</code></pre>
<p>Of course we also have to write the file <code>server/handleUpload.js</code>:</p>
<pre><code class="language-javascript">// server/handleUpload.js
const Form = require(&#39;multiparty&#39;).Form
const { uploadOptions } = require(&#39;./config.js&#39;)
const { successPage } = require(&#39;./templates.js&#39;)

module.exports = function handleUpload(req, res) {
  const form = new Form(uploadOptions)
  form.on(&#39;file&#39;, (name, file) =&gt; {
    req.filename = file.originalFilename
  })
  form.on(&#39;error&#39;, () =&gt; {})
  form.on(&#39;close&#39;, () =&gt; res.send(successPage(req.filename)))
  form.parse(req)
}
</code></pre>
<p>Instances of <code>multiparty.Form</code> are event emitters. <code>.parse</code> processes the request from the browser that contains the uploaded file. The <code>file</code> event is emitted when a file from the request has been fully processed. We use this opportunity to stash the original file name on the request object.</p>
<p>The multiparty documentation strongly advises registering an error listener even if you do nothing with the error. Otherwise the app crashes on anything multiparty considers an error.</p>
<p>Finally, we send a success message back to the browser once the request has been fully processed. This <code>successPage</code> is again a template function that we define in <code>server/templates.js</code>, just like the homepage:</p>
<pre><code class="language-javascript">// in server/templates.js

const successPage = filename =&gt; `${pageHeader}
      &lt;p&gt;Du hast &quot;${filename}&quot; erfolgreich hochgeladen.&lt;/p&gt;
${pageFooter}
`

module.exports = {
  homepage,
  successPage,
}
</code></pre>
<p>Here we use the file name stored on the request object to show the user in the browser what she has uploaded.</p>
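<p>One caveat (my addition, not part of the original app): the file name comes straight from the client, so interpolating it into HTML unescaped is a small injection risk. A minimal escaping helper could look like this; the name <code>escapeHtml</code> is hypothetical:</p>

```javascript
// escapeHtml.js – a minimal sketch for escaping untrusted strings
// before interpolating them into HTML (order matters: '&' first).
const escapeHtml = str =>
  String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')

module.exports = escapeHtml
```

<p>In <code>successPage</code> one would then write <code>${escapeHtml(filename)}</code> instead of interpolating <code>filename</code> directly.</p>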
<p>Did everyone notice that there is one more detail in <code>handleUpload.js</code> to discuss? <code>config.js</code>, exactly.</p>
<p>For convenience we create the file <code>server/config.js</code> and export a few things from there:</p>
<pre><code class="language-javascript">// server/config.js
const path = require(&#39;path&#39;)

const uploadDir = path.join(process.cwd(), &#39;upload-app&#39;)

module.exports = {
  port: process.env.NODE_ENV === &#39;production&#39; ? 80 : 3000,
  uploadDir,
  uploadOptions: {
    uploadDir,
  },
}
</code></pre>
<p>So far we only pass a single option to <code>multiparty</code>, but that may change some day, and then we already have a whole config object for it. We also want to be able to use the standard HTTP port instead of 3000 in production mode. We export <code>uploadDir</code> as well, so we can create it at app start and display it together with the &quot;Listening …&quot; message. That way, whoever starts the server immediately knows where to look for the uploaded files. We have set it up so that this directory is created inside the folder from which the app is started. Our server uses all of this like so:</p>
<pre><code class="language-javascript">// in server/index.js
const fs = require(&#39;fs&#39;)
const { port, uploadDir } = require(&#39;./config.js&#39;)

try {
  fs.mkdirSync(uploadDir)
} catch (e) {
  // If the directory already exists, we simply carry on.
  // On any other error we let the app crash.
  if (e.code !== &#39;EEXIST&#39;) throw e
}
app.listen(port, err =&gt; {
  if (err) throw err
  console.log(`
Listening on port ${port}
Using directory &quot;${uploadDir}&quot; for uploads
  `)
})
</code></pre>
<p>With that we are basically done. We have a website, you can upload an image file there, it ends up in our upload folder, and you get a confirmation after uploading.</p>
<h3>Step 4: We can do better</h3>
<p>One question remains, though: how does everyone involved know what to type into the address bar of their smartphone browser to get to our fancy upload app?</p>
<p>By default, Express listens on all assigned IP addresses. If we knew which IP the computer running the server got from the Wi-Fi access point, we could tell everyone. But we don&#39;t want to force the course leader to dig through the network settings somewhere in the depths of her computer. It would already be much nicer if we simply printed the correct (i.e. reachable) IP in the terminal right at app start. Then at least the course leader could tell everyone the address.</p>
<p>Ok, let&#39;s go. We need an IP address that is not internal (internal would be e.g. 127.0.0.1, which always points to the machine it is requested on). Then we make sure the server listens on that address and display it in the terminal:</p>
<pre><code class="language-javascript">// in server/config.js
const os = require(&#39;os&#39;)

const getMachineIp = () =&gt; {
  const ifs = os.networkInterfaces()
  const firstExternal = Object.keys(ifs)
    // Accumulate all address objects from all interfaces in one array
    .reduce((flattened, iface) =&gt; [...flattened, ...ifs[iface]], [])
    // Only external addresses
    .filter(address =&gt; !address.internal)
    // Only v4 addresses (easier to type in a browser)
    // And take only the first one
    .filter(address =&gt; address.family === &#39;IPv4&#39;)[0]
  // Avoid a crash when no suitable address exists; the server checks for this
  return firstExternal &amp;&amp; firstExternal.address
}

module.exports = {
  port: process.env.NODE_ENV === &#39;production&#39; ? 80 : 3000,
  uploadDir,
  uploadOptions: {
    uploadDir,
  },
  serverIp: getMachineIp(),
}
</code></pre>
<p>I recommend that everyone start the Node.js REPL on their own machine and call <code>os.networkInterfaces()</code> there to get an impression of the kind of data we are working with.</p>
<p>Next we wire this into the server:</p>
<pre><code class="language-javascript">// in server/index.js
const { port, serverIp, uploadDir } = require(&#39;./config.js&#39;);

if (serverIp) {
  try {
    fs.mkdirSync(uploadDir);
  } catch (e) {
    if (e.code !== &#39;EEXIST&#39;) throw e;
  }
  app.listen(port, serverIp, (err) =&gt; {
    if (err) throw err;
    console.log(`
Listening on ${serverIp}:${port}
Using directory &quot;${uploadDir}&quot; for uploads
    `);
  });
} else {
  throw Error(&#39;No public v4 IP found!&#39;);
}
</code></pre>
<p>We can no longer open our website via <code>localhost:3000</code>, but that would only have worked on the computer running the server anyway. Far more important is that the kids can reach it from outside.</p>
<p>Now the course instructor could look up the correct IP address and, say, write it on the blackboard. Or type it into a document and project it onto the wall. Wait. A projector? A document? We can just as well display the IP directly on the upload page. All we have to do is pass it to our template function:</p>
<pre><code class="language-javascript">// in server/templates.js
const homepage = ({ address, listenPort }) =&gt; `${pageHeader}
      &lt;form action=&quot;/upload&quot; method=&quot;post&quot; enctype=&quot;multipart/form-data&quot;&gt;
        &lt;h1&gt;Du und die Kamera&lt;/h1&gt;
        &lt;p&gt;Die Adresse ist: ${address}${
  listenPort === 80 ? &#39;&#39; : `:${listenPort}`
}&lt;/p&gt;
        &lt;label for=&quot;upload&quot;&gt;Bild auswählen&lt;/label&gt;
        &lt;input id=&quot;upload&quot; type=&quot;file&quot; name=&quot;datei&quot; accept=&quot;image/*&quot; /&gt;
        &lt;button type=&quot;submit&quot;&gt;Hochladen&lt;/button&gt;
      &lt;/form&gt;
${pageFooter}
`
</code></pre>
<pre><code class="language-javascript">// in server/index.js

const homepageOpts = {
  address: serverIp,
  listenPort: port,
}

app.get(&#39;/&#39;, (req, res) =&gt; {
  res.send(homepage(homepageOpts))
})
</code></pre>
<h3>Step 5: Even better</h3>
<p>IP addresses are still pretty annoying to type, especially on a smartphone. So let&#39;s make one more improvement: a QR code!</p>
<pre><code>$ yarn add qrcode
</code></pre>
<p>On app start we generate, once, a URL from the IP address and the port, and encode it as a QR code in the form of an image data URI. That saves us storing, reading and cleaning up a real image file. The data URI is created at app start and kept in RAM until the process exits. And we can simply pass it along to the template function on every request, which injects it directly into the HTML. Should the generation fail for whatever reason, we don&#39;t care: the IP is still displayed as a fallback.</p>
<pre><code class="language-javascript">// in server/index.js
const qrcode = require(&#39;qrcode&#39;)

app.get(&#39;/&#39;, (req, res) =&gt; {
  homepageOpts.qr = app.locals.qr
  res.send(homepage(homepageOpts))
})

if (serverIp) {
  try {
    fs.mkdirSync(uploadDir)
  } catch (e) {
    if (e.code !== &#39;EEXIST&#39;) throw e
  }
  const location = `${serverIp}${port === 80 ? &#39;&#39; : `:${port}`}`
  qrcode.toDataURL(`http://${location}`, { scale: 8 }, (error, uri) =&gt; {
    app.locals.qr = uri
    app.listen(port, serverIp, err =&gt; {
      if (err) throw err
      // eslint-disable-next-line no-console
      console.log(`
  Listening on ${serverIp}:${port}
  Using directory &quot;${uploadDir}&quot; for uploads
      `)
    })
  })
} else {
  throw Error(&#39;No public v4 IP found!&#39;)
}
</code></pre>
<pre><code class="language-javascript">// in server/templates.js
const homepage = ({ address, listenPort, qr }) =&gt; `${pageHeader}
        &lt;form action=&quot;/upload&quot; method=&quot;post&quot; enctype=&quot;multipart/form-data&quot;&gt;
          &lt;h1&gt;Du und die Kamera&lt;/h1&gt;
          &lt;p&gt;Die Adresse ist: ${address}${
  listenPort === 80 ? &#39;&#39; : `:${listenPort}`
}&lt;/p&gt;
          ${qr ? `&lt;p&gt;&lt;img src=&quot;${qr}&quot; /&gt;&lt;/p&gt;` : &#39;&#39;}
          &lt;label for=&quot;upload&quot;&gt;Bild auswählen&lt;/label&gt;
          &lt;input id=&quot;upload&quot; type=&quot;file&quot; name=&quot;datei&quot; accept=&quot;image/*&quot; /&gt;
          &lt;button type=&quot;submit&quot;&gt;Hochladen&lt;/button&gt;
        &lt;/form&gt;
  ${pageFooter}
  `
</code></pre>
<p>So, now we are really done. And, by the way, entirely without client-side JavaScript. We needed just two dependencies: <code>express</code> and <code>multiparty</code>. A third one, <code>qrcode</code>, to raise the convenience factor a bit more. The whole thing runs on Node.js 6.9.5, without Babel, without Webpack, without React, even without any external templating engine.</p>
<p>What would still be nice, since we did break the file input element, is to display the image selected for upload with a bit of browser JavaScript. Either as a plain file name or even as a thumbnail. There are ideas for that <a href="https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications">here</a>.</p>
<p>Also, we would rather not force the course instructor to install Node.js, run <code>npm install</code> and so on. So a packager like <code>pkg</code> by Zeit comes in handy.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Execute Promise-based code in order over an array</title>
        <link>https://tobias-barth.net/blog/Execute-Promise-based-code-in-order-over-an-array</link>
        <pubDate>Thu, 18 Apr 2019 14:47:53 GMT</pubDate>
        
        <description>I had a list of input data and wanted to execute a function for every item in that list. What I need is a way to traverse the array, execute the function for the current element, wait until the Promise resolves and only then go to the next element and call the function with it.</description>
        <content:encoded><![CDATA[<h3>The problem</h3>
<p>I recently faced a problem: I had a list (an array) of input data and wanted to execute a function for every item in that list.</p>
<p>No problem, you say, take <code>Array.prototype.map</code>, that&#39;s what it&#39;s for. <strong>BUT</strong> the function in question returns a Promise and I want to be able to only continue in the program flow when all of these Promises are resolved.</p>
<p>No problem, you say, wrap it in <code>Promise.all</code>, that&#39;s what it&#39;s for. <strong>BUT</strong> the function in question is very expensive. So expensive that it spawns a child process (the whole code runs in NodeJS on my computer) and that child process is using so much CPU power that my computer comes to a grinding halt when my input list is longer than a few elements.</p>
<p>And that&#39;s because, effectively, all the heavy child processes get started in near parallel. Actually they get started in order, but the next one does not wait for the previous one to finish.</p>
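<p>A tiny sketch makes this visible; the delayed <code>fakeHeavy</code> is just a stand-in for a real child process. With <code>map</code> plus <code>Promise.all</code>, every call has already started before the first one finishes:</p>
<pre><code class="language-javascript">const started = []
const fakeHeavy = x =&gt; {
  started.push(x) // the work starts right here, at call time
  return new Promise(resolve =&gt; setTimeout(() =&gt; resolve(x * 2), 50))
}

Promise.all([1, 2, 3].map(fakeHeavy)).then(results =&gt; {
  console.log(started) // [ 1, 2, 3 ] – all three started before any one resolved
  console.log(results) // [ 2, 4, 6 ]
})
</code></pre>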
<h3>The first solution</h3>
<p>So what I need is a way to traverse the array, execute the function for the current element, <em>wait</em> until the Promise resolves and <em>only then</em> go to the next element and call the function with it. That means <code>map</code> will not work because I have no control over the execution flow. So I will have to build my own <code>map</code>. And while I am at it, I will implement it a bit more nicely, as a stand-alone function that takes the mapper function first and then the data array:</p>
<pre><code class="language-javascript">const sequentialMap = fn =&gt;
  function innerSequentialMap([head, ...tail]) {
    if (!head) {
      return Promise.resolve([])
    }
    return fn(head).then(headResult =&gt;
      innerSequentialMap(tail).then(tailResult =&gt; [headResult, ...tailResult])
    )
  }
</code></pre>
<p>So, what does this do? It takes the function <code>fn</code> that should be applied to all values in the array and returns a new function. This new function expects an array as input. You see that the function is curried in that it only ever takes one argument, and the real execution starts once all arguments are provided. That allows us, for example, to &quot;preload&quot; <code>sequentialMap</code> with a mapper function and reuse it on different input data:</p>
<pre><code class="language-javascript">// preloading
const mapWithHeavyComputations = sequentialMap(heavyAsyncComputation)

// execution
const result = mapWithHeavyComputations([…])
</code></pre>
<p>But in this case the currying enables (or simplifies) another technique: recursion.</p>
<p>We say a function is recursive when it calls itself. Recursion is the functional equivalent of looping in imperative programming. You can refactor one into the other as long as the programming language allows both. Or so I thought.</p>
<p>I used a recursive function here because I could not think of a way to wait for a Promise resolving in a loop. How would I use <code>.then()</code> and jump to the next iteration step <em>within</em> that <code>then</code>?</p>
<p>Anyway, let&#39;s go further through the code. In the body of the inner function, I first define a condition to terminate the recursion: I check whether the first element is falsy, and if it is, I just return a Promise that resolves to an empty array. That is because the main path of the function returns its data as an array wrapped in a Promise; if we return the same type of data when we terminate, everything fits together nicely.</p>
<p>Next, if we don&#39;t terminate (which means the first element of the given list is truthy), we apply the mapper function to it. That returns a Promise, and we wait for it to resolve with <code>.then</code>. Once it resolves, the whole thing gets a bit magical, but not too much.</p>
<p>What we do then is to build a nested Promise. Normally, when you work with Promises and want to apply several functions to the inner values you would build a &quot;Promise chain&quot;:</p>
<pre><code class="language-javascript">const result = firstPromise
  .then(doSomethingWithIt)
  .then(doSomethingElseAfterThat)
  …
</code></pre>
<p>The problem we have here is that to build the final result (the mapped array), we need the result of the first resolved Promise and also the result values of all the other Promises, which are not computed <em>from</em> each other but <em>independently</em>.</p>
<p>So we use two features to solve that: nested scope and Promise-flattening (did someone say Monad?).</p>
<p>For the nested scope first: When we define a function within a function then the inner function can access variables that are defined not within itself but in the outer function (the outer or surrounding scope):</p>
<pre><code class="language-javascript">function outer(arg1) {
  const outerValue = arg1 + 42

  function inner() {
    return outerValue + 23
  }

  console.log(inner())
}

outer(666) // logs 731
</code></pre>
<p>And Promise-flattening means essentially that if you have a Promise of a Promise of a value that is the same as if you just had a Promise of the value.</p>
<pre><code class="language-javascript">const p2 = Promise.resolve(Promise.resolve(1))
const p1 = Promise.resolve(1)

p2.then(console.log) // logs 1
p1.then(console.log) // logs 1
</code></pre>
<p>To recall, here is what the code we are talking about looks like:</p>
<pre><code class="language-javascript">return fn(head).then(headResult =&gt;
  innerSequentialMap(tail).then(tailResult =&gt; [headResult, ...tailResult])
)
</code></pre>
<p>We keep the <code>headResult</code> in scope and then we generate the next Promise by calling the inner function recursively again but with a shorter list without the first element. We wait again with <code>.then</code> for the final result and only then we build our result array.</p>
<p>This is done by spreading the <code>tailResult</code> after the <code>headResult</code>: We know we get one value from calling <code>fn(head)</code> but we get a list of values from calling <code>sequentialMapInternal(tail)</code>. So with the spread operator we get a nice flat array of result values.</p>
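<p>The spread step in isolation, with made-up values for illustration:</p>
<pre><code class="language-javascript">const headResult = 2
const tailResult = [4, 6]
console.log([headResult, ...tailResult]) // [ 2, 4, 6 ]
</code></pre>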
<p>Note that the function inside the first <code>then</code>, which gets <code>headResult</code> as a parameter, immediately returns the next Promise(-chain). And that is essentially where we use Promise-flattening: <code>.then</code> itself returns a Promise, and now we are returning a Promise inside of that. But the result will look like an ordinary Promise – no nesting visible.</p>
<h3>The better way</h3>
<p>While that works perfectly and my computer remains usable also when I call my script now, all these nested <code>then</code>s do not look so nice. We can fix that when we have async functions at our disposal:</p>
<pre><code class="language-javascript">const sequentialMap = fn =&gt;
  async function innerSequentialMap([head, ...tail]) {
    if (!head) {
      return Promise.resolve([])
    }
    const headResult = await fn(head)
    const tailResult = await innerSequentialMap(tail)
    return [headResult, ...tailResult]
  }
</code></pre>
<p>Yes, that is much better. Now the execution is paused until <code>headResult</code> is there, then paused again until <code>tailResult</code> is there, and only then do we build our result array and are finished.</p>
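<p>To see the difference to the <code>Promise.all</code> approach, here is a small usage sketch (the definition is repeated to make it self-contained, and the delayed <code>slowDouble</code> stands in for the expensive function): each element is only processed after the previous one has resolved:</p>
<pre><code class="language-javascript">const sequentialMap = fn =&gt;
  async function innerSequentialMap([head, ...tail]) {
    if (!head) {
      return []
    }
    const headResult = await fn(head)
    const tailResult = await innerSequentialMap(tail)
    return [headResult, ...tailResult]
  }

const order = []
const slowDouble = x =&gt;
  new Promise(resolve =&gt;
    setTimeout(() =&gt; {
      order.push(x) // runs only when this element is actually processed
      resolve(x * 2)
    }, 10)
  )

sequentialMap(slowDouble)([1, 2, 3]).then(result =&gt; {
  console.log(result) // [ 2, 4, 6 ]
  console.log(order) // [ 1, 2, 3 ] – strictly one after the other
})
</code></pre>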
<h3>The shortest way</h3>
<p>Wait. Did I just say I can pause execution with <code>await</code>? Wouldn&#39;t this work also within a loop?</p>
<pre><code class="language-javascript">const loopVersion = fn =&gt; async list =&gt; {
  const result = []
  for (const elem of list) {
    result.push(await fn(elem))
  }
  return result
}
</code></pre>
<p>See, this is what happens to people like me who are too deep into functional programming paradigms. Yes, you should generally avoid loops because they are not declarative: you end up telling the machine (and your coworkers) not <em>what</em> you want to happen but <em>how</em> you want it to happen. That is, generally speaking, not good practice. But in this case it is exactly what we wanted: to give a step-by-step schema for how to execute our code, to optimize for resource usage.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Ezjail und IPv6 alias Adressen</title>
        <link>https://tobias-barth.net/blog/Ezjail-und-IPv6-alias-Adressen</link>
        <pubDate>Sat, 01 Jul 2017 11:39:12 GMT</pubDate>
        
        <description>Ich habe vor Kurzem begonnen Jails in FreeBSD zu nutzen und auszuprobieren. Mein Digitalocean-Droplet läuft unter FreeBSD 11 und DO gibt mir eine public v4 IP-Adresse und ein /124 v6 Präfix. Für meinen privaten Kram interessiert mich IPv4 nicht, daher habe ich 16 öffentliche IPs, mit denen ich spielen kann.</description>
        <content:encoded><![CDATA[<p>Ich habe vor Kurzem begonnen Jails in FreeBSD zu nutzen und auszuprobieren. Mein Digitalocean-Droplet läuft unter FreeBSD 11 und DO gibt mir eine public v4 IP-Adresse und ein <code>/124</code> v6 Präfix. Für meinen privaten Kram interessiert mich IPv4 nicht, daher habe ich 16 öffentliche IPs, mit denen ich spielen kann.</p>
<p>That means I don&#39;t need NAT; I can assign public IPs directly to the jails, adding them as aliases on the interface.</p>
<p>So far I have set up two jails with <code>ezjail</code>: <code>git</code> and <code>backup</code>. DO automatically assigns the v6 IP ending in <code>1</code> to the interface. Accordingly, my <code>rc.conf</code> contains this for the two jail IPs:</p>
<pre><code>ifconfig_vtnet0_alias1=&quot;inet6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxx2 prefixlen 64&quot;
ifconfig_vtnet0_alias2=&quot;inet6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxx3 prefixlen 64&quot;
cloned_interfaces=&quot;lo1&quot;
ezjail_enable=&quot;YES&quot;
</code></pre>
<p>Today I had to power cycle the droplet, and on boot only the <code>git</code> jail came up. I tried to start the second one myself:</p>
<pre><code># ezjail-admin start backup
</code></pre>
<p>That printed this error:</p>
<pre><code>ifconfig: ioctl (SIOCAIFADDR): Invalid argument)
</code></pre>
<p>At first I didn&#39;t know what to make of it. Then I noticed that the line above it showed the command that had been attempted:</p>
<pre><code>/usr/sbin/ifconfig vtnet0 inet6 &lt;adresse&gt;/128 alias
</code></pre>
<p>Oops. <code>ezjail</code> tried to create an alias itself even though one already existed. I then tested the <code>ifconfig</code> command by hand, and indeed it fails.</p>
<p>Which makes sense: an alias with this IP was already defined <em>with prefix length 64</em>. Here an alias for the same IP was being created, <em>but with prefix length 128</em>, which of course cannot work.</p>
<p>A look at the <code>ezjail</code> config files for the two jails shows that I configured the IP for the <code>git</code> jail like this:</p>
<pre><code>export jail_git_ip=&quot;xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxx2&quot;
</code></pre>
<p>For the <code>backup</code> jail, however, I needed a loopback interface because of a service inside it, which I created as <code>lo1</code> in the <code>rc.conf</code> above. So I have to hand the jail two interfaces with IPs, and I do that like this:</p>
<pre><code>export jail_backup_ip=&quot;lo1|127.0.0.3,vtnet0|xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxx3&quot;
</code></pre>
<p>Apparently the missing prefix length here causes <code>ezjail</code> (or <code>ifconfig</code>) to set it to 128. So I changed the line to:</p>
<pre><code>export jail_backup_ip=&quot;lo1|127.0.0.3,vtnet0|xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxx3/64&quot;
</code></pre>
<p>And now it runs.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Fix speed issue when writing to NAS system</title>
        <link>https://tobias-barth.net/blog/Fix-speed-issue-when-writing-to-NAS-system</link>
        <pubDate>Fri, 03 May 2019 11:36:54 GMT</pubDate>
        
        <description>I just fixed an issue with my FreeBSD home server. It is set up as a file server for Mac (AFP) and Linux Clients (NFS). My local network is Gigabit-based so the limitating factor on read/write speeds should be the hard disk drives in the server. But lately, it dropped amazingly to ~1MB/sec.</description>
        <content:encoded><![CDATA[<p>I just fixed an issue with my FreeBSD home server. It is set up as a file server for Mac (AFP) and Linux Clients (NFS). My local network is Gigabit-based so the limitating factor on read/write speeds should be the hard disk drives in the server.</p>
<p>The server has a Core i3-6100T CPU @ 3.20GHz, 8GB RAM, a ZFS setup with two mirror vdevs each consisting of two disks connected to the board via SATA3. And of course the onboard Gbit NIC (Realtek).</p>
<p>I know very well that write speed used to be around 50–60MB/sec, which is what I would expect. But lately it dropped, amazingly, to ~1MB/sec. And I just couldn&#39;t think of why. I suspected the cable, AFP, the RAM, anything.</p>
<p>What I didn&#39;t suspect – until today, that is – was the network interface. But I had time for some googling today, and even though I didn&#39;t find the solution directly, I stumbled across something related to the output of <code>ifconfig</code>. So I hacked that into the console and stared at it.</p>
<pre><code>re0: flags=8843&lt;UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST&gt; metric 0 mtu 1500
        options=8209b&lt;RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE&gt;
        ether 4c:cc:6a:b3:3c:f5
        hwaddr 4c:cc:6a:b3:3c:f5
        inet6 fd23:16:7:7::1 prefixlen 64
        inet6 fe80::4ecc:6aff:feb3:3cf5%re0 prefixlen 64 scopeid 0x1
        inet 192.168.10.118 netmask 0xffffff00 broadcast 192.168.10.255
        nd6 options=21&lt;PERFORMNUD,AUTO_LINKLOCAL&gt;
        media: Ethernet autoselect (10baseT/UTP &lt;full-duplex&gt;)
        status: active
</code></pre>
<p>Do you spot it?</p>
<pre><code>        media: Ethernet autoselect (10baseT/UTP &lt;full-duplex&gt;)
</code></pre>
<p>Well, that is … unfortunate. The output of <code>ifconfig -m re0</code> gave me:</p>
<pre><code>	supported media:
			media autoselect mediaopt flowcontrol
			media autoselect
			media 1000baseT mediaopt full-duplex,flowcontrol,master
			media 1000baseT mediaopt full-duplex,flowcontrol
			media 1000baseT mediaopt full-duplex,master
			media 1000baseT mediaopt full-duplex
			media 100baseTX mediaopt full-duplex,flowcontrol
			media 100baseTX mediaopt full-duplex
			media 100baseTX
			media 10baseT/UTP mediaopt full-duplex,flowcontrol
			media 10baseT/UTP mediaopt full-duplex
			media 10baseT/UTP
			media none
</code></pre>
<p>So I ran <code>sudo ifconfig re0 media 1000baseTX mediaopt full-duplex</code> and it worked. After that I also ran <code>sudo ifconfig re0 media autoselect</code>, which likewise set the media type to 1000baseT full-duplex. I have no idea why (or when) the system got that wrong, but I will monitor what happens after the next reboot. Maybe I have to add some configuration, but maybe it was just a hiccup.</p>
<p>Speeds are up to 60MB/sec again.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Handle lid closing correctly in XFCE power settings</title>
        <link>https://tobias-barth.net/blog/Handle-lid-closing-correctly-in-XFCE-power-settings</link>
        <pubDate>Tue, 21 May 2019 15:55:25 GMT</pubDate>
        
        <description>I always had problems with the power management settings on my laptop. It's running Manjaro Linux (Arch derivate). Regardless of what I set in the XFCE power settings, the actions that should happen on lid closing didn't work as expected.</description>
        <content:encoded><![CDATA[<p>This is mainly just a note for my future self. I always had problems with the power management settings on my laptop. It&#39;s running Manjaro Linux (Arch derivate). Regardless of what I set in the XFCE power settings, the actions that should happen on lid closing didn&#39;t work as expected. I wanted that the machine does a suspend-to-RAM when I close the lid and the power cable is plugged in. And when it is not plugged in I wanted the machine to suspend-to-disk (hibernate).</p>
<p>At some point I just disabled everything in <code>/etc/systemd/logind.conf</code> (set it to ignore lid actions) and lived with it.</p>
<p>Today, while Googling™, I came across two things. First, a mention of the file <code>~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-power-manager.xml</code>, where all the settings you can change in the graphical power settings tool are saved as XML. Second, a forum post (<a href="https://bbs.archlinux.org/viewtopic.php?pid=1690134#p1690134">https://bbs.archlinux.org/viewtopic.php?pid=1690134#p1690134</a>) pointing out that this XML file contains a setting you can&#39;t set graphically: &quot;logind-handle-lid-switch&quot;. Which is set to <code>true</code> for reasons that are beyond me.</p>
<p>Probably you can do all sorts of things with <code>acpid</code> and/or systemd to control the actions on lid-close and lid-open. But you can also just issue:</p>
<pre><code class="language-bash">xfconf-query -c xfce4-power-manager -p /xfce4-power-manager/logind-handle-lid-switch -s false
</code></pre>
<p>on the shell and then your settings in XFCEs power settings are used by the system and work. Of course I also set the content in <code>logind.conf</code> back to default.</p>
]]></content:encoded>
      </item>
      <item>
        <title>How to bundle your library and why</title>
        <link>https://tobias-barth.net/blog/How-to-bundle-your-library-and-why</link>
        <pubDate>Mon, 18 Nov 2019 21:58:14 GMT</pubDate>
        
        <description>Part 6 of the series "Publish a modern JavaScript (or TypeScript) library". At this point in our setup we deliver our library as separate modules. ES Modules to be exact. Let's discuss what we achieve with that and what could be missing.</description>
        <content:encoded><![CDATA[<h3>Preface</h3>
<p>This article is part 6 of the series &quot;Publish a modern JavaScript (or TypeScript) library&quot;. Check out the motivation and links to other parts <a href="http://tobias-barth.net/blog/Publish-a-modern-JavaScript-or-TypeScript-library/">in the introduction</a>.</p>
<h3>Publishing formats – do you even need a bundle?</h3>
<p>At this point in our setup we deliver our library as separate modules. ES Modules to be exact. Let&#39;s discuss what we achieve with that and what could be missing.</p>
<p>Remember, we are publishing a library to be used within other applications. Depending on your concrete use case the library will be used in web applications in browsers or in Node.js applications on servers or locally.</p>
<h4>Web applications (I)</h4>
<p>In the case of web applications we can assume that they will get bundled with any of the current solutions, Webpack for example. These bundlers understand ES Module syntax, and since we deliver our code in several modules, the bundler can optimize which code needs to be included and which doesn&#39;t (tree-shaking). In other words, for this use case we already have everything we need. In fact, bundling our modules together into one blob could defeat our goal of letting end users end up with only the code they need: the application&#39;s bundler might no longer be able to tell which parts of the library code are actually used.</p>
<p><strong>Conclusion: No bundle needed.</strong></p>
<h4>Node.js applications</h4>
<p>What about Node.js? Typically, Node.js applications consist of several independent files; source files and their dependencies (<code>node_modules</code>). The modules will get imported during runtime when they are needed. But does it work with ES Modules? Sort of.</p>
<p>Node.js v12 has <a href="https://nodejs.org/dist/latest-v12.x/docs/api/esm.html">experimental support for ES Modules</a>. &quot;Experimental&quot; means we must &quot;expect major changes in the implementation including interoperability support, specifier resolution, and default behavior.&quot; But yes, it works and it will work even better and smoother in future versions.</p>
<p>Since Node.js has to support CommonJS modules for the time being and since the two module types are not 100% compatible, there are a few things we have to respect if we want to support both ways of usage. First of all, things <strong>will</strong> change. The Node.js team even <a href="https://medium.com/@Node.js/announcing-a-new-experimental-modules-1be8d2d6c2ff">warns</a> to &quot;publish any ES module packages intended for use by Node.js until [handling of packages that support CJS and ESM] is resolved.&quot;</p>
<blockquote>
<p>That means, if your library is intended <em>only</em> for Node.js (so no browser optimization necessary), you don&#39;t want to rely on your library&#39;s users reading your installation notes (they will have to know <em>what</em> to import/require) or you are not interested in investing in features that are only <em>almost</em> there: Please just don&#39;t publish ES Modules. Change the <a href="https://dev.to/4nduril/transpile-modern-language-features-with-babel-4fcp">configuration</a> of Babel&#39;s <code>env</code> preset to <code>{ modules: &#39;commonjs&#39; }</code> and ship only CommonJS modules.</p>
</blockquote>
<p>But with a bit of work we can make sure everything will be fine. For now, ESM support is behind a flag (<code>--experimental-modules</code>). When the implementation changes, I will update this post as soon as possible.</p>
<p>Node.js uses a combination of declaring a module <code>type</code> inside of <code>package.json</code> and filename extensions. I won&#39;t lay out every detail and combination of these variants but rather show the (in my opinion) most future-proof and easiest approach.</p>
<p>Right now we have created <code>.js</code> files that are in ES Module syntax. Therefore, we will add the <code>type</code> key to our <code>package.json</code> and set it to <code>&quot;module&quot;</code>. This is the signal to Node.js (if run with the <code>--experimental-modules</code> command line flag) that it should parse every <code>.js</code> file in this package scope as ES Module:</p>
<pre><code class="language-javascript">{
  // ...
  &quot;type&quot;: &quot;module&quot;,
  // ...
}
</code></pre>
<p>Note that you oftentimes will come across the advice to use <code>*.mjs</code> file extensions. Don&#39;t do that. <code>*.js</code> is <em>the</em> extension for JavaScript files and will probably always be. Let&#39;s use the default naming for the current standards like ESM syntax. If you have for whatever reason files inside your package that must use CommonJS syntax, give <em>them</em> another extension: <code>*.cjs</code>. Node.js will know what to do with it.</p>
<p>There are a few caveats:</p>
<ol>
<li>Using third party dependencies<ol>
<li>If the external module is (only) in CommonJS syntax, you can import it only as default import. Node.js says that will hopefully change in the future but for now you can&#39;t have named imports on a CommonJS module.</li>
<li>If the external module is published in ESM syntax, check if it follows Node.js&#39; rules: If there is ESM syntax in a <code>*.js</code> file <strong>and</strong> there is no <code>&quot;type&quot;: &quot;module&quot;</code> in the <code>package.json</code>, the package is broken and you can not use it with ES Modules. (Example: <a href="https://github.com/reactjs/react-lifecycles-compat">react-lifecycles-compat</a>). Webpack would make it work but not Node.js. An example for a properly configured package is <a href="https://github.com/graphql/graphql-js">graphql-js</a>. It uses the <code>*.mjs</code> extension for ESM files.</li>
</ol>
</li>
<li>Imports need file extensions. You can import from a package name (<code>import _ from &#39;lodash&#39;</code>) like before but you can not import from a file (or a folder containing an <code>index.(m)js</code>) without the <em>complete</em> path: <code>import x from &#39;./otherfile.js&#39;</code> will work but <code>import x from &#39;./otherfile&#39;</code> won&#39;t. <code>import y from &#39;./that-folder/index.js&#39;</code> will work but <code>import y from &#39;./that-folder&#39;</code> won&#39;t.</li>
<li>There is a way around the file extension rule but you have to force your users to do it: They must run their program with a second flag: <code>--es-module-specifier-resolution=node</code>. That will restore the resolution pattern Node.js users know from CommonJS. <strong>Unfortunately that is also necessary if you have Babel runtime helpers included by Babel.</strong> Babel will inject default imports which is good, but it omits the file extensions. So if your library depends on Babel transforms, you have to tell your users that they will have to use the second flag. (Not too bad because they already know how to pass ESM related flags when they want to opt into ESM.)</li>
</ol>
<p>For all other users who are not so into experimental features, we also publish in CommonJS. To support CommonJS we do something, let&#39;s say, non-canonical in the Node.js world: we deliver a single-file bundle. Normally, people don&#39;t bundle for Node.js because it&#39;s not necessary. But since we need a second compile one way or the other, it&#39;s the easiest path. Also note that, unlike on the web, we don&#39;t have to care too much about size, as everything executes locally and is installed beforehand.</p>
<p><strong>Conclusion: Bundle needed if we want to ship both CommonJS and ESM.</strong></p>
<h4>Web applications (II)</h4>
<p>There is another use case regarding web applications. Sometimes people want to be able to include a library by dropping a <code>&lt;script&gt;</code> tag into their HTML and refer to the library via a global variable. (There are also other scenarios that may need such a kind of package.) To make that possible without additional setup by the user, all of your library&#39;s code must be bundled together in one file.</p>
<p><strong>Conclusion: Bundle needed to make usage as easy as possible.</strong></p>
<h4>Special &quot;imports&quot;</h4>
<p>There is a class of use cases that came up mainly with the rise of Webpack and its rich &quot;loader&quot; landscape. And that is: importing every file type that you can imagine <em>into your JavaScript</em>. It probably started with requiring accompanying CSS files into JS components and went over images and what not. <strong>If you do something like that in your library, you have to use a bundler.</strong> Because otherwise the consumers of your library would have to use a bundler themselves that is at least configured exactly in a way that handles all strange (read: not JS-) imports in your library. Nobody wants to do that.</p>
<p>If you deliver styling alongside your JS code, you should do it with a separate CSS file that ships with the rest of the code. And if you write a whole UI library like Bootstrap, then you probably don&#39;t want to ask your users to import hundreds of CSS files but one compiled file. The same goes for other non-JS file types.</p>
<p><strong>Conclusion: Bundle needed.</strong></p>
<h3>Ok, ok, now tell me how to do it!</h3>
<p>Alright. Now you can decide whether you really need to bundle your library. Also, you have an idea of what the bundle should &quot;look&quot; like from the outside: for classic usage with Node.js, it should be one big CommonJS module, consumable with <code>require()</code>. For further bundling in web applications it may be better to have one big ES module that is tree-shakable.</p>
<p>And here is the cliffhanger: Each of the common bundling tools will get their own article in this series. This post is already long enough.</p>
<p>Next up: Use Webpack for bundling your library.</p>
]]></content:encoded>
      </item>
      <item>
<title>My Blog</title>
        <link>https://tobias-barth.net/blog/Mein-Blog</link>
        <pubDate>Sat, 25 Jul 2015 02:24:16 GMT</pubDate>
        
<description>I finally managed it after all. My blog is up and running.</description>
<content:encoded><![CDATA[<p>I finally managed it after all. My blog is up and running.</p>
<p>I had already written in the corresponding <a href="https://github.com/4nduril/my-website/issues/5">GitHub issue</a> that I had decided on the <a href="https://hexo.io">Hexo framework</a>. I found it exciting, a bit unusual and of course great that it is written in Node.js. That way I can quickly change or fix things that bother me.</p>
<p>I have already fixed the first thing, namely the hexo-generator-feed plugin. In my configuration (a Hexo blog in a subfolder) it generated wrong permalinks. I did ask the author under the commit in question what his change was for (it broke the plugin), but no answer so far. I will probably file it as a bug or directly as a pull request soon. For now, the fixed version can be found in my fork: <a href="https://github.com/4nduril/hexo-generator-feed">4nduril&#39;s hexo-generator-feed</a>.</p>
<p>I have had a private blog for a while, where web stuff landed now and then, but I wanted a proper place to post things from everyday frontend life (or the acid mine, as some say) more often. For completeness I imported two or three old posts from the other blog, but they are really quite dated.</p>
<p>Oh, two more things:</p>
<ol>
<li>This is only a minimal working version. For example, there is no proper integration of categories and tags yet (at least they are not displayed). That will keep improving over time, though. The core functions of a blog are posts and a feed; both work.</li>
<li>There is no comment section. That is deliberate and will stay that way for now. I don&#39;t know yet whether I will change my mind; at the moment it is made up. I am nevertheless very happy to receive comments and remarks via Twitter or email.</li>
</ol>
<p>Have fun. I&#39;m excited!</p>
]]></content:encoded>
      </item>
      <item>
        <title>Publish a modern JavaScript (or TypeScript) library</title>
        <link>https://tobias-barth.net/blog/Publish-a-modern-JavaScript-or-TypeScript-library</link>
        <pubDate>Fri, 05 Jul 2019 16:02:26 GMT</pubDate>
        
<description>Did you ever put some library code together and then want to publish it as an NPM package, only to realize you have no idea what the technique du jour is for doing so? Which transpiler, which bundler, which other tools, and why? You have found the right place.</description>
<content:encoded><![CDATA[<p>Did you ever put some library code together and then want to publish it as an NPM package, only to realize you have no idea what the technique du jour is for doing so?</p>
<p>Did you ever wonder &quot;Should I use Webpack or Rollup?&quot;, &quot;What about ES modules?&quot;, &quot;What about any other package format, actually?&quot;, &quot;How to publish Types along with the compiled code?&quot; and so on?</p>
<p>Perfect! You have found the right place. In this series of articles I will try to answer every one of these questions, with example configurations for most of the possible combinations of these tools and requirements.</p>
<h3>Technology base</h3>
<p>This is the set of tools and their respective version range for which this tutorial is tested:</p>
<ul>
<li>ES2018</li>
<li>Webpack &gt;= 4</li>
<li>Babel &gt;= 7.4</li>
<li>TypeScript &gt;= 3</li>
<li>Rollup &gt;= 1</li>
<li>React &gt;= 16.8 (code aimed at other frameworks like Vue or Angular should work the same)</li>
</ul>
<p>Some or even most of what follows may apply to older versions of these tools, too. But I will not guarantee or test it.</p>
<h3>Creation</h3>
<p>The first thing to do before publishing a library is obviously to write one. Let&#39;s say we have already done that. In fact, it&#39;s <a href="https://github.com/4nduril/library-starter/tree/init">this one</a>. It consists of several source files and therefore several modules. We have provided our desired functionality, used our favorite modern JavaScript (or TypeScript) features and crafted it with our beloved editor settings.</p>
<p>What now? What do we want to achieve in this tutorial?</p>
<ol>
<li>Transpile modern language features so that the last two versions of every major browser can understand our code.</li>
<li>Avoid duplicating compile-stage helpers to keep the library as small as possible.</li>
<li>Ensure code quality with linting and tests.</li>
<li>Bundle our modules into one consumable, installable file.</li>
<li>Provide ES modules to make the library tree-shakable.</li>
<li>Provide typings if we wrote our library in TypeScript.</li>
<li>Improve collaboration with other developers (from our team or, if it is an open source library, from the public).</li>
</ol>
<p>Wow. That&#39;s a whole lot of things. Let&#39;s see if we can make it.</p>
<p>Note that some of these steps can be done with different tools or maybe differ depending on the code being written in TypeScript or JavaScript. We&#39;ll cover all of that. Well, probably not all of that, but I will try to cover the most common combinations.</p>
<p>The chapters of this series will not only show configurations I think you should use; I will also explain the reasoning behind them and how they work. If you aren&#39;t interested in this background, there will be a link right at the top of each post down to the configurations and steps to execute, without much around them.</p>
<h3>Go!</h3>
<p>We will start with the first points on our list above. As new articles arrive, I will add them here as links and I will also try to keep the finished articles updated when the tools they use get new features or change APIs. If you find something that&#39;s not true anymore, please give me a hint.</p>
<ol>
<li><a href="https://tobias-barth.net/blog/Transpile-modern-language-features-with-Babel/">Transpile modern language features – With Babel</a>.</li>
<li><a href="https://tobias-barth.net/blog/Compiling-modern-language-features-with-the-TypeScript-compiler/">Compiling modern language features with the TypeScript compiler</a>.</li>
<li><a href="https://tobias-barth.net/blog/Building-your-library-Part-1/">Building your library: Part 1</a></li>
<li><a href="https://tobias-barth.net/blog/Check-types-and-emit-type-declarations">Check types and emit type declarations</a></li>
<li><a href="https://tobias-barth.net/blog/How-to-bundle-your-library-and-why">How to bundle your library and why</a></li>
<li><a href="https://tobias-barth.net/blog/Bundling-your-library-with-Webpack">Bundling your library with Webpack</a></li>
</ol>
<p>Oh and one last thing™: I&#39;ll be using <code>npm</code> throughout the series because I like it. If you like <code>yarn</code> better, just exchange the commands.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Resize LVM on LUKS partition without messing everything up</title>
        <link>https://tobias-barth.net/blog/Resize-LVM-on-LUKS-partition-without-messing-everything-up</link>
        <pubDate>Mon, 10 Jun 2019 13:49:27 GMT</pubDate>
        
        <description>I needed to shrink /dev/sda3, move it to the end of the SSD and grow /dev/sda2 as necessary. But I did not know if this was even possible with my setup.</description>
<content:encoded><![CDATA[<p>I have run my work computers&#39; operating systems on full-disk-encrypted partitions since forever. Currently this is Manjaro Linux. When I set up my current machine I created the following partition scheme:</p>
<pre><code>sda                  238,5G  disk
├─sda1                 260M  part  /boot/efi
├─sda2                 128M  part  /boot
└─sda3                 237G  part
  └─tank               237G  crypt
</code></pre>
<p>Somewhere, I can&#39;t even remember when, I read that 128M for <code>/boot</code> would be sufficient. And it was, for a few years. But kernel images and/or initram disks grew bigger and bigger until I could not upgrade to a newer kernel anymore. The last kernel I ran was Linux 4.16; the files in <code>/boot</code> took around 75M of space, so <code>mhwd-kernel -i linux417</code> had too little space left on the device.</p>
<p>What I needed to do was to shrink <code>/dev/sda3</code>, move it to the end of the SSD and grow <code>/dev/sda2</code> as necessary.</p>
<p>But I did not know if this was even possible with my setup. Inside the encrypted partition there is an LVM container with 5 logical volumes, including <code>/</code>. I kept pushing the task into the future because most of the time I am working on running projects and cannot afford a non-functioning machine for &lt;absurd amount of time that you never expect before a hardware-near change&gt;.</p>
<p>But in the end it was relatively easy. I had feared that in the worst case I would have to set up my whole machine from scratch and restore backups of the data and system partitions, which then might need endless tweaking until everything runs again. (No, I have never had a hard disk failure or similar, so I never actually had to do anything like that.)</p>
<p>So, here are the things I needed to do:</p>
<h2>1. Backup</h2>
<p>List all logical volumes:</p>
<pre><code># lvs
LV     VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
docker tank -wi-ao----   5,00g
home   tank -wi-ao---- 100,00g
mongo  tank -wi-ao----   1,00g
root   tank -wi-ao----  25,00g
swap   tank -wc-ao----  32,00g
</code></pre>
<p>For each LV, do the following:</p>
<pre><code># lvcreate -s -n &lt;name&gt;snap /dev/tank/&lt;name&gt;
# dd if=/dev/tank/&lt;name&gt;snap of=/path/to/external/storage/&lt;name&gt;.img
</code></pre>
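<p>Since the same two commands repeat for every volume, the backup can be sketched as a small shell loop. This version only collects and prints the commands so they can be reviewed first (pipe the output to <code>sh</code> as root to actually run them; the LV names are the ones from the <code>lvs</code> output above):</p>
<pre><code>backup_cmds=&quot;&quot;
for name in docker home mongo root swap; do
  backup_cmds=&quot;${backup_cmds}lvcreate -s -n ${name}snap /dev/tank/${name}
dd if=/dev/tank/${name}snap of=/path/to/external/storage/${name}.img
&quot;
done
printf &#39;%s&#39; &quot;$backup_cmds&quot;
</code></pre>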
<p>Where <code>&lt;name&gt;</code> must be replaced by the actual names of the LVs. Then I backed up both the <code>/boot</code> and the <code>/boot/efi</code> partitions, also with <code>dd</code>.
Finally, I made a backup of the LUKS header of the crypto partition:</p>
<pre><code># cryptsetup luksHeaderBackup /dev/sda3 --header-backup-file /path/to/external/storage/luks-header.bkp
</code></pre>
<h2>2. Boot into a live system from a USB stick and decrypt the device</h2>
<pre><code># cryptsetup open /dev/sda3 tank --type luks
</code></pre>
<h2>3. Resize the physical volume</h2>
<p><strong>Note:</strong> I have free space inside my LVM container. As you can see from the output of <code>lvs</code> above, I currently use only 163GB out of roughly 238GB. That means I do not have to resize logical volumes <em>before</em> I resize the containing physical volume. If you use all of the available space for logical volumes, look into <a href="https://jlk.fjfi.cvut.cz/arch/manpages/man/lvresize.8"><code>lvresize(8)</code></a> first, for example in the <a href="https://wiki.archlinux.org/index.php/LVM#Resizing_volumes">Arch Wiki</a>.</p>
<p>I generously shrank the volume from 238,07G to 236G with:</p>
<pre><code># pvresize --setphysicalvolumesize 236G /dev/mapper/tank
</code></pre>
<h2>4. Resize the crypto-device</h2>
<p>Find out the current size in sectors (note that my crypto device has the same name as my volume group: <code>tank</code>. That could be different in your setup):</p>
<pre><code># cryptsetup status tank
...
sector size:  512
size:  499122176
...
</code></pre>
<p>In the end I want to add about 1G to the <code>/boot</code> partition. That is <code>1024 * 1024 * 1024 / 512 = 2097152</code> sectors.</p>
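<p>The shell can do this arithmetic for us. The first number is the sector count to cut off, the second is the new size of the crypto device, based on the current size from the <code>cryptsetup status</code> output above:</p>
<pre><code>sectors_to_cut=$(( 1024 * 1024 * 1024 / 512 ))  # 2097152
new_size=$(( 499122176 - sectors_to_cut ))      # 497025024
echo &quot;$new_size&quot;
</code></pre>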
<pre><code># cryptsetup -b 497025024 resize tank
</code></pre>
<h2>5. Resize the GUID partition</h2>
<p>You see, we go from innermost to outermost: LVM -&gt; crypto -&gt; GUID. I use <code>parted</code> to resize the partition <code>/dev/sda3</code>:</p>
<pre><code># parted
(parted) unit s
(parted) print
...
Number  Begin     End         Size                     Name  Flags
...
3      3100672s  500115455s  497014784s               TANK  lvm
</code></pre>
<p>These numbers were actually different; I am writing this blog post in hindsight. The point is that partition number 3 went all the way to the last sector of the disk, and I now had to calculate where it should end in the future, because <code>resizepart</code> takes not the future size but the future end sector of the partition as its argument. So I subtracted the same sector count as calculated above for cryptsetup (<code>2097152</code>) from the <em>end sector</em> of partition 3 (<code>500115455</code>), which gives <code>498018303</code>.</p>
<pre><code>
(parted) resizepart 3 498018303s
</code></pre>
<p>Now we have free space on the SSD <em>after</em> the main partition. But I want to grow partition 2.</p>
<h2>6. Reorder partitions and resize partition 2</h2>
<p>I did that with GParted instead of a command-line tool. There is probably a way to do it with <code>gdisk</code>, but <code>parted</code> has removed its command to <code>move</code> partitions. And since I was in a graphical live system anyway and had also read that it could be done with GParted, I just went for it.
First I closed the crypto device because GParted would not let me move the partition otherwise:</p>
<pre><code># vgchange -an tank
# cryptsetup close tank
</code></pre>
<p>Then I opened GParted and right-clicked on the crypto partition. I chose &quot;Resize/Move&quot; and moved the free space from after the partition to before it. Then I opened the same dialog for the <code>/boot</code> partition and extended it to cover all of the free space. Finally, I committed the changes.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Set up a FreeBSD server - Part 2: Add user and firewall setup</title>
        <link>https://tobias-barth.net/blog/Set-up-a-FreeBSD-server-Part-2-Add-user-and-firewall-setup</link>
        <pubDate>Thu, 21 Jun 2018 18:14:14 GMT</pubDate>
        
        <description>Some initial setup work like adding a user, properly configuring SSH access and adding a firewall.</description>
        <content:encoded><![CDATA[<p><a href="/blog/2018/06/Set-up-a-FreeBSD-server-on-DigitalOcean-with-jails-Part-1/">Last time</a> I stopped when my new droplet was initialized. Next, I will do some initial setup work like adding a user, properly configuring SSH access and adding a firewall.</p>
<p>I added my SSH public key to the droplet when I created it, so I can now login by typing:</p>
<pre><code>~ $ ssh -l root &lt;droplet-ip&gt;
</code></pre>
<p>and then providing my passphrase.</p>
<h3>User setup</h3>
<p>First, I will update the existing packages. (Sidenote: since the DO droplets are not the most heavy-lifting machines, at least not the size I chose, I will install everything as precompiled packages instead of using ports.)</p>
<pre><code>root@pioneer-3:~ # pkg upgrade
Updating FreeBSD repository catalogue...
Fetching meta.txz: 100%    944 B   0.9kB/s    00:01
Fetching packagesite.txz: 100%    6 MiB   6.4MB/s    00:01
Processing entries: 100%
FreeBSD repository update completed. 31140 packages processed.
All repositories are up to date.
New version of pkg detected; it needs to be installed first.
The following 1 package(s) will be affected (of 0 checked):

Installed packages to be UPGRADED:
        pkg: 1.10.1 -&gt; 1.10.5

Number of packages to be upgraded: 1

3 MiB to be downloaded.

Proceed with this action? [y/N]: y
[1/1] Fetching pkg-1.10.5.txz: 100%    3 MiB   3.0MB/s    00:01
Checking integrity... done (0 conflicting)
[1/1] Upgrading pkg from 1.10.1 to 1.10.5...
Extracting pkg-1.10.5: 100%
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking for upgrades (42 candidates): 100%
Processing candidates (42 candidates): 100%
The following 47 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        py27-asn1crypto: 0.22.0
        oniguruma: 6.8.1
        e2fsprogs-libuuid: 1.44.2
        e2fsprogs-libblkid: 1.44.2
        e2fsprogs-libss: 1.44.2

Installed packages to be UPGRADED:
        sudo: 1.8.20p2_2 -&gt; 1.8.22
        rsync: 3.1.2_7 -&gt; 3.1.3
        readline: 7.0.3 -&gt; 7.0.3_1
        python27: 2.7.13_6 -&gt; 2.7.15
        py27-yaml: 3.11_2 -&gt; 3.12
        py27-urllib3: 1.21.1 -&gt; 1.22
        py27-six: 1.10.0 -&gt; 1.11.0
        py27-setuptools: 36.0.1 -&gt; 39.0.1
        py27-serial: 3.2.1 -&gt; 3.4
        py27-requests: 2.18.1 -&gt; 2.18.4
        py27-pytz: 2016.10,1 -&gt; 2018.3,1
        py27-pysocks: 1.6.7 -&gt; 1.6.8
        py27-pycparser: 2.10 -&gt; 2.18
        py27-pyasn1: 0.2.2 -&gt; 0.4.2
        py27-openssl: 16.2.0 -&gt; 17.5.0_1
        py27-jsonpointer: 1.9 -&gt; 1.9_1
        py27-jsonpatch: 1.9 -&gt; 1.21
        py27-ipaddress: 1.0.18 -&gt; 1.0.19
        py27-idna: 2.5 -&gt; 2.6
        py27-cryptography: 1.7.2 -&gt; 2.1.4
        py27-cloud-init: 0.7.6 -&gt; 0.7.6_1
        py27-chardet: 3.0.3 -&gt; 3.0.4
        py27-cffi: 1.7.0 -&gt; 1.11.2
        py27-certifi: 2017.4.17 -&gt; 2018.1.18
        py27-boto: 2.47.0 -&gt; 2.48.0
        py27-Jinja2: 2.9.5 -&gt; 2.10
        py27-Babel: 2.3.4 -&gt; 2.5.1
        libnghttp2: 1.23.1 -&gt; 1.31.1
        libiconv: 1.14_10 -&gt; 1.14_11
        libffi: 3.2.1 -&gt; 3.2.1_2
        jq: 1.5 -&gt; 1.5_3
        indexinfo: 0.2.6 -&gt; 0.3.1
        e2fsprogs: 1.43.4 -&gt; 1.44.2
        curl: 7.54.1 -&gt; 7.60.0
        ca_root_nss: 3.31 -&gt; 3.37.3

Installed packages to be REINSTALLED:
        py27-prettytable-0.7.2_2 (direct dependency changed: py27-setuptools)
        py27-oauth-1.0.1_2 (direct dependency changed: py27-setuptools)
        py27-markdown-2.6.8 (direct dependency changed: py27-setuptools)
        py27-enum34-1.1.6 (direct dependency changed: py27-setuptools)
        py27-configobj-5.0.6_1 (direct dependency changed: py27-six)
        py27-cheetah-2.4.4_1 (direct dependency changed: py27-setuptools)
        py27-MarkupSafe-1.0 (direct dependency changed: py27-setuptools)

Number of packages to be installed: 5
Number of packages to be upgraded: 35
Number of packages to be reinstalled: 7

The process will require 2 MiB more space.
24 MiB to be downloaded.

Proceed with this action? [y/N]:y
.
.
.
root@pioneer-3:~ #
</code></pre>
<p>Now I can add my non-privileged user (aptly named &quot;tobi&quot;):</p>
<pre><code>root@pioneer-3:~ # adduser
Username: tobi
Full name: Tobias Barth
Uid (Leave empty for default):
Login group [tobi]:
Login group is tobi. Invite tobi into other groups? []: wheel
Login class [default]:
Shell (sh csh tcsh nologin) [sh]:
Home directory [/home/tobi]:
Home directory permissions (Leave empty for default):
Use password-based authentication? [yes]:
Use an empty password? (yes/no) [no]:
Use a random password? (yes/no) [no]:
Enter password:
Enter password again:
Lock out the account after creation? [no]:
Username   : tobi
Password   : *****
Full Name  : Tobias Barth
Uid        : 1002
Class      :
Groups     : tobi wheel
Home       : /home/tobi
Home Mode  :
Shell      : /bin/sh
Locked     : no
OK? (yes/no): yes
adduser: INFO: Successfully added (tobi) to the user database.
Add another user? (yes/no): no
Goodbye!
</code></pre>
<p>Notably, I added my new user to the group &quot;wheel&quot; to enable the use of <code>sudo</code> for this user. To make that work, I have to edit the file <code>/usr/local/etc/sudoers</code>. This is not done directly but with the help of the command <code>visudo</code>:</p>
<pre><code>root@pioneer-3:~ # visudo
</code></pre>
<p>Now, uncomment the line</p>
<pre><code>%wheel ALL=(ALL) ALL
</code></pre>
<h3>SSH-Configuration</h3>
<p>At this point, I can log in via SSH as the user &quot;tobi&quot; with my password. That is a step in the right direction (I want to disable root logins), but not ideal. Authentication with public/private key pairs is more secure than using a password, so I will configure that. My public key is already on the server, but within the home directory of root. I can just copy it over to my home folder:</p>
<pre><code>root@pioneer-3:~ # su tobi
[tobi@pioneer-3 ~]$ cd
[tobi@pioneer-3 ~]$ mkdir .ssh
[tobi@pioneer-3 ~]$ chmod 700 .ssh
[tobi@pioneer-3 ~]$ sudo cp /root/.ssh/authorized_keys .ssh
[tobi@pioneer-3 ~]$ sudo chown tobi:tobi .ssh/authorized_keys
</code></pre>
<p>I can now log in with my SSH key and a passphrase as the unprivileged user &quot;tobi&quot;. Next, I edit the SSH config in <code>/etc/ssh/sshd_config</code>.</p>
<p>I change it so that it contains the following lines and values:</p>
<pre><code>PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
</code></pre>
<p>With this, root is excluded from remote logins and users can only authenticate with a key.</p>
<p>I have sneaked in a different shell prompt, and not only that – it&#39;s an entirely different shell: ZSH. I installed it with <code>pkg install zsh</code> and then made it the default shell for both the root user and the user &quot;tobi&quot;. Changing the shell is as easy as:</p>
<pre><code>chsh -s zsh
</code></pre>
<p>while &quot;being&quot; the user I want to change. Alternatively, I can append the username to the command. Additionally, I provided an absolutely basic <code>.zshrc</code> configuration file for both users:</p>
<pre><code># Lines configured by zsh-newuser-install
setopt appendhistory autocd extendedglob nomatch notify
unsetopt beep
# End of lines configured by zsh-newuser-install
autoload -Uz promptinit compinit
compinit
promptinit
prompt redhat

alias l=&quot;ls -al&quot;
</code></pre>
<h3>Conclusion</h3>
<p>This post is already long enough and firewall configuration is a completely new topic. So I will end right here and continue in part 3 of this series.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Set up a FreeBSD server on DigitalOcean with jails – Part 1</title>
        <link>https://tobias-barth.net/blog/Set-up-a-FreeBSD-server-on-DigitalOcean-with-jails-Part-1</link>
        <pubDate>Wed, 13 Jun 2018 16:01:56 GMT</pubDate>
        
<description>I have a few Droplets running on DigitalOcean (DO) for small purposes like a Dropbox replacement, private Git repositories and so on. Now I want to move my website, and with it my webserver, from another hoster to DO.</description>
<content:encoded><![CDATA[<p>I have a few Droplets running on DigitalOcean (DO) for small purposes like a Dropbox replacement, private Git repositories and so on. Now I want to move my website, and with it my webserver, from another hoster to DO. Also, in the (medium to) long run, I want to set up my own mail server which will handle all email for my domain. Besides that, I am thinking of running my own DNS server—maybe just a resolver for my own computers or even a real server which handles requests for my domain.</p>
<p>So, much to do. I will use this undertaking primarily to learn and to document things for myself. If it is of any help to others, even better.</p>
<p>Let&#39;s start. I go to my DO account and add a new Droplet. After logging in I click on &quot;Create&quot; and choose &quot;Droplets&quot;. I am prompted to choose an image. I will have FreeBSD, version &quot;11.1 x64 ZFS&quot;. Next is choosing a size. I go with the smallest (and cheapest) size because I can resize it later if I need to. So &quot;1GB RAM, 1 vCPU, 25GB disk, 1TB transfer&quot; it is. I skip backups and block storage for now and change the datacenter location to Frankfurt, Germany because that&#39;s nearest to where I live. Under &quot;Select additional options&quot; I select IPv6. Don&#39;t ask. Do. Finally I need to choose an SSH-Key to preload the Droplet with. Here I can just select my previously generated (on my own computer!) and uploaded public key. If you don&#39;t have one, generate it on your machine (see &quot;man ssh-keygen&quot;) and upload it via the &quot;New SSH key&quot; button.</p>
<p>The very last thing to do is to enter a name. I&#39;ll go with pioneer-3 for now. Hit create.</p>
<p>After a few seconds the Droplet is ready. In my dashboard I switch it off for now.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Transpile modern language features with Babel</title>
        <link>https://tobias-barth.net/blog/Transpile-modern-language-features-with-Babel</link>
        <pubDate>Fri, 05 Jul 2019 16:05:11 GMT</pubDate>
        
        <description>Part 2 of the series "Publish a modern JavaScript (or TypeScript) library". Babel can transpile JavaScript as well as TypeScript. I would argue that it's even better to use Babel instead of the TypeScript compiler for compiling the code (down) to compatible JavaScript because it is faster.</description>
        <content:encoded><![CDATA[<h3>Preface</h3>
<p>This article is part 2 of the series &quot;Publish a modern JavaScript (or TypeScript) library&quot;. Check out the motivation and links to other parts <a href="http://tobias-barth.net/blog/Publish-a-modern-JavaScript-or-TypeScript-library/">in the introduction</a>.</p>
<h3>Why Babel and how should you use it in a library?</h3>
<p><strong>If you are not interested in the background and reasoning behind the setup, <a href="#tmplfwb-conclusion">jump directly to the conclusion</a></strong></p>
<p>Babel can transpile JavaScript as well as TypeScript. I would even argue that it&#39;s better to use Babel instead of the TypeScript compiler for compiling the code (down) to compatible JavaScript, because it is faster. What Babel does when it compiles TypeScript is simply discard everything that isn&#39;t JavaScript. <strong>Babel does no type checking.</strong> Which we don&#39;t need at this point.</p>
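<p>A tiny, made-up illustration of what &quot;discarding&quot; means here: Babel turns the TypeScript line shown in the comment into the plain JavaScript below it, simply by dropping the type annotations. No type errors are reported at this stage.</p>
<pre><code class="language-javascript">// TypeScript source:
//   const greet = (name: string): string =&gt; `Hello, ${name}`
// What Babel keeps after stripping the types:
const greet = name =&gt; `Hello, ${name}`
console.log(greet(&#39;Ada&#39;)) // Hello, Ada
</code></pre>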
<p>To use Babel you have to install it first: run <code>npm install -D @babel/core @babel/cli @babel/preset-env</code>. This will install the core files, the preset you are always going to need, and the command line interface so that you can run Babel in your terminal. Additionally, you should install <code>@babel/preset-typescript</code> and/or <code>@babel/preset-react</code>, according to your needs. I will explain in a bit what each of them is used for, but you can guess from their names in which situations you need them.</p>
<p>So, setup time! Babel is configured via a configuration file. (For details and special cases see <a href="https://babeljs.io/docs/en/config-files">the documentation</a>.) The project-wide configuration file should be <code>babel.config.js</code>. It looks at least very similar to this one:</p>
<pre><code class="language-javascript">module.exports = {
  presets: [
    [
      &#39;@babel/env&#39;,
      {
        modules: false,
      },
    ],
    &#39;@babel/preset-typescript&#39;,
    &#39;@babel/preset-react&#39;,
  ],
  plugins: [[&#39;@babel/plugin-transform-runtime&#39;, { corejs: 3 }]],
  env: {
    test: {
      presets: [&#39;@babel/env&#39;],
    },
  },
}
</code></pre>
<p>Let&#39;s go through it because there are a few assumptions used in this config which we will need for other features in our list.</p>
<h3><code>module.exports = {…}</code></h3>
<p>The file is treated as a CommonJS module and is expected to return a configuration object. It is possible to export a function instead but we&#39;ll stick to the static object here. For the function version look into the <a href="https://babeljs.io/docs/en/config-files#config-function-api">docs</a>.</p>
<h3><code>presets</code></h3>
<p>Presets are (sometimes configurable) sets of Babel plugins, so that you don&#39;t have to manage the needed plugins yourself. The one you should definitely use is <code>@babel/preset-env</code>. You have already installed it. Under the <code>presets</code> key in the config you list every preset your library is going to use, along with any preset configuration options.</p>
<p>In the example config above there are three presets:</p>
<ol>
<li><code>env</code> is the mentioned standard one.</li>
<li><code>typescript</code> is obviously only needed to compile files that contain TypeScript syntax. As already mentioned it works by <strong>throwing away</strong> anything that isn&#39;t JavaScript. It does not interpret or even check TypeScript. <em>And that&#39;s a Good Thing.</em> We will talk about that point later. If your library is not written in TypeScript, you don&#39;t need this preset. But if you need it, you have to install it of course: <code>npm install -D @babel/preset-typescript</code>.</li>
<li><code>react</code> is clearly only needed in React projects. It brings plugins for JSX syntax and transforming. If you need it, install it with: <code>npm i -D @babel/preset-react</code>. Note: With the config option <code>pragma</code> (and probably <code>pragmaFrag</code>) you can transpile JSX to other functions than <code>React.createElement</code>. See <a href="https://babeljs.io/docs/en/babel-preset-react#pragma">documentation</a>.</li>
</ol>
<p>Let us look at the <code>env</code> preset again. Notable is the <code>modules: false</code> option for <code>preset-env</code>. The effect is this: by default, Babel transpiles ES modules (<code>import</code> / <code>export</code>) to CommonJS modules (<code>require()</code> / <code>module.exports</code>). With <code>modules</code> set to <code>false</code>, Babel will output the transpiled files with their ES module syntax untouched. The rest of the code gets transformed; just the module-related statements stay the same. This has (at least) two benefits:</p>
<p>First, this is a library. If you publish it as separate files, users of your library can import exactly the modules they need. And if they use a bundler that has the ability to treeshake (that is: to remove unused modules on bundling), they will end up with only the code bits they need from your library. With CommonJS modules that would not be possible and they would have your whole library in their bundle.</p>
<p>Furthermore, if you are going to provide your library as a bundle (for example a UMD bundle that one can use via <a href="http://unpkg.com">unpkg.com</a>), you can make use of treeshaking and shrink your bundle as much as possible.</p>
<p>There is another, suspiciously absent option for <code>preset-env</code>, and that is the <code>targets</code> option. If you omit it, Babel will transpile your code down to ES5. That is most likely not what you want—unless you live in the dark, medieval times of JavaScript (or you know someone who uses <abbr title="Internet Explorer">IE</abbr>). Why transpile something (and generate much more code) if the runtime environment can handle your modern code? What you could do is provide said <code>targets</code> key and give it a <a href="https://github.com/ai/browserslist">Browserslist</a>-compatible query (see the <a href="https://babeljs.io/docs/en/babel-preset-env#targets">Babel documentation</a>). For example something like <code>&quot;last 2 versions&quot;</code> or even <code>&quot;defaults&quot;</code>. In that case Babel would use the browserslist tool to find out which features it has to transpile so that the code can run in the environments given in <code>targets</code>.</p>
<p>But we will put this configuration in another place than the <code>babel.config.js</code> file. You see, Babel is not the only tool that can make use of browserslist. And any tool, including Babel, will find the configuration if it&#39;s in the right place. The browserslist documentation recommends putting it inside <code>package.json</code>, so we will do that. Add something like the following to your library&#39;s <code>package.json</code>:</p>
<pre><code class="language-json">  &quot;browserslist&quot;: [
    &quot;last 2 Chrome versions&quot;,
    &quot;last 2 Firefox versions&quot;,
    &quot;last 2 Edge versions&quot;,
    &quot;last 2 Opera versions&quot;,
    &quot;last 2 FirefoxAndroid versions&quot;,
    &quot;last 2 iOS versions&quot;,
    &quot;last 2 Safari versions&quot;
  ]
</code></pre>
<p>I will admit this query is a bit opinionated, maybe not even good for you. You can of course roll your own, or if you are unsure, just go with <code>&quot;defaults&quot;</code>, an alias for <code>&quot;&gt; 0.5%, last 2 versions, Firefox ESR, not dead&quot;</code> (which includes IE 11):</p>
<pre><code class="language-json">  &quot;browserslist&quot;: &quot;defaults&quot;
</code></pre>
<p>The reason I propose the query array above is that I want an optimized build for modern browsers. <code>&quot;defaults&quot;</code>, <code>&quot;last 2 versions&quot;</code> (without specific browser names) and the like include browsers such as Internet Explorer 11 and Samsung Internet 4. These ancient browsers support very little even of ES2015. We would end up with a much bigger deliverable than modern browsers need. But there is something you can do about it: you can deliver modern code to modern browsers and still support The Ancients™. We will go into further detail in a future section, but as a little cliffhanger: browserslist supports multiple configurations. For now we will target only modern browsers.</p>
<h3><code>plugins</code></h3>
<p>The Babel configuration above defines one extra plugin: <code>plugin-transform-runtime</code>. The main reason to use this is deduplication of helper code. When Babel transpiles your modules, it injects little (or not so little) helper functions. The problem is that it does so in every file where they are needed. The <code>transform-runtime</code> plugin replaces all those injected functions with <code>require</code> statements to the <code>@babel/runtime</code> package. That means <strong>the final application has to include this runtime package</strong>.</p>
<p>To make that happen you could just add <code>@babel/runtime</code> to the prod dependencies of your library (<code>npm i @babel/runtime</code>). That would definitely work. But here we will add it to the <code>peerDependencies</code> in <code>package.json</code> instead. That way the users of your library have to install it themselves, but on the other hand they have more control over the version (and you don&#39;t have to update the dependency as often). And maybe they have it installed already anyway. So we push it out of our way and just make sure it is there when needed.</p>
<p>Back to the Babel plugin. To use that plugin you have to install it: <code>npm i -D @babel/plugin-transform-runtime</code>. Now you&#39;re good to go.</p>
<p>Before we go on to the <code>env</code> key, this is the right place to talk about polyfills and how to use them with Babel.</p>
<h3>How to use polyfills in the best way possible</h3>
<p>It took me a few hours to read up on and understand the problem, the current solutions and their weaknesses. If you want to read it yourself, start at <a href="https://babeljs.io/docs/en/babel-polyfill">Babel polyfill</a>, go on with <a href="https://babeljs.io/docs/en/babel-plugin-transform-runtime">Babel transform-runtime</a> and then read <a href="https://github.com/zloirock/core-js/blob/master/docs/2019-03-19-core-js-3-babel-and-a-look-into-the-future.md">core-js@3, babel and a look into the future</a>.</p>
<p>But because I already did, you don&#39;t have to if you don&#39;t want to. Ok, let&#39;s start with the fact that there are two standard ways to get polyfills into your code. Wait, one step back: why polyfills?</p>
<p>If you already know, skip to <a href="#tmplfwb-import-core-js">Import core-js</a>. When Babel transpiles your code according to the target environment you specified, it just changes syntax. Code that the target (the browser) does not understand is changed to (probably longer and more complicated) code that does the same and is understood. But there are things beyond syntax that are possibly not supported: features. Like, for example, Promises. Or certain features of builtin types like <code>Object.is</code> or <code>Array.from</code>, or whole new types like <code>Map</code> or <code>Set</code>. Therefore we need polyfills that recreate those features for targets that do not support them natively.</p>
<p>Also note that we are talking here only about polyfills for ES-features or some closely related Web Platform features (see the <a href="https://github.com/zloirock/core-js/blob/master/README.md#features">full list here</a>). There are browser features like for instance the global <code>fetch</code> function that need separate polyfills.</p>
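<p>To make the idea concrete, here is roughly what a hand-written polyfill for <code>Object.is</code> could look like: a guarded feature check plus a fallback. This is only a sketch; the actual core-js implementations are considerably more robust:</p>
<pre><code class="language-javascript">// Only define the feature if the target does not support it natively
if (typeof Object.is !== 'function') {
  Object.is = function (a, b) {
    if (a === b) {
      // +0 and -0 are === but should be distinguishable
      return a !== 0 || 1 / a === 1 / b;
    }
    // NaN is the only value that is not === to itself
    return a !== a && b !== b;
  };
}
</code></pre>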
<h3><a name="tmplfwb-import-core-js"></a>Import core-js</h3>
<p>Ok, so there is a Babel package called <code>@babel/polyfill</code> that you can import at the entry point of your application. It adds all needed polyfills from a library called <a href="https://github.com/zloirock/core-js"><code>core-js</code></a>, as well as a separate runtime needed for <code>async/await</code> and generator functions. <strong>But since Babel 7.4.0 this wrapper package is deprecated.</strong> Instead you should install the two packages <code>core-js</code> and <code>regenerator-runtime</code> and import <code>core-js/stable</code> and <code>regenerator-runtime/runtime</code>.</p>
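<p>Concretely, these two imports go once at the very top of your application&#39;s entry point (assuming both packages are installed):</p>
<pre><code class="language-javascript">// entry point, e.g. src/index.js
import 'core-js/stable'; // polyfills for stable ECMAScript features
import 'regenerator-runtime/runtime'; // runtime for async/await and generator functions
</code></pre>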
<p>Then, we can get a nice effect from our <code>env</code> preset from above. We change the configuration to this:</p>
<pre><code class="language-javascript">    [
      &#39;@babel/env&#39;,
      {
        modules: false,
        corejs: 3,
        useBuiltIns: &#39;usage&#39;
      }
    ],
</code></pre>
<p>This will transform our code so that the import of the whole <code>core-js</code> is removed and instead Babel injects specific polyfills in each file where they are needed, and only those polyfills that are needed in the target environment we defined via <code>browserslist</code>. So we end up with the bare minimum of additional code.</p>
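<p>As a little sketch of what <code>useBuiltIns: &#39;usage&#39;</code> does (the injected module paths are core-js internals and only illustrative here):</p>
<pre><code class="language-javascript">// What you write:
const unique = Array.from(new Set([1, 2, 2, 3]));

// Roughly what Babel emits for a target lacking these features:
//   import 'core-js/modules/es.array.from';
//   import 'core-js/modules/es.set';
//   const unique = Array.from(new Set([1, 2, 2, 3]));
</code></pre>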
<p>Two additional notes here: (1) <a name="tmplfwb-corejs-3"></a>You have to explicitly set <code>corejs</code> to <code>3</code>. If the key is absent, Babel will use version 2 of <code>corejs</code> and you don&#39;t want that. Much has changed for the better in version 3, especially feature-wise. But also bugs have been fixed and the package size is dramatically smaller. If you want, read it all up <a href="https://github.com/zloirock/core-js/blob/master/docs/2019-03-19-core-js-3-babel-and-a-look-into-the-future.md#what-changed-in-core-js3">here (overview)</a> and <a href="https://github.com/zloirock/core-js/blob/master/CHANGELOG.md#300---20190319">here (changelog for version 3.0.0)</a>.</p>
<p>And (2), there is another possible value for <code>useBuiltIns</code>, and that is <code>entry</code>. This variant will not figure out which features your code actually needs. Instead it just adds <em>all</em> polyfills that exist for the given target environment. It works by looking for <code>core-js</code> imports in your source (like <code>import &#39;core-js/stable&#39;</code>), which should appear only once in your codebase, probably in your entry module. Then it replaces this &quot;meta&quot; import with all of the specific polyfill imports that match your targets. This approach will likely result in a much larger package with a lot of unneeded code. So we just use <code>usage</code>. (With <code>corejs@2</code> there were a few problems with <code>usage</code> that could lead to wrong assumptions about which polyfills you need, so in some cases <code>entry</code> was the safer option. But these problems are apparently fixed with version 3.)</p>
<h3>Tell transform-runtime to import core-js</h3>
<p>The second way to get the polyfills that your code needs is via the <code>transform-runtime</code> plugin from above. You can configure it to inject not only imports for the Babel helpers but also for the <code>core-js</code> modules that your code needs:</p>
<pre><code class="language-javascript">  plugins: [
    [
      &#39;@babel/plugin-transform-runtime&#39;,
      {
        corejs: 3
      }
    ]
  ],
</code></pre>
<p>This tells the plugin to insert import statements for core-js version 3. The reason for this version is mentioned <a href="https://github.com/zloirock/core-js/blob/master/docs/2019-03-19-core-js-3-babel-and-a-look-into-the-future.md#what-changed-in-core-js3">above</a>.</p>
<p>If you configure the plugin to use <code>core-js</code>, you have to change the runtime dependency: The <code>peerDependencies</code> should now contain not <code>@babel/runtime</code> but <code>@babel/runtime-corejs3</code>!</p>
<h3>Which way should you use?</h3>
<p>In general, the combination of a manual import and the <code>env</code> preset is meant for applications, and the way via <code>transform-runtime</code> is meant for libraries. One reason for this is that the first way of using <code>core-js</code> imports polyfills that &quot;pollute&quot; the global namespace. If your library defines a global <code>Promise</code>, it could interfere with other helper libraries used by your library&#39;s users. The imports injected by <code>transform-runtime</code> are contained: they import from <code>core-js-pure</code>, which does not set globals.</p>
<p>On the other hand, using the transform plugin does not take the environment you are targeting into account. In the future it will probably use the same heuristics as <code>preset-env</code>, but at the moment it just adds every polyfill that is theoretically needed by your code, even if the target browsers need none or only some of them. For the development in that direction see the <a href="https://github.com/zloirock/core-js/blob/master/docs/2019-03-19-core-js-3-babel-and-a-look-into-the-future.md#babelruntime-for-target-environment">comment from the core-js maintainer</a> and this <a href="https://github.com/babel/babel/issues/10008">RFC issue at Babel</a>.</p>
<p>So it looks like you have to choose between a package that adds as little code as possible and one that plays nicely with unknown applications around it. I played around with the different options a bit and bundled the resulting files with webpack, and this is my result:</p>
<p>You get the smallest bundle with the <code>core-js</code> globals from <code>preset-env</code>. But it is too dangerous for a library to mess with the global namespace of its users. Besides that, in the (hopefully very near) future the transform-runtime plugin will also use the browserslist target environments, so the size issue is going to go away.</p>
<h3>The <code>env</code> key</h3>
<p>With <code>env</code> you can add configuration options for specific build environments. When Babel executes, it looks for <code>process.env.BABEL_ENV</code>. If that is not set, it looks up <code>process.env.NODE_ENV</code>, and if that is not found either, it falls back to the string <code>&#39;development&#39;</code>. After this lookup it checks whether the config has an <code>env</code> object and whether there is a key in that object that matches the previously found env string. If there is such a match, Babel applies the configuration under that env name.</p>
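<p>That lookup order can be sketched as a small helper function (hypothetical, just to illustrate the order):</p>
<pre><code class="language-javascript">// How Babel determines the active env name, as a function of process.env
function babelEnvName(env) {
  return env.BABEL_ENV || env.NODE_ENV || 'development';
}
</code></pre>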
<p>We use it, for example, for our test runner <a href="https://jestjs.io/">Jest</a>. Because Jest cannot use ESModules, we need a Babel config that transpiles our modules to CommonJS modules. So we just add an alternative configuration for <code>preset-env</code> under the env name <code>&#39;test&#39;</code>. When Jest runs (we will use <code>babel-jest</code> for this; more on that in a later part of this series), it sets <code>process.env.NODE_ENV</code> to <code>&#39;test&#39;</code>, and so everything will work.</p>
<h3><a name="tmplfwb-conclusion"></a>Conclusion and final notes for Babel setup</h3>
<p>Install all needed packages:</p>
<p><code>npm i -D @babel/core @babel/cli @babel/preset-env @babel/plugin-transform-runtime</code></p>
<p>Add a peerDependency to your <code>package.json</code> that your users should install themselves:</p>
<pre><code>...
  &quot;peerDependencies&quot;: {
      &quot;@babel/runtime-corejs3&quot;: &quot;^7.4.5&quot; // at least version 7.4; your users have to provide it
  }
...
</code></pre>
<p>Create a <code>babel.config.js</code> that contains at least this:</p>
<pre><code class="language-javascript">// babel.config.js

module.exports = {
  presets: [
    [
      &#39;@babel/env&#39;, // transpile for targets
      {
        modules: false, // don&#39;t transpile module syntax
      },
    ],
  ],
  plugins: [
    [
      &#39;@babel/plugin-transform-runtime&#39;, // replace helper code with runtime imports (deduplication)
      { corejs: 3 }, // import corejs polyfills exactly where they are needed
    ],
  ],
  env: {
    test: {
      // extra configuration for process.env.NODE_ENV === &#39;test&#39;
      presets: [&#39;@babel/env&#39;], // overwrite env-config from above with transpiled module syntax
    },
  },
}
</code></pre>
<p>If you write TypeScript, run <code>npm i -D @babel/preset-typescript</code> and add <code>&#39;@babel/preset-typescript&#39;</code> to the <code>presets</code>.</p>
<p>If you write React code (JSX), run <code>npm i -D @babel/preset-react</code> and add <code>&#39;@babel/preset-react&#39;</code> to the <code>presets</code>.</p>
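<p>With both optional presets added, the <code>presets</code> array would look like this (a sketch; include only the presets you actually use):</p>
<pre><code class="language-javascript">// babel.config.js
module.exports = {
  presets: [
    ['@babel/env', { modules: false }],
    '@babel/preset-typescript', // only if you write TypeScript
    '@babel/preset-react', // only if you write JSX
  ],
  // ...plugins and env as shown above
}
</code></pre>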
<p>Add a <code>browserslist</code> section in your package.json:</p>
<pre><code class="language-json">...
  &quot;browserslist&quot;: [
    &quot;last 2 Chrome versions&quot;,
    &quot;last 2 Firefox versions&quot;,
    &quot;last 2 Edge versions&quot;,
    &quot;last 2 Opera versions&quot;,
    &quot;last 2 FirefoxAndroid versions&quot;,
    &quot;last 2 iOS versions&quot;,
    &quot;last 2 Safari versions&quot;
  ]
...
</code></pre>
<p>If you use a different browserslist query that includes targets without support for generator functions and/or async/await, there is something you have to tell your users:</p>
<p>Babel&#39;s transform-runtime plugin will import <code>regenerator-runtime</code>. This library depends on a globally available Promise constructor. <strong>But</strong> Babel will not include a Promise polyfill for regenerator-runtime, probably because it adds polyfills only for things genuinely belonging to <em>your</em> code, not external library code. That means, if your use case meets these conditions, you should mention in your README or installation instructions that the users of your lib have to make sure a Promise implementation is available in their application.</p>
<p>And that is it for the Babel setup.</p>
<p>Next up: Compiling with the TypeScript compiler</p>
]]></content:encoded>
      </item>
      <item>
        <title>Inheritance of viewport-percentage lengths in Chrome</title>
        <link>https://tobias-barth.net/blog/Vererbung-von-viewport-percentage-lengths-in-Chrome</link>
        <pubDate>Wed, 05 Jun 2013 16:59:00 GMT</pubDate>
        
        <description>I recently stumbled over a problem with viewport-relative lengths in CSS.</description>
        <content:encoded><![CDATA[<p>I recently stumbled over a problem with <a href="http://www.w3.org/TR/css3-values/#viewport-relative-lengths">viewport-relative lengths</a> in CSS. The concrete situation was the following:</p>
<p>I had an unordered list whose items each contained a <code>&lt;div&gt;</code>, which in turn contained an <code>&lt;a&gt;</code>:</p>
<pre><code class="language-html">&lt;ul&gt;
  &lt;li&gt;
    &lt;div&gt;&lt;a href=&quot;#&quot;&gt;Test&lt;/a&gt;&lt;/div&gt;
  &lt;/li&gt;
&lt;/ul&gt;
</code></pre>
<p>First, an example of how it is supposed to work. The <code>&lt;li&gt;</code> elements should have a defined height, and the <code>&lt;div&gt;</code>s should be exactly as tall. The CSS for that:</p>
<pre><code class="language-css">* {
  margin: 0;
  padding: 0;
}
ul {
  list-style-type: none;
}
li {
  background: blue;
  height: 10em;
}
div {
  background: red;
  height: 100%; /* Exactly as tall as its container */
}
</code></pre>
<p><a href="http://jsfiddle.net/PgTZ2/2/">Here is a JSFiddle for it</a>. As expected, the red <code>&lt;div&gt;</code> covers the whole list item. Feel free to look at it in different browsers.</p>
<p>But now we give the <code>&lt;li&gt;</code> element a size that depends on the viewport:</p>
<pre><code class="language-css">li {
  height: 30vw; /* The height should be 30 percent of the viewport width */
}
</code></pre>
<p><a href="http://jsfiddle.net/PgTZ2/3/">Here is the modified demo</a>.</p>
<p>In Firefox it still looks exactly like before. That is what one (I) would expect. After all, the list item has a defined size, and I am saying that its direct child element should have 100% of that size. Looking at the result in Chrome, however, a different picture emerges.</p>
<p>The <code>&lt;li&gt;</code> does have the right size, but the <code>&lt;div&gt;</code> is only as tall as its content (one line of text) requires. Even IE9 renders it correctly (like FF). Opera currently does not support viewport-relative lengths, but since <a href="http://business.opera.com/press/releases/general/opera-gears-up-at-300-million-users">a WebKit engine will soon be rendering there too</a>, things will probably change only moderately for the better.</p>
<p>So apparently, at the moment, you either have to assign the <code>v*</code> size to every child element as well for Chrome in such cases, or work with other length units. A pity.</p>
]]></content:encoded>
      </item>
      <item>
        <title>jQuery's scrollTop() and border-box</title>
        <link>https://tobias-barth.net/blog/jQuerys-scrollTop-und-border-box</link>
        <pubDate>Mon, 28 Jan 2013 22:01:00 GMT</pubDate>
        
        <description>The bug the client reported came down to this: the originally used `$(document).scrollTop()` did not work in IE8, and after he replaced it with the working `$('html').scrollTop()`, it no longer worked in WebKit browsers.</description>
        <content:encoded><![CDATA[<p><strong>As far as I can tell, the following only affects jQuery &lt; 1.8.</strong></p>
<p>Last week I found the first candidate for the craziest bug of this year.</p>
<p>The bug the client reported came down to this: the originally used <code>$(document).scrollTop()</code> did not work in IE8, and after he replaced it with the working <code>$(&#39;html&#39;).scrollTop()</code>, it no longer worked in WebKit browsers.</p>
<p>The solution was found fairly quickly: in the script in question, I simply replaced the one <code>scrollTop()</code> call with the expression <code>($(document).scrollTop() || $(&#39;html&#39;).scrollTop())</code>. That settled it, and every browser could pick whichever value suited it (or rather whichever was not 0; since the code only ran while scrolling, that was sufficient).</p>
<p>But what was the cause? After a lot of experimenting, I had no choice but to look into the jQuery source and see how jQuery.scrollTop() is defined. For the jQuery 1.6.4 in use, it looks like this:</p>
<pre><code class="language-javascript">// Create scrollLeft and scrollTop methods
jQuery.each( [&quot;Left&quot;, &quot;Top&quot;], function( i, name ) {
  var method = &quot;scroll&quot; + name;

  jQuery.fn[ method ] = function( val ) {
    var elem, win;

    if ( val === undefined ) {
      elem = this[ 0 ];

      if ( !elem ) {
        return null;
      }

      win = getWindow( elem );

      // Return the scroll offset
      return win ? (&quot;pageXOffset&quot; in win) ? win[ i ? &quot;pageYOffset&quot; : &quot;pageXOffset&quot; ] :
      jQuery.support.boxModel &amp;&amp; win.document.documentElement[ method ] ||
      win.document.body[ method ] :
      elem[ method ];
    }

    // Set the scroll offset

    // Not relevant here
  };
});
</code></pre>
<p>The IE8 case was supposed to be covered by the line starting with <code>jQuery.support.boxModel</code>. This property is actually meant to check whether the browser supports the W3C box model. The problem is that in this case it returns <code>false</code>, even though the document is fine and the browser is in standards mode. Why? So let&#39;s look at how <code>jQuery.support.boxModel</code> gets set:</p>
<pre><code class="language-javascript">// Figure out if the W3C box model works as expected
div.style.width = div.style.paddingLeft = &#39;1px&#39;;
support.boxModel = div.offsetWidth === 2;
</code></pre>
<p>A test div is created, various properties are assigned to it, and it is then appended to the <code>body</code>. Since the element is one pixel wide and additionally gets a padding of one pixel, <code>offsetWidth</code>, which holds the total width, should return 2px.</p>
<p>But it does not.</p>
<p>The reason is that I set <code>box-sizing: border-box;</code> in my CSS. That is actually a great thing, because elements that you give a certain width are then really that wide &ndash; no matter whether they contain padding or borders. But in this case the following happens: we give an element a left padding of 1px and then say that the whole element, including padding and borders, should be 1px wide. That means there is no room left for the actual content, which does not matter, because the element has no content anyway and only exists for testing. But it also means that <code>offsetWidth</code> now contains 1 as well; after all, that is the total width of the element. So the test <code>div.offsetWidth === 2</code> naturally fails, and <code>scrollTop()</code> no longer returns <code>window.document.documentElement.scrollTop</code>, but 0.</p>
<p>You really have to figure that one out first.</p>
<p>As of jQuery 1.8 the problem no longer exists, because box-model support is no longer tested for, or rather, that test is no longer consulted in <code>scrollTop()</code>.</p>
<p>Why WebKit does not understand <code>$(&#39;html&#39;).scrollTop()</code>, by the way, is still not entirely clear to me.</p>
]]></content:encoded>
      </item>
      <item>
        <title>Node.js performance and socket hang up</title>
        <link>https://tobias-barth.net/blog/node-js-performance-socket-hangup</link>
        <pubDate>Sat, 09 Apr 2016 18:45:00 GMT</pubDate>
        
        <description>An unusually large number of "socket hang up" messages. Not so many that we are in serious, visible trouble, but, and this is the especially interesting part, considerably more than other, Java-based web apps that query the same services on every request.</description>
        <content:encoded><![CDATA[<p>In the team at a large German internet company where I am currently working, we have a so far rather unexplained problem that keeps showing up in our log files.</p>
<p>We have an Express-based web app there that renders the content essentially completely on the server. Among other things, it talks to various microservices: on every request to the Express app, it queries several internal, separate services for data via AJAX and, based on their responses, renders the content that is sent to the client.</p>
<p>This works quite well, but in our app&#39;s log files we see an unusually large number of &quot;ERROR: socket hang up&quot; messages. Not so many that we are in serious, visible trouble, but, and this is the especially interesting part, considerably more than other, Java-based web apps that query the same services on every request.</p>
<p>So primarily it does not seem to have anything to do with the internal services, but with our Node app. So far we do not really have an idea what it could be. For the AJAX calls we use <a href="https://github.com/mzabriskie/axios">Axios</a> and also define a timeout with it, set to 60ms. That cannot be the cause, though, because then we would see timeout errors and not socket hang-ups.</p>
<p>So I scoured the internet for what could produce these socket hang-ups in Node. In doing so I came across an interesting <a href="http://www.murvinlai.com/remote-http-request.html">blog post</a>. It describes that such errors can indeed come from the remote side, but also from the limits of the system the Node server runs on. There is a limit on the number of simultaneously open files, and this limit applies per login user. The default value is 1024. A socket, like everything on Unix-like systems, is a file. So a user (in this case the user owning the Node process) can have at most 1024 files open at the same time.</p>
<p>That would definitely be a lead. The author of the mentioned blog article recommends a limit of 10240, i.e. ten times as much. In addition, the <code>maxSockets</code> value of Node&#39;s http.Agent should be set to a number just below the ulimit. Node&#39;s default for maxSockets is (by now) <code>Infinity</code>, but in our app it is currently set to 25. I still have to investigate what the reason for that limit was.</p>
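<p>As a sketch (the concrete numbers are of course up for debate): an http.Agent with a raised <code>maxSockets</code> value that outgoing requests can use. With Axios it can be passed via the <code>httpAgent</code> option.</p>
<pre><code class="language-javascript">const http = require('http');

// Allow roughly as many parallel sockets as a raised ulimit permits;
// keepAlive reuses connections and reduces socket churn on top of that.
const agent = new http.Agent({
  keepAlive: true,
  maxSockets: 10000,
});

// e.g. with Axios: axios.get(url, { httpAgent: agent });
</code></pre>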
<p>Without knowing how our Java apps work or how they are configured, I like the thought that our Node.js app processes incoming requests so fast, and accordingly hits the services in parallel so often, that it runs into system limits the Java cruisers never touch.</p>
<p>We will see.</p>
]]></content:encoded>
      </item>
    </channel>
  </rss>
