AWS Amplify, React, Babel, and Webpack Setup

amplify, aws, javascript

This post will likely become out of date very shortly as Amplify improves. If more than a couple of months have passed since this was posted, please consider it “reader beware”.

A few months ago, I noted how wonderful I found AWS Amplify. Since then, Amplify has only improved.

Amplify is a JavaScript library and CLI toolkit that (1) brings together several existing AWS serverless products into one easy-to-use package in your front end, and (2) provides an easy way to set up and manage the backend infrastructure those front end features would rely on.

With Amplify, you can get an almost Rails-plus-Heroku-like experience: amplify add auth adds authentication to your application, backed by an AWS Cognito identity pool; amplify add hosting sets up S3 or S3 + CloudFront hosting for the static assets of the site; and amplify publish both deploys the Amplify-generated CloudFront setup and pushes your static code to S3, much like git push heroku.

Amplify additionally has some associated libraries to better integrate Amplify with popular front end JavaScript frameworks like React and Angular.

However, I had some trouble getting Amplify set up with React, ES6, and Webpack. I believe that at this time there are several library versions that are in flux, meaning that many existing tutorials are out of date.

At the time of writing, the steps below should give you a functional setup for a new NPM project. Note that this does not include the aws-amplify-react package.

First, install the amplify CLI per the Amplify Quick Start, and configure Amplify:

npm install -g @aws-amplify/cli
amplify configure

Then make a basic folder structure - you can simply touch each file for now.

- amplify-js-app
    - index.html
    - package.json
    - webpack.config.js
    - /src
        |- app.js

And inside your app folder, initialize your amplify project with amplify init.

Then, following the prompts as appropriate:

npm init
npm install aws-amplify --save
npm install webpack --save-dev
npm install webpack-dev-server --save-dev
npm install copy-webpack-plugin --save-dev

npm install react --save
npm install react-dom --save

npm install babel-core@6 --save-dev
npm install babel-loader@7 --save-dev
npm install babel-preset-env --save-dev
npm install babel-preset-react --save-dev
npm install babel-polyfill --save
npm install babel-runtime --save

Note that we’re using Babel 6 here. At the time of writing, Babel 7 had been released. Babel 7 renames several of the Babel packages - for example, babel-runtime becomes @babel/runtime. However, at least to the best of my knowledge, there is some dependency in the React and/or aws-amplify packages that prevents using webpack + Babel 7 + React + Amplify together.

Once this is in, add the following scripts to your package.json:

"scripts": {
  "start": "webpack-dev-server --port 8080",
  "build": "webpack"
}

And setup your webpack.config.js as follows:

const CopyWebpackPlugin = require('copy-webpack-plugin')

module.exports = {
  mode: 'development',
  entry: './src/app.js',
  output: {
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx)$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          // 'env' transpiles ES6+; 'react' handles JSX
          presets: ['env', 'react']
        }
      }
    ]
  },
  resolve: {
    extensions: ['.js', '.jsx']
  },
  plugins: [
    // Copies index.html from the project root into the output directory
    new CopyWebpackPlugin(['index.html'])
  ]
};

Note that the key aspects here are adding ‘.js’ and ‘.jsx’ to the resolve.extensions section, and adding CopyWebpackPlugin to the plugins section.

To use this setup, you need an index.html in your project root that references ‘bundle.js’, and an entry point at ‘src/app.js’ that webpack will compile to ‘bundle.js’. In ‘src/app.js’, you can start with the following:

import React from 'react';
import ReactDom from 'react-dom';
import Amplify from 'aws-amplify';
import awsmobile from './aws-exports';

// Connect the Amplify library to the backend resources generated by
// amplify init (the CLI creates aws-exports.js for you)
Amplify.configure(awsmobile);

Once this is done, you can add hosting with amplify add hosting, and then push any changes you’ve made to your app with amplify publish.

_Something wrong? Or was this helpful? Let me know - twitter @jamescgibson, Mastodon @jamescgibson@refactorcamp.org_

Mapping Versus Capital Budgeting

finance, mapping

Preface: I’ve become obsessed with Wardley Maps. Wardley maps are true maps - that is, planes with direction and movement - that help you understand the implications of a business’ strategy. If you’re not familiar with them, I recommend reading Simon Wardley’s WIP book on Medium or trying Ben Mosior’s Build your First Wardley Map.

Wardley maps have become an important part of my decision making toolkit. I use them to understand & communicate the strategic considerations of companies I interact with.

However, the strategic implications of Wardley maps can come into opposition with other decision making tools. Whenever we see contradictions from trusted rules, we must dive deeper - either one of our tools is broken, or we’re not seeing the whole situation.

In this post, I’ll consider what we’re missing when the Wardley Map contradicts a capital budgeting rule, like the IRR rule.

This post will assume you are familiar with Wardley Maps. It will also assume you understand the IRR rule. If you are unfamiliar with the IRR rule, see a brief explanation at the bottom of this post 1.

Let’s consider a real-life situation. Here’s an example map for, say, Dropbox, prior to them moving their storage services from S3 to a custom-built solution.

Dropbox Original Map

Link to Full Size

Let’s say we’re Dropbox, and we’re evaluating a proposal to invest significantly in building our own storage service and move off of AWS. The IRR rule works out - at least on our projections, we can invest some reasonable amount and save immensely on our AWS bills.

But building a custom replacement for Amazon S3 means replacing something that is at least a ‘product’, if not a utility, with a custom-built option. That moves our dependency to the left on the Wardley map - against the grain of history! By doing so, we’ll lose out on, or have to pay to keep up with, the benefits that would accrue naturally over time as components move rightwards on the map.

Let’s assume that we’ve already negotiated as hard as we can against Amazon, and that we can’t get a suitable quote from any other provider (replacing S3 with another competing service would not imply changing our map, and so there would be no conflict, and negotiating well is just good business sense).

We appear to be in a dilemma: either (a) our Wardley Map is steering us wrong, and custom-built is the way, or (b) our IRR rule is no good, and we should keep paying Amazon for S3.

I propose that in this situation, our cost of capital is too low. We should find ways to increase our cost of capital - by adding risk, by developing new products, or by buying back equity.

Consider how the cost of capital varies with the x-axis of the Wardley map. To the left, in the Genesis and Custom Built zones, we have high risk and high uncertainty. Our cost of capital is high. As we move rightwards, towards utilities, our cost of capital should decrease - building a new power plant can be done for practically zero cost of capital, with government bonds.

Consider Dropbox’s map above. As with the maps of most companies, the end-user need being fulfilled sits leftwards of many of the internal dependencies. That is, the cost of capital for the business as a whole should be higher than the cost of capital for some of the internal dependencies considered in isolation. If our cost of capital is such that profitable but non-strategic investments low in our value chain look attractive, it must be because our cost of capital is that of a more mature company than we actually are - and we should increase our cost of capital (share buybacks! dividends!) to compensate, not tilt at the windmill of building a custom utility.


  1. If you are unfamiliar, the IRR rule is this: when a company decides how to invest (that is, how to allocate or budget capital), it should consider each project’s Internal Rate of Return - that is, the expected return from the results of the investment - and compare it to the company’s cost of capital. If IRR > cost of capital, the project may be a good choice; if IRR < cost of capital, the project should not move forward. Fundamentally, the IRR rule asks: if the company makes this investment, do we expect to make a profit in excess of our cost of capital? As an example, consider a company that makes TV remotes and is deciding whether to spend $1m on a new machine that would enable it to sell, after the cost of production and sales, an extra $200k’s worth of TV remotes each year for 5 years, then scrap the machine and recover $700k at the end of the 5th year. The internal rate of return of this project is 15.60%. If the company can borrow $1m from the bank at 7%, then since 15.6% > 7%, the investment should be made.
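
The footnote’s IRR figure can be checked numerically. Here is a small sketch in plain JavaScript (not from the original post) that finds the rate at which the project’s NPV crosses zero, using bisection:

```javascript
// Cash flows from the footnote: -$1m up front, $200k/yr for 5 years,
// plus $700k salvage recovered at the end of year 5.
const cashFlows = [-1000000, 200000, 200000, 200000, 200000, 900000];

// Net present value of a series of yearly cash flows at a given rate.
const npv = (rate, flows) =>
  flows.reduce((sum, cf, t) => sum + cf / Math.pow(1 + rate, t), 0);

// Bisect between 0% and 100%: NPV decreases as the rate rises,
// so the IRR is the rate where NPV crosses zero.
function irr(flows) {
  let lo = 0, hi = 1;
  for (let i = 0; i < 100; i++) {
    const mid = (lo + hi) / 2;
    if (npv(mid, flows) > 0) lo = mid; else hi = mid;
  }
  return lo;
}

console.log((irr(cashFlows) * 100).toFixed(2)); // prints 15.60
```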

This Post Was Supposed to Be About AWS Cognito

Some notes from setting up AWS Cognito for authentication on a new SPA, which turn into notes about AWS Amplify.

Some notes on setting up AWS Cognito User Pools:

I have been using the AWS SDK for JavaScript with Node as much as possible, instead of the dashboard, in order to keep steps reproducible.

  1. Note that the aws-sdk library will default to the [default] credentials configured in ~/.aws/credentials, and attempts to override them may fail silently.

  2. Note that it appears that Identity Pools created from the SDK will not appear on the dashboard until you have added a user. Alternatively, this may just be an artifact of eventual consistency on the API.

If you’re looking for the above, you should try Amplify

If you are interested in using AWS Cognito, I suggest you additionally look at AWS Amplify, which helps get you started with Cognito plus several other AWS products in one neat package.

Note, however, that if you use the AWS Amplify CLI,

  1. The default Cognito User Pool that is created will require a phone number and have multi-factor authentication turned on.
  2. These settings cannot be changed after the pool is created.

As such, I suggest not creating a Cognito User Pool through the Amplify CLI; it makes local development far more painful than necessary.

Instead, create the Cognito User Pool through the dashboard or using a script.
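
As a sketch of the script route, here are CreateUserPool parameters for a development-friendly pool - email sign-in, no phone number, MFA off. The pool name is a placeholder; with the aws-sdk v2 JavaScript SDK you would pass these to CognitoIdentityServiceProvider’s createUserPool call.

```javascript
// Sketch only: parameters for a development-friendly Cognito User Pool.
// With aws-sdk v2, you would pass these to:
//   new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' })
//     .createUserPool(params).promise()
const params = {
  PoolName: 'my-dev-user-pool',     // placeholder name
  UsernameAttributes: ['email'],    // sign in with email - no phone number
  AutoVerifiedAttributes: ['email'],
  MfaConfiguration: 'OFF',          // the setting the Amplify CLI locks on
};

console.log(JSON.stringify(params, null, 2));
```

Keeping the parameters in a checked-in script is what makes the setup reproducible, per the note above about avoiding the dashboard.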

If you are using Amplify, I’ve started to create a quickstart development environment, though note that it does not have a local stub of AWS Cognito User Pools.

Type Systems and Incidental Versus Inherent Complexity

programming

Notes (‘epistemic status’): This post is conjecture and hypothesis; the purpose of this post is for me to work through ideas. No authority is claimed.

I am dissatisfied with my throughput as a product engineer 1.

While I hope and believe that I am at least an above-average developer, by throughput 2, in my tool sets of choice, I do not believe my throughput has increased significantly in the last year. Additionally, I am troubled by a nagging feeling that there must be a better way to develop software.

As a (chiefly) Ruby programmer who has never used a strongly and statically typed language professionally, I have been exploring type systems, type theory, and category theory in order to determine if a type system may enable significant throughput increases.

In theory, the benefits of a type system are (a) reducing programming errors, and revealing errors sooner, using the type system, and (b) for statically typed languages, enabling more-powerful program analysis.

In practice, the ability of a type system to reduce programming errors is restricted by the power 3 of the type system. The benefits of more-powerful program analysis are reduced by poor tools, and made unnecessary by powerful language features 4.

What both of these amount to is: a type system can make a programmer more effective at managing complexity. Ideally, it helps the programmer manage inherent complexity - the complexity required by the problem.

The cost of a type system is incidental complexity. When programmers complain about the boilerplate that languages like Java and C++ force for defining interfaces, they are complaining about having to manage complexity not inherent to the problem they are solving.

The “loss of flexibility” complaint is, in my mind, a symptom of extra incidental complexity; more complex systems are harder to change, regardless of whether the complexity is incidental or inherent.

The question, then, is: does a type system provide more benefit in managing inherent complexity than it costs in incidental complexity?

When phrased this way, the problem is more clearly a matter of choice: a programmer’s method of managing complexity is personal to them.


  1. I use the term ‘product engineer’ to mean ‘a software engineer who has and takes responsibility for designing product features as well’.

  2. “Throughput” in the sense used by Eliyahu M. Goldratt in his books; I choose it to avoid the existing connotations and preconceptions of ‘productivity’.

  3. No precise definition of ‘power’ or ‘powerful’, but: (a) a type system’s power increases with its ability to correctly and automatically infer types; (b) a type system’s power increases with the quantity of restrictions that can be placed on a type. That is, Liquid Haskell’s refinement types are more powerful than regular Haskell types, and Idris’ dependent types are more powerful than both, in terms of specificity. The two senses of power can trade off against each other; whether Liquid Haskell’s type system is more powerful than Idris’ is a matter of debate, as I do not know how precisely to weigh Idris’ dependent types against Liquid Haskell’s much better automatic type inference.

  4. “Powerful” here can be taken to mean something like “expressive”; think of the trade-off between Java, with great IDE refactoring tools, and a ‘more powerful’ language like Ruby, where you rarely need those tools in the first place. The ability to easily manipulate the text of the language’s source is among the most powerful features; a good library for manipulating the language’s AST can substitute. This is why I consider Lisp to be one of the most powerful languages.

Debugging Input Lag on Ubuntu 17.10

linux, ubuntu

I recently had an issue with very slight display lag on my Ubuntu 17.10 desktop, which had a very interesting fix.

For context, I have an i5-4XXX series desktop with an NVIDIA GTX 970 and two 1080p Samsung BX2440 ‘SyncMaster’ monitors connected via DVI.

A few nights ago, I turned off my computer to move it so that I could switch to a new desk setup. By doing so, I allowed my OS to apply a few kernel updates that require a reboot to be fully applied.

After rebooting, my machine had a noticeable display lag, on everything from mouse movement to hardware-accelerated video playback on YouTube.

My first thought was that by updating my kernel I had inadvertently applied some update that did not agree with my system. In my experience, this is very, very rare on modern Linux distributions, especially Ubuntu, but with the recent Spectre / Meltdown patches, I figured it was still possible.

So, I began rolling back each package that had been updated, one by one. An hour or so later, after much cursing myself for not just installing NixOS already, I figured I had hit a dead end.

So I dove deeper. Two things were apparent. First, during the period of lag, Xorg was using about 30% of a core, which is very high for Xorg. Second, inspecting the Xorg log in /var/log/Xorg.0.log revealed something curious: hundreds of message blocks like this:

[    12.049] (--) NVIDIA(GPU-0): Samsung SMBX2440 (DFP-4): connected
[    12.049] (--) NVIDIA(GPU-0): Samsung SMBX2440 (DFP-4): Internal TMDS
[    12.049] (--) NVIDIA(GPU-0): Samsung SMBX2440 (DFP-4): 330.0 MHz maximum pixel clock
[    12.049] (--) NVIDIA(GPU-0):

So what was the solution?

Tighten my DVI cable.

As far as I can tell, when I moved my computer, the DVI connection between my GPU and my monitor became ever so slightly loose, in such a way that the GPU detected the monitor as disconnecting and reconnecting about once per second. Even though there was no perceptible monitor flicker or “disconnected” message, the constant disconnecting and reconnecting appears to have consumed enough resources to delay frame rendering slightly every time it happened.

So, lesson for you - if you have display lag on a Linux system that you just can’t debug, maybe you just need to tighten your monitor connections.