Up to speed on the new features of Javascript

Michael Melanson
21 November 2019
15 minute reading time

A few years ago, many great new features were added to Javascript in the ES5, ES6 and ES7 language versions that make it a much better language for building large web applications and services. If you're someone who learned Javascript a few years ago but haven't learned these new features, this article will help get you up to speed.

A short history of Javascript

Javascript was created for the Netscape browser in 1995. The first version of the interpreter was written in only about two weeks, and so it was a quick and dirty implementation intended to be used for small interactive features in web pages.

In 1997 it was standardized as "ECMAScript" (named after the standards body, the European Computer Manufacturers Association). This is where names like "ES6" come from, but otherwise the name ECMAScript is rarely used. The language spread and became one of the most widely used languages in the world. A later attempt to radically evolve the language (ECMAScript 4) was abandoned before it was ever released.

But once developers started using Javascript for single-page applications (web applications where all the routing between pages is handled in the browser, not by requesting new HTML pages from the server) in about 2005-2008, people started getting frustrated by its limitations. But it's hard to change a language that's used so widely.

This led to languages such as CoffeeScript (released in 2009) where developers could write code in a more sophisticated language, which was then compiled to Javascript so it could run in a browser. This acted as a way to prototype new language features, and many ideas from CoffeeScript were eventually rolled back into Javascript itself. After ES5 (2009), the big jump came with ES6 (also known as ES2015, released in 2015), followed by ES7 (ES2016, released in 2016).

In the rest of this article I'm going to talk about many of the new features in these most recent releases of the Javascript language.

New variable bindings (let and const)

In old Javascript there was only one way to declare a variable – by using var.

var name = "Michael";
console.log(name); // prints "Michael"

var still exists of course, but ES6 added two new binding types (ways of declaring variables): let and const. The difference between them is that a let binding can be reassigned, while a const binding can't:

const name = "Michael";
name = "Bob"; // error

let name = "Michael";
name = "Bob"; // okay

But, it's important to understand that the const binding doesn't make the value immutable (unable to be changed at all). It just means that you can't reassign the binding to an entirely new value. You can still mutate the value it points to – for example, you can change fields in an object or elements in an array like this:

const michael = { name: "Michael", age: 34 };

// it's my birthday!
michael.age = 35;

What's the difference between let and var?

If both let and var can be changed, why create a whole new binding type? The difference between them is that let doesn't cause the variable to be hoisted. What's hoisting?

In most programming languages, if you declare a variable in the middle of a scope then you can access that variable only later in the scope. In another language, this snippet would throw an error about message not being defined:

console.log(message);
var message = "Hello world!";

But Javascript is different; that snippet won't throw an error. What do you think that will print?

The answer is that it will print the value undefined. On the first line, message already exists as a variable in that scope, even though it's before the var statement – it just has the value undefined. Why does it exist? Because the var binding hoisted the variable to the top of the scope, as if it were declared at the very top. Then the var statement itself just assigns it the value "Hello world!". It's exactly the same as if you wrote this:

var message = undefined;
message = "Hello world!";

But this doesn't happen with either let or const. You can only reference one of these bindings after they are declared. So let is like var except that it's not hoisted.
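Here's a minimal sketch of the difference (the variable names are made up for illustration):

```javascript
// With var, the binding exists (as undefined) before its declaration line.
console.log(typeof hoisted); // prints 'undefined', no error
var hoisted = "Hello";

// With let, referencing the binding before its declaration throws.
try {
  console.log(notYetDeclared);
} catch (error) {
  console.log(error instanceof ReferenceError); // prints true
}
let notYetDeclared = "world";
```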

tl;dr My recommendation is that there's no reason to use var anymore. You should also prefer to use const unless you actually need to reassign a variable.

Array and object matching (destructuring)

The next two features – destructuring, and object shorthand – are probably the new features I use the most.

Destructuring is a syntax for declaring local variables and assigning them to fields from an object. It's best explained with an example:

const michael = { name: "Michael", age: 34 };
const { name } = michael; // (1)
console.log(name); // prints 'Michael'

The second line there, marked (1), destructures the michael variable, creating a variable name initialized to michael.name. It means the same thing as this:

const name = michael.name;

So far this seems not too useful. But you can do this for multiple fields at the same time:

const { name, age } = michael;
console.log(name, 'is', age, 'years old'); // prints 'Michael is 34 years old'

You can also do this on function parameters:

function printBirthday({name, age}) {
  console.log(name, 'is', age, 'years old');
}

printBirthday(michael); // prints 'Michael is 34 years old'

The other feature of destructuring is that you can use rest syntax (...) to collect all the remaining fields of the object that you haven't mentioned into a new object:

function printBirthday({name, age, ...rest}) {
  console.log(name, 'is', age, 'years old');
  console.log("Here's some more things about them:", rest);
}

const michael = { name: "Michael", age: 34, city: "Ottawa" };
printBirthday(michael); // prints "Michael is 34 years old" and "Here's some more things about them: { city: 'Ottawa' }"

Notice that it only showed the city field, because that was the only field in the input object other than the name and age that were pulled out specifically.
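The "array" half of the feature works the same way, matching by position instead of by field name (the values here are made up):

```javascript
const scores = [90, 75, 60, 45];
const [first, second, ...others] = scores;
console.log(first);  // prints 90
console.log(second); // prints 75
console.log(others); // prints [ 60, 45 ]

// A handy trick: swap two variables without a temporary.
let a = 1;
let b = 2;
[a, b] = [b, a];
console.log(a, b); // prints 2 1
```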

Destructuring is one of those features that when it's described like this it sounds kind of interesting. But once you start using it, it becomes a natural way of working with object values and you use it all the time.

Shorthands for creating objects

If destructuring is for taking apart objects, the flip side is the shorthand for putting them together. The first is a notation for creating fields bound to local variables:

const name = "Michael";
const age = 34;
const michael = { name, age }; // (1)
console.log(michael.name, 'is', michael.age, 'years old');

On the line marked (1), it creates an object with two fields, name and age, and assigns them to the values from variables of the same name. It's the same thing as writing this:

const michael = { name: name, age: age };	

That's nifty and convenient, and avoids the extra noise of writing the same name twice when the variable and the field share a name. I often use it to build up return objects like this:

function someFunction() {
  const one = "first value";
  const two = "second value";
  return { one, two };
}

This lets you deal with each field separately, then the return line just packages them up together into an object with the right shape.

You can also use the spread operator we saw earlier when creating objects. This copies all the fields from one object into the new object. One place I use this a lot is when creating objects in test cases, where you have some fields that are required and some are specific to the test.

// at the top of the file
const defaultPerson = { name: "Michael", age: 34 };

// the test function
it('people are adults when they turn 18', () => {
  const person = { ...defaultPerson, age: 18 }; // (1)
  // ... assertions about `person` go here
});

On the line marked (1), we create a new person that's like the default one except that it has a different age. Any other fields, like the name: "Michael" field here, will have the value from defaultPerson.

If you want you can spread in multiple objects. That's perfectly okay – it'll just do any of the assignments in order:

const person = {
  ...defaultPerson,
  age: 18,
  ...overrides,
};

In this case it will copy the fields from defaultPerson, then assign age to be 18, then copy all the fields from overrides.

Arrow functions

It's always been possible to write simple anonymous functions in Javascript and people have used this to write callbacks, for example. But one of the awkward things about Javascript is that when you do this, the this variable is special. Its value isn't passed as part of the closure (the variables captured from the scope where a function is defined), but rather is set when the function is called.

(The technical term for this is that this is dynamically scoped but everywhere else in Javascript things are lexically scoped.)

This difference led to lots of confusion and hacks where you'd have to write myFunction.bind(this) to create a new version of the function where this had a particular value no matter who calls it.
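To see the problem that .bind works around, here's a small sketch (the counter object is made up for illustration):

```javascript
const counter = {
  count: 0,
  increment: function () {
    this.count++;
  }
};

// Calling it as a method works: `this` is `counter`.
counter.increment();
console.log(counter.count); // prints 1

// But extracting the function loses `this`, because `this` is set
// at call time, not captured when the function is defined:
const extracted = counter.increment;
// extracted(); // throws (or touches a global) – `this` isn't `counter`

// The classic workaround: pin `this` with .bind.
const bound = counter.increment.bind(counter);
bound();
console.log(counter.count); // prints 2
```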

The solution to this was to create a new way of writing inline functions, usually called "arrow functions" because of the => operator you use to define them. They look like this:

const printBirthday = ({name, age}) => {
  console.log(name, 'is', age, 'years old');
};

printBirthday(michael); // prints 'Michael is 34 years old'

This does the same thing as an example earlier in the section on destructuring. It creates an arrow function, then assigns it to a variable printBirthday which can then be called just like any function. If you use it inside a class, then the function will have the this value you expect.
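For example, here's a sketch of an arrow function used as a callback inside a class (Greeter is a made-up example):

```javascript
class Greeter {
  constructor(name) {
    this.name = name;
  }

  greetAll(people) {
    // The arrow function captures `this` from greetAll's scope,
    // so `this.name` works inside the callback.
    return people.map((person) => `${this.name} greets ${person}`);
  }
}

const greeter = new Greeter("Michael");
console.log(greeter.greetAll(["Alice", "Bob"]));
// prints [ 'Michael greets Alice', 'Michael greets Bob' ]
```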

Promises and async / await

I'm saving the best for last here because Promises, and the async and await syntax that came with them, are probably the single most significant change in Javascript. But first I have to go back a bit and talk about how the Javascript runtime works.

Aside: How the Javascript runtime works

Javascript runs in a single threaded environment. It can only be running one line of code at a time – anything else needs to wait. There's no way for a web page to be running two pieces of Javascript code at the same time.

(For the pedantic readers: I'm ignoring WebWorkers here, which is a whole other topic that you can read about if you're interested.)

This has the great benefit of avoiding all the problems that you run into in a multithreaded environment – where two processors might be running different code in the same memory at the same time – meaning your code can be much simpler and easier to reason about. But the drawback is that if you want to stay responsive you can never block the main thread for more than a few milliseconds. Web browsers want to run at 60 frames per second, which gives you about 16 milliseconds (1000 / 60) to work before you start delaying the next frame. If that happens, the interface will start to feel unresponsive or "janky".

To make it appear like we can do more than one thing, we have to interleave pieces of work – instead of waiting, we need to be able to give up the processor to let something else run. You can see this interleaving if you open the "Performance" tab in your browser's development tools: as you click around a page, the main-thread track is sliced up into many small bars, each one a piece of work being scheduled. This interleaving is done with event handlers – callback functions that get invoked when interesting events happen. If you want to know when a button gets clicked, you can set a click handler.

<!-- in your HTML -->
<button onclick="onButtonClicked()">Click me</button>

// in your Javascript
function onButtonClicked() {
  console.log("You clicked the button!");
}

This means that you don't need to write a loop like while (!button.clicked()) { ... } to wait for a click. You can set the handler, then give up the processor for other work. When a click happens, the browser knows how to let you know so you can respond to it.

The same is true for network requests, by the way. When you make a request, it may take a long time (up to a few seconds) for a result to come back. Rather than waiting for a response, you can set an event handler on the request to say "call this function when a result comes back". Then you can give up the main thread so other work can happen.

Callback hell

This all works great for simple cases. The problem is what happens when you start building larger applications. Sometimes you want to build code that does a sequence of steps. You want to do something, then wait for an event, then do another thing. You also need error handlers for lots of asynchronous operations.

Very quickly, you end up with very deeply nested code that looks like this:

doOneThing(function() {
  doSecondThing(function() {
    console.log("All done!");
  }, function() {
    console.log("An error happened on the second thing!");
  });
}, function() {
  console.log("An error happened on the first thing!");
});

Imagine this being several layers deep. It becomes an unmanageable, tangled mess of callback functions. People called this "callback hell".

How promises work

Promises help with callback hell by giving you a way of representing some work you want to happen in the future once a value is available.

A promise is just a Javascript object that you can call .then(...) on giving it a function. It will call that function when the promise is resolved if the work completes successfully. You can also call .catch(...) on it to give it a function you want called if the promise is rejected because the work failed.
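You can also create a promise yourself with the Promise constructor – here's a sketch (the "work" here is deliberately trivial):

```javascript
const work = new Promise((resolve, reject) => {
  const succeeded = true; // stand-in for some real asynchronous work
  if (succeeded) {
    resolve(42); // the value handed to .then
  } else {
    reject(new Error("the work failed")); // the error handed to .catch
  }
});

work
  .then((value) => console.log("resolved with", value)) // prints 'resolved with 42'
  .catch((error) => console.log("rejected with", error));
```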

Both of these functions – then and catch – each return promises themselves. This means that they can be chained:

fetch('/products.json')
  .then((response) => response.json())
  .then(({products}) => console.log("These are the products in your store:", products))
  .then(() => console.log("All done!"))
  .catch((error) => console.log("An error occurred:", error));

This lets you flatten what would be deeply nested callbacks into a sequence of steps. It's also composable – you can combine promises together in a natural sort of way. Let's say we wanted to hide the error handling logic in the catch step. You could do this by writing a new function that fetches the products, and also handles errors if it fails:

function fetchProducts() {
  return fetch('/products.json')
    .then((response) => response.json())
    .then((result) => { console.log("All done!"); return result; })
    .catch((error) => console.log("An error occurred fetching the products:", error));
}

This function returns a promise that resolves once the fetch occurs – and it also prints a message and handles the error. Now someone can call it and chain on their own work to use the products that were just fetched:

fetchProducts()
  .then(({products}) => console.log("These are the products in your store:", products));

Being able to create higher level operations like this is one of the most powerful parts of promises. It was very hard to do with just callbacks.
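Another composition tool worth knowing is Promise.all, which combines several promises into one that resolves once they all have (the values here are made up):

```javascript
const one = Promise.resolve("first");
const two = Promise.resolve("second");

// Promise.all resolves to an array of results, in the same order
// as the input – which pairs nicely with array destructuring:
Promise.all([one, two]).then(([a, b]) => {
  console.log(a, b); // prints 'first second'
});
```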

async and await

As great as promises are all by themselves, writing the list of then and catch clauses is still awkward and hard to read, especially if you need to do loops. You need to be sure that you chain promises together properly and never leave them 'dangling'. The next step was to add syntax to Javascript to make promises a core part of the language so asynchronous code is easier to write and reads more like synchronous code.

The async and await keywords work together. You put async on a function where you want to use await. Inside an async function, you can use await to essentially wrap everything after that line in a then clause. You can also use a try/catch block to catch rejected promises the same way as any other exception in Javascript.

Let's rewrite the fetchProducts function above again to use this:

async function fetchProducts() {
  try {
    const response = await fetch('/products.json');
    const result = await response.json();
    console.log("All done!");
    return result;
  } catch(error) {
    console.log("An error occurred fetching the products:", error);
  }
}

Then you can use it like this:

// inside an async function
const {products} = await fetchProducts();
console.log("These are the products in your store:", products);

In this way you can almost pretend that you're writing regular synchronous code and forget that it runs asynchronously. But it does: this code won't actually block waiting for the result to come back. It will pause, give up the processor, and let other things run. Then, when a result eventually comes back from the server (or an error occurs), your code will pick up where it left off.

The await keyword works properly even inside loops, if statements, and so on.
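For example, here's a sketch of awaiting inside an ordinary loop (delay is a little helper defined here, not a built-in):

```javascript
// Resolve with `value` after `ms` milliseconds.
function delay(ms, value) {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

async function shoutAll(words) {
  const results = [];
  for (const word of words) {
    // Each iteration pauses here until its promise resolves,
    // so the results come out in order.
    results.push(await delay(1, word.toUpperCase()));
  }
  return results;
}

shoutAll(["hello", "world"]).then((results) => console.log(results));
// prints [ 'HELLO', 'WORLD' ]
```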

More reading

I hope this article was helpful to you!

I certainly haven't covered all the new features in Javascript. This is just a few I think are the most important, but there are more – including classes, the module system, template literals, and others. If you want to learn more about them I suggest going to es6-features.org where you'll find a list of all the features.

If you want more details about any of the jargon terms I've used in this article, I suggest googling for mdn <term>. This will point you to the Mozilla Developer Network's documentation, which is a great resource that has all the details about every part of Javascript.

Thanks for reading

If you need an expert at building awesome products for the web, I want to hear from you! I'm available for freelance development and consulting. More information on the services page.

Created by Michael Melanson
Header image by Chelsey Faucher / Unsplash
Post image by Sushobhan Badhai