In recent years, JavaScript has grown considerably in size. This blog post explores what’s still missing.
Notes:
For more thoughts on the first two issues, see the section on language design.
At the moment, JavaScript only compares primitive values such as strings by value (by looking at their contents):
> 'abc' === 'abc'
true
In contrast, objects are compared by identity (each object has a unique identity and is only strictly equal to itself):
> {x: 1, y: 4} === {x: 1, y: 4}
false
> ['a', 'b'] === ['a', 'b']
false
> const obj = {x: 1, y: 4};
> obj === obj
true
It would be nice if there were a way to create objects that are compared by value:
> #{x: 1, y: 4} === #{x: 1, y: 4}
true
> #['a', 'b'] === #['a', 'b']
true
Another possibility is to introduce a new kind of class (with the exact details to be determined):
@[ValueType]
class Point {
// ···
}
Aside: The decorator-like syntax for marking the class as a value type is based on a draft proposal.
As objects are compared by identity, it rarely makes sense to put them into (non-weak) ECMAScript data structures such as Maps:
const m = new Map();
m.set({x: 1, y: 4}, 1);
m.set({x: 1, y: 4}, 2);
assert.equal(m.size, 2);
This problem can be fixed via custom value types. Alternatively, the management of Set elements and Map keys could become customizable. For example:
Map via hash table: requires one operation for checking equality and another operation for creating hash codes. If you work with hash codes, you want your objects to be immutable. Otherwise, it’s too easy to break the data structure.
Map via sorted tree: requires an operation for comparing two values, to manage the values it stores.
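Until one of those options materializes, customizable keys can be approximated in userland. The sketch below (KeyedMap and its toKey parameter are invented names) derives a primitive key from each object, so that structurally equal keys collide. Note that JSON.stringify is sensitive to property order, so this only works when keys are constructed consistently:

```javascript
// A Map wrapper whose keys are compared via a user-supplied toKey function
class KeyedMap extends Map {
  constructor(toKey) {
    super();
    this.toKey = toKey; // maps an object key to a primitive key
  }
  set(key, value) { return super.set(this.toKey(key), value); }
  get(key) { return super.get(this.toKey(key)); }
  has(key) { return super.has(this.toKey(key)); }
}

const m = new KeyedMap(JSON.stringify);
m.set({x: 1, y: 4}, 1);
m.set({x: 1, y: 4}, 2); // overwrites the first entry
console.log(m.size); // 1
```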
JavaScript numbers are always 64-bit (doubles), which gives you 53 bits plus sign for integers. That means that beyond 53 bits, you can’t represent every integer anymore:
> 2 ** 53
9007199254740992
> (2 ** 53) + 1 // can’t be represented
9007199254740992
> (2 ** 53) + 2
9007199254740994
This is a considerable restriction for some use cases. There is now a proposal for BigInts, real integers whose precision grows as necessary:
> 2n ** 53n
9007199254740992n
> (2n ** 53n) + 1n
9007199254740993n
BigInts also support casting, which gives you values with a fixed number of bits:
const int64a = BigInt.asUintN(64, 12345n);
const int64b = BigInt.asUintN(64, 67890n);
const result = BigInt.asUintN(64, int64a * int64b);
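Casting with BigInt.asUintN(bits, x) computes x modulo 2 ** bits, so values wrap around like fixed-width unsigned integers. A quick check of that behavior:

```javascript
// Values outside the 64-bit range wrap around (modulo 2 ** 64)
console.log(BigInt.asUintN(64, 2n ** 64n + 5n)); // 5n
// The same wrap-around is easier to see with 8 bits:
console.log(BigInt.asUintN(8, 255n)); // 255n
console.log(BigInt.asUintN(8, 256n)); // 0n
```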
JavaScript’s numbers are 64-bit floating point numbers (doubles), based on the IEEE 754 standard. Given that their representation is base-2, you can get rounding errors when dealing with decimal fractions:
> 0.1 + 0.2
0.30000000000000004
That is especially a problem in scientific computing and financial technology (fintech). A proposal for base-10 numbers is currently at stage 0. They may end up being used like this (note the suffix m for decimal numbers):
> 0.1m + 0.2m
0.3m
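Until decimal numbers are available, one common fintech workaround is to compute with integers of the smallest currency unit (the cent amounts below are illustrative):

```javascript
// Computing in integer cents avoids binary rounding of decimal fractions
const priceA = 10; // $0.10, stored as 10 cents
const priceB = 20; // $0.20, stored as 20 cents
const totalCents = priceA + priceB; // exact integer arithmetic
console.log(totalCents / 100); // 0.3 (conversion only for display)
```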
At the moment, categorizing values is quite cumbersome in JavaScript: you have to use either typeof or instanceof, and both have quirks.
typeof has the well-known quirk of categorizing null as 'object'. I’d also consider functions being categorized as 'function' a quirk:
> typeof null
'object'
> typeof function () {}
'function'
> typeof []
'object'
instanceof does not work for objects from other realms (frames etc.). It may be possible to fix these issues via a library (I’ll create a proof of concept, once I have time).
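As a sketch of what such a library could offer (typeCategory is an invented name), one can combine typeof with realm-safe checks such as Array.isArray:

```javascript
// A categorization helper that papers over the typeof quirks.
// Array.isArray works across realms, unlike `x instanceof Array`.
function typeCategory(x) {
  if (x === null) return 'null';
  if (Array.isArray(x)) return 'array';
  return typeof x; // 'function' for functions, 'object' for other objects
}

console.log(typeCategory(null)); // 'null'
console.log(typeCategory([])); // 'array'
console.log(typeCategory(function () {})); // 'function'
```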
C-style languages make an unfortunate distinction between expressions and statements:
// Conditional expression
let str1 = someBool ? 'yes' : 'no';
// Conditional statement
let str2;
if (someBool) {
str2 = 'yes';
} else {
str2 = 'no';
}
Especially in functional languages, everything is an expression. Do-expressions let you use statements in all expression contexts:
let str = do {
if (someBool) {
'yes'
} else {
'no'
}
};
switch works inside do-expressions, too:
const n = 3;
let str = do {
switch (n) {
case 1:
'one';
break;
case 2:
'two';
break;
case 3:
'three';
break;
}
};
assert.equal(str, 'three');
Do-expressions help eliminate the last main use case for Immediately Invoked Function Expressions (IIFEs): Attaching “static” data to a function.
As an example, this is code that uses an IIFE to do so:
const func = (() => { // open IIFE
let cache;
return () => {
if (cache === undefined) {
cache = compute();
}
return cache;
}
})(); // close IIFE
With a do-expression, you don’t need an IIFE:
const func = do {
let cache;
() => {
if (cache === undefined) {
cache = compute();
}
return cache;
};
};
JavaScript makes it easy to work directly with objects. However, there is no built-in way of switching over cases, based on the structure of an object. That could look as follows (example from proposal):
const resource = await fetch(jsonService);
case (resource) {
when {status: 200, headers: {'Content-Length': s}} -> {
console.log(`size is ${s}`);
}
when {status: 404} -> {
console.log('JSON not found');
}
when {status} if (status >= 400) -> {
throw new RequestError(resource);
}
}
As you can see, the new case statement is similar to switch in some ways, but uses destructuring to pick cases. This kind of functionality is useful whenever one works with nested data structures (e.g. in compilers). The proposal for pattern matching is currently at stage 1.
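Until pattern matching lands, that dispatch has to be written with destructuring and if/else chains. A runnable approximation of the example above (handleResource and the response shapes are made up for illustration):

```javascript
function handleResource(resource) {
  const {status, headers = {}} = resource;
  if (status === 200) {
    return `size is ${headers['Content-Length']}`;
  } else if (status === 404) {
    return 'JSON not found';
  } else if (status >= 400) {
    throw new Error('Request error: ' + status);
  }
}

console.log(handleResource({status: 200, headers: {'Content-Length': 512}}));
// 'size is 512'
console.log(handleResource({status: 404})); // 'JSON not found'
```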
There are currently two competing proposals for the pipeline operator. Here, we are looking at Smart Pipelines (the other proposal is called F# Pipelines).
The basic idea of the pipeline operator is as follows. Consider the following nested function calls.
const y = h(g(f(x)));
However, this notation usually does not reflect how we think about the computational steps. Intuitively, we’d describe them as:
Start with the value x.
Apply f() to it.
Apply g() to the result.
Apply h() to the result.
Assign the result to y.
The pipeline operator lets us express this intuition better:
const y = x |> f |> g |> h;
In other words, the following two expressions are equivalent.
f(123)
123 |> f
Additionally, the pipeline operator supports partial application (similar to the method .bind() of functions): The following two expressions are equivalent.
123 |> f('a', #, 'b')
123 |> (x => f('a', x, 'b'))
One important benefit of the pipeline operator is that you can use functions as if they were methods – without changing any prototypes:
import {filter, map} from 'array-tools';
const result = arr
|> filter(#, x => x >= 0)
|> map(#, x => x * 2)
;
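Until the operator is available, piping can be emulated with a small helper function (pipe is a hypothetical userland function, without support for the # placeholder):

```javascript
// Feeds a value through a sequence of functions, left to right
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

const f = x => x + 1;
const g = x => x * 2;
const h = x => x - 3;
console.log(pipe(5, f, g, h)); // 9, same as h(g(f(5)))
```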
JavaScript has always had limited support for concurrency. The de-facto standard for concurrent processes is the Worker API, which is available in web browsers and Node.js (without a flag in v11.7 and later).
Using it from Node.js looks as follows.
const {
Worker, isMainThread, parentPort, workerData
} = require('worker_threads');
if (isMainThread) {
const worker = new Worker(__filename, {
workerData: 'the-data.json'
});
worker.on('message', result => console.log(result));
worker.on('error', err => console.error(err));
worker.on('exit', code => {
if (code !== 0) {
console.error('ERROR: ' + code);
}
});
} else {
const {readFileSync} = require('fs');
const fileName = workerData;
const text = readFileSync(fileName, {encoding: 'utf8'});
const json = JSON.parse(text);
parentPort.postMessage(json);
}
Alas, Workers are relatively heavyweight – each one comes with its own realm (global variables etc.). I’d like to see a more lightweight construct in the future.
One area where JavaScript is still clearly behind other languages is its standard library. It does make sense to keep it minimal, as external libraries are easier to evolve and adapt. However, there are a few core features that would be useful.
JavaScript’s standard library was created before the language had modules. Therefore, functions were put in namespace objects such as Math, JSON, Object and Reflect:
Math.max()
JSON.parse()
Object.keys()
Reflect.ownKeys()
It would be great if this functionality could be put in modules. It would have to be accessed via special URLs, e.g. with the pseudo-protocol std:
// New:
import {max} from 'std:math';
assert.equal(
max(-1, 5), 5);
// Old:
assert.equal(
Math.max(-1, 5), 5);
Such std modules would have several benefits.
Benefits of iterables include on-demand computation of values and support for many data sources. However, JavaScript currently comes with very few tools for working with iterables. For example, if you want to filter, map or reduce an iterable, you have to convert it to an Array:
const iterable = new Set([-1, 0, -2, 3]);
const filteredArray = [...iterable].filter(x => x >= 0);
assert.deepEqual(filteredArray, [0, 3]);
If JavaScript had tool functions for iterables, you could filter iterables directly:
const filteredIterable = filter(iterable, x => x >= 0);
assert.deepEqual(
// Only convert iterable to Array to check what’s in it
[...filteredIterable], [0, 3]);
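Such a filter tool for iterables can already be sketched with a generator function:

```javascript
// A minimal filter for arbitrary iterables, implemented as a generator
function* filter(iterable, predicate) {
  for (const x of iterable) {
    if (predicate(x)) yield x;
  }
}

const filtered = filter(new Set([-1, 0, -2, 3]), x => x >= 0);
console.log([...filtered]); // [ 0, 3 ]
```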
These are a few more examples of tool functions for iterables:
// Count elements in an iterable
assert.equal(count(iterable), 4);
// Create an iterable over a part of an existing iterable
assert.deepEqual(
[...slice(iterable, 2)],
[-1, 0]);
// Number the elements of an iterable
// (producing another – possibly infinite – iterable)
for (const [i,x] of zip(range(0), iterable)) {
console.log(i, x);
}
// Output:
// 0, -1
// 1, 0
// 2, -2
// 3, 3
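The tool functions used above can likewise be sketched with generators (count, slice, range and zip are invented names, mirroring common iterable libraries):

```javascript
// Count the elements of a (finite) iterable
function count(iterable) {
  let n = 0;
  for (const _ of iterable) n++;
  return n;
}
// Yield at most `limit` leading elements
function* slice(iterable, limit) {
  let i = 0;
  for (const x of iterable) {
    if (i++ >= limit) break;
    yield x;
  }
}
// Infinite iterable over start, start+1, start+2, …
function* range(start) {
  for (let i = start; ; i++) yield i;
}
// Pair up elements until the shorter iterable is exhausted
function* zip(iterableA, iterableB) {
  const a = iterableA[Symbol.iterator]();
  const b = iterableB[Symbol.iterator]();
  while (true) {
    const resultA = a.next();
    const resultB = b.next();
    if (resultA.done || resultB.done) break;
    yield [resultA.value, resultB.value];
  }
}

const iterable = new Set([-1, 0, -2, 3]);
console.log(count(iterable)); // 4
console.log([...slice(iterable, 2)]); // [ -1, 0 ]
console.log([...zip(range(0), iterable)]);
// [ [ 0, -1 ], [ 1, 0 ], [ 2, -2 ], [ 3, 3 ] ]
```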
Notes:
It would be nice to have more support for non-destructively transforming data. Two relevant libraries are:
JavaScript’s built-in support for dates and times has many quirks. That’s why the current recommendation is to use libraries for all but the most basic tasks.
Thankfully, work on temporal, a better date time API, is ongoing:
const dateTime = new CivilDateTime(2000, 12, 31, 23, 59);
const instantInChicago = dateTime.withZone('America/Chicago');
One proposed feature that is relatively popular is optional chaining. The following two expressions are equivalent.
obj?.prop
(obj === undefined || obj === null) ? undefined : obj.prop
This feature is especially convenient for chains of properties:
obj?.foo?.bar?.baz
However, this feature also has downsides.
An alternative to optional chaining is to extract the information once, in a single location:
With either approach, it is possible to perform checks and to fail early if there are problems.
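For illustration, the “extract once” alternative can use destructuring with defaults, which also never throws on missing intermediate objects (the response value is made up):

```javascript
const response = {status: 404}; // note: no .headers property
// Extract everything in one place; `= {}` guards the missing level
const {status, headers: {'Content-Length': size} = {}} = response;
console.log(status); // 404
console.log(size); // undefined (no exception, despite the missing headers)
```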
Further reading:
Early work is currently being done for operator overloading, but infix function application may be enough (there currently is no proposal for it, though):
import {BigDecimal, plus} from 'big-decimal';
const bd1 = new BigDecimal('0.1');
const bd2 = new BigDecimal('0.2');
const bd3 = bd1 @plus bd2; // plus(bd1, bd2)
One benefit of infix function application is readability with nested expressions. Compare the infix version of a nested expression with the equivalent nested calls:
a @plus b @minus c @times d
times(minus(plus(a, b), c), d)
Interestingly, the pipeline operator also helps with readability:
plus(a, b)
|> minus(#, c)
|> times(#, d)
These are a few things that I’m occasionally missing, but that I don’t consider as essential as what I’ve mentioned previously:
Chained exceptions: enable you to catch an error, wrap additional information around it and throw it again.
new ChainedError(msg, origError)
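A userland sketch of such a class (ChainedError is an illustration, not a standard API):

```javascript
// An error that records the lower-level error that caused it
class ChainedError extends Error {
  constructor(message, cause) {
    super(message);
    this.name = 'ChainedError';
    this.cause = cause; // the wrapped original error
  }
}

function readConfig() {
  try {
    throw new Error('file not found'); // simulated low-level failure
  } catch (err) {
    throw new ChainedError('could not read config', err);
  }
}

try {
  readConfig();
} catch (err) {
  console.log(err.message + ' <- ' + err.cause.message);
  // could not read config <- file not found
}
```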
Composable regular expressions:
const regex = re`/^${RE_YEAR}-${RE_MONTH}-${RE_DAY}$/u`;
Escaping text for regular expressions (important for .replace()):
> const re = new RegExp(RegExp.escape(':-)'), 'ug');
> ':-) :-) :-)'.replace(re, '🙂')
'🙂 🙂 🙂'
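RegExp.escape does not exist yet, but a userland version is straightforward (regexpEscape is an invented name):

```javascript
// Prefix every regular-expression metacharacter with a backslash
function regexpEscape(str) {
  return str.replace(/[\\^$.*+?()[\]{}|]/g, '\\$&');
}

const re = new RegExp(regexpEscape(':-)'), 'g');
console.log(':-) :-) :-)'.replace(re, '🙂')); // '🙂 🙂 🙂'
```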
Array.prototype.item(), which supports negative indices (proposal):
> ['a', 'b'].item(-1)
'b'
As-patterns for matching and destructuring (proposal by Kat Marchán):
function f(...[x, y] as args) {
if (args.length !== 2) {
throw new Error();
}
// ···
}
Checking deep equality for objects (maybe: optionally parameterize with a predicate, to support custom data structures):
assert.equal(
{foo: ['a', 'b']} === {foo: ['a', 'b']},
false);
assert.equal(
deepEqual({foo: ['a', 'b']}, {foo: ['a', 'b']}),
true);
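A standalone sketch of such a deepEqual for plain objects and arrays (Node.js already ships assert.deepStrictEqual, which covers many of these cases):

```javascript
// Structural equality for plain objects and arrays (a simplified sketch)
function deepEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== 'object' || a === null ||
      typeof b !== 'object' || b === null) {
    return false;
  }
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // Note: does not distinguish arrays from objects with the same indices
  return keysA.every(key => deepEqual(a[key], b[key]));
}

console.log(deepEqual({foo: ['a', 'b']}, {foo: ['a', 'b']})); // true
console.log(deepEqual({foo: ['a', 'b']}, {foo: ['a', 'c']})); // false
```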
Enums: One benefit of adding enums to JavaScript is that it would close a gap with TypeScript, which already has enums. There are currently two draft proposals (which aren’t at a formal stage yet). One is by Rick Waldron, the other one is by Ron Buckton. In both proposals, the simplest syntax looks like this:
enum WeekendDay {
Saturday, Sunday
}
const day = WeekendDay.Sunday;
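Until enums exist, a frozen object is a common stand-in, mirroring the example above:

```javascript
// Approximating an enum with a frozen object of string constants
const WeekendDay = Object.freeze({
  Saturday: 'Saturday',
  Sunday: 'Sunday',
});

const day = WeekendDay.Sunday;
console.log(day); // 'Sunday'
```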
Tagged collection literals (proposed – and withdrawn – by Kat Marchán): allow you to create Maps and Sets as follows:
const myMap = Map!{1: 2, three: 4, [[5]]: 6};
// new Map([[1, 2], ['three', 4], [[5], 6]])
const mySet = Set!['a', 'b', 'c'];
// new Set(['a', 'b', 'c'])
Not anytime soon! The current separation between static typing at development time (via TypeScript or Flow) and pure JavaScript at runtime works well. So there is no immediate reason to change anything.
A key requirement for the web is to never break backward compatibility:
It is still possible to fix some mistakes, by introducing better versions of existing features.
For more information on this topic, consult “JavaScript for impatient programmers”.
As a language designer, no matter what you do, you will always make some people happy and some people sad. Therefore, the main challenge for designing future JavaScript features is not to make everyone happy, but to keep the language as consistent as possible.
However, there is also disagreement on what “consistent” means. So, the best we can probably do is to establish a consistent “style”, conceived and enforced by a small group of people (up to three). That does not preclude them being advised and helped by many others, but they should set the general tone.
Quoting Fred Brooks:
A little retrospection shows that although many fine, useful software systems have been designed by committees and built as part of multipart projects, those software systems that have excited passionate fans are those that are the products of one or a few designing minds, great designers.
An important duty of these core designers would be to say “no” to features, to prevent JavaScript from becoming too big.
They would also need a robust support system, as language designers tend to be exposed to considerable abuse (because people care and don’t like to hear “no”). One recent example is Guido van Rossum quitting his job as chief Python language designer, due to the abuse he received.
These are ideas that may also help design and document JavaScript:
Creating a roadmap that describes a vision for what’s ahead for JavaScript. Such a roadmap can tell a story and connect many separate pieces into a coherent whole. The last such roadmap, that I’m aware of, is “Harmony Of My Dreams” by Brendan Eich.
Documenting design rationales. Right now, the ECMAScript specification documents how things work, but not why. One example: What is the purpose of enumerability?
A canonical interpreter. The semi-formal parts of the specification are already almost executable. It’d be great if they could be treated and run like a programming language. (You’d probably need a convention to distinguish normative code from non-normative helper functions.)
Acknowledgement: Thanks to Daniel Ehrenberg for his feedback on this blog post!