This blog post describes the new number and Math
features of ECMAScript 6.
You can now specify integers in binary and octal notation:
> 0xFF // ES5: hexadecimal
255
> 0b11 // ES6: binary
3
> 0o10 // ES6: octal
8
The global object Number gained a few new properties. Among others: Number.EPSILON, for comparing floating point numbers with a tolerance for rounding errors.

ECMAScript 5 already has literals for hexadecimal integers:
> 0x9
9
> 0xA
10
> 0x10
16
> 0xFF
255
ECMAScript 6 brings two new kinds of integer literals:
Binary literals have the prefix 0b or 0B:
> 0b11
3
> 0b100
4
Octal literals have the prefix 0o or 0O (yes, that’s a zero followed by the capital letter O; you’ll be fine if you use the first variant):
> 0o7
7
> 0o10
8
Remember that the method Number.prototype.toString(radix) can be used to convert numbers back:
> (255).toString(16)
'ff'
> (4).toString(2)
'100'
> (8).toString(8)
'10'
In the Node.js file system module, several functions have the parameter mode. Its value is used to specify file permissions, via an encoding that is a holdover from Unix: permissions are specified separately for three categories of users (User, Group, All), and each category can be granted three permissions – r (read), w (write) and x (execute). That means that permissions can be represented by 9 bits (3 categories with 3 permissions each):
|  | User | Group | All |
|---|---|---|---|
| Permissions | r, w, x | r, w, x | r, w, x |
| Bit | 8, 7, 6 | 5, 4, 3 | 2, 1, 0 |
The permissions of a single category of users are stored in 3 bits:
Bits | Permissions | Octal digit |
---|---|---|
000 | ––– | 0 |
001 | ––x | 1 |
010 | –w– | 2 |
011 | –wx | 3 |
100 | r–– | 4 |
101 | r–x | 5 |
110 | rw– | 6 |
111 | rwx | 7 |
That means that octal numbers are a compact representation of all permissions: you only need 3 digits, one digit per category of users.
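For instance, the following sketch passes two such octal modes to Node’s fs module (the file names are made up for illustration):

```js
const fs = require('fs');

// 0o755 = 111 101 101 → user: rwx, group: r-x, all: r-x
fs.chmodSync('script.sh', 0o755);

// 0o640 = 110 100 000 → user: rw-, group: r--, all: ---
fs.writeFileSync('secret.txt', 'top secret', { mode: 0o640 });
```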
parseInt() and the new integer literals

parseInt() has the following signature:
parseInt(string, radix?)
It provides special support for the hexadecimal literal notation – the prefix 0x (or 0X) of string is removed if:

- radix is missing or 0. Then radix is set to 16.
- radix is already 16.

For example:
> parseInt('0xFF')
255
> parseInt('0xFF', 0)
255
> parseInt('0xFF', 16)
255
In all other cases, digits are only parsed until the first non-digit:
> parseInt('0xFF', 10)
0
> parseInt('0xFF', 17)
0
parseInt() does not have special support for binary or octal literals!
> parseInt('0b111')
0
> parseInt('0b111', 2)
0
> parseInt('111', 2)
7
> parseInt('0o10')
0
> parseInt('0o10', 8)
0
> parseInt('10', 8)
8
If you want to parse these kinds of literals, you need to use Number():
> Number('0b111')
7
> Number('0o10')
8
Alternatively, you can also remove the prefix and use parseInt() with the appropriate radix:
> parseInt('111', 2)
7
> parseInt('10', 8)
8
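If you find yourself doing this a lot, a small helper can dispatch on the prefix automatically. The following function is only a sketch (its name is made up), not part of the standard library:

```js
function parseIntLiteral(str) {
    // Detect an optional sign followed by a 0b/0o/0x prefix
    const match = /^([+-]?)0([bBoOxX])([0-9a-fA-F]+)$/.exec(str);
    if (!match) {
        return parseInt(str, 10); // no prefix: treat as decimal
    }
    const sign = (match[1] === '-') ? -1 : 1;
    const radix = { b: 2, o: 8, x: 16 }[match[2].toLowerCase()];
    return sign * parseInt(match[3], radix);
}

console.log(parseIntLiteral('0b111')); // 7
console.log(parseIntLiteral('0o10'));  // 8
console.log(parseIntLiteral('0xFF'));  // 255
console.log(parseIntLiteral('-42'));   // -42
```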
Number constructor properties

This section describes new properties that the constructor Number has picked up in ECMAScript 6. Four number-related functions are already available as global functions and have been added (with little or no modification) to Number, as methods: isFinite, isNaN, parseFloat and parseInt.
Number.isFinite(number)
Is number an actual number (neither Infinity nor -Infinity nor NaN)?
> Number.isFinite(Infinity)
false
> Number.isFinite(-Infinity)
false
> Number.isFinite(NaN)
false
> Number.isFinite(123)
true
The advantage of this method is that it does not coerce its parameter to number (whereas the global function does):
> Number.isFinite('123')
false
> isFinite('123')
true
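Conceptually, Number.isFinite() can be thought of as a type check plus the old global check – roughly like the following sketch (not the spec algorithm):

```js
function numberIsFinite(n) {
    // Reject non-numbers instead of coercing them, then
    // let the global isFinite() rule out NaN and ±Infinity.
    return typeof n === 'number' && isFinite(n);
}

console.log(numberIsFinite('123')); // false
console.log(numberIsFinite(123));   // true
```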
Number.isNaN(number)
Is number the value NaN? Making this check via === is hacky. NaN is the only value that is not equal to itself:
> let x = NaN;
> x === NaN
false
Therefore, this expression is used to check for it:
> x !== x
true
Using Number.isNaN() is more self-descriptive:
> Number.isNaN(x)
true
Number.isNaN() also has the advantage of not coercing its parameter to number (whereas the global function does):
> Number.isNaN('???')
false
> isNaN('???')
true
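In other words, Number.isNaN() packages the x !== x trick behind a readable name – roughly like this sketch (again, not the spec algorithm):

```js
function numberIsNaN(n) {
    // Only NaN is not equal to itself; strings etc. compare
    // equal to themselves and therefore return false.
    return n !== n;
}

console.log(numberIsNaN(NaN));   // true
console.log(numberIsNaN('???')); // false
```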
Number.parseFloat and Number.parseInt

The following two methods work exactly like the global functions with the same names. They were added to Number for completeness’ sake; now all number-related functions are available there.
Number.parseFloat(string)
Number.parseInt(string, radix)
Especially with decimal fractions, rounding errors can become a problem in JavaScript. For example, 0.1 and 0.2 can’t be represented precisely, which you notice if you add them and compare them to 0.3 (which can’t be represented precisely, either).
> 0.1 + 0.2 === 0.3
false
Number.EPSILON specifies a reasonable margin of error when comparing floating point numbers. It provides a better way to compare floating point values, as demonstrated by the following function.
function epsEqu(x, y) {
return Math.abs(x - y) < Number.EPSILON;
}
console.log(epsEqu(0.1+0.2, 0.3)); // true
Number.isInteger(number)
JavaScript has only floating point numbers (doubles). Accordingly, integers are simply floating point numbers without a decimal fraction.
Number.isInteger(number) returns true if number is a number and does not have a decimal fraction.
> Number.isInteger(-17)
true
> Number.isInteger(33)
true
> Number.isInteger(33.1)
false
> Number.isInteger('33')
false
> Number.isInteger(NaN)
false
> Number.isInteger(Infinity)
false
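One way to picture what Number.isInteger() does is the following sketch (not the actual spec algorithm):

```js
function numberIsInteger(n) {
    // Must be a number (no coercion), must be finite,
    // and must not change when the fraction is removed.
    return typeof n === 'number' && isFinite(n) && Math.floor(n) === n;
}

console.log(numberIsInteger(33));   // true
console.log(numberIsInteger(33.1)); // false
console.log(numberIsInteger('33')); // false
```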
JavaScript numbers have only enough storage space to represent 53 bit signed integers. That is, integers i in the range −2^53^ < i < 2^53^ are safe. What exactly that means is explained momentarily. The following properties help determine whether a JavaScript integer is safe:
Number.isSafeInteger(number)
Number.MIN_SAFE_INTEGER
Number.MAX_SAFE_INTEGER
The notion of safe integers centers on how mathematical integers are represented in JavaScript. In the range (−2^53^, 2^53^) (excluding the lower and upper bounds), JavaScript integers are safe: there is a one-to-one mapping between them and the mathematical integers they represent.
Beyond this range, JavaScript integers are unsafe: two or more mathematical integers are represented as the same JavaScript integer. For example, starting at 2^53^, JavaScript can represent only every second mathematical integer:
> Math.pow(2, 53)
9007199254740992
> 9007199254740992
9007199254740992
> 9007199254740993
9007199254740992
> 9007199254740994
9007199254740994
> 9007199254740995
9007199254740996
> 9007199254740996
9007199254740996
> 9007199254740997
9007199254740996
Therefore, a safe JavaScript integer is one that unambiguously represents a single mathematical integer.
The two Number properties specifying the lower and upper bound of safe integers could be defined as follows:
Number.MAX_SAFE_INTEGER = Math.pow(2, 53)-1;
Number.MIN_SAFE_INTEGER = -Number.MAX_SAFE_INTEGER;
Number.isSafeInteger() determines whether a JavaScript number is a safe integer and could be defined as follows:
Number.isSafeInteger = function (n) {
return (typeof n === 'number' &&
Math.round(n) === n &&
Number.MIN_SAFE_INTEGER <= n &&
n <= Number.MAX_SAFE_INTEGER);
}
For a given value n, this function first checks whether n is a number and an integer. If both checks succeed, n is safe if it is greater than or equal to MIN_SAFE_INTEGER and less than or equal to MAX_SAFE_INTEGER.
How can we make sure that results of arithmetic computations are correct? For example, the following result is clearly not correct:
> 9007199254740990 + 3
9007199254740992
We have two safe operands, but an unsafe result:
> Number.isSafeInteger(9007199254740990)
true
> Number.isSafeInteger(3)
true
> Number.isSafeInteger(9007199254740992)
false
The following result is also incorrect:
> 9007199254740995 - 10
9007199254740986
This time, the result is safe, but one of the operands isn’t:
> Number.isSafeInteger(9007199254740995)
false
> Number.isSafeInteger(10)
true
> Number.isSafeInteger(9007199254740986)
true
Therefore, the result of applying an integer operator op is guaranteed to be correct only if all operands and the result are safe. More formally:

isSafeInteger(a) && isSafeInteger(b) && isSafeInteger(a op b)

implies that a op b is a correct result.
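If you want that guarantee enforced at runtime, you can wrap an operation in a check like the following sketch (the function name is made up):

```js
function addSafely(a, b) {
    const result = a + b;
    // The result is only trustworthy if both operands and the
    // result itself are safe integers.
    if (!Number.isSafeInteger(a) ||
        !Number.isSafeInteger(b) ||
        !Number.isSafeInteger(result)) {
        throw new RangeError('Result may not be correct: ' + result);
    }
    return result;
}

console.log(addSafely(1, 2)); // 3
// addSafely(9007199254740990, 3) would throw a RangeError,
// because the result 9007199254740992 is not a safe integer.
```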
The global object Math has several new methods in ECMAScript 6.
Math.sign(x)
Returns the sign of x as -1 or +1, unless x is either NaN or zero, in which case x itself is returned[1].
> Math.sign(-8)
-1
> Math.sign(3)
1
> Math.sign(0)
0
> Math.sign(NaN)
NaN
> Math.sign(-Infinity)
-1
> Math.sign(Infinity)
1
Math.trunc(x)
Removes the decimal fraction of x.
> Math.trunc(3.1)
3
> Math.trunc(3.9)
3
> Math.trunc(-3.1)
-3
> Math.trunc(-3.9)
-3
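In ES5 you could get the same effect by choosing between Math.floor() and Math.ceil() depending on the sign – a rough equivalent:

```js
function trunc(x) {
    // Round towards zero: floor for positive values, ceil for negative ones.
    return x < 0 ? Math.ceil(x) : Math.floor(x);
}

console.log(trunc(3.9));  // 3
console.log(trunc(-3.9)); // -3
```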
Math.cbrt(x)
Returns the cube root of x (∛x).
> Math.cbrt(8)
2
A small fraction can be represented more precisely if it comes after zero. I’ll demonstrate this with decimal fractions. (Internally, JavaScript’s floating point numbers are base 2, but externally you see them as base 10. The same basic principles w.r.t. precision apply in either case.) Floating point numbers with base 10 are represented as mantissa × 10^exponent^. If a zero comes before the dot then the fraction needs fewer significant digits, which leaves more of the mantissa’s capacity for precision. For example:

(A) If the integer part is zero (e.g. 1 × 10^−16^), the mantissa only has to store the digits of the fraction.
(B) If the integer part is non-zero (e.g. 1 + 10^−16^), the mantissa also has to store all the digits between the leading 1 and the fraction – more than it can reliably hold, so the fraction is rounded away.

Precision-wise, the exponent is not an issue here; the significant digits and the capacity of the mantissa are. That’s why (A) gives you higher precision than (B).

You can see this in the following interaction: the first number (1 × 10^−16^) registers as different from zero, while the same number added to 1 registers as 1.
> 1e-16 === 0
false
> 1 + 1e-16 === 1
true
Math.expm1(x)
Returns Math.exp(x)-1. The inverse of Math.log1p().

Therefore, this method provides higher precision whenever Math.exp() has results close to 1. You can see the difference between the two in the following interaction:
> Math.expm1(1e-10)
1.00000000005e-10
> Math.exp(1e-10)-1
1.000000082740371e-10
The former is the better result, which you can verify by using a library (such as decimal.js) for floating point numbers with arbitrary precision (“bigfloats”):
> var Decimal = require('decimal.js').config({precision:50});
> new Decimal(1e-10).exp().minus(1).toString()
'1.000000000050000000001666666666708333333e-10'
Math.log1p(x)
Returns Math.log(1 + x). The inverse of Math.expm1().

Therefore, this method lets you specify values close to 1 (as their offset x from 1) with higher precision.

We have already established that 1 + 1e-16 === 1. Therefore, it is no surprise that the following two calls of log() produce the same result:
> Math.log(1 + 1e-16)
0
> Math.log(1 + 0)
0
In contrast, log1p() produces different results:
> Math.log1p(1e-16)
1e-16
> Math.log1p(0)
0
Math.log2(x)
Computes the logarithm to base 2.
> Math.log2(8)
3
Math.log10(x)
Computes the logarithm to base 10.
> Math.log10(100)
2
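Before ES6, you had to divide by a logarithm constant, which can introduce small rounding errors (these helper names are made up):

```js
// ES5 workarounds; the results may be off by a tiny rounding error.
function log2(x) {
    return Math.log(x) / Math.LN2;
}
function log10(x) {
    return Math.log(x) / Math.LN10;
}

console.log(log2(8));    // 3 (or a value very close to it)
console.log(log10(100)); // 2 (or a value very close to it)
```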
Emscripten pioneered a coding style that was later picked up by asm.js: the operations of a virtual machine (think bytecode) are expressed in a static subset of JavaScript. That subset can be executed efficiently by JavaScript engines: if it is the result of a compilation from C++, it runs at about 70% of native speed.
Math.fround(x)
Rounds x to a 32 bit floating point value (float). Used by asm.js to tell an engine to internally use a float value.
Math.imul(x, y)
Multiplies the two 32 bit integers x and y and returns the lower 32 bits of the result. This is the only 32 bit basic math operation that can’t be simulated by using a JavaScript operator and coercing the result back to 32 bits. For example, idiv could be implemented as follows:
function idiv(x, y) {
return (x / y) | 0;
}
In contrast, multiplying two large 32 bit integers may produce a double that is so large that lower bits are lost.
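The following sketch illustrates the difference; for operands this large, the plain multiplication already loses the low bits before | 0 can truncate:

```js
const a = 0x7fffffff; // 2^31 − 1
const b = 0x7fffffff;

// Correct lower 32 bits of the full product:
console.log(Math.imul(a, b));

// The double product exceeds 2^53, so its low bits may already
// have been rounded away before the coercion to 32 bits:
console.log((a * b) | 0);
```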
Math.clz32(x)
Counts the leading zero bits in the 32 bit integer x.
> Math.clz32(0b01000000000000000000000000000000)
1
> Math.clz32(0b00100000000000000000000000000000)
2
> Math.clz32(2)
30
> Math.clz32(1)
31
Math.sinh(x)
Computes the hyperbolic sine of x
.
Math.cosh(x)
Computes the hyperbolic cosine of x
.
Math.tanh(x)
Computes the hyperbolic tangent of x
.
Math.asinh(x)
Computes the inverse hyperbolic sine of x
.
Math.acosh(x)
Computes the inverse hyperbolic cosine of x
.
Math.atanh(x)
Computes the inverse hyperbolic tangent of x
.
Math.hypot(...values)
Computes the square root of the sum of squares of its arguments.
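For example, the hypotenuse of a right triangle with legs 3 and 4:

> Math.hypot(3, 4)
5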
[1] While that is something that you normally don’t see, it means that -0 produces the result -0 and +0 produces the result +0. ↩︎