Simplified Explanation for the JavaScript Number and BigInt Data Types
By: Chrysanthus Date Published: 16 Jan 2025
Introduction
The JavaScript Number type is not separated into Integer and Float, as in some other programming languages. In JavaScript, an integer or a floating-point number is represented in the 64-bit floating point format of the IEEE 754 Standard. There are documents on this 64-bit floating point IEEE 754 Standard online (Internet) and offline. However, when many people (including computer programmers) read those documents, they do not really understand them.
There are three objectives for this article. The first is to explain the IEEE 754 64-bit floating point standard in a simplified way, so that many people (including any computer programmer) can understand it. The second objective is to explain, still in a simplified way, how this standard applies to the JavaScript Number data type. The third objective is to explain the JavaScript BigInt data type. After reading this article, the reader will understand the nature and limits of the JavaScript Number data type, and will be able to prevent certain number errors. The reader will also be able to handle the BigInt data type.
Actually, JavaScript has eight different data types, which are: String, Number, BigInt, Boolean, Undefined, Null, Symbol, Object. The focus in this article is on the Number Type. The BigInt type will be addressed towards the end of this article.
The other online and offline documents on this topic are either not well explained or do not go deep enough. This article addresses both shortcomings.
After reading this article, the reader will have confidence in using the JavaScript Number and BigInt types.
Floating Point Number Format
A number without a decimal part is an integer. The number 36 is an integer. 36.375 is not an integer: it is a decimal number with a decimal part. The decimal part, .375, is a fraction less than 1. Such a fraction is called a proper fraction.
36.375 is interpreted in decimal form as:
36.375₁₀ = 3×10¹ + 6×10⁰ + 3×10⁻¹ + 7×10⁻² + 5×10⁻³
Now,
100100₂ = 1×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 0×2¹ + 0×2⁰ = 32 + 4 = 36₁₀
So, 100100₂ = 36₁₀, which is the whole number part of 36.375₁₀.
Now,
0.011₂ = 0×2⁻¹ + 1×2⁻² + 1×2⁻³ = 0.25 + 0.125 = 0.375₁₀
So, 0.011₂ = 0.375₁₀, which is the decimal part of 36.375₁₀.
Therefore 36.375₁₀ = 100100.011₂
Put another way,
100100.011₂ = 36.375₁₀
Numbers are represented in the computer in base 2 and not in base 10. Since a cell in a register in the microprocessor, or a cell in memory, can only take 1 or 0, there is no room to store a decimal or binary point. This poses a problem. The resolution is the IEEE-754 single precision 32-bit floating point representation and the IEEE-754 double precision 64-bit floating point representation. The JavaScript Number data type uses the IEEE-754 double precision 64-bit floating point representation for all its numbers (integer and float), except for the BigInt numbers (see below).
64-bit Floating Point Number Format
The number 100100.011₂ can be expressed as:
100100.011₂ = 1.00100011₂ × 2⁺⁵
The right-hand side of the = symbol is referred to in mathematics as the base two standard form of the left-hand side, 100100.011₂.
Now, 00100011 of 1.00100011₂ on the right-hand side of the = symbol, without the preceding "1." and without the 2 for the base, is called the explicit significand. In this case, the binary point has been moved five places to the left in order to have the "1.". Do not confuse the decimal point with the binary point: the binary point is for base 2, while the decimal point is for base 10. The "1." followed by 00100011 on the right-hand side of the = symbol, without the 2 for the base, forms the effective significand, for both the IEEE-754 single precision 32-bit and the IEEE-754 double precision 64-bit floating point representations. Note: 1.00100011 is called the implicit significand.
After the significand on the right-hand side is the expression "× 2⁺⁵". In this expression, +5 is called the exponent. The plus sign means that the binary point has to be moved five places to the right in order to return to its original position. 2 is the base of the numbering. The above equation can be written in reverse as:
1.00100011₂ × 2⁺⁵ = 100100.011₂
With the 64-bit floating point representation, it is "1.00100011₂ × 2⁺⁵" that is used and not just "100100.011₂". The 2 for the base is not recorded. The 64-bit floating point representation for the number "1.00100011₂ × 2⁺⁵", which is equal to 36.375₁₀ = 100100.011₂, is shown in the following table:
| Field         | Sign | Exponent    | Significand                                          |
| Bit positions | 63   | 62 to 52    | 51 to 0                                              |
| Bits          | 0    | 10000000100 | 0010001100000000000000000000000000000000000000000000 |
There are 64 bit positions, numbered from the right end, beginning from 0. The first bit, at the left end, is the sign bit. If the number is positive, this bit is 0. If the number is negative, this bit is 1 (-1 consists of two characters and cannot be put in any one cell). 1.00100011₂ × 2⁺⁵, which is equal to 36.375₁₀, which is also equal to 100100.011₂, is a positive number. So, the first bit is 0.
There are eleven bit positions for the exponent, from position 62 down to position 52, inclusive. However, the exponent written there is 10000000100₂, which is equal to 1028₁₀. The exponent of the number of interest is actually +5 for the base of two. So, what happened?
Now, in the 64-bit format, an exponent of 0 is written as 01111111111₂, which is equal to 1023₁₀ (this offset is called the bias). +5₁₀ is +101₂. So, in arriving at 10000000100₂ in the exponent portion of the table, 101₂ was added to 01111111111₂ to get 10000000100₂; correspondingly, 5 was added to 1023 to get 1028₁₀.
The significand, without the "1.", occupies positions 51 down to 44, inclusive. Note that the 1 of "1." is not indicated in the 64-bit string. It is never indicated – accept that. The rest of the cells, down to position 0, are filled with zeros.
If the actual exponent were -5, then 5 would have been subtracted from 1023₁₀ to give 1018₁₀. This corresponds to subtracting 101₂ from 01111111111₂ to give 01111111010₂.
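The bit layout described above can be checked directly in JavaScript. The sketch below reads the raw IEEE-754 bytes of a Number through a DataView; toBits is a hypothetical helper name, not a built-in:

```javascript
// Read the raw IEEE-754 bytes of a Number through a DataView
// (big-endian by default, so bit 63 comes first). toBits is a
// hypothetical helper name, not a built-in.
function toBits(x) {
  const buf = new ArrayBuffer(8);
  const view = new DataView(buf);
  view.setFloat64(0, x); // big-endian by default
  let bits = "";
  for (let i = 0; i < 8; i++) {
    bits += view.getUint8(i).toString(2).padStart(8, "0");
  }
  return bits; // 64 characters
}

const b = toBits(36.375);
console.log(b.slice(0, 1));   // "0"            (sign)
console.log(b.slice(1, 12));  // "10000000100"  (exponent: 1023 + 5 = 1028)
console.log(b.slice(12, 20)); // "00100011"     (start of the significand)
```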
With all the above illustration, the number (integer) +1, which is equal to 1.0 × 2⁰ = 1.0 × 1 = 1.0, is represented as:
0 01111111111 0000000000000000000000000000000000000000000000000000₂
Note that the "1." of 1.0 × 2⁰ is not indicated in the format. It is never indicated. The string means the un-indicated "1." multiplied by 2 raised to the power (index) 0, with the stored exponent being 1023₁₀ = 01111111111₂ for 2⁰ = 1. That is, the string means 1.0 × 2⁰. The next mixed fraction going positively, after 1.0, is:
0 01111111111 0000000000000000000000000000000000000000000000000001₂
Notice the 1 at the right end. This is equivalent to the un-indicated "1.", followed by 51 zeros and then 1, multiplied by 2 raised to the power 0 (with the stored exponent again being 1023₁₀ = 01111111111₂ for 2⁰ = 1). This representation is the number:
+2⁰ × (1 + 2⁻⁵²) ≈ 1.0000000000000002
The number of mixed fractions between two consecutive integers on the number line is infinite. So, no format (e.g. 32-bit or 64-bit) can provide all the mixed fractions between any two consecutive integers (whole numbers). The smaller the interval between consecutive representable numbers that a format provides, the greater the number of mixed fractions it provides between two consecutive integers on the number line.
The 64-bit format is described as double (higher) precision compared to the 32-bit format because the interval between two consecutive representable mixed fractions, bounded by two consecutive integers, is smaller for the 64-bit format than for the 32-bit format; consequently, there are more possible mixed fractions between two bounding integers for the 64-bit format than for the 32-bit format.
Representing the number 0.0 does not really follow the above arguments, because of the un-indicated "1.". The representation for 0.0 is declared and has to be learned as such. To represent 0.0, all the cells for the significand are 0 and all the cells for the exponent are also 0. The sign bit can be either 0 or 1. Unfortunately, this gives rise to positive 0 and negative 0, as follows:
+ve zero: 0 00000000000 0000000000000000000000000000000000000000000000000000₂
-ve zero: 1 00000000000 0000000000000000000000000000000000000000000000000000₂
In real life there is only one zero; positive 0 and negative 0 do not exist. However, 0 is usually considered positive in practice. Positive 0 and negative 0 exist here because of this particular format description. The number line can also have +0 and -0 in theory, but only one zero exists.
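The two zeros can be observed in JavaScript itself. This short sketch uses only built-in operators and Object.is:

```javascript
// Both IEEE-754 zeros exist in JavaScript. They compare equal with ===,
// but Object.is and division by zero reveal the difference.
console.log(0 === -0);         // true
console.log(Object.is(0, -0)); // false
console.log(1 / 0);            // Infinity
console.log(1 / -0);           // -Infinity
```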
The reader has to know the conversion of a number from base 10 to base 2 and vice-versa. That comes next.
Conversion of Integer from Base 10 to Base 2
The conversion method is continued division of the decimal number (in base 10) by 2, and then reading the remainders from the bottom upwards, as the following table illustrates for the decimal number (integer) 36:
Table 1.2: Converting from base 10 to base 2

| Base 2 | Base 10 | Remainder |
|   2    |   36    |     0     |
|   2    |   18    |     0     |
|   2    |    9    |     1     |
|   2    |    4    |     0     |
|   2    |    2    |     0     |
|   2    |    1    |     1     |
|        |    0    |           |
Read from the bottom upwards, the answer is 100100. For any division step, the dividend is divided by the divisor to give the quotient. The quotient always has a whole number part and a remainder; the remainder may be zero. In the table, the remainder of a step is one row higher than the whole number part of its quotient. When converting to base 2, the last quotient is always zero, with remainder 1.
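The division method of Table 1.2 can be sketched in JavaScript; toBinaryInteger is a hypothetical helper name, and the built-in toString(2) gives the same result:

```javascript
// Continued division by 2, as in Table 1.2; the remainders, read from
// the bottom upwards, form the binary digits. toBinaryInteger is a
// hypothetical helper name, not a built-in.
function toBinaryInteger(n) {
  if (n === 0) return "0";
  let bits = "";
  while (n > 0) {
    bits = (n % 2) + bits;  // prepend each remainder
    n = Math.floor(n / 2);  // the whole-number quotient is the next dividend
  }
  return bits;
}

console.log(toBinaryInteger(36)); // "100100"
// The built-in method gives the same result:
console.log((36).toString(2));    // "100100"
```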
Converting Decimal Part (fraction) of Decimal Number to Binary Part
36.375 is a decimal number with the decimal part ".375". The decimal part ".375" is a fraction between zero and one. 0.5 in base ten is the same in value as 1/2 in base two. 0.5₁₀ expressed with the base two expansion is:
0.1₂ = 1×2⁻¹ = 1/2
It is not 0.101₂, which means 0.625₁₀. The decimal part of a decimal number has its equivalent binary part for the corresponding binary number. So, to convert a decimal number like 36.375₁₀ to base two, convert 36 to binary and then convert .375 also to binary; then join both results with the binary point. The methods for converting the two sections are different. How to convert a decimal integer to base 2 has been explained above.
To convert the decimal fraction to binary fraction, follow the following steps:
Multiply the decimal fraction (decimal part) by 2. The integer part of the result is the first binary digit.
Repeat the above step with the fractional part of the result, to get the next binary digit.
Keep repeating the above step until the fractional part of the result is .0000 - - - (for some fractions this never happens, and the binary expansion is then truncated).
Example: Convert the fractional part of 36.37510, to the equivalent fractional part in base two.
Solution:
| .375 × 2 | = | 0.750 | first bit is  | 0 |
| .750 × 2 | = | 1.500 | second bit is | 1 |
| .500 × 2 | = | 1.000 | third bit is  | 1 |
Note that in the third step, .500 was multiplied by 2 and not 1.500 (the whole number is dropped first). The corresponding binary fraction is read in the last column, from the top. And so,
.375₁₀ = .011₂
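The multiplication steps above can be sketched in JavaScript; fractionToBinary is a hypothetical helper name, and the maxBits guard is an assumption added because some decimal fractions (such as .1) have non-terminating binary expansions:

```javascript
// Repeated multiplication by 2; the integer parts, read from the top,
// form the binary digits. fractionToBinary is a hypothetical helper,
// and maxBits is an assumed guard against non-terminating expansions.
function fractionToBinary(frac, maxBits = 52) {
  let bits = "";
  while (frac > 0 && bits.length < maxBits) {
    frac *= 2;
    const bit = Math.floor(frac); // integer part is the next bit
    bits += bit;
    frac -= bit;                  // continue with the fractional part only
  }
  return bits;
}

console.log(fractionToBinary(0.375)); // "011"
```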
Converting an Integer (whole number Part) from Base 2 to Base 10
How to convert a number from base 2 to base 10 is shown in this section. The computer works basically in base 2.
Since everybody appreciates the value of a number in base 10, this section explains the conversion of a base 2 number, to base 10. To convert a base 2 number to base 10, multiply each digit in the base 2 number, by the base 2, raised to the index of its position, then add the answers.
Each digit of a number in any base has an index position, beginning from 0 at the right end of the number and moving leftwards. The following table shows the digit index positions of 100100₂:

| Index → | 5 | 4 | 3 | 2 | 1 | 0 |
| Digit → | 1 | 0 | 0 | 1 | 0 | 0 |
Converting 100100₂ to base 10 is as follows:
1×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 0×2¹ + 0×2⁰
Note: any number raised to the index 0 becomes 1. Also,
2³ = 2 × 2 × 2
2² = 2 × 2
2¹ = 2
2⁰ = 1
Note as well that, in mathematics, => means "this implies that" and ∴ means "therefore".
In a mathematical expression, all the multiplications must be done first before addition; this is from the sequence, BODMAS (Brackets first, followed by Of, which is still multiplication, followed by Division, followed by Multiplication, followed by Addition, and followed by Subtraction). So,
1×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 0×2¹ + 0×2⁰ = 1×2×2×2×2×2 + 0×2×2×2×2 + 0×2×2×2 + 1×2×2 + 0×2 + 0×1
=> 1×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 0×2¹ + 0×2⁰ = 1×32 + 0×16 + 0×8 + 1×4 + 0×2 + 0×1
=> 1×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 0×2¹ + 0×2⁰ = 32 + 0 + 0 + 4 + 0 + 0
=> 1×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 0×2¹ + 0×2⁰ = 32 + 4
=> 1×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 0×2¹ + 0×2⁰ = 36 (as expected)
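The digit-times-power expansion above can be sketched in JavaScript; binaryToDecimal is a hypothetical helper name, and the built-in parseInt with radix 2 gives the same result:

```javascript
// Sum of digit × 2^index, with the index counted from the right end.
// binaryToDecimal is a hypothetical helper name, not a built-in.
function binaryToDecimal(bits) {
  let total = 0;
  for (let i = 0; i < bits.length; i++) {
    const index = bits.length - 1 - i; // index position from the right
    total += Number(bits[i]) * 2 ** index;
  }
  return total;
}

console.log(binaryToDecimal("100100")); // 36
// The built-in parseInt with radix 2 gives the same result:
console.log(parseInt("100100", 2));     // 36
```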
Converting Binary Part (fraction) of Binary Number to Decimal Part
To achieve this, expand the binary fraction in reciprocal powers of 2.
Example: Convert the fractional part of 100100.011₂ to the equivalent fractional part in base ten.
Solution:
0.011₂ = 0×2⁻¹ + 1×2⁻² + 1×2⁻³
=> 0.011₂ = 0×1/2 + 1×1/4 + 1×1/8 = 0 + 0.25 + 0.125
=> 0.011₂ = 0.375₁₀
Therefore, 100100.011₂ = 36.375₁₀
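The reciprocal-power expansion can be sketched in JavaScript; binaryFractionToDecimal is a hypothetical helper name, not a built-in:

```javascript
// Expansion in reciprocal powers of 2: the bit i places after the
// binary point contributes bit × 2^-(i+1). binaryFractionToDecimal
// is a hypothetical helper name, not a built-in.
function binaryFractionToDecimal(bits) {
  let total = 0;
  for (let i = 0; i < bits.length; i++) {
    total += Number(bits[i]) * 2 ** -(i + 1);
  }
  return total;
}

console.log(binaryFractionToDecimal("011")); // 0.375
```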
Positive and Negative Zeros in Base 2 Standard Form
The 64-bit format for the positive and negative zeros again are:
+ve zero: 0 00000000000 0000000000000000000000000000000000000000000000000000₂
-ve zero: 1 00000000000 0000000000000000000000000000000000000000000000000000₂
Note that either of these zeros (+ve or -ve) is equal to
0.0 × 2⁻¹⁰²³
where the exponent is the decimal equivalent of 00000000000₂ − 01111111111₂ = −1111111111₂ = −(1×2⁹ + 1×2⁸ + 1×2⁷ + 1×2⁶ + 1×2⁵ + 1×2⁴ + 1×2³ + 1×2² + 1×2¹ + 1×2⁰) = −(512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1) = −1023₁₀. Neither of them is 1.0 × 2⁻¹⁰²³. The preceding "1." is omitted in this case, as it has to be. So do not imagine any 1 hidden at the 52nd position of the 64-bit string of bits.
Highest Possible Exponent and Highest and Lowest Numbers
Among the eleven bits for the exponent, the highest exponent declared for ordinary numbers in the 64-bit specification is:
11111111110₂ = 2046₁₀, corresponding to the actual exponent 2046 − 1023 = 1023, i.e. the power 2¹⁰²³.
As a number (in the 64-bit format), the highest declared exponent appears in:
0 11111111110 0000000000000000000000000000000000000000000000000000₂ = 1.0 × 2¹⁰²³
Remember that the significand here is the un-indicated "1." followed by zeros, i.e. 1.0.
So the highest positive number for the 64-bit format is:
0 11111111110 1111111111111111111111111111111111111111111111111111₂ ≈ +1.7976931348623157 × 10³⁰⁸
The lowest negative number for the 64-bit format is:
1 11111111110 1111111111111111111111111111111111111111111111111111₂ ≈ −1.7976931348623157 × 10³⁰⁸
Each of these numbers is an integer (see proof below).
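JavaScript exposes this highest positive number as Number.MAX_VALUE; a quick check:

```javascript
// The highest finite double is exposed as Number.MAX_VALUE.
console.log(Number.MAX_VALUE);                   // 1.7976931348623157e+308
console.log(Number.MAX_VALUE * 2);               // Infinity (overflow)
console.log(Number.isInteger(Number.MAX_VALUE)); // true
```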
Maximum and Minimum Safe Integers
JavaScript does not have two separate data types for integers and floats. A float is a number with a decimal point. An integer or a float is made using the IEEE 754 64-bit floating point format explained above. From above, the maximum number is:
0 11111111110 1111111111111111111111111111111111111111111111111111₂, which is equal to:
+1.1111111111111111111111111111111111111111111111111111₂ × 2¹⁰²³ (sign and exponent not indicated, but "1." indicated)
The preceding "1." is the "1." that is not indicated in the normal 64-bit format. The number of binary places here is 52. Note that "× 2¹⁰²³" means: move the binary point 1023 places to the right, and 1023 places is far greater than 52 places. So the maximum positive number is an integer. The positive number preceding it in the 64-bit specification is:
+1.1111111111111111111111111111111111111111111111111110₂ × 2¹⁰²³
This is also an integer. Note the 0 at the end of the binary part. Between these two largest representable integers are other integers that cannot be represented by the 64-bit specification. Integers that cannot be represented by the 64-bit format are unsafe integers. The positive number preceding the previous one, in the 64-bit specification, is:
+1.1111111111111111111111111111111111111111111111111101₂ × 2¹⁰²³
There are still 52 binary places here. Note the 01 at the end of the binary part. Between these two preceding (second and third) largest representable integers are also integers that cannot be represented by the 64-bit specification. These too are unsafe integers. During calculations, if a result is an unsafe integer, it has to be approximated (rounded) to the nearest representable integer.
Note: in JavaScript, all integers above 2⁵³ − 1 in magnitude – the representable ones and the unrepresentable ones among them alike – are considered unsafe integers.
There are other unsafe integers going downwards. Going downwards, the largest positive integer below which every integer is exactly representable is:
0 10000110011 1111111111111111111111111111111111111111111111111111₂, which is equal to:
+1.1111111111111111111111111111111111111111111111111111₂ × 2⁵²
And this is the maximum safe integer, written in decimal as 2⁵³ − 1 (remember 2¹ × 2⁵² = 2⁵³). Note that the exponent field 10000110011₂ is 1075₁₀ = 1023 + 52.
The analysis here applies similarly to the negative numbers. With that, the maximum positive safe integer is +2⁵³ − 1 = 9,007,199,254,740,991 and the minimum safe negative integer is −(2⁵³ − 1).
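JavaScript exposes these bounds as Number.MAX_SAFE_INTEGER and Number.MIN_SAFE_INTEGER, and Number.isSafeInteger tests a value; a quick check:

```javascript
// 2^53 - 1 is exposed as Number.MAX_SAFE_INTEGER.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(Number.MIN_SAFE_INTEGER); // -9007199254740991
// Above the safe range, distinct integers collapse onto one double:
console.log(2 ** 53 === 2 ** 53 + 1);       // true (both round to 2^53)
console.log(Number.isSafeInteger(2 ** 53)); // false
```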
Infinity and NaN
Infinity
The highest exponent field declared for ordinary numbers in the 64-bit specification is 11111111110₂. However, the 64-bit format allows the higher (and highest) exponent field of 11111111111₂. This reserved field, combined with all the possible values for the significand, gives many extra bit patterns. Some of these extra patterns are used for values that are neither integers nor floats. Examples are infinity and NaN (see below).
And so, positive infinity is declared as:
0 11111111111 0000000000000000000000000000000000000000000000000000₂ = +Infinity (positive infinity)
and negative infinity is:
1 11111111111 0000000000000000000000000000000000000000000000000000₂ = −Infinity (negative infinity)
NaN
NaN stands for Not-a-Number. Consider the following JavaScript statement:
let x = 100 / "orange";
The string "orange" cannot be converted to a number, so the result of the division is Not-a-Number (NaN).
The 64-bit Floating Point International IEEE 754 Standard has the following NaN values:
0 11111111111 0000000000000000000000000000000000000000000000000001₂ = NaN (sNaN on most processors, such as x86 and ARM)
0 11111111111 1000000000000000000000000000000000000000000000000001₂ = NaN (qNaN on most processors, such as x86 and ARM)
0 11111111111 1111111111111111111111111111111111111111111111111111₂ = NaN (an alternative encoding of NaN)
sNaN stands for "Signalling Not A Number" and qNaN stands for "Quiet Not A Number"; and they are not further addressed in this article. With ECMAScript (JavaScript) default coding, all NaN values are indistinguishable from each other.
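The behaviour of NaN can be checked in JavaScript; note that NaN is the only value that is not equal to itself:

```javascript
// NaN is the only JavaScript value that is not equal to itself.
let y = 100 / "orange"; // "orange" cannot be converted to a number
console.log(y);               // NaN
console.log(y === y);         // false
console.log(Number.isNaN(y)); // true
console.log(NaN === NaN);     // false
```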
Minimum Subnormal Positive Double Number
The minimum subnormal positive double number is the next number (proper fraction) that is greater than 0, in the 64-bit format. Positive zero is:
0 00000000000 0000000000000000000000000000000000000000000000000000₂
So the next number greater than 0 is:
0 00000000000 0000000000000000000000000000000000000000000000000001₂ = 2⁻¹⁰²² × 2⁻⁵² = 2⁻¹⁰⁷⁴ ≈ 4.9406564584124654 × 10⁻³²⁴
This is the minimum subnormal positive double number.
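JavaScript exposes this minimum subnormal positive double as Number.MIN_VALUE; a quick check:

```javascript
// The minimum subnormal positive double is exposed as Number.MIN_VALUE.
console.log(Number.MIN_VALUE);                // 5e-324
console.log(Number.MIN_VALUE === 2 ** -1074); // true
console.log(Number.MIN_VALUE / 2);            // 0 (underflows to zero)
```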
Epsilon
Epsilon is the difference between the smallest floating point number (double) greater than 1, and 1. This difference is 2⁻⁵², approximately 2.2204460492503130808472633361816 × 10⁻¹⁶.
It is obtained from
0 01111111111 0000000000000000000000000000000000000000000000000001₂
minus
0 01111111111 0000000000000000000000000000000000000000000000000000₂
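JavaScript exposes this difference as Number.EPSILON; a common use is tolerant comparison of floats:

```javascript
// Epsilon is exposed as Number.EPSILON and equals 2^-52.
console.log(Number.EPSILON === 2 ** -52); // true
// A common use: tolerant comparison of floats.
console.log(0.1 + 0.2 === 0.3);                          // false
console.log(Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON); // true
```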
The JavaScript BigInt Number Type
BigInt numbers are integers (positive and negative). As seen above, the maximum safe integer is 2⁵³ − 1. This number is not big enough for all modern applications, so JavaScript introduced the BigInt number type.
The BigInt type represents an integer value. The value may be any size and is not limited to a particular bit-width. Generally, where not otherwise noted, operations are designed to return exact mathematically-based answers. For binary operations, BigInts act as two's complement binary strings, with negative numbers treated as having bits set infinitely to the left.
A normal JavaScript integer is actually a double, written without a suffix. A BigInt integer ends with the suffix n. So 555 is a normal JavaScript integer and 555n is a BigInt number (integer).
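A short sketch of BigInt arithmetic, using only built-in syntax:

```javascript
// BigInt literals end with the suffix n; arithmetic is exact.
const big = 9007199254740993n;        // 2^53 + 1, unsafe as a Number
console.log(big + 1n);                // 9007199254740994n
console.log(2n ** 64n);               // 18446744073709551616n (exact)
console.log(typeof 555, typeof 555n); // "number" "bigint"
```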
Conclusion
JavaScript has eight different data types: String, Number, BigInt, Boolean, Undefined, Null, Symbol, and Object. Number and BigInt are the only numeric types. JavaScript does not have numeric subtypes like short int, long int, double float, etc.
Normal or relatively small integers come from the IEEE 754 64-bit floating point numbers, without the decimal (proper fractional) part. The maximum safe integer is 2⁵³ − 1. Any integer above this value should be coded as a BigInt by the programmer.
References:
- ECMAScript ® 2024 Language Specification: ECMA-262, 15th edition, June 2024
- Wikipedia: Double-Precision Floating-point Format - https://en.wikipedia.org/wiki/Double-precision_floating-point_format#JSON
- IEEE Standard for Floating-point Arithmetic, Approved 13th June 2019