When converting Number values to strings in JavaScript, the default is to use just enough digits to uniquely distinguish the Number value (https://stackoverflow.com/a/49386744/298225).¹ This means that when a number is displayed as “0.1”, that does not mean it is exactly 0.1, just that it is closer to 0.1 than any other Number value is, so displaying just “0.1” tells you it is this unique Number value, which is 0.1000000000000000055511151231257827021181583404541015625. We can write this in hexadecimal floating-point notation as 0x1.999999999999ap-4. (The p-4 means to multiply the preceding hexadecimal numeral by two to the power of −4, so mathematicians would write it as 1.999999999999a₁₆ • 2⁻⁴.)
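JavaScript has no hexadecimal floating-point literals, but we can confirm this value by scaling the integer significand ourselves (a quick sketch, runnable in Node):

```javascript
// 0x1.999999999999ap-4 equals the 53-bit integer significand 0x1999999999999a
// scaled by 2^-56 (52 fraction bits, plus the exponent of -4).
const significand = 0x1999999999999a;  // 7205759403792794, exactly representable
const value = significand * 2 ** -56;  // scaling by a power of two is exact
console.log(value === 0.1);            // true
```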
These are the values produced when you write `0.1`, `0.2`, and `0.3` in source code and they are converted to JavaScript's Number format:
- 0.1 → 0x1.999999999999ap-4 = 0.1000000000000000055511151231257827021181583404541015625.
- 0.2 → 0x1.999999999999ap-3 = 0.200000000000000011102230246251565404236316680908203125.
- 0.3 → 0x1.3333333333333p-2 = 0.299999999999999988897769753748434595763683319091796875.
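These exact decimal values can be displayed directly: `toFixed` accepts up to 100 fraction digits and is specified to round the exact stored value, and each of these Numbers terminates within 55 decimal digits (a sketch, runnable in Node):

```javascript
// The default string uses just enough digits to identify the Number.
console.log(String(0.1));        // "0.1"
// toFixed(55) shows the full exact decimal value of each stored Number.
console.log((0.1).toFixed(55));  // "0.1000000000000000055511151231257827021181583404541015625"
console.log((0.2).toFixed(55));
console.log((0.3).toFixed(55));
```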
When we evaluate `0.1 + 0.2`, we are adding 0x1.999999999999ap-4 and 0x1.999999999999ap-3. To do that manually, we can first adjust the latter by multiplying its significand (fraction part) by 2 and subtracting one from its exponent, producing 0x3.3333333333334p-4. (You have to do this arithmetic in hexadecimal: a₁₆ • 2 = 14₁₆, so the last digit is 4, and the 1 is carried. Then 9₁₆ • 2 = 12₁₆, and the carried 1 makes it 13₁₆. That produces a 3 digit and a 1 carry.) Now we have 0x1.999999999999ap-4 and 0x3.3333333333334p-4, and we can add them. This produces 0x4.ccccccccccccep-4. That is the exact mathematical result, but it has too many bits for the Number format. We can only have 53 bits in the significand. There are 3 bits in the 4 (100₂) and 4 bits in each of the 13 trailing digits, so that is 55 bits total. The computer has to remove 2 bits and round the result. The last digit, e₁₆, is 1110₂, so the low 10₂ bits have to go. These bits are exactly ½ of the previous bit, so it is a tie between rounding up or down. The rule for breaking ties says to round so the last bit is even, so we round up, making the 11₂ bits become 100₂. The e₁₆ becomes 10₁₆, causing a carry into the next digit. The result is 0x4.cccccccccccd0p-4, which equals 0.3000000000000000444089209850062616169452667236328125.
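The rounded sum from the walkthrough above can be checked directly (a sketch; the exact stored value of the sum has 52 decimal digits, so `toFixed(52)` shows all of them):

```javascript
const sum = 0.1 + 0.2;
console.log(sum === 0.3);      // false – the sum rounded to a different Number
console.log(sum.toFixed(52));  // "0.3000000000000000444089209850062616169452667236328125"
```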
Now we can see why printing `0.1 + 0.2` shows “0.30000000000000004” instead of “0.3”. For the Number value 0.299999999999999988897769753748434595763683319091796875, JavaScript shows “0.3”, because that Number is closer to 0.3 than any other Number is. It differs from 0.3 by about 1.1 at the 17th digit after the decimal point, whereas the result of the addition differs from 0.3 by about 4.4 at the 17th digit. So:
- The source code `0.3` produces 0.299999999999999988897769753748434595763683319091796875, which prints as “0.3”.
- The source code `0.1 + 0.2` produces 0.3000000000000000444089209850062616169452667236328125, which prints as “0.30000000000000004”.
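Both facts are easy to observe (a sketch, runnable in Node):

```javascript
console.log(String(0.3));        // "0.3"
console.log(String(0.1 + 0.2));  // "0.30000000000000004"
console.log(0.1 + 0.2 === 0.3);  // false – they are two different Number values
```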
Now consider `0.2 + 0.2`. The result is 0.40000000000000002220446049250313080847263336181640625. That is the Number closest to 0.4, so JavaScript prints it as “0.4”.
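Here no error is displayed even though the result is not exactly 0.4 (a sketch; the exact stored value has 53 decimal digits):

```javascript
const sum = 0.2 + 0.2;
console.log(String(sum));      // "0.4" – shortest digits that identify this Number
console.log(sum === 0.4);      // true – the literal 0.4 rounds to the same Number
console.log(sum.toFixed(53));  // the exact stored value, slightly above 0.4
```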
Finally, consider `0.3 + 0.2`. We are adding 0x1.999999999999ap-3 and 0x1.3333333333333p-2. Again we adjust the second operand, producing 0x2.6666666666666p-3. Adding then yields 0x4.0000000000000p-3, which is 0x1p-1, that is, ½ or 0.5. So it prints as “0.5”.
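The cancellation can be verified directly (a sketch, runnable in Node):

```javascript
console.log(String(0.3 + 0.2));  // "0.5"
console.log(0.3 + 0.2 === 0.5);  // true – the addition is exact
```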
Another way to look at it:
- The source-code values `0.1` and `0.2` are each slightly above 0.1 and 0.2, and adding them adds their errors, so the total error is enough to push the result far enough from 0.3 that JavaScript displays the difference.
- When `0.2 + 0.2` is added, no new error is introduced: doubling a binary floating-point number is exact, so the result is still the Number nearest 0.4.
- The source-code value `0.3` is slightly below 0.3. When it is added to `0.2`, which is slightly above 0.2, the errors cancel, and the result is exactly 0.5.
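The direction of each literal's error can be seen at 20 digits (a sketch; `toFixed` rounds the exact stored value):

```javascript
console.log((0.1).toFixed(20));  // "0.10000000000000000555" – above 0.1
console.log((0.2).toFixed(20));  // "0.20000000000000001110" – above 0.2
console.log((0.3).toFixed(20));  // "0.29999999999999998890" – below 0.3
```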
Footnote
¹ This rule comes from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification.