Why are floating point numbers inaccurate?

Why do some numbers lose accuracy when stored as floating point numbers?

For example, the decimal number 9.2 can be expressed exactly as a ratio of two decimal integers (92/10), both of which can be expressed exactly in binary (0b1011100/0b1010). However, the same ratio stored as a floating point number is never exactly equal to 9.2:

32-bit "single precision" float: 9.19999980926513671875
64-bit "double precision" float: 9.199999999999999289457264239899814128875732421875
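
These stored values can be inspected directly. As a sketch in Python (one possible way to observe them; `Decimal(float)` converts the binary value exactly, and a `struct` round-trip through format `'f'` simulates 32-bit storage):

```python
from decimal import Decimal
import struct

# Decimal(float) shows the exact binary64 value the literal 9.2 is stored as.
print(Decimal(9.2))
# -> 9.199999999999999289457264239899814128875732421875

# Packing to a 4-byte float and unpacking yields the nearest binary32 value.
single = struct.unpack('f', struct.pack('f', 9.2))[0]
print(Decimal(single))
# -> 9.19999980926513671875
```

Both printed values match the expansions above, confirming that neither format holds 9.2 exactly.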

How can such an apparently simple number be “too big” to express in 64 bits of memory?

5 Answers