Friday, May 18, 2012

What is the difference between Decimal, Float and Double in C#?


When would someone use one of these?


Source: Tips4all

8 comments:

  1. float and double are floating binary point types. In other words, they represent a number like this:

    10001.10010110011


    The binary number and the location of the binary point are both encoded within the value.

    decimal is a floating decimal point type. In other words, it represents a number like this:

    12345.65789


    Again, the number and the location of the decimal point are both encoded within the value - that's what makes decimal still a floating point type instead of a fixed point type.

    The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations. Not all decimal numbers are exactly representable in binary floating point - 0.1, for example - so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well - the result of dividing 1 by 3 can't be exactly represented, for example.

    As for what to use when:


    For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.
    For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
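    A short illustration of both points above - that 0.1 is only approximated in binary floating point, and that decimal is still a floating point type with its own unrepresentable values (the exact printed digits may vary slightly by runtime, but the behaviour is standard C#):

    using System;

    class FloatingPointDemo
    {
        static void Main()
        {
            // 0.1 has no exact binary representation, so the double is only an approximation.
            double d = 0.1;
            Console.WriteLine(d.ToString("G17"));   // e.g. 0.10000000000000001

            // The same literal as a decimal is stored exactly, because decimal works in base 10.
            decimal m = 0.1m;
            Console.WriteLine(m);                   // 0.1

            // But decimal is still a floating point type: 1/3 can't be represented exactly
            // in base 10 either, so this result is an approximation as well.
            Console.WriteLine(1m / 3m);             // 0.3333333333333333333333333333
        }
    }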

  2. Precision is the main difference.

    Float - 7 digits (32 bit)

    Double - 15-16 digits (64 bit)

    Decimal - 28-29 significant digits (128 bit)

    Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Decimals are much slower (up to 20x in some tests) than a double/float.

    Decimals and Floats/Doubles cannot be compared without a cast, whereas Floats and Doubles can. Decimals also allow the encoding of trailing zeros.
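    A small sketch of those last two points (standard C# behaviour, nothing project-specific assumed):

    using System;

    class PrecisionDemo
    {
        static void Main()
        {
            float f = 1.5f;
            double d = 1.5;
            decimal m = 1.5m;

            // float converts to double implicitly, so this compiles as-is.
            Console.WriteLine(f == d);            // True

            // decimal has no implicit conversion to or from double;
            // an explicit cast is required to compare them.
            Console.WriteLine((double)m == d);    // True

            // decimal keeps the scale (trailing zeros) of its operands.
            Console.WriteLine(2.50m + 2.50m);     // 5.00
            Console.WriteLine(2.5 + 2.5);         // 5
        }
    }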

  3. float is a single precision (32 bit) floating point data type as defined by IEEE 754 (it is used mostly in graphic libraries).

    double is a double precision (64 bit) floating point data type as defined by IEEE 754 (probably the most commonly used data type for real values).

    decimal is a 128-bit floating point data type; it should be used where precision is of extreme importance (monetary calculations).
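    Those bit widths can be checked directly in C# - a minimal sketch (sizeof is permitted on these built-in types without unsafe code):

    using System;

    class SizeDemo
    {
        static void Main()
        {
            Console.WriteLine(sizeof(float));    // 4 bytes  (32-bit IEEE 754 single)
            Console.WriteLine(sizeof(double));   // 8 bytes  (64-bit IEEE 754 double)
            Console.WriteLine(sizeof(decimal));  // 16 bytes (128-bit .NET decimal)
        }
    }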

    The thing to keep in mind is that both float and double are considered "approximations" of a floating point number. Many decimal values cannot be represented exactly by floats or doubles, and you can get weird rounding errors out at the extreme precisions.

    Decimal doesn't use a binary IEEE floating point representation; it uses a decimal representation, doing base-10 math rather than base-2 math, so decimal fractions are held exactly within its 28-29 digit precision.

    What this means is that you can trust math to within the accuracy of decimal precision whereas you can't fully trust floats or doubles unless you are very careful.
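    The classic illustration of that difference (a quick sketch in standard C#):

    using System;

    class TrustDemo
    {
        static void Main()
        {
            // Base-2 math: 0.1 and 0.2 are both approximations, and the error shows up.
            double sum = 0.1 + 0.2;
            Console.WriteLine(sum == 0.3);            // False
            Console.WriteLine(sum.ToString("G17"));   // e.g. 0.30000000000000004

            // Base-10 math: the same values are exact in decimal.
            decimal dsum = 0.1m + 0.2m;
            Console.WriteLine(dsum == 0.3m);          // True
        }
    }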

  5. The Decimal structure is strictly geared to financial calculations requiring accuracy, which are relatively intolerant of rounding. Decimals are not adequate for scientific applications, however, for several reasons:


    A certain loss of precision is acceptable in many scientific calculations because of the practical limits of the physical problem or artifact being measured. Loss of precision is not acceptable in finance.
    Decimal is much (much) slower than float and double for most operations, primarily because floating point operations are done in binary, whereas Decimal stuff is done in base 10 (i.e. floats and doubles are handled by the FPU hardware, such as MMX/SSE, whereas decimals are calculated in software).
    Decimal has an unacceptably smaller value range than double, despite the fact that it supports more digits of precision. Therefore, Decimal can't be used to represent many scientific values (a quick comparison of the ranges is sketched below).
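    A rough sketch of that range difference (the MaxValue figures come straight from the framework):

    using System;

    class RangeDemo
    {
        static void Main()
        {
            Console.WriteLine(double.MaxValue);    // ~1.7976931348623157E+308
            Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335 (~7.9E+28)

            double two = 2;   // a non-constant factor, so nothing is folded at compile time

            // Overflowing a double silently becomes Infinity...
            Console.WriteLine(double.MaxValue * two);   // Infinity

            // ...whereas overflowing a decimal throws at run time.
            try
            {
                decimal result = decimal.MaxValue * (decimal)two;
                Console.WriteLine(result);
            }
            catch (OverflowException)
            {
                Console.WriteLine("decimal overflow");
            }
        }
    }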

    float has about 7 digits of precision

    double has about 15 digits of precision

    decimal has about 28 digits of precision

    If you need better accuracy (eg: in accounting applications), use double instead of float.
    In modern CPUs both data types have almost the same performance. The only benefit of using float is that it takes up less space, which practically only matters if you have a great many of them.

    I found this interesting: What Every Computer Scientist Should Know About Floating-Point Arithmetic
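    A small sketch of that precision gap (the printed digits are illustrative and may vary slightly by runtime):

    using System;

    class PrecisionDigitsDemo
    {
        static void Main()
        {
            // float keeps roughly 7 significant digits...
            float f = 1.23456789f;
            Console.WriteLine(f.ToString("G9"));    // e.g. 1.23456788 - the trailing digits are lost

            // ...while double keeps roughly 15-16.
            double d = 1.23456789012345678;
            Console.WriteLine(d.ToString("G17"));   // e.g. 1.2345678901234568
        }
    }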

    Double and float can be divided by integer zero without an exception, at both compile time and run time - the result is Infinity or NaN rather than an error.
    Decimal cannot be divided by zero: at run time it throws a DivideByZeroException, and division by a constant zero won't even compile.
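    A quick sketch of that behaviour (standard C#; the compile-time failure only happens when the zero is a constant):

    using System;

    class DivideByZeroDemo
    {
        static void Main()
        {
            int zero = 0;   // non-constant, so none of this is rejected at compile time

            // double and float never throw on division by zero; they produce Infinity or NaN.
            double d = 1.0 / zero;
            Console.WriteLine(d);            // Infinity

            // decimal (like int) throws at run time instead.
            try
            {
                decimal m = 1m / zero;
                Console.WriteLine(m);
            }
            catch (DivideByZeroException)
            {
                Console.WriteLine("decimal division by zero throws");
            }

            // With a constant zero, e.g. 1m / 0, the compiler rejects the line outright.
        }
    }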

    This has been an interesting thread for me, as today we've just had a nasty little bug concerning "decimal" appearing to have less precision than a "float".

    In our C# code, we are reading numeric values from an Excel spreadsheet, converting them into a decimal, then sending this decimal back to a Service, to save into a SQL Server database.

    Microsoft.Office.Interop.Excel.Range cell = ...
    object cellValue = cell.Value2;
    if (cellValue != null)
    {
        decimal value = 0;
        Decimal.TryParse(cellValue.ToString(), out value);
    }


    Now, for almost all of our Excel values, this worked beautifully. But for some very small Excel values, using "decimal.TryParse" lost the value completely. One such example:


    cellValue = 0.00006317592
    Decimal.TryParse(cellValue.ToString(), out value); would leave value at 0


    The solution, bizarrely, was to convert the Excel values into a double first, and then into a decimal.

    Microsoft.Office.Interop.Excel.Range cell = ...
    object cellValue = cell.Value2;
    if (cellValue != null)
    {
        double valueDouble = 0;
        double.TryParse(cellValue.ToString(), out valueDouble);
        decimal value = (decimal)valueDouble;
        ...
    }


    Even though double has less precision than a decimal, this actually ensured small numbers would still be recognised. For some reason, "double.TryParse" was actually able to retrieve such small numbers, whereas "decimal.TryParse" would set them to zero.

    Odd. Very odd.
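    A plausible explanation, though this is only a guess from outside that codebase: Value2 hands back a double for numeric cells, and very small doubles ToString() to scientific notation ("6.317592E-05"). decimal.TryParse's default NumberStyles.Number doesn't allow an exponent, so the parse fails and the out value is left at 0, while double.TryParse defaults to NumberStyles.Float and accepts it. A minimal sketch of the failure and of an alternative fix (passing NumberStyles.Float explicitly); the literal below is just the value quoted in the comment above:

    using System;
    using System.Globalization;

    class ExcelParseDemo
    {
        static void Main()
        {
            // Stand-in for cell.Value2: a boxed double whose default ToString()
            // switches to scientific notation for very small magnitudes.
            object cellValue = 0.00006317592;
            string text = cellValue.ToString();
            Console.WriteLine(text);                                   // 6.317592E-05

            // Default NumberStyles.Number has no AllowExponent, so this fails and leaves 0.
            decimal plain;
            Console.WriteLine(decimal.TryParse(text, out plain));      // False
            Console.WriteLine(plain);                                  // 0

            // Passing NumberStyles.Float explicitly lets decimal parse the exponent.
            decimal withFloat;
            Console.WriteLine(decimal.TryParse(text, NumberStyles.Float,
                                               CultureInfo.CurrentCulture, out withFloat));  // True
            Console.WriteLine(withFloat);                              // 0.00006317592
        }
    }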
