We all know that these data types have different sizes and precision. A decimal is 128 bits wide and carries 28-29 significant digits. A double is 64 bits wide with 15-16 significant digits, and a float is 32 bits wide with about 7 significant digits.
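These significant-digit limits can be seen in any language with IEEE-754 types. As a quick illustration (a Python sketch, not C#: Python's float is a 64-bit double, and struct can round-trip a value through a 32-bit float):

```python
import struct

x = 0.123456789

# Python's float is a 64-bit IEEE-754 double: ~15-16 digits survive.
print(x)  # 0.123456789

# Round-tripping through a 32-bit float keeps only ~7 significant digits.
f32 = struct.unpack('f', struct.pack('f', x))[0]
print(f32)  # agrees with x only in the first ~7 digits

assert f32 != x
```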
Decimal numbers are well suited to financial and monetary calculations, not only because of the extra precision, but also because, unlike double and float, decimal does not store its value in binary floating point: it stores a base-10 scaled integer.
For example, 1/10 in decimal notation is 0.1, which takes only one digit. In binary, however, it becomes a repeating fraction: 0.000110011001100..., where the pattern "0011" repeats indefinitely. Any finite binary representation has to cut that pattern off somewhere, so calculations involving 0.1 suffer rounding errors. That is exactly what double and float do, because they use binary representation.
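The rounding error is easy to demonstrate in any language whose floating-point type is an IEEE-754 double. Here is a sketch in Python, where float is a 64-bit double and the standard decimal module plays a role analogous to C#'s decimal:

```python
from decimal import Decimal

# A binary double cannot store 0.1 exactly, so the error surfaces in sums.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# A base-10 decimal type stores tenths exactly.
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```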
Besides being exact for base-10 quantities such as 0.1 and 0.01 (dimes and pennies), decimal also differs from double in its default behavior when converting to a text string. If you run the following code:
decimal test = 0.00M;
Console.WriteLine(test);    // prints "0.00"
test = 0.000M;
Console.WriteLine(test);    // prints "0.000"
double test2 = 0.00;
Console.WriteLine(test2);   // prints "0"
test2 = 0.000;
Console.WriteLine(test2);   // prints "0"
what you see on the screen is "0.00", "0.000", "0", and "0". A decimal value remembers its scale (the number of digits it was written with after the decimal point), while a double has no such notion.
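Python's decimal module behaves the same way, so the effect can be reproduced outside C#: a Decimal keeps the scale it was created with, while a plain float does not (though Python prints a float zero as "0.0" rather than "0"):

```python
from decimal import Decimal

# Decimal remembers trailing zeros (the scale of the value).
print(Decimal("0.00"))   # 0.00
print(Decimal("0.000"))  # 0.000

# float has no notion of scale; Python prints any float zero as 0.0.
print(0.00)   # 0.0
print(0.000)  # 0.0
```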
Interesting, isn't it?