The decimal keyword denotes a 128-bit data type. Compared to the binary floating-point types, the decimal type has greater precision and a smaller range, which makes it suitable for financial and monetary calculations. Precision is the main difference: double is a double-precision (64-bit) binary floating-point type, while decimal is a 128-bit base-10 floating-point type.
Double - 64 bit (15-16 significant digits)
Decimal - 128 bit (28-29 significant digits)
So decimals have much higher precision and are usually used in monetary (financial) applications that require a high degree of accuracy. Performance-wise, however, decimals are slower than the double and float types. Double is probably the most commonly used type for real values, except when handling money. In general, the double type offers at least as much precision and definitely greater speed for arbitrary real numbers. More about... Double vs Decimal
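A minimal C# sketch of the precision difference described above. Because double stores values in base 2, a value like 0.1 has no exact binary representation and rounding errors creep in; decimal stores values in base 10, so the same arithmetic comes out exact:

```csharp
using System;

class Program
{
    static void Main()
    {
        // double: binary floating point -- 0.1 has no exact base-2
        // representation, so a tiny rounding error appears.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);          // False
        Console.WriteLine(d.ToString("G17")); // 0.30000000000000004

        // decimal: base-10 floating point -- 0.1m is stored exactly,
        // so the comparison behaves as expected for monetary values.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);         // True
        Console.WriteLine(m);                 // 0.3
    }
}
```

The trade-off goes the other way for speed: decimal arithmetic is implemented in software rather than in the CPU's floating-point unit, which is why doubles remain the default for general-purpose real-number work.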