super quick reversion to double
Is there a quick and simple, but reversible, way to convert an extensive MATLAB program, developed in multiprecision, back to double precision? For example, mp.Digits(16)?
This question arises because code is sometimes developed in multiprecision to rule out quantization (finite-precision) artifacts as a possible error source. A natural follow-up question is the exact tradeoff of speed versus precision. That information is required for coherently answering one's boss throughout a development phase, preparing talks and papers, and otherwise impressing people at MathWorks enough for them to integrate MCT into MATLAB.
It is possible to write precision-independent code, so that the same code runs with either "double" or "mp" inputs. Please see this page for more details: https://www.advanpix.com/2016/07/21/how-to-write-precision-independent-code-in-matlab/
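For illustration only (a minimal sketch, not code from the linked page; the function and its name are made up for the example), the pattern is to write the algorithm so that every intermediate quantity inherits the numeric class of the inputs; the precision is then chosen only at the call site:

    function [x, niter] = newton_sqrt(a, tol)
    % Precision-independent Newton iteration for sqrt(a): every
    % intermediate inherits the class of the input "a", so the same
    % code runs unchanged in double or in mp.
        x = a;                        % initial guess keeps the class of a
        niter = 0;
        while abs(x*x - a) > tol*abs(a)
            x = (x + a/x)/2;          % Newton step in double or mp arithmetic
            niter = niter + 1;
        end
    end

    newton_sqrt(2, 1e-15)               % runs entirely in double
    newton_sqrt(mp('2'), mp('1e-30'))   % runs entirely in multiprecision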
mp.Digits(16) is not equivalent to "double", since we use a wider exponent range and our arithmetic and basic math functions are computed with guaranteed accuracy. In other words, mp.Digits(16) delivers more accurate results than native "double".
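For instance, the wider exponent range shows up immediately (a small illustrative check, assuming the default mp exponent range comfortably holds 1e600):

    mp.Digits(16);
    mp('1e300')^2    % representable in mp thanks to the wider exponent range
    (1e300)^2        % overflows to Inf in native double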
I agree, extended precision must be applied carefully, to the parts where ill-conditioning is observed (cancellation, accumulating rounding errors, etc.), or used to verify results and to study the asymptotic properties of algorithms.
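A common pattern along these lines is to keep the production code in double and use mp only to produce a reference solution for measuring the error. The sketch below assumes the mp(x, digits) constructor form and uses an arbitrary problem size and digit count:

    n = 12;
    A = hilb(n);  b = ones(n,1);        % ill-conditioned test problem (defined in double)

    x_d   = A\b;                        % ordinary double solve
    x_ref = mp(A,50)\mp(b,50);          % reference solve at 50 digits
    relerr = double(norm(mp(x_d) - x_ref)/norm(x_ref))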
Yes, true. But that method is not super quick. It requires more coding.
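If the goal is just a one-line switch, one lightweight convention (the flag and handle below are hypothetical, not an MCT feature) is to funnel all inputs and literal constants through a single conversion handle, so only that handle changes between runs:

    USE_MP = true;                      % flip to false to run in plain double
    if USE_MP
        mp.Digits(34);
        num = @(x) mp(x);               % promote inputs/constants to mp
    else
        num = @(x) double(x);           % keep everything in double
    end

    A = num(randn(200));
    b = num(ones(200,1));
    x = A\b;                            % identical code path for both settings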
Presently, mp.Digits(34) is a special case that instructs MCT to use hardware microinstructions.
Why can't mp.Digits(16) be similarly reserved to invoke double-precision hardware microinstructions?
Presently, mp.Digits(16) is more precise than double precision. But the same is not true for mp.Digits(34); i.e., had you not invoked hardware microinstructions in that case, then mp.Digits(34) would be more precise than it is now (but slower).
In summary, I think mp.Digits(16) should invoke hardware microinstructions just as mp.Digits(34) does.
Simply switching to hardware "double" instructions is impossible. MATLAB, Intel, and others have been working on implementing "double"-precision math functions for decades: making them efficient and fast, with a small memory footprint, able to utilize CPU cores, etc. This is enormous work, very difficult to replicate.
We are focused on doing the same for arbitrary-precision computations, to make them as fast as possible on modern CPUs. This requires a lot of effort, from deriving new algorithms capable of extended precision to low-level software optimizations. The algorithms and code are inherently created for arbitrary precision; no simple switch to "double" is possible. We have been working on this for more than 10 years, with a long to-do list for the next 10 years :)
No special hardware instructions exist for quadruple precision. It is all emulated in software using 64-bit integer CPU instructions. We created it precisely because quadruple precision is not implemented in hardware.
Double precision is already implemented in hardware, and its functionality has been polished very well. Therefore, for an adequate comparison, it is better to use the existing "double"-precision libraries provided in MATLAB, MKL, etc.
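To put numbers on the original speed-versus-precision question, a rough benchmark sketch along these lines (problem size and digit count are arbitrary choices) compares native double against MCT's quadruple mode on the same problem:

    n = 500;
    A = randn(n);  b = randn(n,1);

    tic;  x_d = A\b;           t_double = toc;   % hardware double

    mp.Digits(34);
    Aq = mp(A);  bq = mp(b);
    tic;  x_q = Aq\bq;         t_quad = toc;     % software quadruple (MCT)

    fprintf('double: %.3f s, quadruple: %.3f s, ratio: %.1fx\n', ...
            t_double, t_quad, t_quad/t_double);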