Is there a quick, simple, and revertible way to switch an extensive MATLAB program, developed in multiprecision, back to double precision? For example, is setting mp.Digits(16) enough?
This question arises because code is sometimes developed in multiprecision to eliminate finite-precision (quantization) artifacts as a possible error source. A natural follow-up question is the exact tradeoff between speed and precision. That information is needed for coherently answering one's boss throughout a development phase, for preparing talks and papers, and otherwise for impressing people at MathWorks enough for them to integrate MCT into MATLAB.
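One common pattern for making the precision switchable, sketched below, is to funnel every type construction through a single function handle. This assumes the Advanpix toolbox API, where mp(x) promotes a value to multiprecision, double(x) converts back, and mp.Digits(n) sets the working precision; the flag name useMP is illustrative, not part of any API:

```matlab
% Sketch: route all numeric construction through one switchable handle.
% Flip useMP to compare multiprecision vs. native double timings.
useMP = true;

if useMP
    mp.Digits(34);        % working precision in decimal digits (Advanpix)
    T = @mp;              % promote literals/arrays to multiprecision
else
    T = @double;          % effectively a no-op for built-in doubles
end

% The program body is written once against T:
x = T(linspace(0, 1, 1000));
s = sum(x .^ 2);          % runs in mp or double depending on the flag
t = timeit(@() sum(T(linspace(0, 1, 1000)) .^ 2));  % speed comparison
```

Note that mp.Digits(16) alone is likely not equivalent to reverting to double: the mp type still carries its multiprecision overhead, and a 16-decimal-digit mp number need not be bit-identical to IEEE double. For a fair speed-versus-precision comparison, the double branch should use native doubles, as in the sketch above.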