
Thank you very much for the bug report!

Since R2023b, 'validateFinDiffRelStep' accepts 4 arguments instead of 3 (as it did before).

Indeed, this incompatibility slipped through our tests.

The toolbox has been updated with the incompatibility fixed. Please re-download & update the toolbox.

We haven't changed the toolbox version number (it is still 5.2.8.15537) but relevant files have been replaced and updated.

We still have the same beta version of fmincon (we can send it by email for testing).

fmincon is part of the Optimization Toolbox, and its M-code depends heavily on MathWorks' compiled dll/so libraries (hardcoded to double precision). This makes it difficult to port to "mp" while keeping all the special cases and the plethora of functionality it provides.

What many of our users do is implement exactly the algorithm they need for their particular problem.
This is what the original topic starter did (we helped her along the way).

Not to my knowledge (the toolbox doesn't provide it). There is a chance that the built-in code can be ported to use the "mp" type, or at least that the needed part of the code can be adapted to work with "mp".

mp.Digits(N) returns the current precision (before changing it to N), so that it can be used in constructs like this:

prevN = mp.Digits(newN);
% Compute with precision "newN"
...
mp.Digits(prevN); % restore the previous precision


Our documentation explains it as follows:

"Setup the default precision. All subsequent multiprecision calculations will be conducted using the newly specified number of decimal digits. The function returns the previously used default precision. If called without argument, the function simply returns the current default precision in use."
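As a small illustration (assuming a fresh session where the default precision is the toolbox's quadruple-precision default of 34 decimal digits):

mp.Digits()            % query only: returns 34, precision unchanged
old = mp.Digits(50);   % old is 34; default precision is now 50 digits
x = mp('pi');          % pi computed with 50 digits
mp.Digits(old);        % restore the 34-digit default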

Let me know if it is too vague or inaccurate, and I will try to reword it.

Thank you very much for the bug report and for the details on your work, very interesting!

Let me know if you need my help with anything else. I am marking the bug as fixed.

I have fixed the issue (see email). It was related to the same mix of indices and values in the spdiags code (in a different place).

The new spdiags will be included in the next toolbox update. Please use the file I sent you until then.

Overall, it is quite common for M-language code to mix integers and floating-point values in the same array.
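A hypothetical sketch of the idiom (not the actual spdiags code), with indices and values living side by side in one array:

% Column 1 holds row indices (integers stored as doubles),
% column 2 holds the numeric values:
d = [ (1:3)' , [0.5; 0.25; 0.125] ];
rows = d(:,1);   % exact in double, but rounded if the array is low-precision mp
vals = d(:,2);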

Please let me know if you find similar situations.


The low precision of 1 decimal digit is quite an extreme case; you are the first user we know of who uses it.

Could you share what problem you are solving?

Thank you for the report!


It is an interesting and tricky bug. The usual spdiags uses double arrays to store both the indices and the numeric values.

If we convert the code to support the mp type, then mp arrays are used to store the indices as well as the numeric values.


Everything is OK as long as the precision is capable of storing the indices accurately.

In the case of mp.Digits(1) this no longer holds, and indices become inaccurate once they reach two digits.
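For example (a hypothetical sketch of the failure mode, not the actual spdiags internals), with one significant digit of precision a two-digit index stored in an mp array gets rounded:

mp.Digits(1);
i = mp(12);   % rounds to 1e+01 at one significant digit
% any row/column lookup based on i now points to the wrong entry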


I have sent you the fixed mpspdiags code via email.


Simply switching to hardware "double" instructions is impossible. MATLAB, Intel, etc. have been working on implementing "double"-precision math functions for decades: making them efficient, fast, with a small memory footprint, able to utilize CPU cores, and so on. This is enormous work and very difficult to replicate.


We are focused on doing the same for arbitrary-precision computations: making them as fast as possible on modern CPUs. This requires a lot of effort, from deriving new algorithms capable of extended precision to low-level software optimizations. The algorithms and code are inherently created for arbitrary precision; no simple switch to "double" is possible. We have been working on this for more than 10 years, with a long todo list for the next 10 years :)

 

No special hardware instructions exist for quadruple precision. It is all emulated in software using 64-bit integer CPU instructions. We created the toolbox precisely because quadruple precision is not implemented in hardware.

   

Double precision is already implemented in hardware, and its functionality has been polished very well. Thus, for a fair comparison it is better to use existing "double"-precision libraries, as provided in MATLAB, MKL, etc.

It is possible to write precision-independent code, so that it can run with "double" or "mp" scalars. Please check the page for more details: https://www.advanpix.com/2016/07/21/how-to-write-precision-independent-code-in-matlab/
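A minimal sketch of the idea (my own illustrative function, not taken from the linked page): write the algorithm once and let the type of one input decide the precision everywhere.

function T = trapz_pi(n, one)
% Estimate pi by the trapezoidal rule on 4/(1+x^2) over [0,1].
% "one" sets the numeric type: pass 1 for double, or mp(1) for multiprecision.
h = one / n;                          % step inherits the type of "one"
x = (0:n) * h;                        % grid is double or mp accordingly
y = 4 ./ (1 + x.^2);
T = h * (sum(y) - (y(1) + y(end))/2);
end

% Usage:
% trapz_pi(1000, 1)       % runs entirely in double precision
% trapz_pi(1000, mp(1))   % same code, runs entirely in multiprecision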


mp.Digits(16) is not equivalent to "double", since we use a wider exponent range and our arithmetic and basic math functions are computed with guaranteed accuracy. In other words, mp.Digits(16) delivers more accurate results than native "double".


I agree: extended precision must be applied carefully, to the parts where ill-conditioning is observed (cancellation, accumulating rounding errors, etc.), or used to verify results and to study the asymptotic properties of algorithms.