mp.Digits(N) returns the current precision (the one in effect before it is changed to N), so that it can be used in constructs like this:
prevN = mp.Digits(newN);
% Compute with precision "newN"
...
mp.Digits(prevN); % restore the previous precision
Our documentation explains it as follows:
"Setup the default precision. All subsequent multiprecision calculations will be conducted using the newly specified number of decimal digits. The function returns the previously used default precision. If called without argument, the function simply returns the current default precision in use."
Let me know if it is too vague or not accurate - I will try to re-word it in some way.
Thank you very much for the bug report and for the details on your work, very interesting!
Let me know if you need my help with anything else. I am marking the bug as fixed.
I have fixed the issue (see email). It was related to the same mixing of indices and values in the spdiags code (in a different place).
The new spdiags will be included in the next toolbox update. Please use the file I sent you until then.
Overall, it is pretty common for M-language code to mix integers and floating-point values in the same array.
Please let me know if you find similar situations.
The low precision of 1 decimal digit is quite an extreme case, you are the first user we know who uses it.
Could you share what problem you are solving?
Thank you for the report!
It is an interesting and tricky bug. The usual spdiags uses double arrays to store both the indices and the numeric values.
If we convert the code to support the mp type, then an mp array is used to store the indices as well as the numeric values.
Everything is OK as long as the precision is capable of storing the indices accurately.
In the case of mp.Digits(1) this is no longer true - the indices become inaccurate once they reach two digits.
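A minimal sketch of the failure mode (assuming the Advanpix "mp" constructor; the exact rounded value depends on the binary precision corresponding to one decimal digit):

mp.Digits(1);    % one decimal digit of working precision
idx = mp(123);   % a three-digit row index stored in an mp array
% idx is now rounded to roughly one significant digit, so using it
% as an index into the sparse matrix selects the wrong element.

This is why the fixed mpspdiags keeps the indices in a double array and uses mp only for the numeric values.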
I have sent you the fixed mpspdiags code via email.
Simply switching to hardware "double" instructions is not possible. MATLAB, Intel, and others have been working on implementing "double"-precision math functions for decades: making them efficient and fast, with a small memory footprint, able to utilize CPU cores, and so on. This is enormous work, very difficult to replicate.
We are focusing on doing the same for arbitrary-precision computations - making them as fast as possible on modern CPUs. This requires a lot of effort, from deriving new algorithms capable of extended precision to low-level software optimizations. The algorithms and code are inherently created for arbitrary precision; no simple switch to "double" is possible. We have been working on this for more than 10 years, with a to-do list long enough for the next 10 years :)
No special hardware instructions exist for quadruple precision. It is all emulated in software using 64-bit integer CPU instructions. We created it exactly because quadruple is not implemented in hardware.
Double precision is already implemented in hardware, and its functionality has been polished very well. Therefore, for an adequate comparison it is better to use existing "double"-precision libraries provided in MATLAB, MKL, etc.
It is possible to write precision-independent code, so that it can run with "double" or "mp" scalars. Please check the page for more details: https://www.advanpix.com/2016/07/21/how-to-write-precision-independent-code-in-matlab/
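For illustration, here is a minimal precision-independent sketch (assuming, as the linked page describes, that eps and class work with "mp" values; the function name is hypothetical):

function x = newton_sqrt(a)
    % Newton iteration for sqrt(a), a > 0. Works for both double and mp
    % inputs: the class of "a" determines the precision of every
    % intermediate result.
    x = a;                  % initial guess, same class as the input
    tol = eps(class(a));    % tolerance matches the input precision
    while abs(x*x - a) > tol*a
        x = (x + a/x)/2;
    end
end

Calling newton_sqrt(2) runs in double, while newton_sqrt(mp(2)) runs the identical code at the current mp.Digits precision.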
mp.Digits(16) is not equivalent to "double", since we use a wider exponent range and our arithmetic and basic math functions are computed with guaranteed accuracy. In other words, mp.Digits(16) delivers more accurate results than native "double".
I agree, extended precision must be applied carefully: to the parts where ill-conditioning is observed (cancellation, accumulating rounding errors, etc.), or to verify results and study the asymptotic properties of algorithms.
Hello Michal,
Good timing! We have just finished all the features planned for the major release, so please download 4.9.2.
It includes all the optimization work for (full) matrix computations we have been working on lately.
In this regard, 4.9.1 was a transitional release with partial results integrated (basic matrix operations only).
Please use the 4.9.2.
Jon,
Thank you very much for the excellent question.
The main overhead in such loops is indexed access to the elements of an array.
MATLAB makes this operation slow for third-party toolboxes. In short, it requires the toolbox to make a copy of the whole array, even if only one element is requested (MEX API restrictions).
That is why we always suggest vectorizing code as much as possible (avoid loops and process data as a whole).
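As a sketch of the difference (the array x here is a hypothetical mp vector):

% Slow: every indexed read x(k) forces the toolbox to copy the
% whole array (MEX API restriction mentioned above)
s = mp(0);
for k = 1:numel(x)
    s = s + x(k)^2;
end

% Fast: one vectorized call processes the whole array at once
s = sum(x.^2);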
However, the application of an IIR filter cannot easily be vectorized. We have long been planning to add it to the toolbox core with all possible optimizations, and your request motivated us to prioritize this. Thanks!
Please download a new version of the toolbox (4.9.0.14753) from our website. Among other changes it includes the function 'filter' which implements IIR filter.
Thus your example becomes:
tic
r1 = filter([0], [1 -cosom], y1, [y1(1)]);
r2 = filter([0], [1 -cosom], y2, [y2(1)]);
toc
However, if you need to apply the same filter to several arrays simultaneously, we would suggest processing them all together:
y = [y1 y2];
tic
r = filter([0], [1 -cosom], y, [y1(1) y2(1)]);
toc
This allows the toolbox to process each array (column) in parallel, using different CPU cores.
Not to my knowledge (the toolbox doesn't provide it). There is a chance that the built-in code can be ported to use the "mp" type, or at least that the needed part of the code can be enabled with "mp".